WorldWideScience

Sample records for validation validation results

  1. Validation Results for LEWICE 3.0

    Science.gov (United States)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers, and an empirical model for the supercooled large droplet (SLD) regime. An extensive, quantifiable comparison of the results against the database of ice shapes and collection efficiencies generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper shows the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for the ice shape comparison. This report first describes the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first-order physical effects of large droplet phenomena into the icing software. Comparisons are then made to every single-element, two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing, with 21-bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predictions are within the accuracy limits of the experimental data for the majority of cases.
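
    The three quantitative metrics named above (impingement limit, maximum water catch, total collection efficiency) can be sketched from a discretized local collection-efficiency curve. The function names and sample data below are invented for illustration and are not taken from LEWICE.

```python
# Hypothetical illustration of the three comparison metrics, computed from a
# local collection-efficiency distribution beta(s) over a surface coordinate s.

def impingement_limits(s, beta, threshold=0.01):
    """Surface range over which water impinges (beta above a small threshold)."""
    wet = [si for si, bi in zip(s, beta) if bi >= threshold]
    return (min(wet), max(wet)) if wet else (None, None)

def total_collection_efficiency(s, beta):
    """Trapezoidal integral of local collection efficiency over the surface."""
    return sum(0.5 * (beta[i] + beta[i + 1]) * (s[i + 1] - s[i])
               for i in range(len(s) - 1))

# Invented sample data: surface coordinate (m) and local efficiency,
# one predicted curve and one "experimental" curve.
s = [-0.10, -0.05, 0.0, 0.05, 0.10]
beta_pred = [0.0, 0.40, 0.80, 0.40, 0.0]
beta_exp  = [0.0, 0.35, 0.75, 0.45, 0.0]

print(impingement_limits(s, beta_pred))   # (-0.05, 0.05)
print(max(beta_pred), max(beta_exp))      # 0.8 0.75  (maximum water catch)
print(total_collection_efficiency(s, beta_pred))
```

    The same three numbers computed for prediction and experiment give the side-by-side comparison the paper describes.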

  2. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those for an intended application, with each experiment testing only part of the physics relevant for the application. The purpose of the present work is to develop methodology to roll up validation results to an application, and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.
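
    The roll-up idea can be illustrated with a deliberately simplified linear meta-model (a sketch of the general concept, not Hills' actual formulation): discrepancies observed in the validation experiments are attributed to shared physics parameters, and the application response inherits a correction through its own sensitivities. All numbers are invented.

```python
# Minimal linear roll-up sketch: two validation experiments, two shared
# physics parameters, one target-application response quantity.

def solve_2x2(A, b):
    """Cramer's rule for a 2x2 linear system A x = b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# Sensitivities of the two experiments to the two physics parameters.
S_exp = [[1.0, 0.2],
         [0.3, 1.0]]
# Observed discrepancies (measurement minus simulation) in each experiment.
d = [0.05, -0.02]

# Parameter corrections implied by the validation suite.
theta = solve_2x2(S_exp, d)

# The application responds to the same parameters with its own sensitivities,
# so the implied correction to the application prediction is a weighted sum.
S_app = [0.8, 0.5]
correction = sum(sa * t for sa, t in zip(S_app, theta))
print(correction)
```

    Incomplete coverage corresponds to a sensitivity direction of the application that no experiment excites, in which case the corresponding component of `theta` is unconstrained.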

  3. Results from the First Validation Phase of CAP code

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The second stage of the Safety Analysis Code Development for Nuclear Power Plants project was launched in April 2010 and is scheduled to run through 2012; its scope of work covers code validation through licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation work is organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests in small, medium and large facilities and International Standard Problems; and 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting experimental data related to containment phenomena and constructing a database from them is one of the major tasks of the second stage of this project. The validation of fundamental phenomena is expected to reveal both the current capability of the CAP code and needed future improvements. For this purpose, simple but significant problems with exact analytical solutions were selected and calculated for validation of the fundamental phenomena. In this paper, some results of the validation problems for the selected fundamental phenomena are summarized and briefly discussed.

  4. The Mistra experiment for field containment code validation first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large-scale experiment designed for the purpose of validating multi-dimensional thermal-hydraulics codes. A short description of the facility, the set-up of the instrumentation and the test program are presented. Then the first experimental results, which study helium injection into the containment, and the corresponding calculations are detailed. (author)

  5. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  6. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests, such as photometry and integration time calculation, treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  7. Urban roughness mapping validation techniques and some first results

    NARCIS (Netherlands)

    Bottema, M; Mestayer, PG

    1998-01-01

    Because of measurement problems related to the evaluation of urban roughness parameters, a new approach using a roughness mapping tool has been tested: evaluation of the roughness length z0 and zero-plane displacement zd from cadastral databases. Special attention needs to be given to the validation of the

  8. Toward valid and reliable brain imaging results in eating disorders.

    Science.gov (United States)

    Frank, Guido K W; Favaro, Angela; Marsh, Rachel; Ehrlich, Stefan; Lawson, Elizabeth A

    2018-03-01

    Human brain imaging can help improve our understanding of the mechanisms underlying brain function and how they drive behavior in health and disease. Such knowledge may eventually help us devise better treatments for psychiatric disorders. However, the brain imaging literature in psychiatry, and especially in eating disorders, has been inconsistent, and studies are often difficult to replicate. The extent or severity of extremes of eating and the state of illness, which are often associated with differences in, for instance, hormonal status, comorbidity, and medication use, commonly differ between studies and likely add to variation across study results. Those effects are in addition to the well-described problems arising from differences in task designs, data quality control procedures, image data preprocessing and analysis, or statistical thresholds applied across studies. Which of those factors are most relevant to improving reproducibility is still a question for debate and further research. Here we propose guidelines for brain imaging research in eating disorders to acquire valid results that are more reliable and clinically useful. © 2018 Wiley Periodicals, Inc.

  9. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorological data obtained at SRL during 1975 to 1977. Individual laboratories used their models to calculate concentrations over daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations.

  10. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal-mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The focus of the discussion is the validation plan for a code, FACTAR, for application in assessing fuel channel integrity safety concerns during a large-break loss of coolant accident (LOCA). (author)

  11. 42 CFR 476.84 - Changes as a result of DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO... in DRG assignment as a result of QIO validation activities. ...

  12. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of the range of applicability (our current definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values.

  13. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...

  14. ValidatorDB: database of up-to-date validation results for ligands and non-standard residues from the Protein Data Bank.

    Science.gov (United States)

    Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav

    2015-01-01

    Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Planck early results. XIV. ERCSC validation and extreme radio sources

    DEFF Research Database (Denmark)

    Lähteenmäki, A.; Lavonen, N.; León-Tavares, J.

    2011-01-01

    Planck's all-sky surveys at 30-857 GHz provide an unprecedented opportunity to follow the radio spectra of a large sample of extragalactic sources to frequencies 2-20 times higher than allowed by past, large-area, ground-based surveys. We combine the results of the Planck Early Release Compact So...

  16. Computational fluid dynamics simulations and validations of results

    CSIR Research Space (South Africa)

    Sitek, MA

    2013-09-01

    Wind flow influence on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper the computational model used in the simulations is described and the results, which were...

  17. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  18. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, part 1: performed experiments, results and evaluation

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

    This report is the second of two reports describing the tracer migration experiment, in which water and tracer flow were monitored in a drift at the 385 m level in the Stripa experimental mine. The tracer migration experiment is one of a large number of experiments performed within the Site Characterization and Validation (SCV) project. The upper part of the 50 m long validation drift was covered with approximately 150 plastic sheets, in which the emerging water was collected. The water emerging into the lower part of the drift was collected in short boreholes (sumpholes). Six different tracer mixtures were injected at distances between 10 and 25 m from the drift. The flowrate and tracer monitoring continued for ten months. Tracer breakthrough curves and flowrate distributions were used to study flow paths, velocities, hydraulic conductivities, dispersivities, interaction with the rock matrix and channelling effects within the rock. The present report describes the structure of the observations, the flowrate measurements and the estimated hydraulic conductivities. The main part of this report addresses the interpretation of the tracer movement in fractured rock. The tracer movement, as measured by the more than 150 individual tracer curves, has been analysed with the traditional advection-dispersion model, and a subset of the curves with the advection-dispersion-diffusion model. The tracer experiments have permitted the flow porosity, dispersion and interaction with the rock matrix to be studied. (57 refs.)
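
    The advection-dispersion model used for breakthrough curves of this kind has a classical one-dimensional analytical solution (Ogata-Banks form) for a continuous injection, sketched below with invented parameter values:

```python
# Relative concentration C/C0 at distance x and time t for a continuous
# tracer injection, advection velocity v and dispersion coefficient D
# (leading Ogata-Banks term, adequate at moderate-to-high Peclet numbers).
import math

def breakthrough(x, t, v, D):
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

x = 15.0   # migration distance, m (tracers were injected 10-25 m away)
v = 0.5    # advection velocity, m/day (invented)
D = 1.0    # dispersion coefficient, m^2/day (invented)

# At the mean arrival time t = x/v the front reaches the observation point
# and the relative concentration is exactly 0.5.
print(breakthrough(x, x / v, v, D))   # 0.5
```

    Fitting v and D of this curve to each of the measured breakthrough curves is the essence of the traditional advection-dispersion analysis mentioned above.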

  19. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Disclosure of accreditation, State and CMS... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a) Accreditation organization inspection results. CMS may disclose accreditation organization inspection results to...

  20. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Science.gov (United States)

    2010-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  1. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Science.gov (United States)

    2010-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  2. Validation Test Results for Orthogonal Probe Eddy Current Thruster Inspection System

    Science.gov (United States)

    Wincheski, Russell A.

    2007-01-01

    Recent nondestructive evaluation efforts within NASA have focused on an inspection system for the detection of intergranular cracking originating in the relief radius of Primary Reaction Control System (PRCS) thrusters. Of particular concern is deep cracking in this area, which could lead to combustion leakage in the event of through-wall cracking from the relief radius into an acoustic cavity of the combustion chamber. In order to reliably detect such defects while ensuring minimal false positives during inspection, the Orthogonal Probe Eddy Current (OPEC) system has been developed and an extensive validation study performed. This report describes the validation procedure, sample set, and inspection results, and compares the validation flaws with the response from naturally occurring damage.

  3. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  4. Planck intermediate results: IV. the XMM-Newton validation programme for new Planck galaxy clusters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Delabrouille, J.; Ganga, K.

    2013-01-01

    We present the final results from the XMM-Newton validation follow-up of new Planck galaxy cluster candidates. We observed 15 new candidates, detected with signal-to-noise ratios between 4.0 and 6.1 in the 15.5-month nominal Planck survey. The candidates were selected using ancillary data flags d...

  5. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  6. Evaluation of convergent and discriminant validity of the Russian version of MMPI-2: First results

    Directory of Open Access Journals (Sweden)

    Emma I. Mescheriakova

    2015-06-01

    The paper presents the results of construct validity testing for a new version of the MMPI-2 (Minnesota Multiphasic Personality Inventory), whose restandardization started in 1982 (J.N. Butcher, W.G. Dahlstrom, J.R. Graham, A. Tellegen, B. Kaemmer) and is still going on. The professional community's interest in this new version of the Inventory is determined by its advantage over the previous one: the restructuring of the inventory and the addition of new items offer additional opportunities for psychodiagnostics and personality assessment. The construct validity testing was carried out using three up-to-date techniques, namely the Quality of Life and Satisfaction with Life questionnaire (a short version of Ritsner's instrument adapted by E.I. Rasskazova), Janoff-Bulman's World Assumptions Scale (adapted by O. Kravtsova), and the Character Strengths Assessment questionnaire developed by E. Osin based on Peterson and Seligman's Values in Action Inventory of Strengths. These psychodiagnostic techniques were selected in line with current trends in psychology, such as its orientation to positive phenomena and its interpretation of subjectivity potential as the need for self-determined, self-organized, self-realized and self-controlled behavior and the ability to accomplish it. The procedure of construct validity testing involved respondents from the «norm» group, with the total sample including 205 people (62% female, 32% male). It focused on the MMPI-2 additional and expanded scales (FI, BF, FP, S and K) and six of its ten basic ones (D, Pd, Pa, Pt, Sc, Si). The results obtained confirmed the construct validity of the scales concerned, which allows the MMPI-2 to be applied to examining one's personal potential instead of a set of questionnaires, facilitating, in turn, personality researchers' objectives. The paper discusses the first stage of this construct validity testing, the further stage highlighting the factor

  7. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.
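
    Results from a criticality validation suite of this kind are commonly summarized as calculated-to-experimental (C/E) ratios for k-eff with combined uncertainties. The benchmark names and numbers below are invented, and this is a generic sketch rather than the suite's actual processing.

```python
# Generic C/E summary for criticality benchmarks: each entry carries the
# calculated k-eff with its Monte Carlo uncertainty and the experimental
# benchmark value with its evaluated uncertainty.
import math

benchmarks = [
    # (name, k_calc, sigma_calc, k_exp, sigma_exp) -- invented values
    ("bare-sphere-1", 0.99920, 0.00030, 1.00000, 0.00100),
    ("reflected-2",   1.00210, 0.00040, 1.00000, 0.00300),
]

for name, kc, sc, ke, se in benchmarks:
    ce = kc / ke
    # First-order propagation of the two independent uncertainties.
    sigma_ce = ce * math.sqrt((sc / kc) ** 2 + (se / ke) ** 2)
    within = abs(ce - 1.0) <= 2.0 * sigma_ce
    print(f"{name}: C/E = {ce:.5f} +/- {sigma_ce:.5f}, within 2 sigma: {within}")
```

    A C/E consistent with 1.0 within the combined uncertainty is the usual acceptance statement for a criticality benchmark.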

  8. Transient FDTD simulation validation

    OpenAIRE

    Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando

    2010-01-01

    In computational electromagnetic simulations, most validation methods developed to date operate in the frequency domain. However, frequency-domain EMC analysis of a system is often not enough to evaluate the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing a rapid and objective quantification of the simulation results.
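
    A time-domain comparison of this kind can be quantified in many ways; as a simple sketch (not the paper's actual method), a normalized root-mean-square deviation between measured and simulated transients gives one objective number. The signals below are invented.

```python
# Normalized RMS deviation between two equal-length sampled transients,
# normalized by the measured signal's peak-to-peak span.
import math

def nrmsd(sim, meas):
    n = len(sim)
    rms_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(sim, meas)) / n)
    span = max(meas) - min(meas)
    return rms_err / span

# Invented damped-oscillation transient and a simulation with a 2% amplitude error.
t = [i * 0.01 for i in range(200)]
meas = [math.exp(-ti) * math.sin(10 * ti) for ti in t]
sim  = [0.98 * math.exp(-ti) * math.sin(10 * ti) for ti in t]

print(round(nrmsd(sim, meas), 4))
```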

  9. Validation results of satellite mock-up capturing experiment using nets

    Science.gov (United States)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed-fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine the initial conditions accurately and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly

  10. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    International Nuclear Information System (INIS)

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, requirements on measuring processes demand complete control of the operating process and of the associated quality management system. The aim of this (collective) book is to guide the analyst through the whole measurement validation procedure and to help guarantee mastery of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measurement, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability and data covariance (with the practical determination of the accuracy profile). The book uses the most recent terminology, and numerous examples and illustrations are given to aid understanding and to help in drafting method validation documents.

  11. Cultural adaptation and validation of an instrument on barriers for the use of research results.

    Science.gov (United States)

    Ferreira, Maria Beatriz Guimarães; Haas, Vanderlei José; Dantas, Rosana Aparecida Spadoti; Felix, Márcia Marques Dos Santos; Galvão, Cristina Maria

    2017-03-02

    The objective was to culturally adapt The Barriers to Research Utilization Scale and to analyze the metric validity and reliability properties of its Brazilian Portuguese version. Methodological research was conducted by means of the cultural adaptation process (translation and back-translation), face and content validity, construct validity (dimensionality and known groups) and reliability analysis (internal consistency and test-retest). The sample consisted of 335 nurses, of whom 43 participated in the retest phase. The validity of the adapted version of the instrument was confirmed. The scale investigates the barriers to the use of research results in clinical practice. Confirmatory factor analysis demonstrated that the Brazilian Portuguese version of the instrument adequately fits the dimensional structure the scale authors originally proposed. Statistically significant differences were observed among nurses holding a Master's or Doctoral degree, with characteristics favorable to Evidence-Based Practice, and working at institutions with an organizational culture oriented toward this approach. The reliability showed a strong test-retest correlation (r ranging between 0.77 and 0.84, p<0.001) and the internal consistency was adequate (Cronbach's alpha ranging between 0.77 and 0.82). The Brazilian Portuguese version of The Barriers Scale proved valid and reliable in the group studied.
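
    The internal-consistency statistic reported here, Cronbach's alpha, is a standard computation; the following sketch runs it on invented Likert-type data, not the study's dataset.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals),
# using sample (n-1) variances throughout.

def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three Likert-type items answered by five respondents (invented).
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))   # 0.89
```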

  12. Validation and results of a questionnaire for functional bowel disease in out-patients

    Directory of Open Access Journals (Sweden)

    Skordilis Panagiotis

    2002-05-01

    Full Text Available Abstract Background The aim was to evaluate and validate a bowel disease questionnaire in patients attending an out-patient gastroenterology clinic in Greece. Methods This was a prospective study. Diagnosis was based on detailed clinical and laboratory evaluation. The questionnaire was tested on a pilot group of patients. An interviewer-administration technique was used. One hundred and forty consecutive patients attending the out-patient clinic for the first time and fifty randomly selected healthy controls participated in the study. Reliability (kappa statistics) and validity of the questionnaire were tested. We used logistic regression models and binary recursive partitioning to assess the ability to distinguish among irritable bowel syndrome (IBS), functional dyspepsia and organic disease patients. Results Mean time for questionnaire completion was 18 min. In the test-retest procedure a good agreement was obtained (kappa statistic 0.82). There were 55 patients diagnosed as having IBS, 18 with functional dyspepsia (Rome I criteria), and 38 with organic disease. Location of pain was a significant distinguishing factor, patients with functional dyspepsia having no lower abdominal pain (p Conclusions This questionnaire for functional bowel disease is a valid and reliable instrument that can distinguish satisfactorily between organic and functional disease in an out-patient setting.
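The test-retest agreement quoted above (kappa = 0.82) is Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. A small self-contained sketch, not the study's actual code:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two categorical ratings of the same subjects."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)  # observed agreement
    # chance agreement: product of marginal category frequencies
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1.0 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; values above roughly 0.8, as here, are usually read as very good test-retest reliability.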

  13. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
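The manufactured-solution approach to code verification recommended above can be illustrated on a toy solver: choose an exact solution, derive the forcing term it implies, and confirm that the observed convergence order matches the scheme's formal order. A sketch under simple assumptions (1-D Poisson problem with second-order finite differences; none of this is from the paper):

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite-difference solve of u'' = f on [0,1], u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured solution u = sin(pi x)  =>  forcing f = u'' = -pi^2 sin(pi x)
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

errors = []
for n in (15, 31, 63):        # grid spacing halves each time
    x, u = solve_poisson(f, n)
    errors.append(np.max(np.abs(u - u_exact(x))))

# observed order of accuracy should approach the scheme's formal order of 2
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

If the observed order falls short of the formal order, the discretization or its implementation contains an error; that is the essence of a code verification benchmark based on manufactured solutions.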

  14. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  15. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  16. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.
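Eigenvalue comparisons of this kind are conventionally quoted as a reactivity difference in pcm. A hypothetical helper using the standard delta-rho conversion; this is illustrative, not code from the Shift project:

```python
def reactivity_diff_pcm(k_calc, k_meas):
    """Difference between calculated and measured multiplication factors,
    expressed as reactivity in pcm (1e-5 delta-k/k):
    rho_diff = (1/k_meas - 1/k_calc) = (k_calc - k_meas) / (k_calc * k_meas)."""
    return 1.0e5 * (k_calc - k_meas) / (k_calc * k_meas)
```

A calculated k-effective of 1.001 against a measured 1.000 corresponds to a discrepancy of roughly 100 pcm.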

  17. Results and validity of renal blood flow measurements using Xenon 133

    International Nuclear Information System (INIS)

    Serres, P.; Danet, B.; Guiraud, R.; Durand, D.; Ader, J.L.

    1975-01-01

    The renal blood flow was measured by external recording of the xenon-133 excretion curve. The study involved 45 patients with permanent high blood pressure and 7 transplant patients. The validity of the method was checked on 10 dogs. From the results it appears that the cortical blood flow, its fraction and the mean flow rate are the parameters most representative of renal haemodynamics, from which the repercussions of blood pressure on kidney vascularisation may be established. Experiments are in progress on animals to check the compartment concept by comparing injections into the renal artery and into various kidney tissues in situ

  18. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
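One simple way to score stochastic realizations against validation data, in the spirit of the acceptance measures described above (though not the paper's specific five metrics), is the fraction of realizations whose misfit to the field data stays under an acceptance tolerance:

```python
import numpy as np

def acceptable_fraction(realizations, observed, tol):
    """Fraction of stochastic model realizations whose RMSE against the
    validation data falls at or below an acceptance tolerance.

    realizations: (n_realizations, n_observations) array of model outputs
    observed:     (n_observations,) array of field validation data
    tol:          acceptance threshold on RMSE (same units as the data)
    """
    realizations = np.asarray(realizations, dtype=float)
    rmse = np.sqrt(((realizations - observed) ** 2).mean(axis=1))
    return float((rmse <= tol).mean())
```

In a hierarchical scheme like the one proposed, such a fraction would be one input to the decision tree that judges whether enough realizations conform to the validation data.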

  19. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 that documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item for relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
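The CVI procedure described above reduces to a small computation: each item's CVI is the proportion of experts rating it relevant (3 or 4 on the 4-point scale), and items above the 0.75 cutoff are retained. A sketch with illustrative names, assuming ratings are collected per item:

```python
def item_cvi(ratings, relevant_levels=(3, 4)):
    """Item-level CVI: proportion of experts rating the item 3 or 4
    on a 4-point relevance scale."""
    return sum(r in relevant_levels for r in ratings) / len(ratings)

def retain_items(expert_ratings, threshold=0.75):
    """Indices of items whose CVI exceeds the cutoff (0.75 in the study)."""
    return [i for i, ratings in enumerate(expert_ratings)
            if item_cvi(ratings) > threshold]
```

With, say, four experts, an item rated (4, 4, 3, 4) has CVI 1.0 and survives, while one rated (3, 3, 4, 2) has CVI exactly 0.75 and is discarded under a strict cutoff.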

  20. Furthering our Understanding of Land Surface Interactions using SVAT modelling: Results from SimSphere's Validation

    Science.gov (United States)

    North, Matt; Petropoulos, George; Ireland, Gareth; Rendal, Daisy; Carlson, Toby

    2015-04-01

    With currently predicted climate change, there is an increased requirement to gain knowledge of the terrestrial biosphere for numerous agricultural, hydrological and meteorological applications. To this end, Soil Vegetation Atmospheric Transfer (SVAT) models are quickly becoming the preferred scientific tool to monitor, at fine temporal and spatial resolutions, detailed information on numerous parameters associated with Earth system interactions. Validation of any model is critical to assess its accuracy, generality and realism in distinctive ecosystems, and subsequently acts as an important step before its operational distribution. In this study, the SimSphere SVAT model was validated at fifteen different sites of the FLUXNET network, where model performance was statistically evaluated by directly comparing model predictions against in situ data, for cloud-free days with a high energy balance closure. Specific focus is given to the model's ability to simulate parameters associated with the energy balance, namely Shortwave Incoming Solar Radiation (Rg), Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H), Air Temperature at 1.3m (Tair 1.3m) and Air Temperature at 50m (Tair 50m). Comparisons were performed for a number of distinctive ecosystem types and for 150 days in total, using in situ data from ground observational networks acquired from the year 2011 alone. The model's coherence with reality was evaluated on the basis of a series of statistical parameters including RMSD, R2, Scatter, Bias, MAE, NASH index, Slope and Intercept. Results showed good to very good agreement between predicted and observed datasets, particularly so for LE, H, Tair 1.3m and Tair 50m, where mean error distribution values indicated excellent model performance. Due to systematic underestimation, poorer simulation accuracies were exhibited for Rg and Rnet, yet all values reported are still analogous to other validatory studies of this kind. Overall, the model
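The evaluation statistics listed above (RMSD, bias, Nash index, slope, intercept, R2) can all be computed from paired predicted/observed series. A NumPy sketch, assuming the "NASH index" is the common Nash-Sutcliffe efficiency (the study may define its variants slightly differently):

```python
import numpy as np

def validation_stats(pred, obs):
    """Standard agreement statistics between model predictions and
    in situ observations."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(pred - obs)
    rmsd = np.sqrt(np.mean((pred - obs) ** 2))
    # Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = no better than the mean
    nash = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    slope, intercept = np.polyfit(obs, pred, 1)   # obs-vs-pred regression line
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    return {"bias": bias, "RMSD": rmsd, "NASH": nash,
            "slope": slope, "intercept": intercept, "R2": r2}
```

A perfect model would give bias 0, RMSD 0, NASH 1, slope 1 and intercept 0; a systematic underestimation like the one reported for Rg and Rnet shows up as a negative bias and a slope below 1.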

  1. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the German Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across the three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62 - 0.64; disattenuated r = 0.78 - 0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001 - 0.005). The average absolute differences in the magnitudes of the correlations produced by single-item measures and the SWLS were very small (average absolute difference = 0.015 - 0.042). Conclusions Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827
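The disattenuated correlations reported above come from Spearman's correction for attenuation, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A one-line sketch; the example numbers below are illustrative, not the study's:

```python
def disattenuated_r(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between two constructs after removing measurement unreliability.

    r_xy:  observed correlation between measures x and y
    rel_x: reliability of measure x (e.g. Cronbach's alpha)
    rel_y: reliability of measure y
    """
    return r_xy / (rel_x * rel_y) ** 0.5
```

For example, an observed r of 0.5 between two measures each with reliability 0.8 implies a true-score correlation of about 0.625; the same mechanism lifts the zero-order r of 0.62-0.64 above to the disattenuated 0.78-0.80.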

  2. Lesson 6: Signature Validation

    Science.gov (United States)

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  3. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    Science.gov (United States)

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and for the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
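The decomposition used here follows the unsteady Bernoulli equation, in which the transmitral pressure difference is a convective term plus the inertial term M dv/dt. A rough sketch of the split; the density value, units, and the treatment of the inertance M are illustrative assumptions, not the paper's method:

```python
import numpy as np

RHO_BLOOD = 1060.0  # blood density in kg/m^3 (typical textbook value)

def bernoulli_terms(t, v, M):
    """Split a transmitral pressure difference into the convective
    (1/2 * rho * v^2) and inertial (M * dv/dt) terms of the unsteady
    Bernoulli equation.

    t: time samples in s, v: Doppler velocity in m/s,
    M: effective inertance (illustrative units, here kg/m so that
       M * dv/dt is in Pa).  Returns (convective, inertial) arrays in Pa.
    """
    v = np.asarray(v, dtype=float)
    convective = 0.5 * RHO_BLOOD * v ** 2
    inertial = M * np.gradient(v, t)   # finite-difference dv/dt
    return convective, inertial
```

During rapid acceleration of flow, dv/dt is large while v is still small, which is why the inertial term can dominate the instantaneous gradient, consistent with the roughly 74% inertial share reported above.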

  4. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions. The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included

  5. The Chimera of Validity

    Science.gov (United States)

    Baker, Eva L.

    2013-01-01

    Background/Context: Education policy over the past 40 years has focused on the importance of accountability in school improvement. Although much of the scholarly discourse around testing and assessment is technical and statistical, understanding of validity by a non-specialist audience is essential as long as test results drive our educational…

  6. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed and of the result obtained, i.e., the converted system. This document provides an

  7. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time-series forecasting, and modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight ... is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches to validation, especially cross-validation and leverage-corrected validation, should be terminated...

  8. Clinical validation of an epigenetic assay to predict negative histopathological results in repeat prostate biopsies.

    Science.gov (United States)

    Partin, Alan W; Van Neste, Leander; Klein, Eric A; Marks, Leonard S; Gee, Jason R; Troyer, Dean A; Rieger-Christ, Kimberly; Jones, J Stephen; Magi-Galluzzi, Cristina; Mangold, Leslie A; Trock, Bruce J; Lance, Raymond S; Bigley, Joseph W; Van Criekinge, Wim; Epstein, Jonathan I

    2014-10-01

    The DOCUMENT multicenter trial in the United States validated the performance of an epigenetic test as an independent predictor of prostate cancer risk to guide decision making for repeat biopsy. Confirming an increased negative predictive value could help avoid unnecessary repeat biopsies. We evaluated the archived, cancer negative prostate biopsy core tissue samples of 350 subjects from a total of 5 urological centers in the United States. All subjects underwent repeat biopsy within 24 months with a negative (controls) or positive (cases) histopathological result. Centralized blinded pathology evaluation of the 2 biopsy series was performed in all available subjects from each site. Biopsies were epigenetically profiled for GSTP1, APC and RASSF1 relative to the ACTB reference gene using quantitative methylation specific polymerase chain reaction. Predetermined analytical marker cutoffs were used to determine assay performance. Multivariate logistic regression was used to evaluate all risk factors. The epigenetic assay resulted in a negative predictive value of 88% (95% CI 85-91). In multivariate models correcting for age, prostate specific antigen, digital rectal examination, first biopsy histopathological characteristics and race the test proved to be the most significant independent predictor of patient outcome (OR 2.69, 95% CI 1.60-4.51). The DOCUMENT study validated that the epigenetic assay was a significant, independent predictor of prostate cancer detection in a repeat biopsy collected an average of 13 months after an initial negative result. Due to its 88% negative predictive value adding this epigenetic assay to other known risk factors may help decrease unnecessary repeat prostate biopsies. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
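The headline figures above (an NPV of 88% and an odds ratio with confidence interval) follow directly from 2x2 counts. A sketch using the standard formulas (Wald interval for the OR); the example counts are made up, not DOCUMENT data:

```python
import math

def negative_predictive_value(tn, fn):
    """NPV: fraction of assay-negative patients who are truly cancer-free
    on repeat biopsy (true negatives / all negatives)."""
    return tn / (tn + fn)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi
```

An OR whose confidence interval excludes 1, like the 2.69 (95% CI 1.60-4.51) reported above, indicates a statistically significant independent predictor in the multivariate model.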

  9. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  10. Results from the radiometric validation of Sentinel-3 optical sensors using natural targets

    Science.gov (United States)

    Fougnie, Bertrand; Desjardins, Camille; Besson, Bruno; Bruniquel, Véronique; Meskini, Naceur; Nieke, Jens; Bouvet, Marc

    2016-09-01

    The recently launched SENTINEL-3 mission measures sea surface topography, sea/land surface temperature, and ocean/land surface colour with high accuracy. The mission provides data continuity with the ENVISAT mission through acquisitions by multiple sensing instruments. Two of them, OLCI (Ocean and Land Colour Imager) and SLSTR (Sea and Land Surface Temperature Radiometer), are optical sensors designed to provide continuity with Envisat's MERIS and AATSR instruments. During commissioning, in-orbit calibration and validation activities are conducted. The instruments are calibrated and characterized in flight primarily using on-board devices, which include diffusers and a black body. Afterward, vicarious calibration methods are used to validate the OLCI and SLSTR radiometry for the reflective bands. The calibration can be checked over dedicated natural targets such as Rayleigh scattering, sunglint, desert sites, Antarctica, and tentatively deep convective clouds. Tools have been developed and/or adapted (S3ETRAC, MUSCLE) to extract and process Sentinel-3 data. Based on these matchups, it is possible to provide an accurate check of many radiometric aspects such as the absolute and interband calibrations, the trending correction, and the calibration consistency within the field of view; more generally, this will provide an evaluation of the radiometric consistency for various types of targets. Another important aspect will be the checking of cross-calibration against many other instruments such as MERIS and AATSR (bridge between ENVISAT and Sentinel-3), MODIS (bridge to the GSICS radiometric standard), as well as Sentinel-2 (bridge between Sentinel missions). The early results, based on the available OLCI and SLSTR data, will be presented and discussed.

  11. Validating a dance-specific screening test for balance: preliminary results from multisite testing.

    Science.gov (United States)

    Batson, Glenna

    2010-09-01

    Few dance-specific screening tools adequately capture balance. The aim of this study was to administer and modify the Star Excursion Balance Test (oSEBT) to examine its utility as a balance screen for dancers. The oSEBT involves standing on one leg while lightly targeting with the opposite foot to the farthest distance along eight spokes of a star-shaped grid. This task simulates dance in the spatial pattern and movement quality of the gesturing limb. The oSEBT was validated for distance on athletes with a history of ankle sprain. Thirty-three dancers (age 20.1 +/- 1.4 yrs) participated from two contemporary dance conservatories (UK and US), with or without a history of lower extremity injury. Dancers were verbally instructed (without physical demonstration) to execute the oSEBT and four modifications (mSEBT): timed (speed), timed with cognitive interference (answering questions aloud), and sensory disadvantaging (foam mat). Stepping strategies were tracked and performance strategies video-recorded. Unlike the oSEBT results, distances reached were not statistically significant (p = 0.05) or descriptively different (i.e., shorter) for either group. Performance styles varied widely, despite sample homogeneity and instructions to control for strategy. Descriptive analysis of the mSEBT showed an increased number of near-falls and decreased timing on the injured limb. Dancers appeared to employ variable strategies to keep balance during this test. Quantitative analysis is warranted to define balance strategies for further validation of SEBT modifications to determine its utility as a balance screening tool.

  12. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Science.gov (United States)

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  13. The Arabic Scale of Death Anxiety (ASDA): Its Development, Validation, and Results in Three Arab Countries

    Science.gov (United States)

    Abdel-Khalek, Ahmed M.

    2004-01-01

    The Arabic Scale of Death Anxiety (ASDA) was constructed and validated in a sample of undergraduates (17-33 yrs) in 3 Arab countries, Egypt (n = 418), Kuwait (n = 509), and Syria (n = 709). In its final form, the ASDA consists of 20 statements. Each item is answered on a 5-point intensity scale anchored by 1: No, and 5: Very much. Alpha…

  14. Validity in Qualitative Evaluation

    OpenAIRE

    Vasco Lub

    2015-01-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of con...

  15. Quality of life and hormone use: new validation results of MRS scale

    Directory of Open Access Journals (Sweden)

    Heinemann Lothar AJ

    2006-05-01

    Full Text Available Abstract Background The Menopause Rating Scale is a health-related Quality of Life scale developed in the early 1990s and step-by-step validated since then. Recently the MRS scale was validated as outcomes measure for hormone therapy. The suspicion however was expressed that the data were too optimistic due to methodological problems of the study. A new study became available to check how founded this suspicion was. Method An open post-marketing study of 3282 women with pre- and post- treatment data of the self-administered version of the MRS scale was analyzed to evaluate the capacity of the scale to detect hormone treatment related effects with the MRS scale. The main results were then compared with the old study where the interview-based version of the MRS scale was used. Results The hormone-therapy related improvement of complaints relative to the baseline score was about or less than 30% in total or domain scores, whereas it exceeded 30% improvement in the old study. Similarly, the relative improvement after therapy, stratified by the degree of severity at baseline, was lower in the new than in the old study, but had the same slope. Although we cannot exclude different treatment effects with the study method used, this supports our hypothesis that the individual MRS interviews performed by the physician biased the results towards over-estimation of the treatment effects. This hypothesis is underlined by the degree of concordance of physician's assessment and patient's perception of treatment success (MRS results: Sensitivity (correct prediction of the positive assessment by the treating physician of the MRS and specificity (correct prediction of a negative assessment by the physician were lower than the results obtained with the interview-based MRS scale in the previous publication. 
Conclusion The study confirmed evidence for the capacity of the MRS scale to measure treatment effects on quality of life across the full range of severity of
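
The concordance analysis described in this record rests on standard sensitivity and specificity computations. A minimal sketch of that calculation, with made-up data (not the study's), comparing an MRS-based prediction of treatment success against the physician's assessment:

```python
def sensitivity_specificity(predicted, actual):
    """Sensitivity: fraction of actual positives correctly predicted.
    Specificity: fraction of actual negatives correctly predicted."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: MRS-based prediction vs. physician assessment
mrs_positive = [True, True, False, True, False, False, True, False]
physician_positive = [True, True, True, False, False, False, True, False]
sens, spec = sensitivity_specificity(mrs_positive, physician_positive)
```

Lower values of both quantities for the self-administered version, as reported above, mean the self-report predicts the physician's verdict less well than the interview-based version did.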

  16. CIPS Validation Data Plan

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2012-03-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of modern views and capabilities in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, in order to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  17. CIPS Validation Data Plan

    International Nuclear Information System (INIS)

    Dinh, Nam

    2012-01-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of modern views and capabilities in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, in order to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  18. Validating MEDIQUAL Constructs

    Science.gov (United States)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs across different media users in help desk service. In previous research, only two end-user constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, which are based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, the constructs show relatively high correlations between measures of the same construct using different methods and low correlations between measures of constructs that are expected to differ; and 2) the five MEDIQUAL constructs have statistically significant effects on media users' satisfaction in help desk service, as shown by regression analysis.
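
The validation logic described here is the classic multitrait check: correlations between measures of the *same* construct obtained by different methods should be high (convergent validity), while correlations between measures of *different* constructs should be low (discriminant validity). A toy sketch of that comparison on invented ratings (not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two measures of "reliability" (same construct, different methods)
reliability_m1 = [5, 4, 5, 2, 3, 4]
reliability_m2 = [5, 5, 4, 2, 3, 4]
# One measure of "tangibles" (a different construct)
tangibles_m1 = [2, 5, 1, 4, 2, 3]

convergent = pearson(reliability_m1, reliability_m2)   # expect high
discriminant = pearson(reliability_m1, tangibles_m1)   # expect low
```

A construct set passes this check when every convergent correlation clearly exceeds the discriminant ones.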

  19. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ nuclear power plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, a large experimental program and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and melt accumulation in the lower head under different cooling conditions. A new European computer code, ASTEC, is currently being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel corium retention is its validation against LIVE-L1 experimental results. Details of the experiment are reported. Results of the application of ASTEC (module DIVA) to the analysis of the test are presented. (author)

  20. Utilization of paleoclimate results to validate projections of a future greenhouse warming

    International Nuclear Information System (INIS)

    Crowley, T.J.

    1990-01-01

    Paleoclimate data provide a rich source of information for testing projections of future greenhouse trends. This paper summarizes the present state of the art in assessing two important climate problems. (1) Validation of climate models: The same climate models that have been used to make greenhouse forecasts have also been used for paleoclimate simulations. Comparisons of model results and observations indicate some impressive successes but also some cases where there are significant divergences between models and observations. However, special conditions associated with the impressive successes could lead to a false confidence in the models; the disagreements are a topic of greater concern. It remains to be determined whether the disagreements are due to model limitations or to uncertainties in the geologic data. (2) Role of CO2 as a significant climate feedback: Paleoclimate studies indicate that the climate system is generally more sensitive than our ability to model it. Addition or subtraction of CO2 leads to closer agreement between models and observations. In this respect, paleoclimate results in general support the conclusion that CO2 is an important climate feedback, with the magnitude of the feedback approximately comparable to the sensitivity of present climate models. If the CO2 projections are correct, comparison of the future warming with past warm periods indicates that there may be no geologic analogs for a future warming; the future greenhouse climate may represent a unique climate realization in earth history

  1. Development of a validation test for self-reported abstinence from smokeless tobacco products: preliminary results

    International Nuclear Information System (INIS)

    Robertson, J.B.; Bray, J.T.

    1988-01-01

    Using X-ray fluorescence spectrometry, 11 heavy elements at concentrations that are easily detectable have been identified in smokeless tobacco products. These concentrations were found to increase in cheek epithelium samples of the user after exposure to smokeless tobacco. This feasibility study suggests that the level of strontium in the cheek epithelium could be a valid measure of recent smokeless tobacco use. It also demonstrates that strontium levels become undetectable within several days of smokeless tobacco cessation. This absence of strontium could validate a self-report of abstinence from smokeless tobacco. Finally, the X-ray spectrum of heavy metal content of cheek epithelium from smokeless tobacco users could itself provide a visual stimulus to further motivate the user to terminate the use of smokeless tobacco products

  2. Validity of proposed DSM-5 diagnostic criteria for nicotine use disorder: results from 734 Israeli lifetime smokers

    Science.gov (United States)

    Shmulewitz, D.; Wall, M.M.; Aharonovich, E.; Spivak, B.; Weizman, A.; Frisch, A.; Grant, B. F.; Hasin, D.

    2013-01-01

    Background The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) proposes aligning nicotine use disorder (NUD) criteria with those for other substances, by including the current DSM fourth edition (DSM-IV) nicotine dependence (ND) criteria, three abuse criteria (neglect roles, hazardous use, interpersonal problems) and craving. Although NUD criteria indicate one latent trait, evidence is lacking on: (1) validity of each criterion; (2) validity of the criteria as a set; (3) comparative validity between DSM-5 NUD and DSM-IV ND criterion sets; and (4) NUD prevalence. Method Nicotine criteria (DSM-IV ND, abuse and craving) and external validators (e.g. smoking soon after awakening, number of cigarettes per day) were assessed with a structured interview in 734 lifetime smokers from an Israeli household sample. Regression analysis evaluated the association between validators and each criterion. Receiver operating characteristic analysis assessed the association of the validators with the DSM-5 NUD set (number of criteria endorsed) and tested whether DSM-5 or DSM-IV provided the most discriminating criterion set. Changes in prevalence were examined. Results Each DSM-5 NUD criterion was significantly associated with the validators, with strength of associations similar across the criteria. As a set, DSM-5 criteria were significantly associated with the validators, were significantly more discriminating than DSM-IV ND criteria, and led to increased prevalence of binary NUD (two or more criteria) over ND. Conclusions All findings address previous concerns about the DSM-IV nicotine diagnosis and its criteria and support the proposed changes for DSM-5 NUD, which should result in improved diagnosis of nicotine disorders. PMID:23312475
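
The receiver operating characteristic (ROC) analysis used above summarizes how well a criterion count discriminates on an external validator via the area under the curve, which has a simple rank-based (Mann-Whitney) form. A minimal sketch with illustrative values (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Probability that a randomly chosen positive case scores above a
    randomly chosen negative case (ties count half) -- the Mann-Whitney
    formulation of the ROC area under the curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical: number of NUD criteria endorsed, split by an external
# validator (e.g. smoking soon after awakening: yes vs. no)
criteria_validator_pos = [5, 7, 4, 6, 3]
criteria_validator_neg = [1, 2, 0, 3, 2]
discrimination = auc(criteria_validator_pos, criteria_validator_neg)
```

Comparing such AUC values for the DSM-5 and DSM-IV criterion sets against the same validator is one way to test which set is more discriminating, as the study does.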

  3. Apar-T: code, validation, and physical interpretation of particle-in-cell results

    Science.gov (United States)

    Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter

    2013-10-01

    simulations. The other is that the level of electric field fluctuations scales as 1/Λ_PIC ∝ p. We provide a corresponding exact expression, taking into account the finite superparticle size. We confirm both expectations with simulations. Fourth, we compare the Vlasov-Maxwell theory, often used for code benchmarking, to the PIC model. The former describes a phase-space fluid with Λ = +∞ and no correlations, while the PIC plasma features a small Λ and a high level of correlations when compared to a real plasma. These differences have to be kept in mind when interpreting and validating PIC results against the Vlasov-Maxwell theory and when modeling real physical plasmas.

  4. Satisfaction with information provided to Danish cancer patients: validation and survey results.

    Science.gov (United States)

    Ross, Lone; Petersen, Morten Aagaard; Johnsen, Anna Thit; Lundstrøm, Louise Hyldborg; Groenvold, Mogens

    2013-11-01

    To validate five items (CPWQ-inf) regarding satisfaction with information provided to cancer patients from health care staff, assess the prevalence of dissatisfaction with this information, and identify factors predicting dissatisfaction. The questionnaire was validated by patient-observer agreement and cognitive interviews. The prevalence of dissatisfaction was assessed in a cross-sectional sample of all cancer patients in contact with hospitals during the past year in three Danish counties. The validation showed that the CPWQ performed well. Between 3 and 23% of the 1490 participating patients were dissatisfied with each of the measured aspects of information. The highest level of dissatisfaction was reported regarding the guidance, support and help provided when the diagnosis was given. Younger patients were consistently more dissatisfied than older patients. The brief CPWQ performs well for survey purposes. The survey depicts the heterogeneous patient population encountered by hospital staff and showed that younger patients probably had higher expectations or a higher need for information and that those with more severe diagnoses/prognoses require extra care in providing information. Four brief questions can efficiently assess information needs. With increasing demands for information, a wide range of innovative initiatives is needed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  6. THE GLOBAL TANDEM-X DEM: PRODUCTION STATUS AND FIRST VALIDATION RESULTS

    Directory of Open Access Journals (Sweden)

    M. Huber

    2012-07-01

    Full Text Available The TanDEM-X mission will derive a global digital elevation model (DEM) with satellite SAR interferometry. Two radar satellites (TerraSAR-X and TanDEM-X) will map the Earth with an absolute height error of 10 m and a relative height error of 2 m for 90% of the data. In order to fulfill the height requirements, in general two global coverages are acquired and processed. Besides the final TanDEM-X DEM, an intermediate DEM with reduced accuracy is produced after the first coverage is completed. The last step in the whole workflow for generating the TanDEM-X DEM is the calibration of remaining systematic height errors and the merging of single acquisitions into 1°x1° DEM tiles. In this paper the current status of generating the intermediate DEM and first validation results based on GPS tracks, laser scanning DEMs, SRTM data and ICESat points are shown for different test sites.
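
Height requirements stated "for 90% of the data", as above, are commonly evaluated as a 90th-percentile linear error (LE90) of the DEM against reference heights such as ICESat points. A sketch of that check on made-up values (the percentile convention here, linear interpolation, is an assumption):

```python
def le90(dem_heights, ref_heights):
    """90th-percentile absolute height error (linear interpolation)."""
    errors = sorted(abs(d - r) for d, r in zip(dem_heights, ref_heights))
    # fractional index of the 90th percentile
    pos = 0.9 * (len(errors) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(errors):
        return errors[lo] * (1 - frac) + errors[lo + 1] * frac
    return errors[lo]

# Hypothetical DEM vs. reference heights in meters
dem = [102.1, 250.4, 318.9, 77.2, 145.0]
ref = [101.0, 252.0, 318.0, 76.0, 146.5]
error_90 = le90(dem, ref)   # must stay below the 10 m requirement
```

The same statistic computed on height *differences between nearby points* rather than absolute heights gives the relative (point-to-point) error budget.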

  7. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and
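
The metric used above, aggregating spatio-temporal differences between a simulated field and a reference data set into a single root-mean-square error, can be sketched as follows (hypothetical grid values; the actual AMIP diagnostics also involve area weighting, which is omitted here):

```python
import math

def rmse(simulated, reference):
    """Root-mean-square error over flattened field values."""
    assert len(simulated) == len(reference)
    sq = sum((s - r) ** 2 for s, r in zip(simulated, reference))
    return math.sqrt(sq / len(simulated))

# Hypothetical surface air temperatures (K) at a few grid points
model_t = [288.2, 295.1, 270.4, 301.0]
obs_t = [287.5, 296.0, 271.2, 300.1]
model_error = rmse(model_t, obs_t)
```

Computing the same statistic between two alternative observational products, as the subproject does, bounds how much of the model-observation difference could be observational uncertainty rather than model error.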

  8. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented

  9. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  10. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent
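
The statistical step described above, computing ideal-gas entropies from scaled harmonic vibrational frequencies, uses the standard harmonic-oscillator partition function. A sketch with the 0.975 scaling factor applied to a few purely illustrative wavenumbers (not the actual B3LYP/6-31+G(d,p) results for 1-naphthol):

```python
import math

R = 8.314462618          # gas constant, J/(mol*K)
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e10        # speed of light in cm/s (wavenumbers in cm^-1)
KB = 1.380649e-23        # Boltzmann constant, J/K

def vibrational_entropy(wavenumbers_cm, T, scale=0.975):
    """Harmonic-oscillator vibrational entropy, J/(mol*K), with a
    uniform frequency scaling factor applied before evaluation."""
    s = 0.0
    for nu in wavenumbers_cm:
        x = H * C * (scale * nu) / (KB * T)
        # S_i/R = x/(e^x - 1) - ln(1 - e^-x)
        s += x / math.expm1(x) - math.log1p(-math.exp(-x))
    return R * s

# Hypothetical low-lying modes (cm^-1), not the computed spectrum
modes = [180.0, 420.0, 760.0, 1150.0]
s_vib = vibrational_entropy(modes, 298.15)
```

Because scaling frequencies down raises the computed entropy, the scaling factor is exactly the knob that brings the statistical calculation into accord with calorimetric entropies, which is the mutual validation the abstract describes.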

  11. A Mathematical Model for Reactions During Top-Blowing in the AOD Process: Validation and Results

    Science.gov (United States)

    Visuri, Ville-Valtteri; Järvinen, Mika; Kärnä, Aki; Sulasalmi, Petri; Heikkinen, Eetu-Pekka; Kupari, Pentti; Fabritius, Timo

    2017-06-01

    In earlier work, a fundamental mathematical model was proposed for side-blowing operation in the argon oxygen decarburization (AOD) process. In the preceding part "Derivation of the Model," a new mathematical model was proposed for reactions during top-blowing in the AOD process. In this model it was assumed that reactions occur simultaneously at the surface of the cavity caused by the gas jet and at the surface of the metal droplets ejected from the metal bath. This paper presents validation and preliminary results with twelve industrial heats. In the studied heats, the last combined-blowing stage was altered so that oxygen was introduced from the top lance only. Four heats were conducted using an oxygen-nitrogen mixture (1:1), while eight heats were conducted with pure oxygen. Simultaneously, nitrogen or argon gas was blown via tuyères in order to provide mixing that is comparable to regular practice. The measured carbon content varied from 0.4 to 0.5 wt pct before the studied stage to 0.1 to 0.2 wt pct after the studied stage. The results suggest that the model is capable of predicting changes in metal bath composition and temperature with a reasonably high degree of accuracy. The calculations indicate that the top slag may supply oxygen for decarburization during top-blowing. Furthermore, it is postulated that the metal droplets generated by the shear stress of top-blowing create a large mass exchange area, which plays an important role in enabling the high decarburization rates observed during top-blowing in the AOD process. The overall rate of decarburization attributable to top-blowing in the last combined-blowing stage was found to be limited by the mass transfer of dissolved carbon.

  12. Hospital blood bank information systems accurately reflect patient transfusion: results of a validation study.

    Science.gov (United States)

    McQuilten, Zoe K; Schembri, Nikita; Polizzotto, Mark N; Akers, Christine; Wills, Melissa; Cole-Sinclair, Merrole F; Whitehead, Susan; Wood, Erica M; Phillips, Louise E

    2011-05-01

    Hospital transfusion laboratories collect information regarding blood transfusion and some registries gather clinical outcomes data without transfusion information, providing an opportunity to integrate these two sources to explore effects of transfusion on clinical outcomes. However, the use of laboratory information system (LIS) data for this purpose has not been validated previously. Validation of LIS data against individual patient records was undertaken at two major centers. Data regarding all transfusion episodes were analyzed over seven 24-hour periods. Data regarding 596 units were captured including 399 red blood cell (RBC), 95 platelet (PLT), 72 plasma, and 30 cryoprecipitate units. They were issued to: inpatient 221 (37.1%), intensive care 109 (18.3%), outpatient 95 (15.9%), operating theater 45 (7.6%), emergency department 27 (4.5%), and unrecorded 99 (16.6%). All products recorded by LIS as issued were documented as transfused to intended patients. Median time from issue to transfusion initiation could be calculated for 535 (89.8%) components: RBCs 16 minutes (95% confidence interval [CI], 15-18 min; interquartile range [IQR], 7-30 min), PLTs 20 minutes (95% CI, 15-22 min; IQR, 10-37 min), fresh-frozen plasma 33 minutes (95% CI, 14-83 min; IQR, 11-134 min), and cryoprecipitate 3 minutes (95% CI, -10 to 42 min; IQR, -15 to 116 min). Across a range of blood component types and destinations comparison of LIS data with clinical records demonstrated concordance. The difference between LIS timing data and patient clinical records reflects expected time to transport, check, and prepare transfusion but does not affect the validity of linkage for most research purposes. Linkage of clinical registries with LIS data can therefore provide robust information regarding individual patient transfusion. This enables analysis of joint data sets to determine the impact of transfusion on clinical outcomes. © 2010 American Association of Blood Banks.
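
The timing statistics reported here (medians with interquartile ranges) can be reproduced for any linked issue-to-transfusion data set with a simple percentile routine. A sketch on hypothetical delays (not the study's data; the linear-interpolation percentile convention is an assumption):

```python
def percentile(values, p):
    """Linear-interpolation percentile, p in [0, 100]."""
    xs = sorted(values)
    pos = p / 100 * (len(xs) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(xs):
        return xs[lo] * (1 - frac) + xs[lo + 1] * frac
    return xs[lo]

# Hypothetical issue-to-transfusion delays (minutes) for RBC units
delays = [7, 9, 12, 15, 16, 18, 22, 30, 41]
median = percentile(delays, 50)
iqr = (percentile(delays, 25), percentile(delays, 75))
```

Negative delays, as seen in the cryoprecipitate IQR above, would simply indicate clinical documentation timestamps preceding the LIS issue timestamp.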

  13. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the German Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistently across the three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
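
The disattenuated correlations reported above correct the observed correlation for measurement unreliability via the classic Spearman formula r_true = r_xy / sqrt(r_xx * r_yy). A quick sketch (the reliability values here are illustrative, not the study's):

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman correction for attenuation: observed correlation divided
    by the geometric mean of the two measures' reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed r = 0.63 between a single-item measure and the SWLS;
# hypothetical reliabilities of 0.70 (single item) and 0.90 (SWLS)
r_corrected = disattenuate(0.63, 0.70, 0.90)
```

The correction shows why a modest zero-order correlation can still imply near-equivalence of the underlying constructs once the single item's lower reliability is accounted for.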

  14. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1995-01-01

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  15. Effect of Changes in Prolactin RIA Reactants on the Validity of the Results

    International Nuclear Information System (INIS)

    Ahmed, A.M.; Megahed, Y.M.; El Mosallamy, M.A.F.; El-Khoshnia, R.A.M.

    1998-01-01

    Human prolactin plays an essential role in the secretion of milk and has the ability to suppress gonadal function. This study is a trial to examine some technical problems introduced by the operator in the RIA technique and to select optimized, reliable and valid parameters for the measurement of prolactin concentration in human sera. Prolactin concentration was measured in a normal control group and a chronic renal failure group using the optimized technique. The optimized technique presented here is well suited for the measurement of prolactin

  16. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  17. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  18. Validity of the Framingham point scores in the elderly: results from the Rotterdam study.

    Science.gov (United States)

    Koller, Michael T; Steyerberg, Ewout W; Wolbers, Marcel; Stijnen, Theo; Bucher, Heiner C; Hunink, M G Myriam; Witteman, Jacqueline C M

    2007-07-01

    The National Cholesterol Education Program recommends assessing 10-year risk of coronary heart disease (CHD) in individuals free of established CHD with the Framingham Point Scores (FPS). Individuals with a risk >20% are classified as high risk and are candidates for preventive intervention. We aimed to validate the FPS in a European population of elderly subjects. Subjects free of established CHD at baseline were selected from the Rotterdam study, a population-based cohort of subjects 55 years or older in The Netherlands. We studied calibration, discrimination (c-index), and the accuracy of high-risk classifications. Events consisted of fatal CHD and nonfatal myocardial infarction. Among 6795 subjects, 463 died because of CHD and 336 had nonfatal myocardial infarction. Predicted 10-year risk of CHD was on average well calibrated for women (9.9% observed vs 10.1% predicted) but showed substantial overestimation in men (14.3% observed vs 19.8% predicted), particularly with increasing age. This resulted in a substantial number of false-positive classifications (specificity 70%) in men. Discrimination of the FPS was better in women than in men (c-index 0.73 vs 0.63). However, because of the low baseline risk of CHD and limited discriminatory power, only 33% of all CHD events occurred in women classified as high risk. The FPS need recalibration for elderly men, with better incorporation of the effect of age. In elderly women, the FPS perform reasonably well. However, maintaining the rationale of the high-risk threshold requires better-performing models for a population with a low incidence of CHD.
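The two headline statistics in this record, calibration (observed vs predicted event rate) and discrimination (c-index), are straightforward to compute. A minimal pure-Python sketch on hypothetical data (not the Rotterdam cohort):

```python
def c_index(risks, events):
    """Concordance index: fraction of (event, non-event) pairs in which
    the subject who had the event received the higher predicted risk."""
    concordant, ties, pairs = 0, 0, 0
    for r_ev, e_ev in zip(risks, events):
        if e_ev != 1:
            continue
        for r_no, e_no in zip(risks, events):
            if e_no != 0:
                continue
            pairs += 1
            if r_ev > r_no:
                concordant += 1
            elif r_ev == r_no:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

def calibration(risks, events):
    """Observed event rate vs mean predicted risk."""
    return sum(events) / len(events), sum(risks) / len(risks)

# Hypothetical cohort: predicted 10-year risks and observed CHD events
risks = [0.05, 0.10, 0.15, 0.25, 0.30, 0.40]
events = [0, 0, 1, 0, 1, 1]
print(c_index(risks, events))        # ≈ 0.89
print(calibration(risks, events))    # observed 0.5 vs predicted ≈ 0.21
```

A calibration gap like the toy one above (predicted well below observed, or vice versa) is exactly what the authors report for elderly men, where the FPS overestimated risk.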

  19. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    Science.gov (United States)

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient exams, 12.9% for inpatient exams, and 39.4% for disability exams. Failure rates were highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and with the frequency and number of SPVTs used. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. The frequency of use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.

  20. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  1. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
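The augmented-Kalman-filter idea reported here, treating the unknown force as an extra random-walk state estimated from strain-proportional measurements, can be sketched for a single-degree-of-freedom mass-spring-damper. All parameters below are hypothetical stand-ins for illustration; the paper's actual model of the TBM attachment system is far more detailed:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Hypothetical 1-DOF mass-spring-damper standing in for the module on its
# attachment system.  The unknown force F is appended to the state as a
# random-walk component (the "augmented" part of the augmented Kalman
# filter) and reconstructed from a strain-like displacement measurement.
m, k, c, dt = 1.0, 100.0, 2.0, 1e-3
A = [[1.0,         dt,               0.0],
     [-dt * k / m, 1.0 - dt * c / m, dt / m],
     [0.0,         0.0,              1.0]]
Q = [[1e-8, 0.0, 0.0], [0.0, 1e-8, 0.0], [0.0, 0.0, 1e-4]]
R = 1e-6                                  # measurement noise variance

x = [[0.0], [0.0], [0.0]]                 # estimate: position, velocity, force
P = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 10.0]]

pos, vel, F_true = 0.0, 0.0, 5.0          # simulated truth, constant 5 N force
for _ in range(4000):
    pos, vel = pos + dt * vel, vel + dt * (F_true - k * pos - c * vel) / m
    y = pos                               # noiseless strain-proportional reading

    # Predict
    x = mat_mul(A, x)
    P = mat_add(mat_mul(mat_mul(A, P), transpose(A)), Q)

    # Update (H = [1, 0, 0]; scalar measurement, so the innovation
    # covariance S is a scalar and no matrix inverse is needed)
    S = P[0][0] + R
    K = [P[i][0] / S for i in range(3)]
    innov = y - x[0][0]
    x = [[x[i][0] + K[i] * innov] for i in range(3)]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]

print(round(x[2][0], 1))                  # reconstructed force, close to 5.0
```

As the abstract notes, the quality of such a reconstruction hinges on how accurately the identified model (here the matrix A) matches the real structure.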

  2. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues, and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  3. Validity in Qualitative Evaluation

    Directory of Open Access Journals (Sweden)

    Vasco Lub

    2015-12-01

    This article provides a discussion of the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often a subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and by Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of connecting them with aspects of evaluation in social policy. It argues that different purposes of qualitative evaluations can be linked with different scientific paradigms and perspectives, thus transcending unproductive paradigmatic divisions as well as providing a flexible yet rigorous validity framework for researchers and reviewers of qualitative evaluations.

  4. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
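The exact leave-one-out scheme whose cost motivates LULOO can be sketched in a few lines; the replicated training inside the loop below is precisely what linear unlearning approximates away. A hypothetical one-variable least-squares example (not the paper's neural-network setting):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

def loo_mse(xs, ys):
    """Exact leave-one-out cross-validation: one retraining per held-out
    point.  This replicated training is the cost that LULOO avoids by
    linearly 'unlearning' a single example from the full-data solution."""
    total = 0.0
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)
        total += (ys[i] - (a * xs[i] + b)) ** 2
    return total / len(xs)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]
print(loo_mse(xs, ys))    # mean squared leave-one-out prediction error
```

For n training examples this exact scheme costs n full retrainings, which is what makes a linear approximation attractive for expensive models such as neural networks.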

  5. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  6. Precise orbit determination for quad-constellation satellites at Wuhan University: strategy, result validation, and comparison

    Science.gov (United States)

    Guo, Jing; Xu, Xiaolong; Zhao, Qile; Liu, Jingnan

    2016-02-01

    This contribution summarizes the strategy used by Wuhan University (WHU) to determine precise orbit and clock products for the Multi-GNSS Experiment (MGEX) of the International GNSS Service (IGS). In particular, the satellite attitude, phase center corrections, and solar radiation pressure model developed and used for BDS satellites are addressed. In addition, this contribution analyzes the orbit and clock quality of the quad-constellation products from MGEX Analysis Centers (ACs) for a common time period of 1 year (2014). With IGS final GPS and GLONASS products as the reference, the Multi-GNSS products of WHU (indicated by WUM) show the best agreement among the products from all MGEX ACs in both accuracy and stability. 3D Day Boundary Discontinuities (DBDs) range from 8 to 27 cm for Galileo-IOV satellites among all ACs' products, with the WUM ones the largest (about 26.2 cm). Among the three types of BDS satellites, MEOs show the smallest DBDs, from 10 to 27 cm, whereas the DBDs for all ACs' products are at the decimeter-to-meter level for GEOs and one to three decimeters for IGSOs. As for the satellite laser ranging (SLR) validation of Galileo-IOV satellites, the accuracy evaluated by SLR residuals is at the one-decimeter level, with the well-known systematic bias of about -5 cm for all ACs. For BDS satellites, the accuracy reaches the decimeter, one-decimeter, and centimeter level for GEOs, IGSOs, and MEOs, respectively. However, there is a noticeable bias in the GEO SLR residuals. In addition, systematic errors dependent on orbit angle, related to mismodeled solar radiation pressure (SRP), are present for BDS GEOs and IGSOs. The results of Multi-GNSS combined kinematic PPP demonstrate that the best position accuracy and the fastest convergence are achieved using WUM products, particularly in the Up direction. Furthermore, the accuracy of static BDS-only PPP degrades when the BDS IGSO and MEO satellites switch to orbit-normal orientation.

  7. [Critical reading of articles about diagnostic tests (part I): Are the results of the study valid?].

    Science.gov (United States)

    Arana, E

    2015-01-01

    In the era of evidence-based medicine, one of the most important skills a radiologist should have is the ability to analyze the diagnostic literature critically. This tutorial aims to present guidelines for determining whether primary diagnostic articles are valid for clinical practice. The following elements should be evaluated: whether the study can be applied to clinical practice, whether the technique was compared to the reference test, whether an appropriate spectrum of patients was included, whether expectation bias and verification bias were limited, the reproducibility of the study, the practical consequences of the study, the confidence intervals for the parameters analyzed, the normal range for continuous variables, and the placement of the test in the context of other diagnostic tests. We use elementary practical examples to illustrate how to select and interpret the literature on diagnostic imaging and specific references to provide more details. Copyright © 2014 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  8. Computer-aided test selection and result validation-opportunities and pitfalls

    DEFF Research Database (Denmark)

    McNair, P; Brender, J; Talmon, J

    1998-01-01

    Dynamic test scheduling is concerned with pre-analytical preprocessing of the individual samples within a clinical laboratory production by means of decision algorithms. The purpose of such scheduling is to provide maximal information with minimal data production (to avoid data pollution and/or to increase cost-efficiency). Our experience shows that there is a practical limit to the extent of exploitation of the principle of dynamic test scheduling, unless it is automated in one way or the other. This paper analyses some issues of concern related to the profession of clinical biochemistry when implementing such dynamic test scheduling within a Laboratory Information System (and/or an advanced analytical workstation). The challenge is related to 1) generation of appropriately validated decision models, and 2) mastering consequences of analytical imprecision and bias.

  9. The development, validation and initial results of an integrated model for determining the environmental sustainability of biogas production pathways

    NARCIS (Netherlands)

    Pierie, Frank; van Someren, Christian; Benders, René M.J.; Bekkering, Jan; van Gemert, Wim; Moll, Henri C.

    2016-01-01

    Biogas produced through Anaerobic Digestion can be seen as a flexible and storable energy carrier. However, the environmental sustainability and efficiency of biogas production is not fully understood. Within this article the use, operation, structure, validation, and results of a model for the

  10. Pooled results from five validation studies of dietary self-report instruments using recovery biomarkers for potassium and sodium intake

    Science.gov (United States)

    We have pooled data from five large validation studies of dietary self-report instruments that used recovery biomarkers as referents to assess food frequency questionnaires (FFQs) and 24-hour recalls. We reported on total potassium and sodium intakes, their densities, and their ratio. Results were...

  11. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. 
A test description is included with each synopsis.

  12. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    The application of games for behavioral change has seen a surge in popularity, but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value, whereas most quantitative findings from randomized controlled trials (RCTs) are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games, we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we apply this model to develop guidelines for setting up validation methods for serious games. This way, we offer game designers and researchers guidance on how to develop tailor-made validation methods.

  13. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components introduces questions of validation and acceptance of the analysis results. We may ask ourselves, "How valid are the answers?" These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter: the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the necessary information to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that rather than asking ourselves how valid, or correct, the analytical predictions are, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  14. The ASCAT soil moisture product. A Review of its specifications, validation results, and emerging applications

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Wolfgang; Hahn, Sebastian; Kidd, Richard [Vienna Univ. of Technology (Austria). Dept. of Geodesy and Geoinformation]; and others

    2013-02-15

    To provide a comprehensive overview of the major characteristics and caveats of the ASCAT soil moisture product, this paper describes the ASCAT instrument and the soil moisture processor and near-real-time distribution service implemented by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). A review of the most recent validation studies shows that the quality of the ASCAT soil moisture product is, with the exception of arid environments, comparable to, and over some regions (e.g. Europe) even better than, currently available soil moisture data derived from passive microwave sensors. Further, a review of application studies shows that the use of the ASCAT soil moisture product is particularly advanced in the fields of numerical weather prediction and hydrologic modelling. Some initial progress can also be noted in other application areas such as yield monitoring, epidemiologic modelling, and societal risk assessment. Considering the generally positive evaluation results, it is expected that the ASCAT soil moisture product will increasingly be used by a growing number of rather diverse land applications. (orig.)

  15. The ASCAT Soil Moisture Product: A Review of its Specifications, Validation Results, and Emerging Applications

    Directory of Open Access Journals (Sweden)

    Wolfgang Wagner

    2013-02-01

    To provide a comprehensive overview of the major characteristics and caveats of the ASCAT soil moisture product, this paper describes the ASCAT instrument and the soil moisture processor and near-real-time distribution service implemented by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). A review of the most recent validation studies shows that the quality of the ASCAT soil moisture product is, with the exception of arid environments, comparable to, and over some regions (e.g. Europe) even better than, currently available soil moisture data derived from passive microwave sensors. Further, a review of application studies shows that the use of the ASCAT soil moisture product is particularly advanced in the fields of numerical weather prediction and hydrologic modelling. Some initial progress can also be noted in other application areas such as yield monitoring, epidemiologic modelling, and societal risk assessment. Considering the generally positive evaluation results, it is expected that the ASCAT soil moisture product will increasingly be used by a growing number of rather diverse land applications.

  16. Validation of EAF-2005 data

    International Nuclear Information System (INIS)

    Kopecky, J.

    2005-01-01

    Validation procedures applied to the EAF-2003 starter file, which led to the production of the EAF-2005 library, are described. The results, in terms of reactions with assigned quality scores in EAF-2005, are given. Further, the extensive validation against recent integral data is discussed, together with the status of the final report 'Validation of EASY-2005 using integral measurements'. Finally, the novel 'cross section trend analysis' is presented with some examples of its use. This action will lead to the release of the improved library EAF-2005.1 at the end of 2005, which shall be used as the starter file for EAF-2007. (author)

  17. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis of experimental neurobiology.

  18. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  19. Design description and validation results for the IFMIF High Flux Test Module as outcome of the EVEDA phase

    Directory of Open Access Journals (Sweden)

    F. Arbeiter

    2016-12-01

    During the Engineering Validation and Engineering Design Activities (EVEDA) phase (2007-2014) of the International Fusion Materials Irradiation Facility (IFMIF), an advanced engineering design of the High Flux Test Module (HFTM) has been developed with the objective of facilitating the controlled irradiation of steel samples in the high-flux area directly behind the IFMIF neutron source. The development process included manufacturing techniques, CAD, and neutronic, thermal-hydraulic and mechanical analyses, complemented by a series of validation activities. Validation included manufacturing of 1:1 parts and mockups, tests of prototypes in the FLEX and HELOKA-LP helium loops of KIT for verification of the thermal and mechanical properties, and irradiation of specimen-filled capsule prototypes in the BR2 test reactor. The prototyping activities were backed by several R&D studies addressing focused issues such as the handling of liquid NaK (as filling medium) and the insertion of Small Specimen Test Technique (SSTT) specimens into the irradiation capsules. This paper provides an up-to-date design description of the HFTM irradiation device and reports on the achieved performance criteria relative to the requirements. Results of the validation activities are accounted for, and the most important issues for further development are identified.

  20. Radionuclide migration in forest ecosystems - results of a model validation study

    International Nuclear Information System (INIS)

    Shaw, G.; Venter, A.; Avila, R.; Bergman, R.; Bulgakov, A.; Calmon, P.; Fesenko, S.; Frissel, M.; Goor, F.; Konoplev, A.; Linkov, I.; Mamikhin, S.; Moberg, L.; Orlov, A.; Rantavaara, A.; Spiridonov, S.; Thiry, Y.

    2005-01-01

    The primary objective of the IAEA's BIOMASS Forest Working Group (FWG) was to bring together experimental radioecologists and modellers to facilitate the exchange of information which could be used to improve our ability to understand and forecast radionuclide transfers within forests. This paper describes a blind model validation exercise which was conducted by the FWG to test nine models which members of the group had developed in response to the need to predict the fate of radiocaesium in forests in Europe after the Chernobyl accident. The outcomes and conclusions of this exercise are summarised. It was concluded that, as a group, the models are capable of providing an envelope of predictions which can be expected to enclose experimental data for radiocaesium contamination in forests over the time scale tested. However, the models are subject to varying degrees of conceptual uncertainty which gives rise to a very high degree of divergence between individual model predictions, particularly when forecasting edible mushroom contamination. Furthermore, the forecasting capability of the models over future decades currently remains untested.

  1. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set, and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments, the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction, only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  2. The dialogic validation

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2005-01-01

    This paper is inspired by dialogism and the title is a paraphrase on Bakhtin's (1981) "The Dialogic Imagination". The paper investigates how dialogism can inform the process of validating inquiry-based qualitative research. The paper stems from a case study on the role of recognition...

  3. A valid licence

    NARCIS (Netherlands)

    Spoolder, H.A.M.; Ingenbleek, P.T.M.

    2010-01-01

    Tuesday, April 20, 2010. Dr Hans Spoolder and Dr Paul Ingenbleek, of Wageningen University and Research Centres, share their thoughts on improving farm animal welfare in Europe. At the presentation of the European Strategy 2020 on 3rd March, President Barroso emphasised the need for

  4. Validation and test report

    DEFF Research Database (Denmark)

    Pedersen, Jens Meldgaard; Andersen, T. Bull

    2012-01-01

    As a consequence of extensive movement artefacts seen during dynamic contractions, the following validation and test report investigates the physiological responses to a static contraction in a standing and a supine position. Eight subjects performed static contractions of the ankle...

  5. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are

  6. Validity and Fairness

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    This paper presents the author's critique on Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…

  7. Validation of administrative and clinical case definitions for gestational diabetes mellitus against laboratory results.

    Science.gov (United States)

    Bowker, S L; Savu, A; Donovan, L E; Johnson, J A; Kaul, P

    2017-06-01

    To examine the validity of International Classification of Disease, version 10 (ICD-10) codes for gestational diabetes mellitus in administrative databases (outpatient and inpatient), and in a clinical perinatal database (Alberta Perinatal Health Program), using laboratory data as the 'gold standard'. Women aged 12-54 years with in-hospital, singleton deliveries between 1 October 2008 and 31 March 2010 in Alberta, Canada were included in the study. A gestational diabetes diagnosis was defined in the laboratory data as ≥2 abnormal values on a 75-g oral glucose tolerance test or a 50-g glucose screen ≥10.3 mmol/l. Of 58 338 pregnancies, 2085 (3.6%) met gestational diabetes criteria based on laboratory data. The gestational diabetes rates in outpatient only, inpatient only, outpatient or inpatient combined, and Alberta Perinatal Health Program databases were 5.2% (3051), 4.8% (2791), 5.8% (3367) and 4.8% (2825), respectively. Although the outpatient or inpatient combined data achieved the highest sensitivity (92%) and specificity (97%), it was associated with a positive predictive value of only 57%. The majority of the false-positives (78%), however, had one abnormal value on oral glucose tolerance test, corresponding to a diagnosis of impaired glucose tolerance in pregnancy. The ICD-10 codes for gestational diabetes in administrative databases, especially when outpatient and inpatient databases are combined, can be used to reliably estimate the burden of the disease at the population level. Because impaired glucose tolerance in pregnancy and gestational diabetes may be managed similarly in clinical practice, impaired glucose tolerance in pregnancy is often coded as gestational diabetes. © 2016 Diabetes UK.
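
    The headline figures in this record (92% sensitivity, 97% specificity, 57% positive predictive value) all come from a single 2x2 table of database codes against the laboratory gold standard. A minimal sketch of how such metrics are computed; the counts below are reconstructed to be consistent with the abstract's totals, not taken from the paper itself:

    ```python
    # Sketch: sensitivity, specificity and positive predictive value (PPV) for a
    # database case definition validated against a laboratory "gold standard".
    # The counts are illustrative, chosen to match the abstract's totals
    # (2085 lab-confirmed cases, 3367 flagged by the combined databases).

    def diagnostic_metrics(tp, fp, fn, tn):
        """Return sensitivity, specificity and PPV from a 2x2 validation table."""
        sensitivity = tp / (tp + fn)   # true cases the codes correctly flag
        specificity = tn / (tn + fp)   # non-cases the codes correctly clear
        ppv = tp / (tp + fp)           # flagged pregnancies that are true cases
        return sensitivity, specificity, ppv

    sens, spec, ppv = diagnostic_metrics(tp=1918, fp=1449, fn=167, tn=54804)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
    ```

    The low PPV despite high sensitivity and specificity illustrates the abstract's point: with a 3.6% prevalence, even a small false-positive rate produces many false positives relative to true cases.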

  8. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. 
The intensive validation activities planned for the first year of the Terra

  9. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)
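
    The abstract mentions corrections for orientation bias in the borehole data. A common correction in fracture mapping (not necessarily the one used in this study) is Terzaghi weighting: fractures nearly parallel to the sampling line are under-sampled, so each observation is up-weighted by the inverse cosine of the angle between the fracture pole and the line, capped to avoid blow-up near 90 degrees. A minimal sketch:

    ```python
    # Terzaghi sampling-bias weight for line-sampled fracture data. The cap
    # (max_weight) is a common practical choice, assumed here, not from the report.

    import math

    def terzaghi_weight(theta_deg, max_weight=10.0):
        """Weight for a fracture whose pole makes angle theta_deg with the borehole axis."""
        c = abs(math.cos(math.radians(theta_deg)))
        return min(1.0 / c, max_weight) if c > 0 else max_weight

    print(terzaghi_weight(0.0))    # fracture perpendicular to the borehole: weight 1
    print(terzaghi_weight(60.0))   # steeply inclined fracture: weight 2
    ```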

  10. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network by switching a linear, series network element, such as a transmission line; the measured harmonic increments are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one: it allows switching a series element that contains a shunt branch. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.
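
    On one reading of the abstract, the first method reduces to forming the ratio of synchronized harmonic increments measured before and after the switching event. A minimal sketch of that increment idea, with invented phasor values:

    ```python
    # Sketch: switching a series element changes the harmonic injection; the ratio
    # of the voltage increment at node B to the current increment at node A gives
    # the transfer harmonic impedance Z_AB(h) for the harmonic order considered.
    # Phasor values below are illustrative, not measurement data from the paper.

    def transfer_impedance(v_b_before, v_b_after, i_a_before, i_a_after):
        """Z_AB(h) = dV_B(h) / dI_A(h) from synchronized pre/post-switching phasors."""
        dv = v_b_after - v_b_before
        di = i_a_after - i_a_before
        return dv / di

    z = transfer_impedance(v_b_before=230 + 5j, v_b_after=228 + 9j,
                           i_a_before=12 - 2j, i_a_after=11 + 1j)
    print(z)  # complex impedance in ohms at this harmonic order
    ```

    The synchronization requirement in the abstract matters here: both increments are complex phasors, so a timing offset between the two measurement points would rotate one phasor relative to the other and corrupt the ratio.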

  11. Flight code validation simulator

    Science.gov (United States)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  12. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking of the compilation of the software by distributed teams of rotating shifters; monitoring of and follow-up on bug reports by the shifter teams; and periodic software-cleaning weeks to further improve the quality of the offline software.

  13. DDML Schema Validation

    Science.gov (United States)

    2016-02-08

    XML schema govern DDML instance documents. For information about XML, refer to RCC 125-15, XML Style Guide. Figure 4 provides an XML snippet of a... we have documented three main types of information. User Stories: A user story describes a specific requirement of the schema in the terms of a... instance document is a schema-valid XML file that completely describes the information in the test case in a manner that satisfies the user story

  14. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in ''American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors,'' ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard

  15. Site characterization and validation

    International Nuclear Information System (INIS)

    Olsson, O.; Eriksson, J.; Falk, L.; Sandberg, E.

    1988-04-01

    The borehole radar investigation program of the SCV-site (Site Characterization and Validation) has comprised single hole reflection measurements with centre frequencies of 22, 45, and 60 MHz. The radar range obtained in the single hole reflection measurements was approximately 100 m for the lower frequency (22 MHz) and about 60 m for the centre frequency 45 MHz. In the crosshole measurements transmitter-receiver separations from 60 to 200 m have been used. The radar investigations have given a three dimensional description of the structure at the SCV-site. A generalized model of the site has been produced which includes three major zones, four minor zones and a circular feature. These features are considered to be the most significant at the site. Smaller features than the ones included in the generalized model certainly exist but no additional features comparable to the three major zones are thought to exist. The results indicate that the zones are not homogeneous but rather that they are highly irregular containing parts of considerably increased fracturing and parts where their contrast to the background rock is quite small. The zones appear to be approximately planar at least at the scale of the site. At a smaller scale the zones can appear quite irregular. (authors)

  16. Congruent Validity of the Rathus Assertiveness Schedule.

    Science.gov (United States)

    Harris, Thomas L.; Brown, Nina W.

    1979-01-01

    The validity of the Rathus Assertiveness Schedule (RAS) was investigated by correlating it with the six Class I scales of the California Psychological Inventory on a sample of undergraduate students. Results supported the validity of the RAS. (JKS)

  17. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  18. Results of a monitoring programme in the environs of Berkeley aimed at collecting Chernobyl data for foodchain model validation

    International Nuclear Information System (INIS)

    Nair, S.; Darley, P.J.; Shaer, J.

    1989-03-01

    The results of a fallout measurement programme which was carried out in the environs of Berkeley Nuclear Laboratory in the United Kingdom following the Chernobyl reactor accident in April 1986 are presented in this report. The programme was aimed at establishing a time-dependent data base of concentrations of Chernobyl fallout radionuclides in selected agricultural products. Results were obtained for milk, grass, silage, soil and wheat over an eighteen month period from May 1986. It is intended to use the data to validate the CEGB's dynamic foodchain model, which is incorporated in the FOODWEB module of the NECTAR environmental code. (author)

  19. Validation of hydropower models in ARISTO [Validering av vattenkraftmodeller i ARISTO]

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master thesis was made to validate hydropower models of a turbine governor, a Kaplan turbine and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model to data from a real hydropower plant. The comparison was made to see how the models could generate simulation result ...

  20. Validation of Evolution 220 [Validering av Evolution 220]

    OpenAIRE

    Krakeli, Tor-Arne

    2013-01-01

    - A new spectrophotometer (Evolution 220, Thermo Scientific) has been purchased for BioLab Nofima. In that connection, a validation has been carried out involving calibration standards from the manufacturer and a test of normal distribution (t-test) on two methods (total phosphorus, tryptophan). This validation found the Evolution 220 to be an acceptable alternative to the spectrophotometer already in use (Helios Beta). On account of some instrument limitations, the relevant an...

  1. Simulation Validation for Societal Systems

    National Research Council Canada - National Science Library

    Yahja, Alex

    2006-01-01

    .... There are however, substantial obstacles to validation. The nature of modeling means that there are implicit model assumptions, a complex model space and interactions, emergent behaviors, and uncodified and inoperable simulation and validation knowledge...

  2. Audit Validation Using Ontologies

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2015-01-01

    Full Text Available Requirements for increasing the quality of audit processes in enterprises are defined. The need to assess and manage audit processes using ontologies is substantiated. Sets of rules, and ways to assess the consistency of rules and of behaviour within the organization, are defined. Using ontologies, qualifications that assess the organization's audit are obtained. Elaboration of audit reports is an algorithm-based activity characterized by generality, determinism, reproducibility and accuracy. The auditors obtain effective levels; through ontologies, the calculated audit level is obtained. Because the audit report is a qualitative structure of information and knowledge, it is very hard to analyze and interpret for different groups of users (shareholders, managers or stakeholders). Developing an ontology for audit report validation will be a useful instrument for both auditors and report users. In this paper we propose an instrument for the validation of audit reports built from a set of keywords from which indicators are calculated (for each keyword there is an indicator), qualitative levels, and an interpreter that builds a table of indicators with actual and calculated levels.

  3. The greek translation of the symptoms rating scale for depression and anxiety: preliminary results of the validation study

    Directory of Open Access Journals (Sweden)

    Gougoulias Kyriakos

    2003-12-01

    Full Text Available Abstract. Background: The aim of the current study was to assess the reliability, validity and psychometric properties of the Greek translation of the Symptoms Rating Scale for Depression and Anxiety (SRSDA). The scale consists of 42 items and permits the calculation of the scores of the Beck Depression Inventory (BDI-21), the BDI-13, the Melancholia Subscale, the Asthenia Subscale, the Anxiety Subscale and the Mania Subscale. Methods: 29 depressed patients (30.48 ± 9.83 years old) and 120 normal controls (27.45 ± 10.85 years old) entered the study. In 20 of them (8 patients and 12 controls) the instrument was re-applied 1-2 days later. Translation and back-translation were made. Clinical diagnosis was reached by consensus of two examiners with the use of the SCAN v.2.0 and the IPDE. The CES-D and ZDRS were used for cross-validation purposes. The statistical analysis included ANOVA, the Spearman correlation coefficient, principal components analysis and the calculation of Cronbach's alpha. Results: The optimal cut-off points were: BDI-21: 14/15, BDI-13: 7/8, Melancholia: 8/9, Asthenia: 9/10, Anxiety: 10/11. Cronbach's alpha ranged between 0.86 and 0.92 for the individual scales; only the Mania subscale had a very low alpha (0.12). The test-retest reliability was excellent for all scales, with Spearman's rho between 0.79 and 0.91. Conclusions: The Greek translation of the SRSDA and the scales that comprise it are both reliable and valid and are suitable for clinical and research use, with satisfactory properties close to those reported in the international literature. However, one should always bear in mind the limitations inherent in the use of self-report scales.
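
    The internal-consistency figures quoted in the Results (alpha between 0.86 and 0.92 for most subscales, 0.12 for Mania) are Cronbach's alpha values. A minimal sketch of the statistic with toy item scores, not the study's data:

    ```python
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    # High alpha means the items of a subscale vary together across respondents.

    def cronbach_alpha(items):
        """items: one inner list per item, each scored over the same respondents."""
        k = len(items)
        n = len(items[0])
        def var(xs):  # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)
        item_vars = sum(var(item) for item in items)
        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - item_vars / var(totals))

    # Three items answered by five respondents; scores track each other closely,
    # so alpha is high, as for the depression subscales reported above.
    alpha = cronbach_alpha([[2, 4, 3, 5, 1], [2, 5, 3, 4, 1], [3, 4, 3, 5, 2]])
    print(round(alpha, 2))
    ```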

  4. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

    The results obtained by simulating three cases from the CANON depressurization experiment, using version 7.6 of the TRAC-PF1 computer code implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first thermal-hydraulics standard problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss of primary coolant accident in pressurized water reactors is evaluated. (M.C.K.)

  5. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes require to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
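
    One simple quantitative method of the kind the author advocates is to score each code by a single error statistic against the experimental data and rank the codes by it. A sketch with invented data; root-mean-square relative error is an assumed choice of figure of merit, not necessarily the method described in the paper:

    ```python
    # Rank competing thermalhydraulic codes by RMS relative error against
    # experiment, instead of the qualitative "agreement is within 10%".
    # Measured values and code predictions below are illustrative.

    import math

    def rms_relative_error(predicted, measured):
        """Root-mean-square of (prediction - measurement)/measurement over all points."""
        return math.sqrt(sum(((p - m) / m) ** 2 for p, m in zip(predicted, measured))
                         / len(measured))

    measured = [100.0, 250.0, 400.0]
    codes = {"code_A": [98.0, 260.0, 390.0],
             "code_B": [110.0, 240.0, 430.0]}

    ranking = sorted(codes, key=lambda name: rms_relative_error(codes[name], measured))
    print(ranking)  # best (lowest error) first
    ```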

  6. Validation of natural language processing to extract breast cancer pathology procedures and results

    Directory of Open Access Journals (Sweden)

    Arika E Wieneke

    2015-01-01

    Full Text Available Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%) and evaluation (324, 10%) purposes, using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal) with 95.5% precision and 94.0% sensitivity, but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of gold standard data on rare findings and the wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance.

  7. Non-invasive transcranial ultrasound therapy based on a 3D CT scan: protocol validation and in vitro results

    International Nuclear Information System (INIS)

    Marquet, F; Pernot, M; Aubry, J-F; Montaldo, G; Tanter, M; Fink, M; Marsac, L

    2009-01-01

    A non-invasive protocol for transcranial brain tissue ablation with ultrasound is studied and validated in vitro. The skull induces strong aberrations both in phase and in amplitude, resulting in a severe degradation of the beam shape. Adaptive corrections of the distortions induced by the skull bone are performed using a previous 3D computational tomography scan acquisition (CT) of the skull bone structure. These CT scan data are used as entry parameters in a FDTD (finite differences time domain) simulation of the full wave propagation equation. A numerical computation is used to deduce the impulse response relating the targeted location and the ultrasound therapeutic array, thus providing a virtual time-reversal mirror. This impulse response is then time-reversed and transmitted experimentally by a therapeutic array positioned exactly in the same referential frame as the one used during CT scan acquisitions. In vitro experiments are conducted on monkey and human skull specimens using an array of 300 transmit elements working at a central frequency of 1 MHz. These experiments show a precise refocusing of the ultrasonic beam at the targeted location with a positioning error lower than 0.7 mm. The complete validation of this transcranial adaptive focusing procedure paves the way to in vivo animal and human transcranial HIFU investigations.

  8. Non-invasive transcranial ultrasound therapy based on a 3D CT scan: protocol validation and in vitro results

    Energy Technology Data Exchange (ETDEWEB)

    Marquet, F; Pernot, M; Aubry, J-F; Montaldo, G; Tanter, M; Fink, M [Laboratoire Ondes et Acoustique, ESPCI, Universite Paris VII, UMR CNRS 7587, 10 rue Vauquelin, 75005 Paris (France); Marsac, L [Supersonic Imagine, Les Jardins de la Duranne, 510 rue Rene Descartes, 13857 Aix-en-Provence (France)], E-mail: fabrice.marquet@espci.org

    2009-05-07

    A non-invasive protocol for transcranial brain tissue ablation with ultrasound is studied and validated in vitro. The skull induces strong aberrations both in phase and in amplitude, resulting in a severe degradation of the beam shape. Adaptive corrections of the distortions induced by the skull bone are performed using a previous 3D computational tomography scan acquisition (CT) of the skull bone structure. These CT scan data are used as entry parameters in a FDTD (finite differences time domain) simulation of the full wave propagation equation. A numerical computation is used to deduce the impulse response relating the targeted location and the ultrasound therapeutic array, thus providing a virtual time-reversal mirror. This impulse response is then time-reversed and transmitted experimentally by a therapeutic array positioned exactly in the same referential frame as the one used during CT scan acquisitions. In vitro experiments are conducted on monkey and human skull specimens using an array of 300 transmit elements working at a central frequency of 1 MHz. These experiments show a precise refocusing of the ultrasonic beam at the targeted location with a positioning error lower than 0.7 mm. The complete validation of this transcranial adaptive focusing procedure paves the way to in vivo animal and human transcranial HIFU investigations.

  9. [Validity of axis III "Conflicts" of Operationalized Psychodynamic Diagnostics (OPD-1)--empirical results and conclusions for OPD-2].

    Science.gov (United States)

    Schneider, Gudrun; Mendler, Till; Heuft, Gereon; Burgmer, Markus

    2008-01-01

    Using specific psychometric instruments, we investigated the criterion-related validity of axis III ('conflicts') of OPD-1 by means of a priori formulated hypotheses concerning the relations to the main conflict/mode. A consecutive sample of 105 psychotherapy inpatients was examined using self-assessment scales (Inventory of Interpersonal Problems; Rosenberg Self-Esteem Scale; Test of Self-Conscious Affect; Toronto Alexithymia Scale; Frankfurt Self-Concept Scales) and videotaped OPD research interviews in the first week after admission to the hospital. Two OPD-certified raters first rated the interviews independently, then in a consensus rating. Due to the different frequencies of the main conflict and mode, evaluation of 4 of the 7 conflicts was possible. The a priori hypotheses could be confirmed for the conflicts Dependence versus Autonomy (both modes), Submission versus Control (active mode), Desire for Care versus Autarchy (active mode), and Self-Value (passive mode). Confirmation of the a priori hypotheses indicates the validity of axis III (Conflicts) of the OPD. We discuss the small numbers of some conflicts, the comparison of the OPD expert rating with self-assessment, and the meaning of the results for OPD-2.

  10. Technicians or patient advocates?--still a valid question (results of focus group discussions with pharmacists)

    DEFF Research Database (Denmark)

    Almarsdóttir, Anna Birna; Morgall, Janine Marie

    1999-01-01

    Focus group discussions with community pharmacists in the capital area Reykjavík and in rural areas were employed to answer the research question: How has the pharmacists' societal role evolved after the legislation, and what are the implications for pharmacy practice? The results showed, firstly, that the public image and the self-image of the pharmacist have changed in the short time since the legislative change. The pharmacists generally said that their patient contact is deteriorating due to the discount wars, with the rural pharmacists being more optimistic and believing in a future competition based on quality. Secondly, the results showed that the pharmacists have difficulties reconciling their technical paradigm with a legislative and professional will specifying customer and patient focus. This study describes the challenges of a new legislation with a market focus for community pharmacists whose education emphasized...

  11. Army Synthetic Validity Project Report of Phase 2 Results. Volume 2. Appendixes

    Science.gov (United States)

    1990-10-01

    ...to Equipment & Food o Personal Hygiene - Field & Garrison (4) o Kitchen Equipment - Garrison o Field Preparation of Foods & Equipment o Food, Field ... o Handling KIA o Personal Hygiene & Preventive Medicine. Personal author(s): Wise, Lauress L. (AIR); Peterson, Norman G.; Houston, Janis (PDRI); Hoffman, R. Gene; Campbell, John ... Numbers in parentheses indicate the number of participants that identified the task as

  12. Out-of-plane buckling of pantographic fabrics in displacement-controlled shear tests: experimental results and model validation

    Science.gov (United States)

    Barchiesi, Emilio; Ganzosch, Gregor; Liebold, Christian; Placidi, Luca; Grygoruk, Roman; Müller, Wolfgang H.

    2018-01-01

    Due to the latest advancements in 3D printing technology and rapid prototyping techniques, the production of materials with complex geometries has become more affordable than ever. Pantographic structures, because of their attractive features both in dynamics and statics and in both elastic and inelastic deformation regimes, deserve to be thoroughly investigated with experimental and theoretical tools. Herein, experimental results of displacement-controlled large-deformation shear loading tests of pantographic structures are reported. In particular, five differently sized samples are analyzed up to first rupture. Results show that the deformation behavior is strongly nonlinear, and the structures are capable of undergoing large elastic deformations without reaching complete failure. Finally, a cutting-edge model is validated by means of these experimental results.

  13. Labtracker+, a medical smartphone app for the interpretation of consecutive laboratory results: an external validation study.

    Science.gov (United States)

    Hilderink, Judith M; Rennenberg, Roger J M W; Vanmolkot, Floris H M; Bekers, Otto; Koopmans, Richard P; Meex, Steven J R

    2017-09-01

    When monitoring patients over time, clinicians may struggle to distinguish 'real changes' in consecutive blood parameters from so-called natural fluctuations. In practice, they have to do so by relying on their clinical experience and intuition. We developed Labtracker+, a medical app that calculates the probability that an increase or decrease over time in a specific blood parameter is real, given the time between measurements. We presented patient cases to 135 participants to examine whether there is a difference between medical students, residents and experienced clinicians when it comes to interpreting changes between consecutive laboratory results. Participants were asked to interpret whether changes in consecutive laboratory values were likely to be 'real' or rather due to natural fluctuations. The answers of the study participants were compared with the probabilities calculated by the app Labtracker+, and the concordance rates were assessed. Participants were medical students (n=92), medical residents from the department of internal medicine (n=19) and internists (n=24) at a Dutch university medical centre. Concordance rates between the study participants and the probabilities calculated by the app Labtracker+ were compared; in addition, we tested whether physicians with clinical experience scored better concordance rates with the app Labtracker+ than inexperienced clinicians. Medical residents and internists showed significantly better concordance rates with the probabilities calculated by the app Labtracker+ than medical students regarding their interpretation of differences between consecutive laboratory results (p=0.009 and p<0.001, respectively). The app Labtracker+ could serve as a clinical decision tool in the interpretation of consecutive laboratory test results and could contribute to rapid recognition of parameter changes by physicians.
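
    The abstract does not describe Labtracker+'s algorithm. A standard way to decide whether a difference between consecutive results exceeds natural fluctuation is the reference change value, built from analytical (CVa) and within-subject biological (CVi) variation. The sketch below is that generic approach, not the app's implementation, and the analyte values and CVs are illustrative:

    ```python
    # Generic reference-change-value sketch: the difference of two results has
    # coefficient of variation sqrt(2*(CVa^2 + CVi^2)); standardizing the observed
    # change against it gives a probability that the change is not mere noise.

    import math

    def prob_real_change(x1, x2, cv_a, cv_i):
        """Probability that the change x1 -> x2 exceeds analytical+biological noise."""
        cv_total = math.sqrt(2.0 * (cv_a ** 2 + cv_i ** 2))  # CV of a difference
        mean = (x1 + x2) / 2.0
        z = abs(x2 - x1) / (cv_total * mean)                 # standardized change
        return math.erf(z / math.sqrt(2.0))                  # = 2*Phi(z) - 1

    # Creatinine rising from 80 to 95 umol/L with CVa = 3%, CVi = 5% (illustrative):
    p = prob_real_change(80.0, 95.0, cv_a=0.03, cv_i=0.05)
    print(f"{p:.2f}")
    ```

    Note that this simple form ignores the time between measurements, which the abstract says the app does take into account; modelling that would require assumptions the record does not support.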

  14. Translation and validation of the Convergence Insufficiency Symptom Survey (CISS) to Portuguese - psychometric results

    Directory of Open Access Journals (Sweden)

    Catarina Tavares

    2014-01-01

    Full Text Available Purpose: To translate and adapt the Convergence Insufficiency Symptom Survey (CISS) questionnaire to the Portuguese language and culture and to assess the psychometric properties of the translated questionnaire (CISSvp). Methods: The CISS questionnaire was adapted according to the methodology recommended by several authors. The process involved two translations and back-translations performed by independent evaluators, evaluation of these versions, preparation of a synthesis version, and its pre-test. The final version (CISSvp) was applied in 70 patients (21.79 ± 2.42 years), students in higher education, at two different times and by two observers, to assess its reliability. Results: The results showed good internal consistency of the CISSvp (Cronbach's alpha α=0.893). The test-retest revealed an average difference between the first and second evaluations of 0.75 points (SD ± 3.53), which indicates minimal bias between the two administrations. The inter-rater reliability, assessed by the intraclass correlation coefficient, ranged from 0.880 to 0.952, revealing that the CISSvp represents an appropriate tool for measuring the visual discomfort associated with near-vision tasks, with a high level of reproducibility. Conclusions: The Portuguese version of the CISS showed good psychometric properties and has been shown to be applicable to the Portuguese population for quantifying the visual discomfort associated with near vision in higher education students.
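    For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha can be computed from an item-score matrix as below (a generic sketch, not the CISSvp data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1.0) * (1.0 - item_vars / total_var)
```

    Perfectly correlated items yield alpha = 1; values of roughly 0.7 or above are conventionally taken to indicate acceptable internal consistency.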

  15. Comparative Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The scope of this subtask is to perform a comparative validation of building simulation software for buildings with a double skin façade. The outline of the comparative-validation results identifies the areas where no correspondence is achieved, i.e. the calculation of the air flow rate. The conclusion is that the comparative validation can be regarded as the main argument to continue the validation of building simulation software for buildings with a double skin façade with the empirical validation test cases.

  16. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    Science.gov (United States)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  17. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    Science.gov (United States)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from
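    The INMF algorithm itself is not specified in detail here. As a baseline, the standard Lee-Seung multiplicative updates for non-negative matrix factorization, which INMF builds on, can be sketched as follows; the "informed" aspect is approximated by allowing an initial endmember guess W0 (e.g. library spectra), which is an assumption rather than the authors' exact scheme:

```python
import numpy as np

def nmf(X, rank, n_iter=200, W0=None, eps=1e-9):
    """Plain non-negative matrix factorization X ~ W @ H via the Lee-Seung
    multiplicative updates (Frobenius objective). Passing W0 'informs' the
    factorization with an initial guess, e.g. library spectra."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = W0.astype(float).copy() if W0 is not None else rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)      # abundance update
        W *= (X @ H.T) / (W @ H @ H.T + eps)      # endmember update
    return W, H
```

    The multiplicative form guarantees that W and H stay non-negative at every iteration, which is the property the abstract highlights as physically meaningful.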

  18. Thermodynamic properties of 9-fluorenone: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Kazakov, Andrei F.; Steele, William V.

    2012-01-01

    Highlights: ► Heat capacities were measured for the temperature range 5 K to 520 K. ► Vapor pressures were measured for the temperature range 368 K to 668 K. ► The enthalpy of combustion was measured and the enthalpy of formation was derived. ► Calculated and derived properties for the ideal gas are in excellent accord. ► Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Measurements leading to the calculation of thermodynamic properties for 9-fluorenone (IUPAC name 9H-fluoren-9-one and Chemical Abstracts registry number [486-25-9]) in the ideal-gas state are reported. Experimental methods were adiabatic heat-capacity calorimetry, inclined-piston manometry, comparative ebulliometry, and combustion calorimetry. Critical properties were estimated. Molar entropies for the ideal-gas state were derived from the experimental studies at selected temperatures T between T = 298.15 K and T = 600 K, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. Values derived with the independent methods are shown to be in excellent accord with a scaling factor of 0.975 applied to the calculated frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in recent articles by this research group. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous.

  19. Medical biomodelling in surgical applications: results of a multicentric European validation of 466 cases.

    Science.gov (United States)

    Wulf, J; Vitt, K D; Erben, C M; Bill, J S; Busch, L C

    2003-01-01

    The study started in September 1999 and ended in April 2002. It is based on a questionnaire [www.phidias.org] assessing case-related questions on the application of stereolithographic models. Each questionnaire contains over 50 items. These variables take into account diagnosis, indications and benefits of stereolithographic models with a view to the different steps of the surgical procedure: preoperative planning, intraoperative application and overall outcome after surgical intervention. These questionnaires were completed by the surgeons who performed the operation. Over the time course of our multicentric study (30 months), we evaluated 466 cases. The study population consists of n=231 male and n=235 female patients. 54 surgeons from 9 European countries were involved. There are main groups of diagnosis that related to the use of a model. Most models were used in maxillofacial surgery. Operative planning may help to determine the resection line of a tumor and optimize reconstructive procedures. Correction of large calvarial defects can be simulated and implants can be produced preoperatively. Overall, in 58% of all cases a time-saving effect was reported. The study strongly suggests that medical modeling has utility in surgical specialities, especially in the craniofacial and maxillofacial area, and increasingly in the orthopedic field. Based on our results, medical modeling optimizes preoperative surgical planning. Surgeons are enabled to perform realistic and interactive simulations. The fabrication of implants, and checking their design and fit on the model, allows a reduction in operation time and, in consequence, in the risk and cost of the operation. In addition, the understanding of volumetric data is improved, especially if medical models are combined with standard imaging modalities. Finally, surgeons are able to improve communication with their patients and colleagues.

  20. SMOS near-real-time soil moisture product: processor overview and first validation results

    Directory of Open Access Journals (Sweden)

    N. J. Rodríguez-Fernández

    2017-10-01

    Full Text Available Measurements of the surface soil moisture (SM) content are important for a wide range of applications. Among them, operational hydrology and numerical weather prediction, for instance, need SM information in near-real-time (NRT), typically not later than 3 h after sensing. The European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite is the first mission specifically designed to measure SM from space. The ESA Level 2 SM retrieval algorithm is based on detailed geophysical modelling and cannot provide SM in NRT. This paper presents the new ESA SMOS NRT SM product, which uses a neural network (NN) to provide SM in NRT. The NN inputs are SMOS brightness temperatures for horizontal and vertical polarizations at incidence angles from 30 to 45°. In addition, the NN uses surface soil temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS). The NN was trained on SMOS Level 2 (L2) SM. The swath of the NRT SM retrieval is somewhat narrower (∼ 915 km) than that of the L2 SM dataset (∼ 1150 km), which implies a slightly longer revisit time. The new SMOS NRT SM product was compared to the SMOS Level 2 SM product. The NRT SM data show a standard deviation of the difference with respect to the L2 data of < 0.05 m3 m−3 over most of the Earth and a Pearson correlation coefficient higher than 0.7 in large regions of the globe. The NRT SM dataset does not show a global bias with respect to the L2 dataset but can show local biases of up to 0.05 m3 m−3 in absolute value. The two SMOS SM products were evaluated against in situ measurements of SM from more than 120 sites of the SCAN (Soil Climate Analysis Network) and the USCRN (US Climate Reference Network) networks in North America. The NRT dataset obtains similar but slightly better results than the L2 data. In summary, the NN SMOS NRT SM product exhibits performance similar to that of the Level 2 SM product.
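    As a toy illustration of the retrieval scheme described (brightness temperatures plus ECMWF soil temperature fed to a neural network trained against L2 SM), here is a minimal one-hidden-layer network trained on synthetic stand-in data; the real SMOS NN architecture, features and training set are of course different:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real inputs: a few brightness temperatures
# (H/V polarisation at several incidence angles) plus soil temperature,
# here just 7 anonymous features; target is a made-up soil-moisture proxy.
X = rng.random((500, 7))
y = (0.5 * X[:, 0] - 0.3 * X[:, 3] + 0.2 * X[:, 6])[:, None]

# One-hidden-layer network trained by full-batch gradient descent (MSE).
W1, b1 = rng.standard_normal((7, 16)) * 0.3, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.3, np.zeros(1)
lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    err = (h @ W2 + b2) - y                   # prediction error
    losses.append(float((err ** 2).mean()))
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    gW1, gb1 = X.T @ gh / len(X), gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

    The point of such a surrogate is speed: once trained, a forward pass is orders of magnitude cheaper than the full geophysical retrieval, which is what makes NRT delivery feasible.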

  1. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

    Full Text Available The article presents a review of English-language publications concerning the history and current state of differential psychological assessment of the validity of testimony produced by child and adolescent victims of crime. The topicality of the problem in Russia is high due to the tendency of Russian specialists to use methodical means and instruments developed abroad in this sphere for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA) by means of Criteria-Based Content Analysis (CBCA) and a Validity Checklist is described. The results of laboratory and field studies of the validity of the CBCA criteria on the basis of child and adult witnesses are discussed. The data display a good differentiating capacity of the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. New promising developments in methods for differentiating witness statements based on real experience from fictional ones are noted. The conclusion is drawn that empirical studies and special work on the adaptation and development of new approaches should precede their implementation in Russian criminal investigation and forensic assessment practice.

  2. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  3. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    Science.gov (United States)

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
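    The exact Vn statistic and its distribution are derived in the paper itself; the underlying leave-one-out idea can, however, be sketched generically: summarise all studies but one with a random-effects model, then standardise the left-out study's deviation from that summary. Below is a minimal DerSimonian-Laird version of this cross-validation loop (an illustration of the concept, not the authors' Vn):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects summary via the DerSimonian-Laird tau^2 estimator.
    y: study effect estimates, v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()                 # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
    w_re = 1.0 / (v + tau2)
    return (w_re * y).sum() / w_re.sum(), 1.0 / w_re.sum(), tau2

def loo_validation_z(y, v):
    """Leave-one-out check: standardise each study's deviation from the
    random-effects summary of the remaining studies."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    z = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        mu, var_mu, tau2 = dersimonian_laird(y[mask], v[mask])
        z[i] = (y[i] - mu) / np.sqrt(v[i] + tau2 + var_mu)
    return z
```

    Large standardised deviations flag studies whose results the pooled estimate would not have predicted, which is the intuition behind testing a summary estimate for validity in a new setting.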

  4. Spare Items validation

    International Nuclear Information System (INIS)

    Fernandez Carratala, L.

    1998-01-01

    There is increasing difficulty in purchasing safety-related spare items with manufacturers' certifications that maintain the original qualification of the equipment of destination. The main reasons are, on top of the natural evolution of the technology applied to newly manufactured components, the discontinuation of nuclear-specific production lines and the evolution of manufacturers' quality systems, originally based on nuclear codes and standards, towards conventional industry standards. To face this problem, different dedication processes have been implemented for many years to verify whether a commercial-grade item is acceptable for use in safety-related applications. In the same way, due to our particular position regarding spare part supplies, mainly from markets other than the American one, C.N. Trillo has developed a methodology called Spare Items Validation. This methodology, originally based on dedication processes, is not a single process but a group of coordinated processes involving engineering, quality and management activities. These are performed on the spare item itself, its design control, its fabrication and its supply, to allow its use in destinations with specific requirements. The scope of application is not only focused on safety-related items, but also on components of complex design, high cost or relevance to plant reliability. The implementation at C.N. Trillo has mainly been carried out by merging, modifying and making the most of processes and activities that were already being performed in the company. (Author)

  5. NVN 5694 intralaboratory validation. Feasibility study for interlaboratory validation

    International Nuclear Information System (INIS)

    Voors, P.I.; Baard, J.H.

    1998-11-01

    Within the project NORMSTAR 2, a number of Dutch prenormative protocols have been defined for radioactivity measurements. Some of these protocols, e.g. the Dutch prenormative protocol NVN 5694, titled 'Methods for radiochemical determination of polonium-210 and lead-210', have not been validated by either intralaboratory or interlaboratory studies. Validation studies are conducted within the framework of the programme 'Normalisatie en Validatie van Milieumethoden 1993-1997' (Standardization and Validation of test methods for environmental parameters) of the Dutch Ministry of Housing, Physical Planning and the Environment (VROM). The aims of this study were (a) a critical evaluation of the protocol, (b) investigation of the feasibility of an interlaboratory study, and (c) the interlaboratory validation of NVN 5694. The evaluation of the protocol resulted in a list of deficiencies varying from missing references to incorrect formulae. From the survey by interview it appeared that, for each type of material, 4 to 7 laboratories are willing to participate in an interlaboratory validation study. This reflects the situation in 1997. Consequently, if 4 or 6 (the minimal number) laboratories participate and each laboratory analyses 3 subsamples, the uncertainty in the repeatability standard deviation is 49% or 40%, respectively. If the ratio of the reproducibility standard deviation to the repeatability standard deviation is equal to 1 or 2, the uncertainty in the reproducibility standard deviation increases from 42 to 67% and from 34 to 52% for 4 or 6 laboratories, respectively. The intralaboratory validation was established on four different types of materials. Three types of material (milk powder, condensate and filter) were prepared in the laboratory using the raw material and certified Pb-210 solutions, and one (sediment) was obtained from the IAEA. The ECN-prepared reference materials were used after testing for homogeneity.
The pre-normative protocol can

  6. SHIELD verification and validation report

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    This document outlines the verification and validation effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system code. Along with its predecessors, SHIELD has been in use at the Savannah River Site (SRS) for more than ten years. During this time the code has been extensively tested and a variety of validation documents have been issued. The primary function of this report is to specify the features and capabilities for which SHIELD is to be considered validated, and to reference the documents that establish the validation.

  7. Validation of dose-response calibration curve for X-Ray field of CRCN-NE/CNEN: preliminary results

    International Nuclear Information System (INIS)

    Silva, Laís Melo; Mendonça, Julyanne Conceição de Goes; Andrade, Aida Mayra Guedes de; Hwang, Suy F.; Mendes, Mariana Esposito; Lima, Fabiana F.; Melo, Ana Maria M.A.

    2017-01-01

    It is very important in accident investigations that the absorbed dose be estimated accurately, so that the estimate can contribute to medical decisions and to the overall assessment of long-term health consequences. Analysis of chromosome aberrations is the most developed method for biological monitoring, and frequencies of dicentric chromosomes in human peripheral blood lymphocytes are related to absorbed dose using calibration curves. The International Atomic Energy Agency (IAEA) recommends that each biodosimetry laboratory set its own calibration curves, given that there are intrinsic differences in protocols and dose interpretations when using calibration curves produced in other laboratories, which could add further uncertainties to dose estimations. The Laboratory for Biological Dosimetry CRCN-NE recently completed dose-response calibration curves for the X-ray field. Curves for dicentric chromosomes and for dicentrics plus rings were made using Dose Estimate. This study aimed to validate the dose-response calibration curves for X-rays with three irradiated samples. Blood was obtained by venipuncture from a healthy volunteer and three samples were irradiated by 250 kVp X-rays with different absorbed doses (0.5 Gy, 1 Gy and 2 Gy). The irradiation was performed at the CRCN-NE/CNEN Metrology Service with PANTAK X-ray equipment, model HF 320. The frequencies of dicentric chromosomes and centric rings were determined in 500 metaphases per sample after cultivation of lymphocytes and staining with 5% Giemsa. Results showed that the estimated absorbed doses fall within the 95% confidence interval of the real absorbed dose. These dose-response calibration curves (dicentrics and dicentrics plus rings) therefore appear valid; further tests will be done with different volunteers. (author)
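    The calibration curves themselves are not tabulated in this abstract. Dose estimation from a fitted linear-quadratic curve Y = c + αD + βD² (the standard form used in cytogenetic biodosimetry) amounts to solving a quadratic for D; the coefficients below are purely illustrative, not the CRCN-NE curve:

```python
from math import sqrt

def dose_from_dicentrics(y, c, alpha, beta):
    """Estimate absorbed dose D (Gy) from dicentric yield y (per cell) by
    inverting the linear-quadratic curve Y = c + alpha*D + beta*D**2."""
    if beta == 0.0:                       # purely linear curve
        return (y - c) / alpha
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    return (-alpha + sqrt(disc)) / (2.0 * beta)

# Illustrative coefficients only (NOT the CRCN-NE curve):
# background c = 0.001, alpha = 0.02 /Gy, beta = 0.06 /Gy^2
dose = dose_from_dicentrics(0.27, 0.001, 0.02, 0.06)
```

    In practice a confidence interval on the dose is obtained by propagating the Poisson uncertainty of the observed yield and the covariance of the fitted coefficients through this inversion.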

  8. Validation of dose-response calibration curve for X-Ray field of CRCN-NE/CNEN: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Laís Melo; Mendonça, Julyanne Conceição de Goes; Andrade, Aida Mayra Guedes de; Hwang, Suy F.; Mendes, Mariana Esposito; Lima, Fabiana F., E-mail: falima@cnen.gov.br, E-mail: mendes_sb@hotmail.com [Centro Regional de Ciências Nucleares, (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Melo, Ana Maria M.A., E-mail: july_cgm@yahoo.com.br [Universidade Federal de Pernambuco (UFPE), Vitória de Santo Antão, PE (Brazil). Centro Acadêmico de Vitória

    2017-07-01

    It is very important in accident investigations that the absorbed dose be estimated accurately, so that the estimate can contribute to medical decisions and to the overall assessment of long-term health consequences. Analysis of chromosome aberrations is the most developed method for biological monitoring, and frequencies of dicentric chromosomes in human peripheral blood lymphocytes are related to absorbed dose using calibration curves. The International Atomic Energy Agency (IAEA) recommends that each biodosimetry laboratory set its own calibration curves, given that there are intrinsic differences in protocols and dose interpretations when using calibration curves produced in other laboratories, which could add further uncertainties to dose estimations. The Laboratory for Biological Dosimetry CRCN-NE recently completed dose-response calibration curves for the X-ray field. Curves for dicentric chromosomes and for dicentrics plus rings were made using Dose Estimate. This study aimed to validate the dose-response calibration curves for X-rays with three irradiated samples. Blood was obtained by venipuncture from a healthy volunteer and three samples were irradiated by 250 kVp X-rays with different absorbed doses (0.5 Gy, 1 Gy and 2 Gy). The irradiation was performed at the CRCN-NE/CNEN Metrology Service with PANTAK X-ray equipment, model HF 320. The frequencies of dicentric chromosomes and centric rings were determined in 500 metaphases per sample after cultivation of lymphocytes and staining with 5% Giemsa. Results showed that the estimated absorbed doses fall within the 95% confidence interval of the real absorbed dose. These dose-response calibration curves (dicentrics and dicentrics plus rings) therefore appear valid; further tests will be done with different volunteers. (author)

  9. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation 'atlas' containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combination of 'atlas' solutions that best matches measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development in validating our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, subsurface shift of targets between preoperative and intraoperative states was measured and compared to model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.
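    The inverse approach described, choosing a combination of precomputed atlas deformations that best matches measured surface displacements, can be posed as a constrained least-squares problem. Below is a hedged sketch using projected gradient descent with nonnegative weights; the actual pipeline likely uses additional constraints (e.g. weights summing to at most one), which are omitted here:

```python
import numpy as np

def fit_atlas_weights(basis, measured, n_iter=5000):
    """Fit nonnegative weights w so that basis @ w ~ measured surface
    displacements, via projected gradient descent on the least-squares
    objective (a simplified stand-in for the atlas inverse problem)."""
    A = np.asarray(basis, float)              # (n_points, n_atlas_solutions)
    b = np.asarray(measured, float)
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        w -= step * (A.T @ (A @ w - b))       # gradient step
        np.clip(w, 0.0, None, out=w)          # project onto w >= 0
    return w
```

    Once the weights are found, the same combination applied to the volumetric atlas solutions yields the deformation field used to update the preoperative image.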

  10. Pooled results from 5 validation studies of dietary self-report instruments using recovery biomarkers for energy and protein intake

    Science.gov (United States)

    We pooled data from 5 large validation studies of dietary self-report instruments that used recovery biomarkers as references to clarify the measurement properties of food frequency questionnaires (FFQs) and 24-hour recalls. The studies were conducted in widely differing U.S. adult populations from...

  11. Comparison of gamma knife validation film's analysis results of different film dose analysis software

    International Nuclear Information System (INIS)

    Cheng Xiaojun; Zhang Conghua; Liu Han; Dai Fuyou; Hu Chuanpeng; Liu Cheng; Yao Zhongfu

    2011-01-01

    Objective: To compare the analytical results of different film dose analysis software packages for the same gamma knife, analyze the causes of any differences, and explore measures and means for quality control and quality assurance when testing a gamma knife and analyzing its results. Methods: The Moon Deity gamma knife was tested with Kodak EDR2 film and the γ-Star gamma knife with GAFCHROMIC® EBT film. All validation films were scanned into an image format suitable for the dose analysis software with an EPSON PERFECTION V750 PRO scanner. Images of the Moon Deity gamma knife were then analyzed with Robot Knife Adjuvant 1.09 and Fas-09 1.0, and images of the γ-Star gamma knife with Fas-09 and MATLAB 7.0. Results: There was no significant difference in the maximum deviation of the radiation field size (full width at half maximum, FWHM) from its nominal value between Robot Knife Adjuvant and Fas-09 for the Moon Deity gamma knife (t=-2.133, P>0.05). Analysis of the penumbra region width of the radiation field for collimators of different sizes indicated that the differences were significant (t=-8.154, P<0.05). There was no significant difference in the maximum deviation of FWHM from its nominal value between Fas-09 and MATLAB for the γ-Star gamma knife (t=-1.384, P>0.05). However, following national standards, analysis of the φ4 mm collimator can yield different results with the two software packages: the result from Fas-09 was not qualified while that from MATLAB was qualified. Analysis of the penumbra region width of the radiation field for collimators of different sizes indicated that the differences were significant (t=3.074, P<0.05). The images were processed with Fas-09. Analysis of the images pre- and post-processing indicated no significant difference in the maximum deviation of FWHM from its nominal value (t=0.647, P>0.05), and the analysis of the penumbra region width of the radiation field indicates that there is
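    The quantities being compared, FWHM and penumbra width, can be extracted from a scanned dose profile in a few lines. A generic sketch follows (linear interpolation between samples; the 20-80% penumbra definition is an assumption, since the national standard's exact definition is not quoted here):

```python
import numpy as np

def fwhm_and_penumbra(x, dose):
    """FWHM and left/right 20-80% penumbra widths of a 1-D dose profile
    sampled at positions x, using linear interpolation between samples."""
    d = np.asarray(dose, float) / np.max(dose)   # normalise to 1.0
    x = np.asarray(x, float)

    def crossings(level):
        above = (d >= level).astype(np.int8)
        idx = np.nonzero(np.diff(above))[0]      # intervals containing a crossing
        pts = []
        for i in idx:
            t = (level - d[i]) / (d[i + 1] - d[i])   # linear interpolation
            pts.append(x[i] + t * (x[i + 1] - x[i]))
        return pts

    c50, c20, c80 = crossings(0.5), crossings(0.2), crossings(0.8)
    fwhm = c50[-1] - c50[0]
    left_penumbra = c80[0] - c20[0]
    right_penumbra = c20[-1] - c80[-1]
    return fwhm, left_penumbra, right_penumbra
```

    Differences between software packages in exactly this kind of interpolation and level definition are one plausible source of the discrepancies the study reports.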

  12. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

    This study reports on conducted experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations of the detection of surface-breaking cracks are compared with pulse-echo experiments on surface-breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate of thickness 36 mm, and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies of 1, 2 and 4 MHz and angles of 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total, 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able, within certain limits, to produce simulations that are close to experiments. It may be concluded that: For corner echoes, the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more in some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ much. The tip diffraction echoes also agree well in general. In some cases, where the defect is close to the probe, the amplitudes differ by more than 10-15 dB, but in all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  13. Psychometric validation of the SF-36® Health Survey in ulcerative colitis: results from a systematic literature review.

    Science.gov (United States)

    Yarlas, Aaron; Bayliss, Martha; Cappelleri, Joseph C; Maher, Stephen; Bushmakin, Andrew G; Chen, Lea Ann; Manuchehri, Alireza; Healey, Paul

    2018-02-01

    To conduct a systematic literature review of the reliability, construct validity, and responsiveness of the SF-36 ® Health Survey (SF-36) in patients with ulcerative colitis (UC). We performed a systematic search of electronic medical databases to identify published peer-reviewed studies which reported scores from the eight scales and/or two summary measures of the SF-36 collected from adult patients with UC. Study findings relevant to reliability, construct validity, and responsiveness were reviewed. Data were extracted and summarized from 43 articles meeting inclusion criteria. Convergent validity was supported by findings that 83% (197/236) of correlations between SF-36 scales and measures of disease symptoms, disease activity, and functioning exceeded the prespecified threshold (r ≥ |0.40|). Known-groups validity was supported by findings of clinically meaningful differences in SF-36 scores between subgroups of patients when classified by disease activity (i.e., active versus inactive), symptom status, and comorbidity status. Responsiveness was supported by findings of clinically meaningful changes in SF-36 scores following treatment in non-comparative trials, and by meaningfully larger improvements in SF-36 scores in treatment arms relative to controls in randomized controlled trials. The sole study of SF-36 reliability found evidence supporting internal consistency (Cronbach's α ≥ 0.70) for all SF-36 scales and test-retest reliability (intraclass correlation coefficient ≥0.70) for six of eight scales. Evidence from this systematic literature review indicates that the SF-36 is reliable, valid, and responsive when used with UC patients, supporting the inclusion of the SF-36 as an endpoint in clinical trials for this patient population.
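The internal-consistency criterion cited here (Cronbach's α ≥ 0.70) is simple to compute from item-level data. A minimal sketch, with invented toy scores rather than SF-36 data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering a 3-item scale on a 1-5 range
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
])
alpha = cronbach_alpha(scores)  # strongly correlated items give alpha near 1
```

With these toy responses the items rise and fall together, so α comes out well above the 0.70 threshold; uncorrelated items would drive it toward zero.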

  14. Validation of satellite SAR offshore wind speed maps to in-situ data, microscale and mesoscale model results

    Energy Technology Data Exchange (ETDEWEB)

    Hasager, C B; Astrup, P; Barthelmie, R; Dellwik, E; Hoffmann Joergensen, B; Gylling Mortensen, N; Nielsen, M; Pryor, S; Rathmann, O

    2002-05-01

A validation study has been performed in order to investigate the precision and accuracy of the satellite-derived ERS-2 SAR wind products in offshore regions. The overall project goal is to develop a method for utilizing the satellite wind speed maps for offshore wind resources, e.g. in future planning of offshore wind farms. The report describes the validation analysis in detail for three sites in Denmark, Italy and Egypt. The site in Norway is analyzed by the Nansen Environmental and Remote Sensing Centre (NERSC). Wind speed maps and wind direction maps from Earth Observation data recorded by the ERS-2 SAR satellite have been obtained from the NERSC. For the Danish site the wind speed and wind direction maps have been compared to in-situ observations from a met-mast at Horns Rev in the North Sea located 14 km offshore. The SAR wind speeds have been area-averaged by simple and advanced footprint modelling, i.e. the upwind conditions to the meteorological mast are explicitly averaged in the SAR wind speed maps before comparison. The comparison results are very promising, with a standard error of ±0.61 m s⁻¹, a bias of ≈2 m s⁻¹ and R² ≈ 0.88 between in-situ wind speed observations and SAR footprint-averaged values at the 10 m level. Wind speeds predicted by the local-scale model LINCOM and the mesoscale model KAMM2 have been compared to the spatial variations in the SAR wind speed maps. The finding is a good correspondence between SAR observations and model results. Near the coast is an 800 m wide band in which the SAR wind speed observations have a strong negative bias. The bathymetry of Horns Rev combined with tidal currents gives rise to bias in the SAR wind speed maps near areas of shallow, complex bottom topography in some cases. A total of 16 cases were analyzed for Horns Rev. For Maddalena in Italy five cases were analyzed. At the Italian site the SAR wind speed maps were compared to WAsP and KAMM2 model results. The WAsP model
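The comparison statistics quoted in such validation studies (bias, standard error, R²) can be reproduced for any paired series of in-situ and SAR wind speeds. A hedged sketch with invented values, not the Horns Rev measurements:

```python
import numpy as np

def validation_stats(reference, estimate):
    """Bias, standard error, and R^2 between paired wind-speed series."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    diff = estimate - reference
    bias = diff.mean()              # mean offset of the estimate vs the mast
    std_err = diff.std(ddof=1)      # scatter of the differences about the bias
    r = np.corrcoef(reference, estimate)[0, 1]
    return bias, std_err, r ** 2

# Illustrative 10 m wind speeds (m/s); invented values, not Horns Rev data
mast = [5.1, 7.3, 9.8, 6.2, 8.4, 11.0]
sar = [5.6, 7.9, 10.1, 6.9, 9.0, 11.8]
bias, se, r2 = validation_stats(mast, sar)
```

Separating the mean offset (bias) from the scatter about it (standard error) is what lets a study report a correctable systematic error alongside the irreducible noise.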

  15. Cleaning Validation of Fermentation Tanks

    DEFF Research Database (Denmark)

    Salo, Satu; Friis, Alan; Wirtanen, Gun

    2008-01-01

Reliable test methods for checking cleanliness are needed to evaluate and validate the cleaning process of fermentation tanks. Pilot-scale tanks were used to test the applicability of various methods for this purpose. The methods found to be suitable for validation of the cleanliness were visual...

  16. Validity in SSM: neglected areas

    NARCIS (Netherlands)

    Pala, O.; Vennix, J.A.M.; Mullekom, T.L. van

    2003-01-01

    Contrary to the prevailing notion in hard OR, in soft system methodology (SSM), validity seems to play a minor role. The primary reason for this is that SSM models are of a different type, they are not would-be descriptions of real-world situations. Therefore, establishing their validity, that is

  17. The Consequences of Consequential Validity.

    Science.gov (United States)

    Mehrens, William A.

    1997-01-01

    There is no agreement at present about the importance or meaning of the term "consequential validity." It is important that the authors of revisions to the "Standards for Educational and Psychological Testing" recognize the debate and relegate discussion of consequences to a context separate from the discussion of validity.…

  18. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  19. A Simulation Tool for Geometrical Analysis and Optimization of Fuel Cell Bipolar Plates: Development, Validation and Results

    Directory of Open Access Journals (Sweden)

    Javier Pino

    2009-07-01

Bipolar plates (BPs) are one of the most important components in Proton Exchange Membrane Fuel Cells (PEMFC) due to the numerous functions they perform. The objective of the research work described in this paper was to develop a simplified and validated method, based on Computational Fluid Dynamics (CFD), for analyzing the influence of geometrical parameters of BPs on the operation of a cell. A complete sensitivity analysis of the influence of the dimensions and shape of the BP can be obtained through a simplified CFD model without including the complexity of the other components of the PEMFC. This model is compared with the PEM Fuel Cell Module of the FLUENT software, which includes the physical and chemical phenomena relevant in PEMFCs. Results from both models regarding the flow field inside the channels and local current densities are obtained and compared. The results show that it is possible to use the simple model as a standard tool for geometrical analysis of BPs; results of a sensitivity analysis using the simplified model are presented and discussed.

  20. A validation of direct grey Dancoff factors results for cylindrical cells in cluster geometry by the Monte Carlo method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

The WIMS code is well known and one of the most used codes for nuclear core physics calculations. Recently, the PIJM module of the WIMS code was modified in order to allow the calculation of Grey Dancoff factors, for partially absorbing materials, using the alternative definition in terms of escape and collision probabilities. Grey Dancoff factors for the Canadian CANDU-37 and CANFLEX assemblies were calculated with PIJM at five symmetrically distinct fuel pin positions. The results, obtained via the Direct Method, i.e., by direct calculation of escape and collision probabilities, were satisfactory when compared with those in the literature. On the other hand, the PIJMC module was developed to calculate escape and collision probabilities using the Monte Carlo method. Modifications in this module were performed to determine Black Dancoff factors, considering perfectly absorbing fuel rods. In this work, we proceed further in the task of validating the Direct Method by the Monte Carlo approach. To this end, the PIJMC routine is modified to compute Grey Dancoff factors using the cited alternative definition. Results are reported for the mentioned CANDU-37 and CANFLEX assemblies obtained with PIJMC, at the same fuel pin positions as with PIJM. A good agreement is observed between the results from the Monte Carlo and Direct methods.
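The escape and collision probabilities underlying Dancoff factors can be illustrated with a toy Monte Carlo estimate. The sketch below samples the first-flight collision probability in a single 2-D disk (a crude stand-in for one fuel rod cross-section); it is not the PIJMC algorithm, and all names and parameters are illustrative:

```python
import math
import random

def collision_probability(radius, sigma_t, n=50_000, seed=1):
    """Monte Carlo estimate of the first-flight collision probability in a
    2-D disk of the given radius with total cross section sigma_t.

    Neutrons start at a uniform point in the disk with an isotropic in-plane
    direction; the flight attenuates as exp(-sigma_t * chord_length).
    """
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(n):
        r = radius * math.sqrt(rng.random())   # uniform point in the disk
        phi = 2.0 * math.pi * rng.random()     # isotropic direction
        # Distance from (r, 0) along (cos phi, sin phi) to the boundary:
        # solve t^2 + 2*t*r*cos(phi) + r^2 - radius^2 = 0 for t > 0.
        b = r * math.cos(phi)
        t = -b + math.sqrt(b * b + radius * radius - r * r)
        hits += 1.0 - math.exp(-sigma_t * t)   # chance of colliding first
    return hits / n

# A "grey" rod collides with some neutrons; a "black" rod (huge sigma) with all
p_grey = collision_probability(radius=1.0, sigma_t=0.5)
p_black = collision_probability(radius=1.0, sigma_t=1e6, n=20_000)
```

In the black limit the collision probability approaches 1 (only near-surface births escape), which is the distinction between the Black and Grey Dancoff factors discussed in the abstract.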

  1. Validation of the Danish PAROLE lexicon (upubliceret)

    DEFF Research Database (Denmark)

    Møller, Margrethe; Christoffersen, Ellen

    2000-01-01

This validation is based on the Danish PAROLE lexicon dated June 20, 1998, downloaded on March 16, 1999. Subsequently, the developers of the lexicon have informed us that they have been revising the lexicon, in particular the morphological level. Morphological entries were originally generated automatically from a machine-readable version of the Official Danish Spelling Dictionary (Retskrivningsordbogen 1986, in the following RO86), and this resulted in some overgeneration, which the developers started eliminating after submitting the Danish PAROLE lexicon for validation. The present validation is, however, based on the January 1997 version of the lexicon. The validation as such complies with the specifications described in the ELRA validation manuals for lexical data, i.e. Underwood and Navaretta: "A Draft Manual for the Validation of Lexica, Final Report" [Underwood & Navaretta 1997] and Braasch: "A...

  2. Physical standards and valid calibration

    International Nuclear Information System (INIS)

    Smith, D.B.

    1975-01-01

The desire for improved nuclear material safeguards has led to the development and use of a number of techniques and instruments for the nondestructive assay (NDA) of special nuclear material. Sources of potential bias in NDA measurements are discussed and methods of eliminating the effects of bias in assay results are suggested. Examples are given of instruments in which these methods have been successfully applied. The results of careful attention to potential sources of assay bias are a significant reduction in the number and complexity of standards required for valid instrument calibration, and more credible assay results. (auth)

  3. Assessing generalized anxiety disorder in elderly people using the GAD-7 and GAD-2 scales: results of a validation study.

    Science.gov (United States)

    Wild, Beate; Eckl, Anne; Herzog, Wolfgang; Niehoff, Dorothea; Lechner, Sabine; Maatouk, Imad; Schellberg, Dieter; Brenner, Hermann; Müller, Heiko; Löwe, Bernd

    2014-10-01

    The aim of this study was to evaluate the validity of the seven-item Generalized Anxiety Disorder scale (GAD-7) and its two core items (GAD-2) for detecting GAD in elderly people. A criterion-standard study was performed between May and December of 2010 on a general elderly population living at home. A subsample of 438 elderly persons (ages 58-82) of the large population-based German ESTHER study was included in the study. The GAD-7 was administered to participants as part of a home visit. A telephone-administered structured clinical interview was subsequently conducted by a blinded interviewer. The structured clinical (SCID) interview diagnosis of GAD constituted the criterion standard to determine sensitivity and specificity of the GAD-7 and the GAD-2 scales. Twenty-seven participants met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for current GAD according to the SCID interview (6.2%; 95% confidence interval [CI]: 3.9%-8.2%). For the GAD-7, a cut point of five or greater appeared to be optimal for detecting GAD. At this cut point the sensitivity of the GAD-7 was 0.63 and the specificity was 0.9. Correspondingly, the optimal cut point for the GAD-2 was two or greater with a sensitivity of 0.67 and a specificity of 0.90. The areas under the curve were 0.88 (95% CI: 0.83-0.93) for the GAD-7 and 0.87 (95% CI: 0.80-0.94) for the GAD-2. The increased scores on both GAD scales were strongly associated with mental health related quality of life (p <0.0001). Our results establish the validity of both the GAD-7 and the GAD-2 in elderly persons. Results of this study show that the recommended cut points of the GAD-7 and the GAD-2 for detecting GAD should be lowered for the elderly general population. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
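The sensitivity and specificity figures at a given cut point follow directly from the 2x2 table of screener score versus criterion-standard diagnosis. A minimal sketch using invented toy data, not the ESTHER sample:

```python
def sens_spec(scores, has_disorder, cut):
    """Sensitivity and specificity of the rule 'score >= cut' against a
    reference (criterion-standard) diagnosis."""
    tp = sum(1 for s, d in zip(scores, has_disorder) if d and s >= cut)
    fn = sum(1 for s, d in zip(scores, has_disorder) if d and s < cut)
    tn = sum(1 for s, d in zip(scores, has_disorder) if not d and s < cut)
    fp = sum(1 for s, d in zip(scores, has_disorder) if not d and s >= cut)
    return tp / (tp + fn), tn / (tn + fp)

# Toy GAD-7 totals with reference diagnoses (True = GAD present per interview)
scores = [8, 6, 3, 12, 4, 2, 1, 7, 0, 5]
has_gad = [True, True, False, True, False, False, False, True, False, False]
sensitivity, specificity = sens_spec(scores, has_gad, cut=5)
```

Sweeping `cut` over the score range and plotting sensitivity against 1 − specificity yields the ROC curve whose area the study reports; lowering the cut point raises sensitivity at the cost of specificity, which is the trade-off behind the study's recommendation.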

  4. Non-Rhabdomyosarcoma Soft Tissue Sarcomas in Children: A Surveillance, Epidemiology, and End Results Analysis Validating COG Risk Stratifications

    Energy Technology Data Exchange (ETDEWEB)

    Waxweiler, Timothy V., E-mail: timothy.waxweiler@ucdenver.edu [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States); Rusthoven, Chad G. [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States); Proper, Michelle S. [Department of Radiation Oncology, Billings Clinic, Billings, Montana (United States); Cost, Carrye R. [Division of Hematology and Oncology, Department of Pediatrics, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Cost, Nicholas G. [Division of Urology, Department of Surgery, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Donaldson, Nathan [Department of Orthopedics, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Garrington, Timothy; Greffe, Brian S. [Division of Hematology and Oncology, Department of Pediatrics, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Heare, Travis [Department of Orthopedics, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Macy, Margaret E. [Division of Hematology and Oncology, Department of Pediatrics, University of Colorado Denver School of Medicine, Aurora, Colorado (United States); Liu, Arthur K. [Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado (United States)

    2015-06-01

Purpose: Non-rhabdomyosarcoma soft tissue sarcomas (NRSTS) are a heterogeneous group of sarcomas that encompass over 35 histologies. With an incidence of ∼500 cases per year in the United States in those <20 years of age, NRSTS are rare and therefore difficult to study in pediatric populations. We used the large Surveillance, Epidemiology, and End Results (SEER) database to validate the prognostic ability of the Children's Oncology Group (COG) risk classification system and to define patient, tumor, and treatment characteristics. Methods and Materials: From SEER data from 1988 to 2007, we identified patients ≤18 years of age with NRSTS. Data for age, sex, year of diagnosis, race, registry, histology, grade, primary size, primary site, stage, radiation therapy, and survival outcomes were analyzed. Patients with nonmetastatic grossly resected low-grade tumors of any size or high-grade tumors ≤5 cm were considered low risk. Cases of nonmetastatic tumors that were high grade, >5 cm, or unresectable were considered intermediate risk. Patients with nodal or distant metastases were considered high risk. Results: A total of 941 patients met the review criteria. On univariate analysis, black race, malignant peripheral nerve sheath tumor (MPNST) histology, tumors >5 cm, nonextremity primary, lymph node involvement, radiation therapy, and higher risk group were associated with significantly worse overall survival (OS) and cancer-specific survival (CSS). On multivariate analysis, MPNST histology, chemotherapy-resistant histology, and higher risk group were significantly poor prognostic factors for OS and CSS. Compared to low-risk patients, intermediate-risk patients showed poorer OS (hazard ratio [HR]: 6.08, 95% confidence interval [CI]: 3.53-10.47, P<.001) and CSS (HR: 6.27; 95% CI: 3.44-11.43, P<.001), and high-risk patients had the worst OS (HR: 13.35, 95% CI: 8.18-21.76, P<.001) and CSS (HR: 14.65, 95% CI: 8.49-25.28, P<.001). Conclusions: The current COG risk group

  5. The measurement of instrumental ADL: content validity and construct validity

    DEFF Research Database (Denmark)

    Avlund, K; Schultz-Larsen, K; Kreiner, S

    1993-01-01

    do not depend on help. It is also possible to add the items in a valid way. However, to obtain valid IADL-scales, we omitted items that were highly relevant to especially elderly women, such as house-work items. We conclude that the criteria employed for this IADL-measure are somewhat contradictory....... showed that 14 items could be combined into two qualitatively different additive scales. The IADL-measure complies with demands for content validity, distinguishes between what the elderly actually do, and what they are capable of doing, and is a good discriminator among the group of elderly persons who...

  6. Validation and verification of MCNP6 against intermediate and high-energy experimental data and results by other codes

    International Nuclear Information System (INIS)

    Mashnik, Stepan G.

    2011-01-01

MCNP6, the latest and most advanced LANL transport code, representing a recent merger of MCNP5 and MCNPX, has been validated and verified (V&V) against a variety of intermediate- and high-energy experimental data and against results by different versions of MCNPX and other codes. In the present work, we V&V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.02 and LAQGSM03.03. We found that MCNP6 describes reasonably well various reactions induced by particles and nuclei at incident energies from 18 MeV to about 1 TeV per nucleon measured on thin and thick targets, and agrees very well with similar results obtained with MCNPX and with calculations by CEM03.02, LAQGSM03.01 (03.03), INCL4 + ABLA, and Bertini INC + Dresner evaporation, EPAX, ABRABLA, HIPSE, and AMD, used as stand-alone codes. Most of the computational bugs and more serious physics problems observed in MCNP6/X during our V&V have been fixed; we continue our work to solve all the known problems before MCNP6 is distributed to the public. (author)

  7. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. 
During the

  8. Overview of results of the first phase of validation activities for the IFMIF High Flux Test Module

    Energy Technology Data Exchange (ETDEWEB)

    Arbeiter, Frederik, E-mail: frederik.arbeiter@kit.edu [Karlsruhe Institute of Technology, Karlsruhe (Germany); Chen Yuming; Dolensky, Bernhard; Freund, Jana; Heupel, Tobias; Klein, Christine; Scheel, Nicola; Schlindwein, Georg [Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2012-08-15

Highlights: ► Validation of computational fluid dynamics (CFD) modeling approach for application in the IFMIF High Flux Test Module. ► Fabrication of prototypes of the irradiation capsules of the IFMIF High Flux Test Module. - Abstract: The international fusion materials irradiation facility (IFMIF) is projected to create an experimentally validated database of material properties relevant for fusion reactor designs. The IFMIF High Flux Test Module is the dedicated experiment to irradiate alloys in the temperature range 250–550 °C and up to 50 displacements per atom per irradiation cycle. The High Flux Test Module is developed to maximize the specimen payload in the restricted irradiation volume, and to minimize the temperature spread within each specimen bundle. Low pressure helium mini-channel cooling is used to offer a high integration density. Due to the demanding thermo-hydraulic and mechanical conditions, the engineering design process (involving numerical neutronic, thermo-hydraulic and mechanical analyses) is supported by extensive experimental validation activities. This paper reports on the prototype manufacturing, thermo-hydraulic modeling experiments and component tests, as well as on mechanical testing. For the testing of the 1:1 prototype of the High Flux Test Module, a dedicated test facility, the Helium Loop Karlsruhe-Low Pressure (HELOKA-LP) has been taken into service.

  9. Overview of results of the first phase of validation activities for the IFMIF High Flux Test Module

    International Nuclear Information System (INIS)

    Arbeiter, Frederik; Chen Yuming; Dolensky, Bernhard; Freund, Jana; Heupel, Tobias; Klein, Christine; Scheel, Nicola; Schlindwein, Georg

    2012-01-01

    Highlights: ► Validation of computational fluid dynamics (CFD) modeling approach for application in the IFMIF High Flux Test Module. ► Fabrication of prototypes of the irradiation capsules of the IFMIF High Flux Test Module. - Abstract: The international fusion materials irradiation facility (IFMIF) is projected to create an experimentally validated database of material properties relevant for fusion reactor designs. The IFMIF High Flux Test Module is the dedicated experiment to irradiate alloys in the temperature range 250–550 °C and up to 50 displacements per atom per irradiation cycle. The High Flux Test Module is developed to maximize the specimen payload in the restricted irradiation volume, and to minimize the temperature spread within each specimen bundle. Low pressure helium mini-channel cooling is used to offer a high integration density. Due to the demanding thermo-hydraulic and mechanical conditions, the engineering design process (involving numerical neutronic, thermo-hydraulic and mechanical analyses) is supported by extensive experimental validation activities. This paper reports on the prototype manufacturing, thermo-hydraulic modeling experiments and component tests, as well as on mechanical testing. For the testing of the 1:1 prototype of the High Flux Test Module, a dedicated test facility, the Helium Loop Karlsruhe-Low Pressure (HELOKA-LP) has been taken into service.

  10. Are measurements of patient safety culture and adverse events valid and reliable? Results from a cross sectional study.

    Science.gov (United States)

    Farup, Per G

    2015-05-02

The association between measurements of the patient safety culture and the "true" patient safety has been insufficiently documented, and the validity of the tools used for the measurements has been questioned. This study explored associations between the patient safety culture and adverse events, and evaluated the validity of the tools. In 2008/2009, a survey on patient safety culture was performed with the Hospital Survey on Patient Safety Culture (HSOPSC) in two medical departments in two geographically separated hospitals of Innlandet Hospital Trust. Later, a retrospective analysis of adverse events during the same period was performed with the Global Trigger Tool (GTT). The safety culture and adverse events were compared between the departments. In total, 185 employees participated in the study, and 272 patient records were analysed. The HSOPSC scores were lower and adverse events less prevalent in department 1 than in department 2. In departments 1 and 2 the mean HSOPSC scores (SD) at the unit level were 3.62 (0.42) and 3.90 (0.37) (p culture and adverse events. Until the criterion validity of the tools for measuring patient safety culture and tracking adverse events has been further evaluated, measurement of patient safety culture cannot be used as a proxy for the "true" safety.

  11. THE VALIDATION OF THE RESULTS OF MICROARRAY STUDIES OF ASSOCIATION BETWEEN GENE POLYMORPHISMS AND THE FREQUENCY OF RADIATION EXPOSURE MARKERS

    Directory of Open Access Journals (Sweden)

    M. V. Khalyuzova

    2014-01-01

The results from the selective validation research into the association between genetic polymorphisms and the frequency of cytogenetic abnormalities in a large independent sample are analyzed. These polymorphisms were identified previously in our own microarray studies, which showed an association with the frequency of dicentric and ring chromosomes induced by radiation exposure. The study was conducted among healthy employees of the Siberian Group of Chemical Enterprises (n = 573) exposed to occupational irradiation in a dose range of 40–400 mSv. We found that 5 SNPs were confirmed to be associated with the frequency of dicentric and ring chromosomes: INSR rs1051690, the insulin receptor gene; WRN rs2725349, the Werner syndrome (RecQ helicase-like) gene; VCAM1 rs1041163, the vascular cell adhesion molecule 1 gene; PCTP rs2114443, the phosphatidylcholine transfer protein gene; and TNKS rs7462102, the tankyrase (TRF1-interacting ankyrin-related ADP-ribose polymerase) gene. IGF1 rs2373721, the insulin-like growth factor 1 gene, was not confirmed to be associated with the frequency of dicentric and ring chromosomes.

  12. Presal36: a high resolution ocean current model for Brazilian pre-salt area: implementation and validation results

    Energy Technology Data Exchange (ETDEWEB)

    Schoellkopf, Jacques P. [Advanced Subsea do Brasil Ltda., Rio de Janeiro, RJ (Brazil)

    2012-07-01

The PRESAL 36 JIP is a project to develop a powerful ocean current model with 1/36-degree resolution, nested in an existing global ocean model, Mercator PSY4 (1/12-degree resolution), with tide corrections, improved bathymetry accuracy and high-frequency atmospheric forcing (every 3 hours). The simulation outputs are the three-dimensional structure of the velocity fields (u, v, w) at 50 vertical levels over the water column, including geostrophic, Ekman and tidal currents, together with temperature, salinity and sea surface height at sub-mesoscale spatial resolution. Simulations run in hindcast, nowcast and forecast modes, with a temporal resolution of 3 hours. This ocean current model allows detailed statistical studies of various areas under conditions analyzed in hindcast mode, as well as short-term operational condition prediction for various surface and subsea operations using real-time and forecast modes. The paper presents significant results of the project in terms of the implementation of the zoomed pre-salt model and the validation of the high-resolution model. It demonstrates the capability to properly describe ocean current phenomena beyond the mesoscale frontier, and the feasibility of obtaining accurate information for engineering studies and operational conditions based on a 'zoom technique' starting from global ocean models. (author)

  13. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods) and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  14. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    Science.gov (United States)

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamitsu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Internal Validity: A Must in Research Designs

    Science.gov (United States)

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  16. Construct Validity of Neuropsychological Tests in Schizophrenia.

    Science.gov (United States)

    Allen, Daniel N.; Aldarondo, Felito; Goldstein, Gerald; Huegel, Stephen G.; Gilbertson, Mark; van Kammen, Daniel P.

    1998-01-01

    The construct validity of neuropsychological tests in patients with schizophrenia was studied with 39 patients who were evaluated with a battery of six tests assessing attention, memory, and abstract reasoning abilities. Results support the construct validity of the neuropsychological tests in patients with schizophrenia. (SLD)

  17. The Treatment Validity of Autism Screening Instruments

    Science.gov (United States)

    Livanis, Andrew; Mouzakitis, Angela

    2010-01-01

    Treatment validity is a frequently neglected topic of screening instruments used to identify autism spectrum disorders. Treatment validity, however, should represent an important aspect of these instruments to link the resulting data to the selection of interventions as well as make decisions about treatment length and intensity. Research…

  18. P185-M Protein Identification and Validation of Results in Workflows that Integrate over Various Instruments, Datasets, Search Engines

    Science.gov (United States)

    Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.

    2007-01-01

    Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results on a greater number of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data to one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
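    The decoy-database strategy described in this abstract can be sketched in a few lines. This is an illustrative Python example, not the ProteinScape/ProteinExtractor implementation; the scores are hypothetical, and the logic is a simplified assumption: sort peptide-spectrum matches (PSMs) by score and accept them until the running decoy/target ratio would exceed the allowed false-positive rate.

```python
# Simplified target-decoy filtering at a fixed maximum false-positive rate.
# Scores and tuples are hypothetical; real pipelines use per-engine score models.

def filter_at_fdr(psms, max_fdr=0.05):
    """psms: list of (score, is_decoy) tuples; returns accepted target scores."""
    accepted, decoys, targets = [], 0, 0
    for score, is_decoy in sorted(psms, key=lambda p: p[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        # stop once the estimated FDR (decoy hits / target hits) is exceeded
        if targets and decoys / targets > max_fdr:
            break
        if not is_decoy:
            accepted.append(score)
    return accepted

psms = [(98, False), (95, False), (90, True), (88, False), (80, False),
        (75, True), (70, False)]
print(filter_at_fdr(psms))
```

Combining the eight peptide lists before filtering, as the study does, tightens the threshold relative to filtering each list separately, since decoys from every search count against the shared budget.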

  19. Accuracy of postpartum haemorrhage data in the 2011 Victorian Perinatal Data Collection: Results of a validation study.

    Science.gov (United States)

    Flood, Margaret; Pollock, Wendy; McDonald, Susan J; Davey, Mary-Ann

    2018-04-01

    The postpartum haemorrhage (PPH) rate in Victoria in 2009 for women having their first birth, based on information reported to the Victorian Perinatal Data Collection (VPDC), was 23.6% (primiparas). Prior to 2009 PPH was collected via a tick box item on the perinatal form. Estimated blood loss (EBL) volume is now collected and it is from this item the PPH rate is calculated. Periodic assessment of data accuracy is essential to inform clinicians and others who rely on these data of their quality and limitations. This paper describes the results of a state-wide validation study of the accuracy of EBL volume and EBL-related data items reported to VPDC. PPH data from a random sample of 1% of births in Victoria in 2011 were extracted from source medical records and compared with information submitted to the VPDC. Accuracy was determined, together with sensitivity, specificity, positive predictive value and negative predictive value for dichotomous items. Accuracy of reporting for EBL ≥ 500 mL was 97.2% and for EBL ≥ 1500 mL was 99.7%. Sensitivity for EBL ≥ 500 mL was 89.0% (CI 83.1-93.0) and for EBL ≥ 1500 mL was 71.4% (CI 35.9-91.8). Blood product transfusion, peripartum hysterectomy and procedures to control bleeding were all accurately reported in >99% of cases. Most PPH-related data items in the 2011 VPDC may be considered reliable. Our results suggest EBL ≥ 1500 mL is likely to be under-reported. Changes to policies and practices of recording blood loss could further increase accuracy of reporting. © 2017 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
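    The dichotomous-item accuracy measures named in this abstract (sensitivity, specificity, positive predictive value and negative predictive value) follow directly from a 2x2 comparison of the reported value against the source medical record. A minimal sketch, with hypothetical cell counts (the actual VPDC counts are not given in the abstract):

```python
# Accuracy metrics for a dichotomous data item validated against source records.
# tp/fp/fn/tn are hypothetical counts, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true cases that were reported
        "specificity": tn / (tn + fp),   # true non-cases reported as non-cases
        "ppv": tp / (tp + fp),           # reported cases that were true cases
        "npv": tn / (tn + fn),           # reported non-cases that were true
    }

# e.g. validating an "EBL >= 500 mL" flag against the medical record
m = diagnostic_metrics(tp=89, fp=4, fn=11, tn=650)
print({k: round(v, 3) for k, v in m.items()})
```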

  20. Validation of one-dimensional module of MARS 2.1 computer code by comparison with the RELAP5/MOD3.3 developmental assessment results

    International Nuclear Information System (INIS)

    Lee, Y. J.; Bae, S. W.; Chung, B. D.

    2003-02-01

    This report records the results of the code validation for the one-dimensional module of the MARS 2.1 thermal-hydraulics analysis code by means of result comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 code developmental assessment problems, which consist of 22 simulation problems in 3 categories, have been selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS 2.1 code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes have basically the same set of field equations, constitutive equations and main thermal-hydraulic models. The results suggest that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  1. Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE-2009: overview of campaign operations and results

    Directory of Open Access Journals (Sweden)

    T. Leblanc

    2011-12-01

    Full Text Available

    The Measurements of Humidity in the Atmosphere and Validation Experiment (MOHAVE-2009) campaign took place on 11–27 October 2009 at the JPL Table Mountain Facility (TMF) in California. The main objectives of the campaign were to (1) validate the water vapor measurements of several instruments, including three Raman lidars, two microwave radiometers, two Fourier-Transform spectrometers, and two GPS receivers (column water), (2) cover water vapor measurements from the ground to the mesopause without gaps, and (3) study upper tropospheric humidity variability at timescales varying from a few minutes to several days.

    A total of 58 radiosondes and 20 Frost-Point hygrometer sondes were launched. Two types of radiosondes were used during the campaign. Non-negligible differences in the readings between the two radiosonde types used (Vaisala RS92 and InterMet iMet-1) made a small but measurable impact on the derivation of water vapor mixing ratio by the Frost-Point hygrometers. As observed in previous campaigns, the RS92 humidity measurements remained within 5% of the Frost-Point in the lower and mid-troposphere, but were too dry in the upper troposphere.

    Over 270 h of water vapor measurements from three Raman lidars (JPL and GSFC) were compared to RS92, CFH, and NOAA-FPH. The JPL lidar profiles reached 20 km when integrated all night, and 15 km when integrated for 1 h. Excellent agreement between this lidar and the frost-point hygrometers was found throughout the measurement range, with only a 3% (0.3 ppmv) mean wet bias for the lidar in the upper troposphere and lower stratosphere (UTLS). The other two lidars provided satisfactory results in the lower and mid-troposphere (2–5% wet bias over the range 3–10 km), but suffered from contamination by fluorescence (wet bias ranging from 5 to 50% between 10 km and 15 km), preventing their use as an independent measurement in the UTLS.

    The comparison between all available stratospheric

  2. Validation of Autonomous Space Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — System validation addresses the question "Will the system do the right thing?" When system capability includes autonomy, the question becomes more pointed. As NASA...

  3. Magnetic Signature Analysis & Validation System

    National Research Council Canada - National Science Library

    Vliet, Scott

    2001-01-01

    The Magnetic Signature Analysis and Validation (MAGSAV) System is a mobile platform that is used to measure, record, and analyze the perturbations to the earth's ambient magnetic field caused by objects such as armored vehicles...

  4. Mercury and Cyanide Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  5. ICP-MS Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  6. Contextual Validity in Hybrid Logic

    DEFF Research Database (Denmark)

    Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin

    2013-01-01

    interpretations. Moreover, such indexicals give rise to a special kind of validity—contextual validity—that interacts with ordinary logical validity in interesting and often unexpected ways. In this paper we model these interactions by combining standard techniques from hybrid logic with insights from the work… of Hans Kamp and David Kaplan. We introduce a simple proof rule, which we call the Kamp Rule, and first we show that it is all we need to take us from logical validities involving now to contextual validities involving now too. We then go on to show that this deductive bridge is strong enough to carry us… to contextual validities involving yesterday, today and tomorrow as well…

  7. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    runs in real-time by assimilating weather data and uses Monte Carlo simulation techniques to manage the geotechnical and hydrological input parameters. In this context, an assessment of the factors controlling the geotechnical and hydrological features is crucial in order to understand the occurrence of slope instability mechanisms and to provide reliable forecasting of hydrogeological hazard occurrence, especially in relation to weather events. In particular, the model and the soil characterization were applied in back analysis in order to assess the reliability of the model through validation of the results against landslide events that occurred during the period. The validation was performed on four past events of intense rainfall that affected the Valle d'Aosta region between 2008 and 2010, triggering fast shallow landslides. The simulations show a substantial improvement in the reliability of the results compared to the use of literature parameters. A statistical analysis of the HIRESSS outputs in terms of failure probability has been carried out in order to define reliable alert levels for regional landslide early warning systems.

  8. MARS Validation Plan and Status

    International Nuclear Information System (INIS)

    Ahn, Seung-hoon; Cho, Yong-jin

    2008-01-01

    The KINS Reactor Thermal-hydraulic Analysis System (KINS-RETAS) under development is directed toward a realistic analysis approach of best-estimate (BE) codes and realistic assumptions. In this system, MARS is pivotal in providing the BE thermal-hydraulic (T-H) response of the core and reactor coolant system to various operational transients and accident conditions. As required for other BE codes, qualification is essential to ensure reliable and reasonable accuracy for a targeted MARS application. Validation is a key element of code qualification, and determines the capability of a computer code in predicting the major phenomena expected to occur. The MARS validation was made by its developer, KAERI, on the basic premise that its backbone code RELAP5/MOD3.2 is well qualified against analytical solutions and test or operational data. A screening was made to select the test data for MARS validation; some models transplanted from RELAP5, if already validated and found to be acceptable, were screened out from assessment. This seems reasonable, but it does not demonstrate whether code adequacy complies with the software QA guidelines. In particular, there may be considerable difficulty in validating life-cycle products such as code updates or modifications. This paper presents the plan for MARS validation and the current implementation status.

  9. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on radioactive high-level waste disposal and is generally understood to relate to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculation. The paper proposes either abandoning the use of this term or agreeing on a definition common to all. (J.S.). 29 refs

  10. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation, and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for local fragments, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the currently valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid-fragment template combining the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
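    One building block this abstract relies on, comparing a candidate fragment's color histogram against the template's, can be sketched with the Bhattacharyya coefficient. This is an illustrative example, not the authors' implementation; the histograms are made up, and a coefficient of 1.0 means identical normalized histograms.

```python
# Histogram similarity via the Bhattacharyya coefficient.
# An occluded fragment's histogram scores lower, so it can be excluded.
import math

def normalize(hist):
    total = sum(hist)
    return [h / total for h in hist]

def bhattacharyya(h1, h2):
    """Similarity between two histograms, in [0, 1] after normalization."""
    return sum(math.sqrt(a * b) for a, b in zip(normalize(h1), normalize(h2)))

template = [10, 40, 30, 20]   # hypothetical 4-bin color histogram
candidate = [12, 38, 28, 22]  # similar appearance -> high similarity
occluded = [45, 5, 5, 45]     # occluder changes colors -> low similarity
print(round(bhattacharyya(template, candidate), 3))
print(round(bhattacharyya(template, occluded), 3))
```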

  11. Development and Validation of a Scale to Measure Adolescent Sexual and Reproductive Health Stigma: Results From Young Women in Ghana

    Science.gov (United States)

    Hall, Kelli Stidham; Manu, Abubakar; Morhe, Emmanuel; Harris, Lisa H.; Loll, Dana; Ela, Elizabeth; Kolenic, Giselle; Dozier, Jessica L.; Challa, Sneha; Zochowski, Melissa K.; Boakye, Andrew; Adanu, Richard; Dalton, Vanessa K.

    2018-01-01

    Young women’s experiences with sexual and reproductive health (SRH) stigma may contribute to unintended pregnancy. Thus, stigma interventions and rigorous measures to assess their impact are needed. Based on formative work, we generated a pool of 51 items on perceived stigma around different dimensions of adolescent SRH and family planning (sex, contraception, pregnancy, child-bearing, abortion). We tested the items in a survey study of 1,080 women ages 15 to 24 recruited from schools, health facilities, and universities in Ghana. Confirmatory factor analysis (CFA) identified the most conceptually and statistically relevant scale, and multivariable regression established construct validity via associations between stigma and contraceptive use. CFA provided strong support for our hypothesized Adolescent SRH Stigma Scale, comprising internalized stigma (six items), enacted stigma (seven items), and stigmatizing lay attitudes (seven items). The scale demonstrated good internal consistency (α = 0.74) and strong subscale correlations (α = 0.82 to 0.93). Higher SRH stigma scores were inversely associated with ever having used modern contraception (adjusted odds ratio [AOR] = 0.96, confidence interval [CI] = 0.94 to 0.99, p value = 0.006). A valid, reliable instrument for assessing SRH stigma and its impact on family planning, the Adolescent SRH Stigma Scale can inform and evaluate interventions to reduce/manage stigma and foster resilience among young women in Africa and beyond. PMID:28266874
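    The internal-consistency statistic this abstract reports (Cronbach's α = 0.74) can be computed from the item-level responses. A minimal sketch with made-up Likert data (rows are respondents, columns are scale items); the study's actual response matrix is of course not reproduced here.

```python
# Cronbach's alpha from a respondents-by-items response matrix.

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    def var(xs):                          # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])  # variance of total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

data = [  # hypothetical 5 respondents x 4 items
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(data), 3))
```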

  12. Further validation of the peripheral artery questionnaire: results from a peripheral vascular surgery survey in the Netherlands.

    Science.gov (United States)

    Smolderen, K G; Hoeks, S E; Aquarius, A E; Scholte op Reimer, W J; Spertus, J A; van Urk, H; Denollet, J; Poldermans, D

    2008-11-01

    Peripheral arterial disease (PAD) is associated with adverse cardiovascular events and can significantly impair patients' health status. Recently, marked methodological improvements in the measurement of PAD patients' health status have been made. The Peripheral Artery Questionnaire (PAQ) was specifically developed for this purpose. We validated a Dutch version of the PAQ in a large sample of PAD patients. Cross-sectional study. The Dutch PAQ was completed by 465 PAD patients (70% men, mean age 65+/-10 years) participating in the Euro Heart Survey Programme. Principal components analysis and reliability analyses were performed. Convergent validity was documented by comparing the PAQ with EQ-5D scales. Three factors were discerned: Physical Function, Perceived Disability, and Treatment Satisfaction (factor loadings between 0.50 and 0.90). Cronbach's alpha values were excellent (mean alpha=0.94). Shared variance of the PAQ domains with EQ-5D scales ranged from 3 to 50%. The Dutch PAQ proved to have good measurement qualities; assessment of Physical Function, Perceived Disability, and Treatment Satisfaction facilitates the monitoring of patients' perceived health in clinical research and practice. Measuring disease-specific health status in a reliable way becomes essential at a time when a wide array of treatment options is available for PAD patients.

  13. Validation of activity determination codes and nuclide vectors by using results from processing of retired components and operational waste

    International Nuclear Information System (INIS)

    Lundgren, Klas; Larsson, Arne

    2012-01-01

    Decommissioning studies for nuclear power reactors are performed in order to assess decommissioning costs and waste volumes, as well as to provide data for the licensing and construction of the LILW repositories. An important part of this work is to estimate the amount of radioactivity in the different types of decommissioning waste. Studsvik ALARA Engineering has performed such assessments for LWRs and other nuclear facilities in Sweden. These assessments depend to a large extent on calculations, senior experience and sampling at the facilities. The precision of the calculations has been found to be relatively high close to the reactor core; for natural reasons, the precision declines with distance. Even where the activity values are lower, the content of hard-to-measure nuclides can cause problems in the long-term safety demonstration of LLW repositories. At the same time, Studsvik is processing significant volumes of metallic and combustible waste from power stations in operation and in the decommissioning phase, as well as from other nuclear facilities such as research and waste treatment facilities. By combining its unique knowledge in the assessment of radioactivity inventories with the large data bank that waste processing represents, the activity determination codes can be validated and the waste processing analysis supported with additional data. The intention of this presentation is to highlight how the European nuclear industry could jointly use waste processing data for validation of activity determination codes. (authors)

  14. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors, aligned with the EURO-CORDEX experiment, and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1, consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
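    The cross-validation design described in this abstract, consecutive 6-year folds over 1979–2008, can be sketched as a blocked k-fold splitter. This is a schematic illustration of the experimental design only, not VALUE's actual tooling:

```python
# Blocked 5-fold cross-validation: each consecutive 6-year period is held out
# once for validation while the method is calibrated on the remaining years.

def consecutive_folds(start_year, end_year, n_folds):
    years = list(range(start_year, end_year + 1))
    size = len(years) // n_folds
    for i in range(n_folds):
        test = years[i * size:(i + 1) * size]          # one consecutive block
        train = [y for y in years if y not in test]    # everything else
        yield train, test

for train, test in consecutive_folds(1979, 2008, 5):
    print(f"calibrate on {len(train)} years, validate on {test[0]}-{test[-1]}")
```

Using consecutive blocks rather than random years keeps the held-out period temporally coherent, which matters when downscaled series are evaluated on temporal aspects.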

  15. MODIS Hotspot Validation over Thailand

    Directory of Open Access Journals (Sweden)

    Veerachai Tanpipat

    2009-11-01

    Full Text Available

    To ensure the quality and precision of remote sensing MODIS hotspot products (also known as active fire products, or hotspots) for forest fire control and management in Thailand, an increased level of confidence is needed. An accuracy assessment of MODIS hotspots utilizing field survey data validation is described. A quantitative evaluation of MODIS hotspot products has been carried out since the 2007 forest fire season. The carefully chosen hotspots were scattered throughout the country and within the protected areas of the National Parks and Wildlife Sanctuaries. Three areas were selected as test sites for validation guidelines. Both ground and aerial field surveys were conducted in this study by the Forest Fire Control Division, National Park, Wildlife and Plant Conservation Department, Ministry of Natural Resources and Environment, Thailand. High accuracies of 91.84%, 95.60% and 97.53% were observed for the 2007, 2008 and 2009 fire seasons, resulting in increased confidence in the use of MODIS hotspots for forest fire control and management in Thailand.

  16. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

    The ASTEC code, developed by IRSN and GRS, is aimed at providing an integral code for the simulation of the whole course of severe accidents in light-water reactors. ASTEC is a complex system of codes for reactor safety assessment. In this validation, only the thermal-hydraulic module of the ASTEC code is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental program, investigating the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The obtained numerical results are analyzed and compared to the experiments. The comparison shows good agreement between experiments and calculations. (author)

  17. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  18. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; and evaluation of the level of achievement of data quality objectives, based in part on analysis of the PARCC parameters and the expected applications of the data. A program utilizing a matrix association of required levels of validation effort and analytical levels versus applications of the environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here.

  19. Site characterization and validation - Inflow to the validation drift

    International Nuclear Information System (INIS)

    Harding, W.G.C.; Black, J.H.

    1992-01-01

    Hydrogeological experiments have had an essential role in the characterization of the drift site on the Stripa project. This report focuses on the methods employed and the results obtained from inflow experiments performed on the excavated drift in stage 5 of the SCV programme. Inflows were collected in sumps on the floor, in plastic sheeting on the upper walls and ceiling, and measured by means of differential humidity of ventilated air at the bulkhead. Detailed evaporation experiments were also undertaken on uncovered areas of the excavated drift. The inflow distribution was determined on the basis of a system of roughly equal-sized grid rectangles. The results have highlighted the overriding importance of fractures in the supply of water to the drift site. The validation drift experiment has revealed that in excess of 99% of inflow comes from a 5 m section corresponding to the 'H' zone, and that as much as 57% was observed coming from a single grid square (267). There was considerable heterogeneity even within the 'H' zone, with 38% of such sample areas yielding no flow at all. Model predictions in stage 4 underestimated the very substantial declines in inflow observed in the validation drift when compared to the SDE; this was especially so in the 'good' rock areas. Increased drawdowns in the drift have generated less flow and reduced head responses in nearby boreholes by a similar proportion. This behaviour has been the focus for considerable study in the latter part of the SCV project, and a number of potential processes have been proposed. These include 'transience', stress redistribution resulting from the creation of the drift, chemical precipitation, blast-induced dynamic unloading and related gas intrusion, and degassing. (au)

  20. Work limitations among working persons with rheumatoid arthritis: results, reliability, and validity of the work limitations questionnaire in 836 patients.

    Science.gov (United States)

    Walker, Nancy; Michaud, Kaleb; Wolfe, Frederick

    2005-06-01

    To describe workplace limitations and the validity and reliability of the Work Limitations Questionnaire (WLQ) in persons with rheumatoid arthritis (RA). A total of 836 employed persons with RA reported clinical and work-related measures and completed the WLQ, a 25-item questionnaire that assesses the impact of chronic health conditions on job performance and productivity. Limitations are categorized into 4 domains: physical demands (PDS), mental demands (MDS), time management demands (TMS), and output demands (ODS), which are then used to calculate the WLQ index. Of the 836 completed WLQs, about 10% (85) could not be scored, as more than half the items in each domain were not applicable to the patient's job. Demographic and clinical variables were associated with missing WLQ scores, including older age (OR 1.7, 95% CI 1.3-2.1), male sex (OR 1.9, 95% CI 1.2-3.0), and Health Assessment Questionnaire (HAQ) scores (OR 1.4, 95% CI 1.0-2.0). Work limitations were present in all work domains: PDS (27.5%), MDS (15.7%), ODS (19.4%), and TMS (28.6%), resulting in a mean WLQ index of 5.9 (SD 5.6), which corresponds to a 4.9% decrease in productivity and a 5.1% increase in work hours to compensate for productivity loss. The WLQ index was inversely associated with the Medical Outcomes Study Short Form 36 (SF-36) Mental Component Score (MCS; r = -0.60) and Physical Component Score (PCS; r = -0.49). Fatigue (0.5), pain (0.46), and HAQ (0.56) were also significantly associated with the WLQ index. Weaker associations were seen with days unable to perform (0.29), days activities cut down (0.38), and annual income (-0.10). The WLQ is a reliable tool for assessing work productivity. However, persons with RA tend to select jobs that they can do with their RA limitations, with the result that the WLQ does not detect functional limitations as well as the HAQ and SF-36.
    The WLQ provides special information that is not available using conventional measures of assessment, and can provide helpful
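
The "not applicable" scoring rule described above (a domain cannot be scored when more than half of its items do not apply to the patient's job) can be sketched as follows. This is an illustrative reconstruction, not the licensed WLQ algorithm: the item values, domain names, and the simple mean aggregation into an index are all assumptions made for the example.

```python
# Sketch of the missing-data rule: a domain is unscorable when more
# than half of its items are "not applicable" (represented as None).

def score_domain(responses):
    """Mean of the applicable items, or None if more than half
    of the items are not applicable."""
    applicable = [r for r in responses if r is not None]
    if 2 * len(applicable) < len(responses):
        return None
    return sum(applicable) / len(applicable)

def score_questionnaire(domains):
    """Per-domain scores plus a simple mean index (hypothetical
    aggregation; the index is None if any domain is unscorable)."""
    scores = {name: score_domain(items) for name, items in domains.items()}
    index = (None if any(s is None for s in scores.values())
             else sum(scores.values()) / len(scores))
    return scores, index

patient = {
    "physical": [2, 3, None, 2, 1, 2],          # 1 of 6 not applicable -> scorable
    "mental":   [1, None, None, None, None, 0], # 4 of 6 not applicable -> unscorable
}
scores, index = score_questionnaire(patient)
```

With this rule, a patient whose job makes most items in any one domain inapplicable contributes no overall index, which is exactly how the 85 unscorable questionnaires above arise.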

  1. The validation of an infrared simulation system

    CSIR Research Space (South Africa)

    De Waal, A

    2013-08-01

    Full Text Available theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques...

  2. The fish sexual development test: an OECD test guideline proposal with possible relevance for environmental risk assessment. Results from the validation programme

    DEFF Research Database (Denmark)

    Holbech, Henrik; Brande-Lavridsen, Nanna; Kinnberg, Karin Lund

    2010-01-01

    The Fish Sexual Development Test (FSDT) has gone through two validations as an OECD test guideline for the detection of endocrine active chemicals with different modes of action. The validation has been finalized on four species: Zebrafish (Danio rerio), Japanese medaka (Oryzias latipes), three s... for histology. For all three methods, the fish parts were numbered and histology could therefore be linked to the vitellogenin concentration in individual fish. The two core endocrine-relevant endpoints were vitellogenin concentration and phenotypic sex ratio. Change in the sex ratio is presented... as a population-relevant endpoint, and the results of the two validation rounds will be discussed in relation to environmental risk assessment and species selection...

  3. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)

  4. Validation of satellite SAR offshore wind speed maps to in-situ data, microscale and mesoscale model results

    DEFF Research Database (Denmark)

    Hasager, C.B.; Astrup, Poul; Barthelmie, R.J.

    2002-01-01

    A validation study has been performed in order to investigate the precision and accuracy of the satellite-derived ERS-2 SAR wind products in offshore regions. The overall project goal is to develop a method for utilizing the satellite wind speed maps for offshore wind resources, e.g. in future... the assumption of no error in the SAR wind speed maps and for an uncertainty of ±10% at a confidence level of 90%. Around 100 satellite SAR scenes may be available for some sites on Earth but far fewer at other sites. Currently the number of available satellite SAR scenes is increasing rapidly with ERS-2, RADARSAT... band in which the SAR wind speed observations have a strong negative bias. The bathymetry of Horns Rev combined with tidal currents gives rise to bias in the SAR wind speed maps near areas of shallow, complex bottom topography in some cases. A total of 16 cases were analyzed for Horns Rev. For Maddalena...

  5. Data Quality in Institutional Arthroplasty Registries: Description of a Model of Validation and Report of Preliminary Results.

    Science.gov (United States)

    Bautista, Maria P; Bonilla, Guillermo A; Mieth, Klaus W; Llinás, Adolfo M; Rodríguez, Fernanda; Cárdenas, Laura L

    2017-07-01

    Arthroplasty registries are a relevant source of information for research and quality improvement in patient care, and their value depends on the quality of the recorded data. The purpose of this study is to describe a model of validation and present the findings of validation of an Institutional Arthroplasty Registry (IAR). Information from 209 primary arthroplasties and revision surgeries of the hip, knee, and shoulder recorded in the IAR between March and September 2015 was analyzed in the following domains: adherence, defined as the proportion of patients included in the registry; completeness, defined as the proportion of data effectively recorded; and accuracy, defined as the proportion of data consistent with medical records. A random sample of 53 patients (25.4%) was selected to assess the latter 2 domains. A direct comparison between the registry's database and medical records was performed. In total, 324 variables containing information on demographic data, surgical procedure, clinical outcomes, and key performance indicators were analyzed. Two hundred nine of 212 patients who underwent surgery during the study period were included in the registry, accounting for an adherence of 98.6%. Completeness was 91.7% and accuracy was 85.8%. Most errors were found in the preoperative range of motion and in the timely administration of prophylactic antibiotics and thromboprophylaxis. This model provides useful information regarding the quality of the recorded data, since it identified deficient areas within the IAR. We recommend that institutional arthroplasty registries be constantly monitored for data quality before their information is used for research or quality improvement purposes. Copyright © 2017 Elsevier Inc. All rights reserved.
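
The three data-quality domains defined above reduce to simple proportions. The sketch below is a hedged illustration: the patient counts (209 of 212) come from the abstract, but the field-level denominators are invented purely so the example runs end to end.

```python
# Hedged sketch of the registry data-quality metrics: adherence,
# completeness, and accuracy, each expressed as a percentage.

def pct(part, whole):
    """Proportion as a percentage, rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

def data_quality(eligible, registered,
                 expected_fields, recorded_fields,
                 audited_fields, consistent_fields):
    adherence = pct(registered, eligible)                 # patients captured
    completeness = pct(recorded_fields, expected_fields)  # fields filled in
    accuracy = pct(consistent_fields, audited_fields)     # fields matching charts
    return adherence, completeness, accuracy

adherence, completeness, accuracy = data_quality(
    eligible=212, registered=209,               # counts from the abstract
    expected_fields=1000, recorded_fields=917,  # invented denominators
    audited_fields=500, consistent_fields=429)  # invented denominators
```

Monitoring is then a matter of recomputing these three numbers on each audit cycle and flagging domains that fall below a chosen threshold.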

  6. Development and Validation of a Scale to Measure Adolescent Sexual and Reproductive Health Stigma: Results From Young Women in Ghana.

    Science.gov (United States)

    Hall, Kelli Stidham; Manu, Abubakar; Morhe, Emmanuel; Harris, Lisa H; Loll, Dana; Ela, Elizabeth; Kolenic, Giselle; Dozier, Jessica L; Challa, Sneha; Zochowski, Melissa K; Boakye, Andrew; Adanu, Richard; Dalton, Vanessa K

    2018-01-01

    Young women's experiences with sexual and reproductive health (SRH) stigma may contribute to unintended pregnancy. Thus, stigma interventions and rigorous measures to assess their impact are needed. Based on formative work, we generated a pool of 51 items on perceived stigma around different dimensions of adolescent SRH and family planning (sex, contraception, pregnancy, childbearing, abortion). We tested items in a survey study of 1,080 women ages 15 to 24 recruited from schools, health facilities, and universities in Ghana. Confirmatory factor analysis (CFA) identified the most conceptually and statistically relevant scale, and multivariable regression established construct validity via associations between stigma and contraceptive use. CFA provided strong support for our hypothesized Adolescent SRH Stigma Scale (chi-square p value < 0.001; root mean square error of approximation [RMSEA] = 0.07; standardized root mean square residual [SRMR] = 0.06). The final 20-item scale included three subscales: internalized stigma (six items), enacted stigma (seven items), and stigmatizing lay attitudes (seven items). The scale demonstrated good internal consistency (α = 0.74) and strong subscale correlations (α = 0.82 to 0.93). Higher SRH stigma scores were inversely associated with ever having used modern contraception (adjusted odds ratio [AOR] = 0.96, confidence interval [CI] = 0.94 to 0.99, p value = 0.006). A valid, reliable instrument for assessing SRH stigma and its impact on family planning, the Adolescent SRH Stigma Scale can inform and evaluate interventions to reduce/manage stigma and foster resilience among young women in Africa and beyond.
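
The internal-consistency figure reported above (α = 0.74) is Cronbach's alpha. A minimal sketch of the standard formula follows; the item data are synthetic, not from the study.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item
    (all lists the same length)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly parallel items -> alpha = 1.0 by construction;
# adding item noise pulls alpha below 1.
perfect = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
noisy = [[1, 2, 3, 4], [2, 2, 3, 3], [1, 3, 2, 4]]
```

The same computation, run per subscale, yields the subscale consistencies (α = 0.82 to 0.93) quoted above.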

  7. Instrument validation project

    International Nuclear Information System (INIS)

    Reynolds, B.A.; Daymo, E.A.; Geeting, J.G.H.; Zhang, J.

    1996-06-01

    Westinghouse Hanford Company Project W-211 is responsible for providing the system capabilities to remove radioactive waste from ten double-shell tanks used to store radioactive wastes on the Hanford Site in Richland, Washington. The project is also responsible for measuring tank waste slurry properties prior to injection into pipeline systems, including the Replacement of Cross-Site Transfer System. This report summarizes studies of the appropriateness of the instrumentation specified for use in Project W-211. The instruments were evaluated in a test loop with simulated slurries that covered the range of properties specified in the functional design criteria. The results of the study indicate that the compact nature of the baseline Project W-211 loop does not reduce instrumental accuracy through poor flow-profile development. Of the baseline instrumentation, the Micromotion densimeter, the Moore Industries thermocouple, the Fischer and Porter magnetic flow meter, and the Red Valve pressure transducer meet the desired instrumental accuracy. An alternate magnetic flow meter (Yokogawa) gave results nearly identical to those of the baseline Fischer and Porter. The Micromotion flow meter did not meet the desired instrument accuracy but could potentially be calibrated so that it would meet the criteria. The Nametre on-line viscometer did not meet the desired instrumental accuracy and is not recommended as a quantitative instrument, although it does provide qualitative information. The recommended minimum set of instrumentation necessary to ensure the slurry meets the Project W-058 acceptance criteria is the Micromotion mass flow meter and delta pressure cells.

  8. Rapid Robot Design Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies will create a comprehensive software infrastructure for rapid validation of robot designs. The software will support push-button validation...

  9. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This will be a living document that tracks CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  10. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program : validation of results in a large multireader, multicase study

    NARCIS (Netherlands)

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-01-01

    OBJECTIVES: Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience.

  11. Results of the investigation on validity of Japanese seismic design guidelines of nuclear facilities, based on the 1995 Hyogoken-Nanbu Earthquake

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1997-01-01

    This paper describes the reviewed results and main discussions on some items thought to be problems in the 'Examination Guide for Aseismatic Design of the Nuclear Power Reactor Facilities' of Japan, based on knowledge from the 1995 Hyogoken-Nanbu Earthquake, and the conclusion that validity of the Guideline was confirmed. (J.P.N.)

  12. Results of the investigation on validity of Japanese seismic design guidelines of nuclear facilities, based on the 1995 Hyogoken-Nanbu Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Watabe, Makoto [Keio Univ., Fujisawa, Kanagawa (Japan). Fac. of Environment and Information Engineering

    1997-03-01

    This paper describes the reviewed results and main discussions on some items thought to be problems in the 'Examination Guide for Aseismatic Design of the Nuclear Power Reactor Facilities' of Japan, based on knowledge from the 1995 Hyogoken-Nanbu Earthquake, and the conclusion that validity of the Guideline was confirmed. (J.P.N.)

  13. Convergent Validity of the PUTS

    Directory of Open Access Journals (Sweden)

    Valerie Cathérine Brandt

    2016-04-01

    Full Text Available Premonitory urges are a cardinal feature in Gilles de la Tourette syndrome. Severity of premonitory urges can be assessed with the Premonitory Urge for Tic Disorders Scale (PUTS). However, convergent validity of the measure has been difficult to assess due to the lack of other urge measures. We investigated the relationship between average real-time urge intensity, assessed by an in-house developed real-time urge monitor measuring urge intensity continuously for 5 min on a visual analogue scale, and general urge intensity assessed by the PUTS in 22 adult Tourette patients (mean age 29.8 ± 10.3; 19 male). Additionally, underlying factors of premonitory urges assessed by the PUTS were investigated in the adult sample using factor analysis and were replicated in 40 children and adolescents diagnosed with Tourette syndrome (mean age 12.05 ± 2.83; 31 male). Cronbach's alpha for the PUTS10 was acceptable (α = .79) in the adult sample. Convergent validity between average real-time urge intensity scores (as assessed with the real-time urge monitor) and the 10-item version of the PUTS (r = .64) and the 9-item version of the PUTS (r = .66) was good. A factor analysis including the 10 items of the PUTS and average real-time urge intensity scores revealed three factors. One factor included the average real-time urge intensity score and appeared to measure urge intensity, while the other two factors can be assumed to reflect the (sensory) quality of urges and subjective control, respectively. The factor structure of the 10 PUTS items alone was replicated in a sample of children and adolescents. The results indicate that convergent validity between the PUTS and the real-time urge assessment monitor is good. Furthermore, the results suggest that the PUTS might assess more than one dimension of urges, and it may be worthwhile developing different sub-scales of the PUTS assessing premonitory urges in terms of intensity and quality, as well as subjectively
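
The convergent-validity figures above (r = .64, r = .66) are Pearson correlations between two measures of the same construct. A minimal sketch of the standard formula follows; the paired scores below are synthetic stand-ins, not data from the 22-patient sample.

```python
# Pearson's r between two urge measures (synthetic example data).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

puts_total    = [18, 25, 31, 22, 27, 35, 20, 29]          # hypothetical PUTS sums
realtime_mean = [2.1, 3.0, 4.2, 2.5, 3.4, 4.8, 2.2, 3.9]  # hypothetical monitor means
r = pearson_r(puts_total, realtime_mean)
```

A high r between the questionnaire total and the independent real-time monitor is precisely what "good convergent validity" means here.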

  14. All Validity Is Construct Validity. Or Is It?

    Science.gov (United States)

    Kane, Michael

    2012-01-01

    Paul E. Newton's article on the consensus definition of validity tackles a number of big issues and makes a number of strong claims. I agreed with much of what he said, and I disagreed with a number of his claims, but I found his article to be consistently interesting and thought provoking (whether I agreed or not). I will focus on three general…

  15. Empirical Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA), Task 34/43: Testing and validation of building energy simulation tools experts group.

  16. Turbine-99 unsteady simulations - Validation

    International Nuclear Information System (INIS)

    Cervantes, M J; Andersson, U; Loevgren, H M

    2010-01-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations in a Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angular-resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  17. Ultrasonic techniques validation on shell

    International Nuclear Information System (INIS)

    Navarro, J.; Gonzalez, E.

    1998-01-01

    Due to the results obtained in several international RRTs during the 1980s, it has been necessary to prove the effectiveness of the NDT techniques. For this reason it has been imperative to verify the goodness of the inspection procedure on different mock-ups, representative of the inspection area and containing real defects. Prior to the revision of the inspection procedure, and with the aim of updating the techniques used, it is good practice to perform different scans on the mock-ups until validation is achieved. It is at this point where all the parameters of the inspection at hand are defined: transducer, step, scan direction, ... and, what is more important, it will be demonstrated that the technique to be used for the area requiring inspection is suitable to evaluate the degradation phenomena that could appear. (Author)

  18. Turbine-99 unsteady simulations - Validation

    Science.gov (United States)

    Cervantes, M. J.; Andersson, U.; Lövgren, H. M.

    2010-08-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations in a Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angular-resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  19. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  20. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
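
Spurious-vector removal of the kind listed under capability (1) is commonly done with a normalized median test over each vector's 3x3 neighbourhood, in the spirit of Westerweel and Scarano's universal outlier detection. The sketch below is a generic illustration, not code from the package above, and the threshold and epsilon values are typical choices rather than the package's settings.

```python
# Normalized median test: flag a vector as spurious when its residual
# against the neighbourhood median, normalized by the median residual
# of the neighbours, exceeds a threshold.

def normalized_median_test(field, threshold=2.0, eps=0.1):
    """field: 2D list of one velocity component. Returns a same-size
    grid of booleans, True where the vector is flagged as spurious."""
    rows, cols = len(field), len(field[0])

    def median(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

    flags = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neigh = [field[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))
                     if (a, b) != (i, j)]
            med = median(neigh)
            resid = median([abs(v - med) for v in neigh])
            flags[i][j] = abs(field[i][j] - med) / (resid + eps) > threshold
    return flags

u = [[1.0, 1.1, 1.0],
     [0.9, 9.0, 1.0],   # centre vector is an obvious outlier
     [1.0, 1.0, 1.1]]
flags = normalized_median_test(u)
```

Flagged vectors would then be replaced by interpolation from valid neighbours, which corresponds to the smoothing and interpolation step in capability (2).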

  1. IMRT plan validation

    International Nuclear Information System (INIS)

    Mijnheer, Ben

    2008-01-01

    The lecture encompassed the following topics: Utility of radiographic and radiochromic film dosimetry; Diode and chamber arrays; 3D gel dosimetry; 4D dosimetry; Experimental design for dosimetry; In vivo measurements. and Portal dosimetry. In conclusion, the following pitfalls, potential errors and possible actions are pointed to: (i) Lacking algorithm in the TPS for tongue-and-groove effect. Action: Design and verify a new plan in which the tongue-and-groove effect plays a minor role. Discuss the issue with the TPS manufacturer. (ii) Systematic deviations between TPS calculations and ionisation chamber measurements at the isocentre for plans with many small segments due to uncertainties in the output factor calculation. Action: Rescale the number of MUs. Discuss the issue with the TPS manufacturer. (iii) Large regions with gamma values larger than one during repeated film measurements, while ionisation chamber measurements are correct. Action: Check if the film batch is not expired and if so repeat the measurement with a new batch. (iv) Missing significant errors, e.g., resulting from MLC displacements, due to the limited resolution of the measuring device. Action: Move the device in different directions and repeat the measurement. (v) Missing errors at other parts of the PTV or in OARs by performing only one ionisation chamber measurement or an independent MU calculation at a point. Action: Perform also measurements in a plane for representative clinical cases. (vi) Wrong parameter in the TPS for the definition of leaf position. Action: Understand and verify the definition of leaf position in your TPS. (P.A.)

  2. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO 2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  3. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  4. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case. Empirical validation is residual in nature, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model, and it can furthermore be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis: it can be performed with DSA, differential sensitivity analysis, or with MCSA, Monte Carlo sensitivity analysis. Searching for the optimal domains of the input parameters: a procedure based on Monte Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been carried out in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model for buildings, Esp., is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT. (Author) 17 refs
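
The residual-analysis step in the time domain can be illustrated with a lag-k autocorrelation of simulated-minus-measured residuals: near-white residuals support the model, while strong autocorrelation points to a systematic modelling failure. The function name and the temperature series below are invented for the example; this is not the cited methodology's code.

```python
# Lag-k autocorrelation of a residual series (simulated - measured).

def autocorr(res, lag):
    """Lag-k autocorrelation, normalized by the lag-0 variance."""
    n = len(res)
    m = sum(res) / n
    c0 = sum((r - m) ** 2 for r in res) / n
    ck = sum((res[i] - m) * (res[i + lag] - m) for i in range(n - lag)) / n
    return ck / c0

measured  = [20.0, 20.5, 21.0, 21.4, 21.6, 21.5, 21.2, 20.8]  # e.g. test-cell temperature
simulated = [20.1, 20.4, 21.2, 21.3, 21.7, 21.4, 21.3, 20.7]
residuals = [s - m for s, m in zip(simulated, measured)]
rho1 = autocorr(residuals, 1)
```

The frequency-domain counterpart would apply a spectral estimate to the same residual series and look for peaks at physically meaningful frequencies (e.g. the daily cycle in a building test cell).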

  5. Validity of Management Control Topoi

    DEFF Research Database (Denmark)

    Nørreklit, Lennart; Nørreklit, Hanne; Israelsen, Poul

    2004-01-01

    The validity of research and company topoi for constructing/analyzing reality is analyzed as the integration of four aspects (dimensions): fact, possibility (logic), value, and communication. Mainstream, agency theory and social constructivism are criticized for reductivism (incomplete integrat...

  6. DTU PMU Laboratory Development - Testing and Validation

    OpenAIRE

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.; Nielsen, Arne Hejde; Østergaard, Jacob

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested from the authors' previous efforts, where the response can be expected to foll...

  7. A Complete Reporting of MCNP6 Validation Results for Electron Energy Deposition in Single-Layer Extended Media for Source Energies <= 1-MeV

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hughes, Henry Grady [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-04

    In this paper, we expand on previous validation work by Dixon and Hughes. That is, we present a more complete suite of validation results with respect to the well-known Lockwood energy deposition experiment. Lockwood et al. measured energy deposition in materials including beryllium, carbon, aluminum, iron, copper, molybdenum, tantalum, and uranium, for both single- and multi-layer 1-D geometries. Source configurations included mono-energetic, mono-directional electron beams with energies of 0.05-MeV, 0.1-MeV, 0.3-MeV, 0.5-MeV, and 1-MeV, in both normal and off-normal angles of incidence. These experiments are particularly valuable for validating electron transport codes, because they are closely represented by simulating pencil beams incident on 1-D semi-infinite slabs with and without material interfaces. Herein, we include total energy deposition and energy deposition profiles for the single-layer experiments reported by Lockwood et al. (a more complete multi-layer validation will follow in another report).

  8. [MusiQol: international questionnaire investigating quality of life in multiple sclerosis: validation results for the German subpopulation in an international comparison].

    Science.gov (United States)

    Flachenecker, P; Vogel, U; Simeoni, M C; Auquier, P; Rieckmann, P

    2011-10-01

    The existing health-related quality of life questionnaires on multiple sclerosis (MS) only partially reflect the patient's point of view on the reduction of activities of daily living, and their development and validation were not performed in different languages. This prompted the development of the Multiple Sclerosis International Quality of Life (MusiQoL) questionnaire as an international multidimensional measurement instrument. This paper presents this new development and the results of the German subgroup versus the total international sample. A total of 1,992 MS patients from 15 countries, including 209 German patients, took part in the study between January 2004 and February 2005. The patients took the MusiQoL survey at baseline and at 21 ± 7 days, and also completed a symptom-related checklist and the SF-36 short-form survey. Demographic, history and MS classification data were also collected. Reproducibility, sensitivity, and convergent and discriminant validity were analysed. Convergent and discriminant validity and reproducibility were satisfactory for all dimensions of the MusiQoL. The dimensional scores correlated moderately but significantly with the SF-36 scores, but showed a discriminant validity in terms of gender, socioeconomic status and health status that was more pronounced in the overall population than in the German subpopulation. The highest correlations were observed between the MusiQoL dimension of activities of daily living and the Expanded Disability Status Scale (EDSS). The results of this study confirm the validity and reliability of the MusiQoL as an instrument for measuring the quality of life of German and international MS patients.

  9. Regulatory perspectives on human factors validation

    International Nuclear Information System (INIS)

    Harrison, F.; Staples, L.

    2001-01-01

    Validation is an important avenue for controlling the genesis of human error, and thus managing loss, in a human-machine system. Since there are many ways in which error may intrude upon system operation, it is necessary to consider the performance-shaping factors that could introduce error and compromise system effectiveness. Validation works to this end by examining, through objective testing and measurement, the newly developed system, procedure or staffing level, in order to identify and eliminate those factors which may negatively influence human performance. It is essential that validation be done in a high-fidelity setting, in an objective and systematic manner, using appropriate measures, if meaningful results are to be obtained. In addition, inclusion of validation work in any design process can be seen as contributing to a good safety culture, since such activity allows licensees to eliminate elements which may negatively impact on human behaviour. (author)

  10. [Validation of the IBS-SSS].

    Science.gov (United States)

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain, associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of severity of the disease. One of these is the irritable bowel syndrome severity scoring system (IBS-SSS). The IBS-SSS was validated in 1997 in its English version. In the present study, the IBS-SSS has been validated in German. To do this, a cohort of 60 patients with IBS according to the Rome III criteria was compared with a control group of healthy individuals (n = 38). We studied sensitivity and reproducibility of the score, as well as its sensitivity to detect changes of symptom severity. The results of the German validation largely reflect the results of the English validation. The German version of the IBS-SSS is also a valid, meaningful and reproducible questionnaire with a high sensitivity to assess changes in symptom severity, especially in IBS patients with moderate symptoms. It is unclear whether the IBS-SSS is also a valid questionnaire in IBS patients with severe symptoms, because this group of patients was not studied. © Georg Thieme Verlag KG Stuttgart · New York.

  11. An information architecture for validating courseware

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with the courseware specif...

  12. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  13. Construct Validity: Advances in Theory and Methodology

    OpenAIRE

    Strauss, Milton E.; Smith, Gregory T.

    2009-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review f...

  14. Validation of WIMS-AECL/(MULTICELL)/RFSP system by the results of phase-B test at Wolsung-II unit

    Energy Technology Data Exchange (ETDEWEB)

    Hong, In Seob; Min, Byung Joo; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

    The objective of this study is the validation of the WIMS-AECL lattice code, which has been proposed as a substitute for the POWDERPUFS-V (PPV) code. For the validation of this code, the WIMS-AECL/(MULTICELL)/RFSP (lattice calculation/(incremental cross-section calculation)/core calculation) code system has been used for the post-simulation of the Phase-B physics tests at Wolsong unit 2. This code system had previously been used for the Wolsong-1 and Point Lepreau reactors, but after a few modifications of the WIMS-AECL input values for Wolsong-2, the results of the WIMS-AECL/RFSP code calculations are much improved over the old ones. Most of the results show good estimation except for the moderator temperature coefficient test; verification of this result remains as further work. 6 figs., 15 tabs. (Author)

  15. Validation of a new bedside echoscopic heart examination resulting in an improvement in echo-lab workflow.

    Science.gov (United States)

    Réant, Patricia; Dijos, Marina; Arsac, Florence; Mignot, Aude; Cadenaule, Fabienne; Aumiaux, Annette; Jimenez, Christine; Dufau, Marilyne; Prévost, Alain; Pillois, Xavier; Fort, Patrick; Roudaut, Raymond; Lafitte, Stéphane

    2011-03-01

    In daily cardiology practice, porters are usually required to transfer inpatients who need an echocardiogram to the echocardiographic department (echo-lab). The aim was to assess echo-lab personnel workflow and patient transfer delay by comparing the use of a new, ultraportable, echoscopic, pocket-sized device at the bedside with patient transfer to the echo-lab for conventional transthoracic echocardiography, in patients needing pericardial control after cardiac invasive procedures. After validation of echoscopic capabilities for pericardial effusion, left ventricular function and mitral regurgitation grade compared with conventional echocardiography, we evaluated echo-lab personnel workflow and time to perform bedside echoscopy for pericardial control evaluation after invasive cardiac procedures. This strategy was compared with conventional evaluation at the echo-lab, in terms of personnel workflow, and patients' transfer, waiting and examination times. Concordance between echoscopy and conventional echocardiography for evaluation of pericardial effusion was good (0.97; kappa value 0.86). For left ventricular systolic function and mitral regurgitation evaluations, concordances were 0.96 (kappa value 0.90) and 0.96 (kappa value 0.86), respectively. In the second part of the study, the mean total time required in the bedside echoscopy group was 20.3±5.4 mins vs. 66.0±16.4 mins in the conventional echo-lab group, a statistically significant difference; the conventional strategy required porters in 100% of cases, and 69% of patients needed a wheelchair. The use of miniaturized echoscopic tools for pericardial control after invasive cardiac procedures was feasible and accurate, allowing improvement in echo-lab workflow and avoiding patient waiting time and transfer. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
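
    The concordance and kappa values quoted above are standard inter-rater agreement statistics. A minimal sketch of Cohen's kappa for a square agreement table follows; the counts below are hypothetical, not taken from the study:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (list of rows of counts)."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected chance agreement from the marginal totals
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    p_expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 2x2 table: bedside echoscopy vs. conventional echo (effusion yes/no)
table = [[18, 2],
         [1, 79]]
print(round(cohens_kappa(table), 2))  # prints 0.9
```

    Kappa is 1 for perfect agreement and 0 for chance-level agreement, which is why it accompanies the raw concordance figures.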

  16. Adjustments for drink size and ethanol content: new results from a self-report diary and transdermal sensor validation study.

    Science.gov (United States)

    Bond, Jason C; Greenfield, Thomas K; Patterson, Deidre; Kerr, William C

    2014-12-01

    Prior studies adjusting self-reported measures of alcohol intake for drink size and ethanol (EtOH) content have relied on single-point assessments. A prospective 28-day diary study investigated magnitudes of drink-EtOH adjustments and factors associated with these adjustments. Transdermal alcohol sensor (TAS) readings and prediction of alcohol-related problems by number of drinks versus EtOH-adjusted intake were used to validate drink-EtOH adjustments. Self-completed event diaries listed up to 4 beverage types and 4 drinking events/d. Eligible volunteers drank at least weekly with 3+ drinks per occasion, with ≥26 reported days and pre- and post-summary measures (n = 220). Event reports included drink types, sizes, brands or spirits contents, venues, drinks consumed, and drinking duration. Wine drinks averaged 1.19, beer 1.09, and spirits 1.54 U.S. standard drinks (14 g EtOH). Mean-adjusted alcohol intake was 22% larger using drink size and strength (brand/EtOH concentration) data. Adjusted drink levels were larger than "raw" drinks in all quantity ranges. Individual-level drink-EtOH adjustment ratios (EtOH adjusted/unadjusted amounts) averaged across all drinking days ranged from 0.73 to 3.33 (mean 1.22). The adjustment ratio was only marginally (and not significantly) positively related to usual quantity, frequency, and heavy drinking, whereas EtOH-adjusted intake better predicted alcohol dependence symptoms.

  17. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different …
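
    The "quantified error and uncertainty estimates for both" simulation and experiment are commonly combined into a comparison error and a validation uncertainty. A minimal sketch in the spirit of the ASME V&V 20 approach; the function name and all numbers below are hypothetical:

```python
import math

def validation_metric(S, D, u_num, u_input, u_D):
    """Comparison error E = S - D and validation uncertainty (ASME V&V 20 style).

    S: simulation result, D: experimental datum,
    u_num/u_input/u_D: numerical, input-parameter, and experimental uncertainties.
    """
    E = S - D
    u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)
    return E, u_val

# Hypothetical drag-coefficient comparison
E, u_val = validation_metric(S=0.0310, D=0.0295,
                             u_num=0.0008, u_input=0.0010, u_D=0.0012)
print(f"E = {E:+.4f}, u_val = {u_val:.4f}")
# When |E| <= u_val, the model error is within the noise level of the comparison
print(abs(E) <= u_val)
```

    The design choice here mirrors the paper's point: validation is a quantitative comparison only once both sides carry uncertainty estimates.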

  18. Assessment of validity with polytrauma Veteran populations.

    Science.gov (United States)

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  19. Neutron flux control systems validation

    International Nuclear Information System (INIS)

    Hascik, R.

    2003-01-01

    In nuclear installations, the main requirement is to ensure adequate nuclear safety in all operating conditions. From the nuclear safety point of view, commissioning and start-up after reactor refuelling are an appropriate period for safety systems verification. In this paper, the methodology, performance and results of neutron flux measurement system validation are presented. Standard neutron flux measuring chains incorporated into the reactor protection and control system are used. A standard neutron flux measuring chain contains the detector, preamplifier, wiring to the data acquisition unit, the data acquisition unit itself, wiring to the control room display, and the display at the control room. During a reactor outage, only the data acquisition unit and the wiring and displaying at the reactor control room are verified. It is impossible to verify the detector, preamplifier and wiring to the data acquisition unit during reactor refuelling because of the low power level. Adjustment and accurate functionality of these chains are confirmed by start-up rate (SUR) measurement during start-up tests after refuelling of the reactors. This measurement has a direct impact on nuclear safety and increases the operational nuclear safety level. A brief description of each measuring system is given. Results are illustrated with measurements performed at Bohunice NPP during reactor start-up tests. Main failures and their elimination are described (Authors)

  20. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.
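
    A turbulent-diffusion plus void-drift exchange term of the kind described above can be sketched as follows. The functional form, the equilibrium-void assumption, and every coefficient below are illustrative stand-ins for the Lahey–Moody idea, not CTF's actual implementation:

```python
def void_drift_flux(alpha_i, alpha_j, rho_v, beta, S, G_bar, G_i, G_j):
    """Net vapor mass exchange per unit length between subchannels i and j.

    Turbulent mixing drives the void difference (alpha_i - alpha_j) toward an
    equilibrium difference; the Lahey-Moody notion that the equilibrium void
    distribution follows the mass-flux distribution is used here as a sketch.
    """
    # Equilibrium void difference assumed proportional to the mass-flux difference
    d_alpha_eq = (alpha_i + alpha_j) / 2 * (G_i - G_j) / G_bar
    # beta: mixing coefficient, S: gap width, G_bar: average mass flux
    return beta * S * G_bar * rho_v * ((alpha_i - alpha_j) - d_alpha_eq)

# Hypothetical two-subchannel state: more void in the lower-mass-flux channel,
# so both turbulent mixing and void drift push vapor from i toward j
w_vd = void_drift_flux(alpha_i=0.45, alpha_j=0.30, rho_v=35.0,
                       beta=0.005, S=0.003, G_bar=3000.0, G_i=2800.0, G_j=3200.0)
print(f"{w_vd:.4f} kg/(m*s)")
```

    The sign convention makes the drift term vanish when the actual void distribution matches the assumed equilibrium one, which is the defining feature of void drift models.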

  1. Validation of New Cancer Biomarkers

    DEFF Research Database (Denmark)

    Duffy, Michael J; Sturgeon, Catherine M; Söletormos, Georg

    2015-01-01

    BACKGROUND: Biomarkers are playing increasingly important roles in the detection and management of patients with cancer. Despite an enormous number of publications on cancer biomarkers, few of these biomarkers are in widespread clinical use. CONTENT: In this review, we discuss the key steps...... in advancing a newly discovered cancer candidate biomarker from pilot studies to clinical application. Four main steps are necessary for a biomarker to reach the clinic: analytical validation of the biomarker assay, clinical validation of the biomarker test, demonstration of clinical value from performance...... of the biomarker test, and regulatory approval. In addition to these 4 steps, all biomarker studies should be reported in a detailed and transparent manner, using previously published checklists and guidelines. Finally, all biomarker studies relating to demonstration of clinical value should be registered before...

  2. The validated sun exposure questionnaire

    DEFF Research Database (Denmark)

    Køster, B; Søndergaard, J; Nielsen, J B

    2017-01-01

    Few questionnaires used in monitoring sun-related behavior have been tested for validity. We established criteria validity of a developed questionnaire for monitoring population sun-related behavior. During May-August 2013, 664 Danes wore a personal electronic UV-dosimeter for one week...... that measured the outdoor time and dose of erythemal UVR exposure. In the following week, they answered a questionnaire on their sun-related behavior in the measurement week. Outdoor time measured by dosimetry correlated strongly with both outdoor time and the developed exposure scale measured...... in the questionnaire. Exposure measured in SED by dosimetry correlated strongly with the exposure scale. In a linear regression model of UVR (SED) received, 41 percent of the variation was explained by skin type, age, week of participation and the exposure scale, with the exposure scale as the main contributor...

  3. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: The forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages...
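
    The forward method mentioned above propagates derivative values alongside function values. A minimal dual-number sketch, illustrative only and not the FADBAD/TADIFF implementation:

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers: val + eps*dot."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: derivatives add
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule carried by the derivative component
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)                 # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                # prints: 17.0 14.0
```

    The backward (reverse) method instead records the computation and propagates adjoints from outputs to inputs, which is cheaper when there are many inputs and few outputs.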

  4. Drive: Theory and Construct Validation.

    Science.gov (United States)

    Siegling, Alex B; Petrides, K V

    2016-01-01

    This article explicates the theory of drive and describes the development and validation of two measures. A representative set of drive facets was derived from an extensive corpus of human attributes (Study 1). Operationalised using an International Personality Item Pool version (the Drive:IPIP), a three-factor model was extracted from the facets in two samples and confirmed on a third sample (Study 2). The multi-item IPIP measure showed congruence with a short form, based on single-item ratings of the facets, and both demonstrated cross-informant reliability. Evidence also supported the measures' convergent, discriminant, concurrent, and incremental validity (Study 3). Based on very promising findings, the authors hope to initiate a stream of research in what is argued to be a rather neglected niche of individual differences and non-cognitive assessment.

  5. Validation of nursing management diagnoses.

    Science.gov (United States)

    Morrison, R S

    1995-01-01

    Nursing management diagnosis based on nursing and management science, merges "nursing diagnosis" and "organizational diagnosis". Nursing management diagnosis is a judgment about nursing organizational problems. The diagnoses provide a basis for nurse manager interventions to achieve outcomes for which a nurse manager is accountable. A nursing organizational problem is a discrepancy between what should be happening and what is actually happening that prevents the goals of nursing from being accomplished. The purpose of this study was to validate 73 nursing management diagnoses identified previously in 1992: 71 of the 72 diagnoses were considered valid by at least 70% of 136 participants. Diagnoses considered to have high priority for future research and development were identified by summing the mean scores for perceived frequency of occurrence and level of disruption. Further development of nursing management diagnoses and testing of their effectiveness in enhancing decision making is recommended.

  6. Validation of radiation sterilization process

    International Nuclear Information System (INIS)

    Kaluska, I.

    2007-01-01

    The standards for quality management systems recognize that, for certain processes used in manufacturing, the effectiveness of the process cannot be fully verified by subsequent inspection and testing of the product. Sterilization is an example of such a process. For this reason, sterilization processes are validated for use, the performance of the sterilization process is monitored routinely, and the equipment is maintained according to ISO 13485. Different aspects of this standard are presented

  7. Satellite imager calibration and validation

    CSIR Research Space (South Africa)

    Vhengani, L

    2010-10-01

    Full Text Available Lufuno Vhengani, Minette Lubbe, Derek Griffith and Meena Lysko (Council for Scientific and Industrial Research, Defence Peace Safety and Security, Pretoria, South Africa; lvhengani@csir.co.za). Abstract: The success or failure of any earth observation mission depends on the quality of its data. To achieve optimum levels of reliability, most sensors are calibrated pre-launch. The paper discusses calibration and validation techniques specific to South Africa. However...

  8. Microservices Validation: Methodology and Implementation

    OpenAIRE

    Savchenko, D.; Radchenko, G.

    2015-01-01

    Due to the wide spread of cloud computing, questions about the architecture, design and implementation of cloud applications have become pressing. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservic...

  9. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program : validation of results in a large multireader, multicase study

    OpenAIRE

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-01-01

    OBJECTIVES: Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience. METHODS: All women referred from the Dutch breast cancer screening program were eligible for CESM. 199 consecutive cases were viewed by ten radiologists. Four had extensive CESM experience, three had no C...

  10. Validity of measures of pain and symptoms in HIV/AIDS infected households in resources poor settings: results from the Dominican Republic and Cambodia

    Directory of Open Access Journals (Sweden)

    Morineau Guy

    2006-03-01

    Full Text Available Abstract Background HIV/AIDS treatment programs that include palliative care services are currently being mounted in many developing nations. While measures of palliative care have been developed and validated for resource-rich settings, very little work exists to support an understanding of measurement for Africa, Latin America or Asia. Methods This study investigates the construct validity of measures of reported pain, pain control, symptoms and symptom control in areas with high HIV prevalence in the Dominican Republic and Cambodia. Measures were adapted from the POS (Palliative Outcome Scale). Households were selected through purposive sampling from networks of people living with HIV/AIDS. Consistencies in patterns in the data were tested using Chi-Square and Mantel-Haenszel tests. Results Sample persons who reported chronic illness were much more likely to report pain and symptoms than those not chronically ill. When controlling for the degree of pain, pain control did not differ between the chronically ill and non-chronically ill using a Mantel-Haenszel test in both countries. Similar results were found for reported symptoms and symptom control in the Dominican Republic. These findings broadly support the construct validity of an adapted version of the POS in these two less developed countries. Conclusion The results of the study suggest that the selected measures can usefully be incorporated into population-based surveys and evaluation tools needed to monitor palliative care in settings with high HIV/AIDS prevalence.
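
    The Mantel-Haenszel test pools 2x2 tables across strata (here, degrees of pain) to check whether an association persists after controlling for the stratifier. A minimal sketch of the Mantel-Haenszel common odds ratio, with entirely hypothetical counts:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across 2x2 strata [(a, b, c, d), ...].

    Each stratum is (a, b, c, d) =
    (exposed & outcome, exposed & no outcome,
     unexposed & outcome, unexposed & no outcome).
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n   # cross-product weighted by stratum size
        den += b * c / n
    return num / den

# Hypothetical strata by pain severity:
# (chronically ill & symptom, chronically ill & none,
#  not chronically ill & symptom, not chronically ill & none)
strata = [(30, 10, 12, 28), (22, 18, 8, 32)]
print(round(mantel_haenszel_or(strata), 2))
```

    A pooled odds ratio near 1 across strata would indicate that the apparent illness-symptom association disappears once the stratifier is controlled for.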

  11. ISOTHERMAL AIR INGRESS VALIDATION EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Chang H Oh; Eung S Kim

    2011-09-01

    Idaho National Laboratory carried out air ingress experiments as part of validating computational fluid dynamics (CFD) calculations. An isothermal test loop was designed and set up to understand the stratified-flow phenomenon, which governs the initial air flow into the lower plenum of the very high temperature gas-cooled reactor (VHTR) when a large-break loss-of-coolant accident occurs. The study focused on the unique flow characteristics of the VHTR air-ingress accident, in particular the flow visualization of the stratified flow in the inlet pipe to the vessel lower plenum of General Atomics' Gas Turbine-Modular Helium Reactor (GT-MHR). Brine and sucrose were used as heavy fluids, and water was used to represent a light fluid, creating a counter-current flow due to the density difference between the simulant fluids. The density ratios were varied between 0.87 and 0.98. The experiment clearly showed that a stratified flow between the simulant fluids was established even for very small density differences. The CFD calculations were compared with experimental data. A grid sensitivity study on the CFD models was also performed using Richardson extrapolation and the grid convergence index method to establish the numerical accuracy of the CFD calculations. As a result, the calculated current speed showed very good agreement with the experimental data, indicating that current CFD methods are suitable for predicting density-gradient stratified flow phenomena in the air-ingress accident.
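
    The Richardson extrapolation and grid convergence index (GCI) procedure cited above estimates numerical accuracy from solutions on systematically refined grids. A minimal sketch of Roache's fine-grid GCI; the current-speed values below are hypothetical, not the study's data:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three grid solutions, refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
    """Fine-grid grid convergence index with safety factor Fs (Roache)."""
    rel_err = abs((f_medium - f_fine) / f_fine)
    return Fs * rel_err / (r**p - 1)

# Hypothetical current-speed results on coarse/medium/fine grids (m/s), r = 2
f3, f2, f1 = 0.0560, 0.0530, 0.0515
p = observed_order(f3, f2, f1, r=2)
print(f"observed order p = {p:.2f}")
print(f"GCI_fine = {100 * gci_fine(f2, f1, 2, p):.2f}%")
```

    The GCI expresses the fine-grid discretization error as a banded relative uncertainty, which is what makes the CFD-experiment comparison in the abstract quantitative rather than visual.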

  12. CTF Validation and Verification Manual

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Dances, Christopher A. [Pennsylvania State Univ., University Park, PA (United States); Magedanz, Jeffrey W. [Pennsylvania State Univ., University Park, PA (United States); Jernigan, Caleb [Holtec International, Marlton, NJ (United States); Kelly, Joeseph [U.S. Nuclear Regulatory Commission (NRC), Rockville, MD (United States); Toptan, Aysenur [North Carolina State Univ., Raleigh, NC (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [Pennsylvania State Univ., University Park, PA (United States); Palmtag, Scott [Core Physics, Inc., Cary, NC (United States); Gehin, Jess C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-05-25

    Coolant-Boiling in Rod Arrays-Two Fluids (COBRA-TF) is a Thermal/Hydraulic (T/H) simulation code designed for Light Water Reactor (LWR) analysis. It uses a two-fluid, three-field (i.e. fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of nine conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and has been used and modified by several institutions over the last several decades. COBRA-TF is also used at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG), where it has been improved and updated to become the PSU RDFMG version of COBRA-TF (CTF). One part of the improvement process is validating the methods in CTF. This document seeks to provide a certain level of certainty and confidence in the predictive capabilities of the code for the scenarios it was designed to model: rod bundle geometries with operating conditions representative of prototypical Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs) in both normal and accident conditions. This is done by modeling a variety of experiments that simulate these scenarios and then presenting a qualitative and quantitative analysis of the results that demonstrates the accuracy to which CTF is capable of capturing specific quantities of interest.

  13. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  14. DTU PMU Laboratory Development - Testing and Validation

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE...... standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested in the authors' previous efforts, where the response can be expected to follow known patterns, providing confirmation about the test system and confirming the design and settings....... In a nutshell, having 2 PMUs that observe the same signals provides validation of the operation and flags questionable results with more certainty. Moreover, the performance and accuracy of the DTU-PMU were tested, with good and precise results acquired when compared with a commercial phasor measurement device, PMU-1....

  15. Validation studies of nursing diagnoses in neonatology

    Directory of Open Access Journals (Sweden)

    Pavlína Rabasová

    2016-03-01

    Full Text Available Aim: The objective of the review was the analysis of Czech and foreign literature sources and professional periodicals to obtain a relevant comprehensive overview of validation studies of nursing diagnoses in neonatology. Design: Review. Methods: The selection criterion was studies concerning the validation of nursing diagnoses in neonatology. To obtain data from relevant sources, the licensed professional databases EBSCO, Web of Science and Scopus were utilized. The search criteria were: date of publication - unlimited; academic periodicals - full text; peer-reviewed periodicals; search language - English, Czech and Slovak. Results: A total of 788 studies were found. Only 5 studies were eligible for content analysis, dealing specifically with validation of nursing diagnoses in neonatology. The analysis of the retrieved studies suggests that authors are most often concerned with identifying the defining characteristics of nursing diagnoses applicable to both the mother (parents and the newborn. The diagnoses were validated in the domains Role Relationship; Coping/Stress tolerance; Activity/Rest, and Elimination and Exchange. Diagnoses represented were from the field of dysfunctional physical needs as well as the field of psychosocial and spiritual needs. The diagnoses were as follows: Parental role conflict (00064; Impaired parenting (00056; Grieving (00136; Ineffective breathing pattern (00032; Impaired gas exchange (00030; and Impaired spontaneous ventilation (00033. Conclusion: Validation studies enable effective planning of interventions with measurable results and support clinical nursing practice.

  16. [Short evaluation of cognitive state in advanced stages of dementia: preliminary results of the Spanish validation of the Severe Mini-Mental State Examination].

    Science.gov (United States)

    Buiza, Cristina; Navarro, Ana; Díaz-Orueta, Unai; González, Mari Feli; Alaba, Javier; Arriola, Enrique; Hernández, Carmen; Zulaica, Amaia; Yanguas, José Javier

    2011-01-01

    The cognitive assessment of patients with advanced dementia requires proper screening instruments that provide information about the cognitive state and the resources these individuals still have. The present work is a Spanish validation study of the Severe Mini-Mental State Examination (SMMSE). Forty-seven patients with advanced dementia (Mini-Cognitive Examination [MEC] score below cut-off) were assessed with the SMMSE and the Cognitive Impairment Profile scales. All test items were discriminative. The test showed high internal consistency (α=0.88) and test-retest reliability (0.64 to 1.00, P<…). Criterion validity was tested through correlations between the instrument and MEC scores (r=0.59, P<…), and further information on the construct validity was obtained by dividing the sample into groups that scored above or below 5 points on the MEC and recalculating their correlations with the SMMSE. The correlation between SMMSE and MEC scores was significant in the MEC 0-5 group (r=0.55, P<…) but not in the MEC >5 group. Additionally, differences in scores were found in the SMMSE, but not in the MEC, between the three GDS groups (5, 6 and 7) (H=11.1, P<…), indicating sensitivity to severe cognitive impairment; the SMMSE avoids the floor effect by extending the lower measurement range relative to that of the MEC. From our results, this rapid and easy-to-administer screening tool can be considered valid and reliable. Copyright © 2010 SEGG. Published by Elsevier Espana. All rights reserved.

  17. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is supported. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in the design of the SAGE-III instrument, resulting in material changes to control injurious neutron production; in the study of Space Shuttle single event upsets; and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation

  18. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as handsfree telephony and voice-controlled systems, especially when the signals were recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  19. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  20. Static Validation of Security Protocols

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, P.

    2005-01-01

    We methodically expand protocol narrations into terms of a process algebra in order to specify some of the checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demonstrate that these techniques suffice to identify several authentication flaws in symmetric and asymmetric key protocols such as Needham-Schroeder symmetric key, Otway-Rees, Yahalom, Andrew secure RPC, Needham-Schroeder asymmetric key, and Beller-Chang-Yacobi MSR.

  1. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  2. Software for validating parameters retrieved from satellite

    Digital Repository Service at National Institute of Oceanography (India)

    Muraleedharan, P.M.; Sathe, P.V.; Pankajakshan, T.

    -channel Scanning Microwave Radiometer (MSMR) onboard the Indian satellites Occansat-1 during 1999-2001 were validated using this software as a case study. The program has several added advantages over the conventional method of validation that involves strenuous...

  3. How Mathematicians Determine if an Argument Is a Valid Proof

    Science.gov (United States)

    Weber, Keith

    2008-01-01

    The purpose of this article is to investigate the mathematical practice of proof validation--that is, the act of determining whether an argument constitutes a valid proof. The results of a study with 8 mathematicians are reported. The mathematicians were observed as they read purported mathematical proofs and made judgments about their validity;…

  4. CFD validation experiments for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments could provide new validation data.

  5. A CFD validation roadmap for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments would provide the needed validation data.

  6. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, has limited the immunity of the distributed systems that depend on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee a minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented, along with an example that demonstrates the application of the approach.
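    The game-theoretic idea in this record can be sketched as a two-player game between a defender and an attacker, with the defense validated by its guaranteed worst-case outcome. This is a minimal illustration only: the subsystem names and the payoff (expected-loss) numbers below are invented, not from the paper.

    ```python
    # Hypothetical payoff matrix: LOSS[defender_choice][attacker_choice]
    # gives the defender's expected loss. The attacker best-responds,
    # so a defense is "validated" by its worst-case (minimax) loss.
    LOSS = {
        "harden_web": {"attack_web": 1, "attack_db": 9},
        "harden_db":  {"attack_web": 6, "attack_db": 2},
    }

    def worst_case_loss(defense):
        """Evaluate a defense against the attacker's best response."""
        return max(LOSS[defense].values())

    def minimax_defense():
        """Choose the defense minimizing the worst-case loss."""
        return min(LOSS, key=worst_case_loss)

    best = minimax_defense()
    print(best, worst_case_loss(best))   # prints: harden_db 6
    ```

    Here hardening the database guarantees a loss of at most 6 whatever the attacker does, whereas hardening the web front-end risks a loss of 9; the minimax value plays the role of a guaranteed security level for the requirement being validated.
    
    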

  7. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    Full Text Available The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  8. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  9. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  10. Determination of polychlorinated dibenzodioxins and polychlorinated dibenzofurans (PCDDs/PCDFs) in food and feed using a bioassay. Result of a validation study

    Energy Technology Data Exchange (ETDEWEB)

    Gizzi, G.; Holst, C. von; Anklam, E. [Commission of the European Communities, Geel (Belgium). Joint Research Centre, Inst. for Reference Materials and Measurement, Food Safety and Quality Unit; Hoogenboom, R. [RIKILT-Intitute of Food Safety, Wageningen (Netherlands); Rose, M. [Defra Central Science Laboratory, Sand Hutton, York (United Kingdom)

    2004-09-15

    It is estimated that more than 90% of the dioxins consumed by humans come from foods derived from animals. The European Commission, through a Council Regulation (No 2375/2001) and a Directive (2001/102/EC), both revised by the Commission Recommendation (2002/201/EC), has set maximum levels for dioxins in food and feedstuffs. To implement the regulation, dioxin-monitoring programs for food and feedstuffs will be undertaken by the Member States, requiring the analysis of large numbers of samples. Food and feed companies will have to control their products before putting them onto the market. Monitoring for the presence of dioxins in food and feeds needs fast and cheap screening methods in order to select samples with potentially high levels of dioxins, which are then analysed by a confirmatory method like HRGC/HRMS. Bioassays like the DR CALUX® assay have been claimed to provide a suitable alternative for the screening of large numbers of samples, reducing costs and the required time of analysis. These methods have to comply with the specific characteristics set out in two Commission Directives (2002/69/EC; 2002/70/EC), establishing the requirements for the determination of dioxins and dioxin-like PCBs for the official control of food and feedstuffs. The European Commission's Joint Research Centre is pursuing validation of alternative techniques in food and feed materials. In order to evaluate the applicability of the DR CALUX® technique as a screening method in compliance with the Commission Directives, a validation study was organised in collaboration with CSL and RIKILT. The aim of validating an analytical method is first to determine its performance characteristics (e.g. variability, bias, rate of false positive and false negative results), and secondly to evaluate whether the method is fit for purpose. Two approaches are commonly used: an in-house validation is preferentially performed first in order to establish whether the method is…

  11. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen … matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade…

  12. Reliability and validity of risk analysis

    International Nuclear Information System (INIS)

    Aven, Terje; Heide, Bjornar

    2009-01-01

    In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis: relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways, and consequently the meanings of the concepts reliability and validity are not the same.

  13. Method Validation Procedure in Gamma Spectroscopy Laboratory

    International Nuclear Information System (INIS)

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, covering both the management and technical requirements. A set of documents, written procedures, and records was prepared to meet the management requirements. For the technical requirements, internal method validation was carried out through the estimation of trueness, repeatability, minimum detectable activity, and combined uncertainty, while participation in IAEA proficiency tests assures the external method validation, especially since the gamma spectroscopy lab is a member of the ALMERA network (Analytical Laboratories for the Measurements of Environmental Radioactivity). Some of these results are presented in this paper. (author)
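    One of the validation figures mentioned above, the minimum detectable activity (MDA), is commonly estimated with a Currie-type formula. The sketch below assumes that convention (the record does not state which formula the laboratory used), and every numeric input is illustrative, not from the paper.

    ```python
    import math

    def minimum_detectable_activity(bkg_counts, efficiency, gamma_yield,
                                    live_time_s, mass_kg):
        """Currie-type MDA estimate at ~95% confidence, a common
        convention in gamma spectroscopy method validation:
            L_D = 2.71 + 4.65 * sqrt(B)              [counts]
            MDA = L_D / (eff * P_gamma * t * m)      [Bq/kg]
        """
        l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)
        return l_d / (efficiency * gamma_yield * live_time_s * mass_kg)

    # Illustrative example: 400 background counts under the peak, 2%
    # detection efficiency, 85% emission probability, 60,000 s live
    # time, 0.5 kg sample.
    mda = minimum_detectable_activity(400, 0.02, 0.85, 60_000, 0.5)
    print(f"MDA = {mda:.3f} Bq/kg")   # prints: MDA = 0.188 Bq/kg
    ```

    Activities reported below this value would be quoted as "< MDA" rather than as a measured result.
    
    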

  14. Expert system validation in prolog

    Science.gov (United States)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations that match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
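    The connection-graph traversal described in this record can be sketched in a few lines (EVA itself is written in Prolog and uses a modified A*; the Python below is an illustrative depth-first variant, and the rule base is invented):

    ```python
    # Hypothetical rule base: name -> (premises, conclusion). An arc
    # links rule r to rule s when r's conclusion matches one of s's
    # premises -- the connection-graph construction. Cycle checking
    # then becomes a graph traversal.
    RULES = {
        "r1": (["a"], "b"),
        "r2": (["b"], "c"),
        "r3": (["c"], "a"),      # closes a cycle r1 -> r2 -> r3 -> r1
        "r4": (["c"], "d"),
    }

    def build_graph(rules):
        """Arc r -> s whenever r's conclusion appears among s's premises."""
        return {r: [s for s, (prem, _) in rules.items() if concl in prem]
                for r, (_, concl) in rules.items()}

    def find_cycle(graph):
        """Return one rule cycle as a list of names, or None (DFS)."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {n: WHITE for n in graph}
        def dfs(node, path):
            color[node] = GREY
            for succ in graph[node]:
                if color[succ] == GREY:          # back edge: cycle found
                    return path[path.index(succ):] + [succ]
                if color[succ] == WHITE:
                    found = dfs(succ, path + [succ])
                    if found:
                        return found
            color[node] = BLACK
            return None
        for n in graph:
            if color[n] == WHITE:
                cycle = dfs(n, [n])
                if cycle:
                    return cycle
        return None

    print(find_cycle(build_graph(RULES)))   # prints: ['r1', 'r2', 'r3', 'r1']
    ```

    The other checks the record lists (dead ends, unreachability, subsumption) are further pattern searches over the same graph.
    
    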

  15. Validity and Reliability in Social Science Research

    Science.gov (United States)

    Drost, Ellen A.

    2011-01-01

    In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…

  16. Validity Semantics in Educational and Psychological Assessment

    Science.gov (United States)

    Hathcoat, John D.

    2013-01-01

    The semantics, or meaning, of validity is a fluid concept in educational and psychological testing. Contemporary controversies surrounding this concept appear to stem from the proper location of validity. Under one view, validity is a property of score-based inferences and entailed uses of test scores. This view is challenged by the…

  17. Validation of the Child Sport Cohesion Questionnaire

    Science.gov (United States)

    Martin, Luc J.; Carron, Albert V.; Eys, Mark A.; Loughead, Todd

    2013-01-01

    The purpose of the present study was to test the validity evidence of the Child Sport Cohesion Questionnaire (CSCQ). To accomplish this task, convergent, discriminant, and known-group difference validity were examined, along with factorial validity via confirmatory factor analysis (CFA). Child athletes (N = 290, M_age = 10.73 ±…

  18. The Role of Generalizability in Validity.

    Science.gov (United States)

    Kane, Michael

    The relationship between generalizability and validity is explained, making four important points. The first is that generalizability coefficients provide upper bounds on validity. The second point is that generalization is one step in most interpretive arguments, and therefore, generalizability is a necessary condition for the validity of these…
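    The first point above, that generalizability coefficients bound validity, follows from the classical attenuation relation: an observed validity coefficient r_xy cannot exceed the square root of the product of the two measures' reliability-like coefficients. A minimal numeric sketch (the coefficient values are illustrative):

    ```python
    import math

    def max_validity(g_coefficient, criterion_reliability=1.0):
        """Upper bound on an observed validity coefficient implied by
        the attenuation relation r_xy <= sqrt(r_xx * r_yy), taking the
        test's generalizability coefficient as r_xx."""
        return math.sqrt(g_coefficient * criterion_reliability)

    # A test with generalizability coefficient .81 can show an observed
    # validity of at most .90, even against a perfectly reliable criterion.
    print(max_validity(0.81))
    ```

    This is why generalization is a necessary (but not sufficient) step in the interpretive arguments the record describes: a low generalizability coefficient caps validity no matter how strong the rest of the argument is.
    
    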

  19. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Directory of Open Access Journals (Sweden)

    Heinz-Martin Süß

    2018-05-01

    Full Text Available The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly…

  20. Impact of Cognitive Abilities and Prior Knowledge on Complex Problem Solving Performance – Empirical Results and a Plea for Ecologically Valid Microworlds

    Science.gov (United States)

    Süß, Heinz-Martin; Kretzschmar, André

    2018-01-01

    The original aim of complex problem solving (CPS) research was to bring the cognitive demands of complex real-life problems into the lab in order to investigate problem solving behavior and performance under controlled conditions. Up until now, the validity of psychometric intelligence constructs has been scrutinized with regard to its importance for CPS performance. At the same time, different CPS measurement approaches competing for the title of the best way to assess CPS have been developed. In the first part of the paper, we investigate the predictability of CPS performance on the basis of the Berlin Intelligence Structure Model and Cattell’s investment theory as well as an elaborated knowledge taxonomy. In the first study, 137 students managed a simulated shirt factory (Tailorshop; i.e., a complex real life-oriented system) twice, while in the second study, 152 students completed a forestry scenario (FSYS; i.e., a complex artificial world system). The results indicate that reasoning – specifically numerical reasoning (Studies 1 and 2) and figural reasoning (Study 2) – are the only relevant predictors among the intelligence constructs. We discuss the results with reference to the Brunswik symmetry principle. Path models suggest that reasoning and prior knowledge influence problem solving performance in the Tailorshop scenario mainly indirectly. In addition, different types of system-specific knowledge independently contribute to predicting CPS performance. The results of Study 2 indicate that working memory capacity, assessed as an additional predictor, has no incremental validity beyond reasoning. We conclude that (1) cognitive abilities and prior knowledge are substantial predictors of CPS performance, and (2) in contrast to former and recent interpretations, there is insufficient evidence to consider CPS a unique ability construct. In the second part of the paper, we discuss our results in light of recent CPS research, which predominantly utilizes the

  1. Do qualitative methods validate choice experiment-results? A case study on the economic valuation of peatland restoration in Central Kalimantan, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Schaafsma, M.; Van Beukering, P.J.H.; Davies, O.; Oskolokaite, I.

    2009-05-15

    This study explores the benefits of combining independent results of qualitative focus group discussions (FGD) with a quantitative choice experiment (CE) in a developing country context. The assessment addresses the compensation needed by local communities in Central Kalimantan to cooperate in peatland restoration programs, using a CE combined with a series of FGD to validate and explain the CE results. The main conclusion of this study is that a combination of qualitative and quantitative methods is necessary to assess the economic value of ecological services in monetary terms and to better understand the underlying attitudes and motives that drive these outcomes. The FGD not only cross-validate the results of the CE, but also help to interpret the differences in respondents' preferences arising from environmental awareness and ecosystem characteristics. The FGD confirm that the CE results provide accurate information for ecosystem valuation. In addition to the advantages of FGD listed in the literature, this study finds that FGD make it possible to identify the specific terms and conditions under which respondents will accept land-use change scenarios. The results also show that FGD may help to address the problems that neo-classical economic theory poses for the interpretation of economic valuation results, concerning the distribution of costs and benefits over time, the rationality it demands of trade-offs, and the required calculations.
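    Choice experiments of the kind described above are typically analysed with a conditional (McFadden) logit model, in which the probability of choosing an alternative is a softmax over linear utilities of its attributes. The record does not give the model's specification, so the attributes, coefficients, and levels below are invented purely to illustrate the mechanics:

    ```python
    import math

    # Hypothetical utility coefficients for two attributes of a
    # restoration program alternative: restored area and compensation.
    BETA = {"restored_ha": 0.8, "payment": 0.05}

    def utility(alternative):
        """Linear-in-attributes utility V_i = sum_k beta_k * x_ik."""
        return sum(BETA[k] * v for k, v in alternative.items())

    def choice_probabilities(alternatives):
        """Logit choice rule: P(i) = exp(V_i) / sum_j exp(V_j)."""
        exp_v = [math.exp(utility(a)) for a in alternatives]
        total = sum(exp_v)
        return [e / total for e in exp_v]

    status_quo = {"restored_ha": 0.0, "payment": 0.0}
    program    = {"restored_ha": 2.0, "payment": 10.0}
    print(choice_probabilities([status_quo, program]))
    ```

    Fitting such coefficients to observed choices is what yields the monetary compensation estimates that the FGD then cross-validate and explain.
    
    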

  2. Prospective, Multicenter Validation Study of Magnetic Resonance Volumetry for Response Assessment After Preoperative Chemoradiation in Rectal Cancer: Can the Results in the Literature be Reproduced?

    Energy Technology Data Exchange (ETDEWEB)

    Martens, Milou H., E-mail: mh.martens@hotmail.com [Department of Radiology, Maastricht University Medical Center, Maastricht (Netherlands); Department of Surgery, Maastricht University Medical Center, Maastricht (Netherlands); GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht (Netherlands); Heeswijk, Miriam M. van [Department of Radiology, Maastricht University Medical Center, Maastricht (Netherlands); Department of Surgery, Maastricht University Medical Center, Maastricht (Netherlands); GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht (Netherlands); Broek, Joris J. van den [Department of Surgery, Medical Center Alkmaar, Alkmaar (Netherlands); Rao, Sheng-Xiang [Department of Radiology, Maastricht University Medical Center, Maastricht (Netherlands); Department of Radiology, Fudan University, Shanghai (China); Vandecaveye, Vincent [Department of Radiology, University Hospital Leuven, Leuven (Belgium); Vliegen, Roy A. [Department of Radiology, Atrium Medical Center, Heerlen (Netherlands); Schreurs, Wilhelmina H. [Department of Surgery, Medical Center Alkmaar, Alkmaar (Netherlands); Beets, Geerard L. [Department of Surgery, Maastricht University Medical Center, Maastricht (Netherlands); GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht (Netherlands); Lambregts, Doenja M.J. [Department of Radiology, Maastricht University Medical Center, Maastricht (Netherlands); Beets-Tan, Regina G.H. [Department of Radiology, Maastricht University Medical Center, Maastricht (Netherlands); GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht (Netherlands)

    2015-12-01

    Purpose: To review the available literature on tumor size/volume measurements on magnetic resonance imaging for response assessment after chemoradiotherapy, and to validate these cut-offs in an independent multicenter patient cohort. Methods and Materials: The study included 2 parts. (1) Review of the literature: articles were included that assessed the accuracy of tumor size/volume measurements on magnetic resonance imaging for tumor response assessment. Size/volume cut-offs were extracted. (2) Multicenter validation: the cut-offs extracted from the literature were tested in a multicenter cohort (n=146). Accuracies were calculated and compared with the results reported in the literature. Results: The review included 14 articles, in which 3 different measurement methods were assessed: (1) tumor length; (2) 3-dimensional tumor size; and (3) whole volume. Study outcomes consisted of (1) complete response (ypT0) versus residual tumor; (2) tumor regression grade 1 to 2 versus 3 to 5; and (3) T-downstaging (ypT<…). The best results were obtained for the validation of the whole-volume measurements, in particular for the outcome ypT0 (accuracy 44%-80%), with the optimal cut-offs being 1.6 cm³ (after chemoradiation therapy) and a volume reduction of Δ80% to 86.6%. Accuracies for whole-volume measurements to assess tumor regression grade 1 to 2 were 52% to 61%, and for T-downstaging 51% to 57%. Overall accuracies ranged between 48% and 53% for tumor length and between 52% and 56% for 3-dimensional size measurement. Conclusions: Magnetic resonance volumetry using whole-tumor volume measurements can be helpful in rectal cancer response assessment with selected cut-off values. Measurements of tumor length or 3-dimensional tumor size are not helpful. Magnetic resonance volumetry is mainly accurate for assessing a complete tumor response (ypT0) after chemoradiation therapy (accuracies up to 80%).
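    Applying the whole-volume cut-offs reported in this record (post-chemoradiation volume of 1.6 cm³, or a volume reduction of at least about 80%) amounts to a simple rule over the pre- and post-treatment volumes. The sketch below is illustrative only: the function, its name, and the example volumes are not from the study, and no such rule replaces the full assessment described in the paper.

    ```python
    # Cut-offs taken from the abstract; everything else is hypothetical.
    POST_CRT_VOLUME_CUTOFF_CM3 = 1.6
    REDUCTION_CUTOFF_PCT = 80.0

    def likely_complete_response(vol_pre_cm3, vol_post_cm3):
        """Flag a likely complete response (ypT0) when the post-CRT
        volume is at or below the absolute cut-off, or the relative
        volume reduction meets the percentage cut-off."""
        reduction_pct = 100.0 * (vol_pre_cm3 - vol_post_cm3) / vol_pre_cm3
        return (vol_post_cm3 <= POST_CRT_VOLUME_CUTOFF_CM3
                or reduction_pct >= REDUCTION_CUTOFF_PCT)

    print(likely_complete_response(20.0, 1.5))   # True: volume below cut-off
    print(likely_complete_response(20.0, 6.0))   # False: only a 70% reduction
    ```

    With the reported accuracies topping out around 80% for ypT0, such a rule is a screening aid rather than a definitive response classification.
    
    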

  3. Preliminary Validation of Composite Material Constitutive Characterization

    Science.gov (United States)

    John G. Michopoulos; Athanasios lliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson

    2012-01-01

    This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3, developed at the Naval Research Laboratory. The testing is followed by...

  4. Validation of the Drinking Motives Questionnaire

    DEFF Research Database (Denmark)

    Fernandes-Jesus, Maria; Beccaria, Franca; Demant, Jakob Johan

    2016-01-01

    • This paper assesses the validity of the DMQ-R (Cooper, 1994) among university students in six different European countries. • Results provide support for similar DMQ-R factor structures across countries. • Drinking motives have similar meanings among European university students....

  5. The Predictive Validity of Projective Measures.

    Science.gov (United States)

    Suinn, Richard M.; Oskamp, Stuart

    Written for use by clinical practitioners as well as psychological researchers, this book surveys recent literature (1950-1965) on projective test validity by reviewing and critically evaluating studies which shed light on what may reliably be predicted from projective test results. Two major instruments are covered: the Rorschach and the Thematic…

  6. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. This way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  7. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system to classify dengue viral infection severity in patients in different settings. Methods: The scoring system, developed from 777 patients from three tertiary-care hospitals, was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. Percentages of correct classification, underestimation, and overestimation were compared. The score's discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data differed from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation.
Its impact when used in routine
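The validation metrics reported above (correct classification, underestimation, overestimation) can be sketched for an ordinal three-level score as follows. The toy predictions and labels are invented; the actual validation cohort had n=400.

```python
# Illustrative sketch: rates of correct, under-, and over-classification when
# a score assigns each patient one of three ordinal severity levels
# (0 = dengue fever, 1 = dengue hemorrhagic fever, 2 = dengue shock syndrome).
def classification_rates(predicted, actual):
    n = len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual)) / n
    under   = sum(p < a for p, a in zip(predicted, actual)) / n   # underestimated
    over    = sum(p > a for p, a in zip(predicted, actual)) / n   # overestimated
    return correct, under, over

pred   = [0, 1, 1, 2, 0, 2, 1, 0]   # invented score assignments
actual = [0, 1, 2, 2, 1, 1, 1, 0]   # invented true severity levels
c, u, o = classification_rates(pred, actual)
print(f"correct={c:.1%} under={u:.1%} over={o:.1%}")  # correct=62.5% under=25.0% over=12.5%
```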

  8. Validation of dose-response curve of CRCN-NE - Regional Center for Nuclear Sciences from Northeast Brazil for 60Co: preliminary results

    International Nuclear Information System (INIS)

    Mendonca, Julyanne C.G.; Mendes, Mariana E.; Hwang, Suy F.; Lima, Fabiana F.; Santos, Neide

    2014-01-01

    Cytogenetic biodosimetry uses chromosomal alterations as biomarkers to estimate the dose absorbed by individuals exposed to ionizing radiation, by interpreting a dose-response calibration curve. Because protocol characteristics vary from the application of the technique through the analysis of data, the International Atomic Energy Agency recommends that any laboratory intending to carry out biological dosimetry establish its own calibration curves. The Biological Dosimetry Laboratory of the Centro Regional de Ciencias Nucleares (CRCN-NE/CNEN), Brazil, recently established its calibration curve for gamma radiation (60Co). This work therefore began the validation of that calibration curve using samples from three different blood donors, irradiated with a known single absorbed dose of 1 Gy. Samples were exposed to a 60Co source (Gammacell 220) located in the Department of Nuclear Energy (DEN/UFPE). After fixation with methanol and acetic acid and 5% Giemsa staining, the frequencies of chromosomal alterations (dicentric chromosomes, acentric rings, and fragments) were established from reading 500 metaphases per sample, and doses were estimated using the Dose Estimate program. Using the dose-response calibration curve for dicentrics, the estimated absorbed dose for the three individuals ranged from 0.891 to 1.089 Gy at the 95% confidence level. Using the dose-response curve for dicentrics plus rings, at the same confidence level, the doses ranged from 0.849 to 1.081 Gy. The estimates thus encompassed the known absorbed dose for all three individuals within the 95% confidence interval. These preliminary results seem to demonstrate that the dicentric and dicentric-plus-ring dose-response curves established by CRCN-NE/CNEN are valid for dose estimation in exposed individuals. The validation will continue with samples from different individuals at different doses
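The dose-estimation step described above conventionally uses a linear-quadratic calibration curve, Y = C + αD + βD², where Y is the aberration yield per cell and D the absorbed dose. A minimal sketch of inverting such a curve follows; the coefficients are placeholders, not the CRCN-NE curve.

```python
# Hedged sketch of cytogenetic dose estimation from a linear-quadratic
# calibration curve Y = C + alpha*D + beta*D^2. Coefficients are invented.
import math

def estimate_dose(yield_per_cell, C=0.001, alpha=0.02, beta=0.06):
    """Invert Y = C + alpha*D + beta*D^2 for the physical (positive) root."""
    a, b, c = beta, alpha, C - yield_per_cell
    disc = b * b - 4 * a * c
    return (-b + math.sqrt(disc)) / (2 * a)

# e.g., 40 dicentrics observed in 500 metaphases -> yield 0.08 per cell
print(round(estimate_dose(40 / 500), 2))  # 0.99 (Gy, for these toy coefficients)
```

Confidence limits, as used in the abstract, would be obtained by propagating the Poisson uncertainty of the observed yield through the same inversion.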

  9. Cable SGEMP Code Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Ballard, William Parker [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Center for CA Weapons Systems Engineering

    2013-05-01

    This report compared data taken on the Modular Bremsstrahlung Simulator using copper jacketed (cujac) cables with calculations using the RHSD-RA Cable SGEMP analysis tool. The tool relies on CEPXS/ONBFP to perform radiation transport in a series of 1D slices through the cable, and then uses a Green function technique to evaluate the expected current drive on the center conductor. The data were obtained in 2003 as part of a Cabana verification and validation experiment using 1-D geometries, but were not evaluated until now. The agreement between data and model is not adequate unless gaps between the dielectric and outer conductor (ground) are assumed, and these gaps are large compared with what is believed to be in the actual cable.

  10. Validation of POLDER/ADEOS data using a ground-based lidar network: Preliminary results for semi-transparent and cirrus clouds

    Science.gov (United States)

    Chepfer, H.; Sauvage, L.; Flamant, P. H.; Pelon, J.; Goloub, P.; Brogniez, G.; Spinhirne, J.; Lavorato, M.; Sugimoto, N.

    1998-01-01

    At mid and tropical latitudes, cirrus clouds are present more than 50% of the time in satellite observations. Due to their large spatial and temporal coverage and associated low temperatures, cirrus clouds have a major influence on the Earth-Ocean-Atmosphere energy balance through their effects on incoming solar radiation and outgoing infrared radiation. At present the impact of cirrus clouds on climate is well recognized but remains to be assessed more precisely, because their optical and radiative properties are not well known. In order to understand the effects of cirrus clouds on climate, the optical and radiative characteristics of these clouds need to be determined accurately at different scales and at different locations (i.e., latitudes). Lidars are well suited to observing cirrus clouds: they can detect very thin and semi-transparent layers and retrieve cloud geometrical properties (altitude, multiple layers) as well as radiative properties (optical depth, backscattering phase functions of ice crystals). Moreover, the linear depolarization ratio can give information on ice crystal shape. In addition, the data collected with an airborne version of the POLDER (POLarization and Directionality of Earth Reflectances) instrument have shown that bidirectional polarized measurements can provide information on cirrus cloud microphysical properties (crystal shapes, preferred orientation in space). The spaceborne version, POLDER-1, flew on the ADEOS-1 platform for 8 months (October 1996 - June 1997), and the next instrument, POLDER-2, will be launched in 2000 on ADEOS-2. The POLDER-1 cloud inversion algorithms are currently under validation. For cirrus clouds, a validation based on comparisons between cloud properties retrieved from POLDER-1 data and cloud properties inferred from a ground-based lidar network is currently under consideration. We present the first results of this validation.

  11. Isotopic and criticality validation for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Fuentes, E.; Lancaster, D.; Rahimi, M.

    1997-01-01

    The techniques used for actinide-only burnup credit isotopic validation and criticality validation are presented and discussed. Trending analyses have been incorporated into both methodologies, requiring biases and uncertainties to be treated as a function of the trending parameters. The isotopic validation is demonstrated using the SAS2H module of SCALE 4.2, with the 27BURNUPLIB cross section library; correction factors are presented for each of the actinides in the burnup credit methodology. For the criticality validation, the demonstration is performed with the CSAS module of SCALE 4.2 and the 27BURNUPLIB, resulting in a validated upper safety limit
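Trending analyses like the one described are commonly implemented as a regression of calculated k-eff against a trending parameter, so that bias and uncertainty become functions of that parameter rather than single constants. A minimal sketch, with invented benchmark data and burnup as an assumed trending parameter:

```python
# Hypothetical sketch of a criticality-validation trending analysis:
# regress calculated k_eff for benchmark cases against a trending parameter,
# then evaluate the bias (vs. k_eff = 1) at a point of interest. Data invented.
import numpy as np

param = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # e.g., burnup (GWd/MTU)
keff  = np.array([1.000, 0.998, 0.996, 0.994, 0.992]) # calculated k_eff values

slope, intercept = np.polyfit(param, keff, 1)          # linear trend fit
bias_at_25 = slope * 25.0 + intercept - 1.0            # bias at param = 25
print(round(slope, 5), round(bias_at_25, 4))           # -0.0002 -0.005
```

In a real analysis the fit residuals would also feed an uncertainty band around the trend, which together with the bias sets the upper safety limit.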

  12. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.
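The statistical comparison described reduces to Pearson correlations between paired score vectors. A minimal sketch follows; the scores are invented, and the toy data happen to correlate strongly, unlike the weak significant correlations (0.22 and 0.24) the study reported.

```python
# Illustrative sketch: Pearson correlation between scores on a custom
# assessment (ACT) and an external instrument (CCTST). Scores are invented.
import numpy as np

act_scores   = [12, 15, 9, 14, 11, 16, 10, 13]   # hypothetical ACT totals
cctst_scores = [18, 21, 15, 19, 17, 23, 14, 20]  # hypothetical CCTST totals

r = np.corrcoef(act_scores, cctst_scores)[0, 1]
print(round(r, 2))  # ~0.96 for this toy data
```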

  13. Comparison of results from the MCNP criticality validation suite using ENDF/B-VI and preliminary ENDF/B-VII nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R. D. (Russell D.)

    2004-01-01

    The MCNP Criticality Validation Suite is a collection of 31 benchmarks taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments. MCNP5 calculations clearly demonstrate that, overall, nuclear data for a preliminary version of ENDF/B-VII produce better agreement with the benchmarks in the suite than do corresponding data from ENDF/B-VI. Additional calculations identify areas where improvements in the data still are needed. Based on results for the MCNP Criticality Validation Suite, the Pre-ENDF/B-VII nuclear data produce substantially better overall results than do their ENDF/B-VI counterparts. The calculated values of keff for bare metal spheres and for an IEU cylinder reflected by normal uranium are in much better agreement with the benchmark values. In addition, the values of keff for the bare metal spheres are much more consistent with those for corresponding metal spheres reflected by normal uranium or water. Furthermore, a long-standing controversy about the need for an ad hoc adjustment to the 238U resonance integral for thermal systems may finally be resolved. On the other hand, improvements still are needed in a number of areas. Those areas include intermediate-energy cross sections for 235U, angular distributions for elastic scattering in deuterium, and fast cross sections for 237Np.

  14. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  15. Preliminary results for validation of Computational Fluid Dynamics for prediction of flow through a split vane spacer grid

    International Nuclear Information System (INIS)

    Rashkovan, A.; Novog, D.R.

    2012-01-01

    This paper presents the results of CFD simulations of turbulent flow past a spacer grid with mixing vanes. This study summarizes the first stage of an ongoing numerical blind exercise organized by the OECD-NEA. McMaster University, along with other participants, plans to submit a numerical prediction of the detailed flow field and turbulence characteristics of the flow past a 5x5 rod bundle with a spacer grid equipped with two types of mixing vanes. The results will be compared with blind experimental measurements performed in Korea. Because a number of modeling strategies have been suggested in the literature for such flows, we have performed a series of tests to assess mesh requirements, flow steadiness, turbulence modeling, and wall treatment effects. Results of these studies are reported in the present paper. (author)

  16. Validation results of the pre-service ultrasonic inspections of the Sizewell B pressurizer and steam generators and reactor coolant pump flywheels

    International Nuclear Information System (INIS)

    Conroy, P.J.; Leyland, K.S.

    1995-01-01

    In the UK, concern over the safety issues associated with nuclear power generation resulted in a demand for a public inquiry into the construction and operation of Sizewell B, Britain's first PWR. This public inquiry was additional to the UK's normal licensing process. The onus was placed upon the UK utility, CEGB (now Nuclear Electric plc), to provide evidence to the inquiry to support the case that the plant would be constructed and operated to a sufficiently high standard of safety. Part of the evidence to the inquiry (1) relied upon the ability of ultrasonic inspections to verify that the reactor pressure vessel and other safety critical components (collectively known as IoF components) were free from defects that could threaten structural integrity. At that time, the body of evidence showed that although ultrasonic inspection had the potential to satisfy this requirement, it would be necessary to validate the procedures and key operators used in order to provide assurance that they were adequate. Inspection validation therefore became an integral part of the UK PWR nuclear power program

  17. Approaches for accounting and prediction of fast neutron fluence on WWER pressure vessels and results of validation of calculational procedure

    International Nuclear Information System (INIS)

    Borodkin, P.G.; Khrennikov, N.N.; Ryabinin, Yu.A.; Adeev, V.A.

    2015-01-01

    A description is given of a universal procedure for calculating fast neutron fluence (FNF) on WWER pressure vessels. The calculation procedure was tested by comparing its results with measurements taken on the outer surfaces of WWER-440 and WWER-1000 vessels. In addition, the uncertainty of the calculation procedure was estimated in accordance with the requirements of regulatory documents. The developed procedure is applied at the Kola NPP for independent fast neutron fluence estimates on WWER-440 reactor vessels when planning core loads that take into account the introduction of new fuels. The results of the pilot operation of the procedure for calculating FNF at the Kola NPP were taken into account when improving the procedure and applying it to FNF calculations on WWER-1000 vessels

  18. Active Transportation Demand Management (ATDM) Trajectory Level Validation

    Data.gov (United States)

    Department of Transportation — The ATDM Trajectory Validation project developed a validation framework and a trajectory computational engine to compare and validate simulated and observed vehicle...

  19. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on reliability of used NDE/NDT procedures and methods have stimulated the development of simulation tools of NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and in the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, during a decade and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, in a later stage, compare with other software when all data are made public and available. The comparison has been with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared.
The agreement between UTDefect and the experiments was in general good (deviation less than 2dB) when the
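The deviation quoted above ("less than 2 dB") compares simulated and measured echo amplitudes on a logarithmic scale; 2 dB corresponds to an amplitude ratio of about 1.26. A minimal sketch:

```python
# Sketch of the agreement metric: amplitude deviation expressed in decibels.
import math

def deviation_db(simulated_amp, measured_amp):
    """20*log10 of the amplitude ratio; 0 dB means perfect agreement."""
    return 20.0 * math.log10(simulated_amp / measured_amp)

print(round(deviation_db(1.2, 1.0), 2))  # 1.58 dB -> within the 2 dB band
```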

  20. Validating PHITS for heavy ion fragmentation reactions

    International Nuclear Information System (INIS)

    Ronningen, Reginald M.

    2015-01-01

    The performance of the Monte Carlo code system PHITS is validated for heavy-ion transport capabilities by performing simulations and comparing results against experimental data from heavy-ion reactions of benchmark quality. These data are from measurements of isotope yields produced in the fragmentation of a 140 MeV/u 48Ca beam on a beryllium target and on a tantalum target. The results of this study show that PHITS performs reliably. (authors)

  1. System to monitor data analyses and results of physics data validation between pulses at DIII-D

    International Nuclear Information System (INIS)

    Flanagan, S.; Schachter, J.M.; Schissel, D.P.

    2004-01-01

    A data analysis monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, C Language Integrated Production System to implement expert system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse
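A between-pulse consistency check of the kind described (measured neutron rate vs. expected neutron emission, RDD0D) can be sketched as a simple tolerance comparison. The 50% tolerance and the example values are invented; the actual DAM expert-system rules are implemented in CLIPS.

```python
# Hypothetical sketch of a diagnostic consistency check: flag a pulse when
# the measured neutron rate and the expected emission disagree by more than
# a chosen relative tolerance. Tolerance and values are invented.
def neutron_rate_check(measured, expected, rel_tol=0.5):
    """Return True if the two values agree within rel_tol (fractional)."""
    if expected == 0:
        return measured == 0
    return abs(measured - expected) / abs(expected) <= rel_tol

print(neutron_rate_check(1.2e14, 1.0e14))  # True: within 50%, no alert
print(neutron_rate_check(3.0e14, 1.0e14))  # False: flag for review
```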

  2. A NEW SYSTEM TO MONITOR DATA ANALYSES AND RESULTS OF PHYSICS DATA VALIDATION BETWEEN PULSES AT DIII-D

    International Nuclear Information System (INIS)

    FLANAGAN, A; SCHACHTER, J.M; SCHISSEL, D.P

    2003-01-01

    A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This new system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse

  3. Design and validation of a comprehensive fecal incontinence questionnaire.

    Science.gov (United States)

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and a dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability were then assessed. Construct validity comprised factor analysis and internal consistency of the quality of life scale. Known-groups validity was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.
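The internal-consistency figure cited (overall Cronbach alpha = 0.97) can be computed from item-level responses as sketched below; the responses here are invented.

```python
# Sketch of Cronbach's alpha for a multi-item scale: alpha = k/(k-1) *
# (1 - sum of item variances / variance of total scores). Data invented.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

responses = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [1, 2, 1], [3, 3, 4]]
print(round(cronbach_alpha(responses), 2))  # 0.96 for this toy data
```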

  4. Carbohydrate-deficient transferrin--a valid marker of alcoholism in population studies? Results from the Copenhagen City Heart Study

    DEFF Research Database (Denmark)

    Grønbaek, M; Becker, U; Henriksen, Jens Henrik Sahl

    1995-01-01

    Carbohydrate-deficient transferrin (CDT) was analyzed by a modified radioimmunoassay test in a random population sample of 400 individuals, and results were compared with reported alcohol intake derived from a structured questionnaire. Among the 180 men, the test was found to be acceptable with respect to detecting harmful alcohol intake (> 35 beverages/week) and alcohol intake above the recommended level (21 beverages/week), although the positive predictive values were low. Among the 220 women, the test was invalid, with low predictive values. CDT was compared with other known markers of high alcohol intake, and it was observed that CDT had higher sensitivity and specificity than AST and the short Michigan Alcoholism Screening Test (sMAST) in men, whereas the positive and negative predictive values were low in all tests. A combination of CDT and AST proved to be a better marker of both harmful...
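The screening statistics compared above (sensitivity, specificity, and the predictive values that stayed low) follow directly from a 2x2 table of test result versus true drinking status. A minimal sketch with invented counts, chosen so that low prevalence keeps the positive predictive value low despite decent sensitivity and specificity:

```python
# Sketch of screening-marker statistics from a 2x2 table. Counts are invented.
def screening_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # detected among true heavy drinkers
    specificity = tn / (tn + fp)   # negative among true non-heavy drinkers
    ppv = tp / (tp + fp)           # true positives among all positives
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_stats(tp=8, fp=18, fn=2, tn=152)
print(f"sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f}")  # sens=0.80 spec=0.89 ppv=0.31
```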

  5. Bed slope effects on turbulent wave boundary layers: 1. Model validation and quantification of rough-turbulent results

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Fredsøe, Jørgen; Sumer, B. Mutlu

    2009-01-01

    A numerical model solving incompressible Reynolds-averaged Navier-Stokes equations, combined with a two-equation k-omega turbulence closure, is used to study converging-diverging effects from a sloping bed on turbulent (oscillatory) wave boundary layers. Bed shear stresses from the numerical model ... Validation against measurements for steady streaming induced by a skewed free stream velocity signal is also provided. We then simulate a series of experiments involving oscillatory flow in a convergent-divergent smooth tunnel, and a good match with respect to bed shear stresses and streaming velocities is achieved. The streaming is conceptually explained using analogies from steady converging and diffuser flows. A parametric study is undertaken to assess both the peak and time-averaged bed shear stresses in converging and diverging half periods under rough-turbulent conditions. The results are presented as friction factor...

  6. Cavity Attenuated Phase Shift (CAPS) Method for Airborne Aerosol Light Extinction Measurement: Instrument Validation and First Results from Field Deployment

    Science.gov (United States)

    Petzold, A.; Perim de Faria, J.; Berg, M.; Bundke, U.; Freedman, A.

    2015-12-01

    Monitoring the direct impact of aerosol particles on climate requires the continuous measurement of aerosol optical parameters like the aerosol extinction coefficient on a regular basis. Remote sensing and ground-based networks are well in place (e.g., AERONET, ACTRIS), whereas the regular in situ measurement of vertical profiles of atmospheric aerosol optical properties remains an important challenge in quantifying climate change. The European Research Infrastructure IAGOS (In-service Aircraft for a Global Observing System; www.iagos.org) responds to the increasing requests for long-term, routine in situ observational data by using commercial passenger aircraft as measurement platform. However, scientific instrumentation for the measurement of atmospheric constituents requires major modifications before being deployable aboard in-service passenger aircraft. Recently, a compact and robust family of optical instruments based on the cavity attenuated phase shift (CAPS) technique has become available for measuring aerosol light extinction. While this technique was successfully deployed for ground-based atmospheric measurements under various conditions, its suitability for operation aboard aircraft in the free and upper free troposphere still has to be demonstrated. In this work, the modifications of a CAPS PMex instrument for measuring aerosol light extinction on aircraft, the results from subsequent laboratory tests for evaluating the modified instrument prototype, and first results from a field deployment aboard a research aircraft will be covered. In laboratory studies, the instrument showed excellent agreement with reference measurements, with the CAPS PMex instrument response within 10% deviation. During the field deployment, aerosol extinction coefficients and associated aerosol size distributions were measured and will be presented as comparison studies between measured and calculated data.

  7. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is highly important that evidence is provided that training on these simulators actually improves clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence for their validation, to offer insight into which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are

  8. Measurements and validation of parametric schemes. Recent results, Cracow experiment / in the framework of cost - action 715

    Energy Technology Data Exchange (ETDEWEB)

    Godlowska, J.; Tomaszewska, A.M.; Rozwoda, W.; Walczewski, J.; Burzynski, J. [Div. for the Remote Sensing, Cracow (Poland). Inst. of Meteorology and Water Management

    2004-07-01

on the Penman-Monteith resistance method with 3 different theoretical approaches (Smith, Holtslag and Van Ulden, Berkowicz and Prahm). These formulas are widely used for flat, non-urban terrain. The results were compared with the results of measurements made with an ultrasonic anemometer (30-minute moving data for every 1 minute). (orig.)

  9. Validation of a near infrared microscopy method for the detection of animal products in feedingstuffs: results of a collaborative study.

    Science.gov (United States)

    Boix, A; Fernández Pierna, J A; von Holst, C; Baeten, V

    2012-01-01

The performance characteristics of a near infrared microscopy (NIRM) method, when applied to the detection of animal products in feedingstuffs, were determined via a collaborative study. The method delivers qualitative results in terms of the presence or absence of animal particles in feed and differentiates animal from vegetable feed ingredients on the basis of the evaluation of near infrared spectra obtained from individual particles present in the sample. The specificity ranged from 86% to 100%. The limit of detection obtained on analysis of the sediment fraction, prepared as for the European official method, was 0.1% processed animal proteins (PAPs) in feed, since all laboratories correctly identified the positive samples. This limit has to be increased to 2% for the analysis of samples that are not sedimented. The sensitivity required for official control is therefore achieved in the analysis of the sediment fraction of the samples, where the method can be applied for the detection of the presence of animal meal. Criteria for classifying samples as being of animal origin when fewer than five spectra are found need to be established in order to harmonise the approach taken by the laboratories when applying NIRM for the detection of the presence of animal meal in feed.
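
Performance figures like the specificity and sensitivity quoted above come directly from a confusion-matrix tally of a qualitative (presence/absence) method. A minimal sketch in Python; the counts below are hypothetical, not taken from the collaborative study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for a qualitative presence/absence test."""
    sensitivity = tp / (tp + fn)   # true-positive rate: positives correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: negatives correctly cleared
    return sensitivity, specificity

# Hypothetical counts: 20 spiked samples all detected, 43 of 50 blanks cleared.
sens, spec = sensitivity_specificity(tp=20, fn=0, tn=43, fp=7)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=100%, specificity=86%
```

With these invented counts the specificity matches the lower end of the 86%-100% range reported in the study.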

  10. Simulation analysis of impact tests of steel plate reinforced concrete and reinforced concrete slabs against aircraft impact and its validation with experimental results

    International Nuclear Information System (INIS)

    Sadiq, Muhammad; Xiu Yun, Zhu; Rong, Pan

    2014-01-01

Highlights: • Simulation analysis is carried out with two constitutive concrete models. • Winfrith model can better simulate nonlinear response of concrete than CSCM model. • Performance of steel plate concrete is better than reinforced concrete. • Thickness of safety related structures can be reduced by adopting steel plates. • Analysis results, mainly concrete material models, should be validated. - Abstract: Steel plate reinforced concrete and reinforced concrete structures are used in nuclear power plants for protection against aircraft impact. In order to compare the impact resistance performance of steel plate reinforced concrete and reinforced concrete slab panels, simulation analysis of 1/7.5 scale model impact tests is carried out using the finite element code ANSYS/LS-DYNA. The damage modes of all finite element models, velocity time history curves of the aircraft engine and damage to the aircraft model are compared with the impact test results of steel plate reinforced concrete and reinforced concrete slab panels. The results indicate that the finite element simulation results correlate well with the experimental results, especially for the Winfrith constitutive concrete model. Also, the impact resistance performance of steel plate reinforced concrete slab panels is better than that of reinforced concrete slab panels; in particular, the rear face steel plate is much more effective in preventing perforation and scabbing of concrete than conventional reinforced concrete structures. In this way, the thickness of steel plate reinforced concrete structures can be reduced in important structures like nuclear power plants against aircraft impact. The study also demonstrates the methodology to validate the analysis procedure with experimental and analytical studies, which may be effectively employed to predict the precise response of safety related structures against aircraft impact.

  11. CosmoQuest:Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  12. Reliability and validity in a nutshell.

    Science.gov (United States)

    Bannigan, Katrina; Watson, Roger

    2009-12-01

To explore and explain the different concepts of reliability and validity as they relate to measurement instruments in social science and health care. The terms reliability and validity contain distinct concepts that are often explained poorly, and there is often confusion between them. To develop some clarity about reliability and validity, a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability contains the concepts of internal consistency, stability and equivalence. Validity contains the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant validity. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.
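
The reliability concepts named above are usually quantified; internal consistency, for instance, is commonly estimated with Cronbach's alpha. A minimal sketch (the paper itself is conceptual and does not give this formula; the data below are invented):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    `scores` is a list of respondents, each a list of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one tuple per item
    item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Two perfectly consistent items (item 2 is item 1 shifted by a constant):
alpha = cronbach_alpha([[1, 2], [2, 3], [3, 4]])
```

Perfectly covarying items yield alpha = 1; uncorrelated items drive alpha toward 0.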

  13. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate and analyze variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
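
The statistical sampling mentioned above is, in its simplest form, Monte Carlo propagation of input variability through a model. A sketch under the assumption of independent Gaussian inputs; the model and numbers are invented for illustration, not taken from the presentation:

```python
import random
from statistics import mean, stdev

def propagate(model, inputs, n=10_000, seed=42):
    """Propagate input variability through `model` by Monte Carlo sampling.

    `inputs` maps parameter name -> (mean, standard deviation); each draw is
    an independent Gaussian, a common (but not universal) assumption.
    Returns the sample mean and standard deviation of the model output.
    """
    rng = random.Random(seed)
    samples = [
        model(**{name: rng.gauss(mu, sigma) for name, (mu, sigma) in inputs.items()})
        for _ in range(n)
    ]
    return mean(samples), stdev(samples)

# Toy model: a response proportional to load / stiffness (illustrative only).
mu_y, sigma_y = propagate(lambda load, k: load / k,
                          {"load": (100.0, 5.0), "k": (50.0, 1.0)})
```

The output spread quantifies the aleatoric part of the prediction uncertainty; numerical and model-form uncertainty must be assessed separately.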

  14. Seismic Data Gathering and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-02-01

Three recent earthquakes in the last seven years have exceeded their design basis earthquake values (so it is implied that damage to SSCs should have occurred). These seismic events were recorded at North Anna (August 2011, detailed information provided in [Virginia Electric and Power Company Memo]), Fukushima Daiichi and Daini (March 2011 [TEPCO 1]), and Kashiwazaki-Kariwa (2007, [TEPCO 2]). However, seismic walkdowns at some of these plants indicate that very little damage occurred to safety class systems and components due to the seismic motion. This report presents seismic data gathered for two of the three events mentioned above and recommends a path for using that data for two purposes. One purpose is to determine what margins exist in current industry standard seismic soil-structure interaction (SSI) tools. The second purpose is to use the data to validate seismic site response tools and SSI tools. The gathered data comprise free field soil and in-structure acceleration time histories, as well as elastic and dynamic soil properties and structural drawings. Gathering data and comparing them with existing models has the potential to identify areas of uncertainty that should be removed from current seismic analysis and SPRA approaches. Removing uncertainty (to the extent possible) from SPRAs will allow NPP owners to make decisions on where to reduce risk. Once a realistic understanding of seismic response is established for a nuclear power plant (NPP), decisions on needed protective measures, such as SI, can be made.

  15. Predictive validity of the Slovene Matura

    Directory of Open Access Journals (Sweden)

    Valentin Bucik

    2001-09-01

Full Text Available Passing the Matura is the last step of secondary school graduation, but it is also the entrance ticket to university. Moreover, the summary score of the Matura exam takes part in the selection process for particular university studies in the case of 'numerus clausus'. In discussing either aim of the Matura, important dilemmas arise, namely whether the Matura examination is a sufficiently exact and rightful procedure to, firstly, use its results for setting the conditions for starting university study and, secondly, to select validly, reliably and sensibly the best candidates for university studies. There are some questions concerning the predictive validity of the Matura that should be answered, e.g. (i) does the Matura as an enrollment procedure add to the quality of the study; (ii) is it a better selection tool than the entrance examinations formerly used in different faculties in the case of 'numerus clausus'; and (iii) is it reasonable to expect high predictive validity of Matura results for success at the university at all. Recent results show that in the last few years the dropout rate is lower than before, the pass rate between the first and the second year is higher and the average duration of study per student is shorter. It is clear, however, that it is not possible to simply predict study success from the Matura results; there are too many factors influencing success in university studies. In most examined study programs the correlation between Matura results and study success is positive but moderate, therefore it cannot be said categorically that only candidates accepted according to the Matura results are (or will be) the best students. Yet it has been shown that the Matura is a standardized procedure, comparable across different candidates entering university, and that – when compared with entrance examinations – it is more objective, reliable, and hence more valid and fair a procedure. In addition, comparable procedures of university recruiting and selection can be
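
The "positive but moderate correlation" reported above is the standard predictive-validity statistic: a Pearson correlation between the admission score and later study success. A minimal sketch; the paired scores below are invented, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Matura summary scores vs first-year grade averages:
r = pearson_r([22, 25, 28, 30, 33], [6.1, 6.5, 7.4, 7.2, 8.0])
```

A correlation of this kind measures only linear association; it says nothing about the many unmeasured factors the abstract notes also influence study success.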

  16. Validation for chromatographic and electrophoretic methods

    OpenAIRE

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  17. Redundant sensor validation by using fuzzy logic

    International Nuclear Information System (INIS)

    Holbert, K.E.; Heger, A.S.; Alang-Rashid, N.K.

    1994-01-01

This research is motivated by the need to relax the strict boundary of numeric-based signal validation. To this end, the use of fuzzy logic for redundant sensor validation is introduced. Since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. With this transformation, linguistically expressed analysis principles can be coded into a classification rule-base for signal failure detection and identification.
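
One way to make the idea concrete: grade each redundant sensor's agreement with the consensus by a fuzzy membership function instead of a hard numeric threshold. This is a minimal sketch of the general technique, not the authors' rule-base; the tolerance and readings are invented:

```python
from statistics import median

def membership_agree(deviation, tol):
    """Triangular membership for 'reading agrees with consensus':
    1 at zero deviation, falling linearly to 0 at deviation >= tol."""
    return max(0.0, 1.0 - abs(deviation) / tol)

def validate_redundant(readings, tol=2.0):
    """Fuzzy validity score per sensor, judged against the median consensus
    of the redundant channel group."""
    consensus = median(readings)
    return [membership_agree(r - consensus, tol) for r in readings]

scores = validate_redundant([100.1, 100.3, 104.0])  # third sensor drifts high
```

Unlike a crisp pass/fail check, the scores degrade gradually, so a slightly drifting sensor is flagged as "partially valid" rather than abruptly rejected.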

  18. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  19. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

Full Text Available Both qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The norms of validity and rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data are plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to take responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of the inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers’ judgments to the investigators themselves. There are different opinions on validity, with some suggesting that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  20. Generation of Human Induced Pluripotent Stem Cells Using RNA-Based Sendai Virus System and Pluripotency Validation of the Resulting Cell Population.

    Science.gov (United States)

    Chichagova, Valeria; Sanchez-Vera, Irene; Armstrong, Lyle; Steel, David; Lako, Majlinda

    2016-01-01

    Human induced pluripotent stem cells (hiPSCs) provide a platform for studying human disease in vitro, increase our understanding of human embryonic development, and provide clinically relevant cell types for transplantation, drug testing, and toxicology studies. Since their discovery, numerous advances have been made in order to eliminate issues such as vector integration into the host genome, low reprogramming efficiency, incomplete reprogramming and acquisition of genomic instabilities. One of the ways to achieve integration-free reprogramming is by using RNA-based Sendai virus. Here we describe a method to generate hiPSCs with Sendai virus in both feeder-free and feeder-dependent culture systems. Additionally, we illustrate methods by which to validate pluripotency of the resulting stem cell population.

  1. Preliminary results of the empirical validation of daily increments in otoliths of jack mackerel Trachurus symmetricus murphyi (Nichols, 1920) marked with oxytetracycline

    Directory of Open Access Journals (Sweden)

    Miguel Araya

    2003-12-01

    Full Text Available The frequency of microincrement formation in sagittae otoliths of jack mackerel Trachurus symmetricus was validated using experiments on captive fish. Adult jack mackerel were injected with a dose of 100 mg of oxytetracycline/kg of fish. A second injection was performed 30 days later. The fish were then sacrificed and their sagittae otoliths were extracted. Thin sections of the otoliths were prepared and observed through an epifluorescent microscope using ultraviolet light. Two fluorescent marks corresponding to the two injections were clearly visible. The average number of microincrements between the two fluorescent marks was 29 (n=10; S.D.=1.63) and the median was 29.3. The Wilcoxon signed-rank test indicated that this value was not significantly different from 30. This result indicates that microincrements in otoliths of adult jack mackerel of between 28.4 and 37.7 cm fork length are formed with a daily frequency.
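
The one-sample Wilcoxon signed-rank test used above checks whether increment counts deviate systematically from the expected 30 days between injections. A rough sketch with a normal approximation for the statistic (exact tables are preferable at n = 10; the counts below are invented, not the study's data):

```python
from math import sqrt

def wilcoxon_z(counts, hypothesized=30):
    """One-sample Wilcoxon signed-rank statistic, normal approximation.

    Zero differences are dropped; ties in |difference| receive average ranks.
    Returns the z-score of the positive-rank sum W+.
    """
    diffs = [c - hypothesized for c in counts if c != hypothesized]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over tied groups.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 1) / 2          # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean_w = n * (n + 1) / 4
    sd_w = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean_w) / sd_w

# Increment counts scattered symmetrically around 30 days:
z = wilcoxon_z([29, 31, 28, 32, 30, 29, 31, 30, 28, 32])
```

A z-score near zero, as here, is consistent with daily increment formation; the sketch omits the tie correction to the variance for brevity.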

  2. Verification and validation methodology of training simulators

    International Nuclear Information System (INIS)

    Hassan, M.W.; Khan, N.M.; Ali, S.; Jafri, M.N.

    1997-01-01

A full scope training simulator comprising 109 plant systems of a 300 MWe PWR plant contracted by the Pakistan Atomic Energy Commission (PAEC) from China is near completion. The simulator has its distinction in the sense that it will be ready prior to fuel loading. The models for the full scope training simulator have been developed under the APROS (Advanced PROcess Simulator) environment developed by the Technical Research Centre (VTT) and Imatran Voima (IVO) of Finland. The replicated control room of the plant is contracted from the Shanghai Nuclear Engineering Research and Design Institute (SNERDI), China. The development of simulation models to represent all the systems of the target plant that contribute to plant dynamics and are essential for operator training has been indigenously carried out at PAEC. This multifunctional simulator is at present under extensive testing and will be interfaced with the control panels in March 1998 so as to realize a full scope training simulator. The validation of the simulator is a joint venture between PAEC and SNERDI. For the individual components and the individual plant systems, the results have been compared against design data and PSAR results to confirm the faithfulness of the simulator against the physical plant systems. The reactor physics parameters have been validated against experimental results and benchmarks generated using design codes. Verification and validation in the integrated state has been performed against benchmark transients conducted using RELAP5/MOD2 for the complete spectrum of anticipated transients covering the five well-known categories. (author)

  3. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    Science.gov (United States)

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.

  4. Application of theoretical vehicle dynamic results for experimental validation of vehicle characteristics in autonomous vehicle guidance; Aehnlichkeitstheoretische Modelluebertragung zur experimentellen Eigenschaftsabsicherung in der autonomen Fahrzeugfuehrung

    Energy Technology Data Exchange (ETDEWEB)

    Hilgert, J.; Bertram, T. [Univ. Duisburg (Germany). Fachbereich Maschinenbau

    2002-07-01

The validation and verification of theoretical vehicle dynamics results for autonomous driving is a major challenge. The main reasons are the high cost of driving tests and the risk of damaging or destroying the test vehicle and injuring the persons involved. One possibility for avoiding these problems while still obtaining good experimental results lies in the use of scaled model vehicles. Of special relevance here is the transfer of relevant parameters to the full size vehicle. In this paper a method based on similitude analysis is developed for the validation and verification of driving tests for autonomous vehicles. The method is described for a lane change manoeuvre with a 1:5 scale vehicle belonging to the Institute of Mechatronics and System Dynamics at the Gerhard-Mercator-Universitaet Duisburg. (orig.)
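
Similitude analysis transfers model-scale results to the full-size vehicle through dimensionless scale factors derived from the geometric length ratio. The abstract does not state which similarity law the authors apply; as one common illustrative choice, Froude similarity (equal gravity and density) gives the factors sketched below:

```python
from math import sqrt

def froude_scale_factors(length_ratio):
    """Model-to-full-size scale factors under Froude similarity
    (equal gravity and material density assumed)."""
    lam = length_ratio                # e.g. 5 for a 1:5 scale model
    return {
        "length":   lam,
        "velocity": sqrt(lam),        # v ~ sqrt(g * L)
        "time":     sqrt(lam),        # t ~ L / v
        "force":    lam ** 3,         # F ~ rho * g * L^3
    }

factors = froude_scale_factors(5)
```

Under these assumptions a lane change driven at 5 m/s on the 1:5 model corresponds to roughly 11.2 m/s at full scale; other similarity laws (e.g. matching tire-slip dynamics) yield different factors.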

  5. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

Drug-drug interaction databases are an important tool for increasing drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one shows the best results and therefore increases safety for database users and patients. So far, there has been no validation of German drug interaction databases. This study validates German drug interaction databases regarding the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions that were published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and the severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54 and the ABDA-database 49 points. Based on this validation, MediQ seems to be the most suitable database for the field of psychopharmacotherapy. MediQ achieved the best results in this comparison, but this database also needs improvement with respect to the hit rate so that users can rely on the results and thereby increase drug therapy safety.

  6. Development and Validation of a Self-Assessment Tool for Albuminuria: Results From the Reasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    Science.gov (United States)

    Muntner, Paul; Woodward, Mark; Carson, April P; Judd, Suzanne E; Levitan, Emily B; Mann, Devin; McClellan, William; Warnock, David G

    2011-01-01

Background: The prevalence of albuminuria in the general population is high, but awareness of it is low. Therefore, we sought to develop and validate a self-assessment tool that allows individuals to estimate their probability of having albuminuria. Study Design: Cross-sectional study. Setting & Participants: The population-based REasons for Geographic And Racial Differences in Stroke (REGARDS) study for model development and the National Health and Nutrition Examination Survey 1999-2004 (NHANES 1999-2004) for model validation; US adults ≥ 45 years of age in the REGARDS study (n=19,697) and NHANES 1999-2004 (n=7,168). Factor: Candidate items for the self-assessment tool were collected using a combination of interviewer- and self-administered questionnaires. Outcome: Albuminuria was defined as a urinary albumin to urinary creatinine ratio ≥ 30 mg/g in spot samples. Results: Eight items were included in the self-assessment tool (age, race, gender, current smoking, self-rated health, and self-reported history of diabetes, hypertension, and stroke). These items provided a c-statistic of 0.709 (95% CI, 0.699 – 0.720) and a good model fit (Hosmer-Lemeshow chi-square p-value = 0.49). In the external validation data set, the c-statistic for discriminating individuals with and without albuminuria using the self-assessment tool was 0.714. Using a threshold of ≥ 10% probability of albuminuria from the self-assessment tool, 36% of US adults ≥ 45 years of age in NHANES 1999-2004 would test positive and be recommended for screening. The sensitivity, specificity, and positive and negative predictive values for albuminuria associated with a probability ≥ 10% were 66%, 68%, 23% and 93%, respectively. Limitations: Repeat urine samples were not available to assess the persistency of albuminuria. Conclusions: Eight self-report items provide good discrimination for the probability of having albuminuria. This tool may encourage individuals with a high probability to request
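
The c-statistic reported above has a simple pairwise interpretation: the probability that a randomly chosen person with albuminuria receives a higher predicted probability than a randomly chosen person without it. A minimal sketch of that computation; the scores below are invented, not study data:

```python
def c_statistic(cases, controls):
    """C-statistic (AUC) as the fraction of case/control pairs in which the
    case scores higher; ties count as half a win."""
    wins = 0.0
    for s_case in cases:
        for s_ctrl in controls:
            if s_case > s_ctrl:
                wins += 1.0
            elif s_case == s_ctrl:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Hypothetical predicted probabilities of albuminuria:
auc = c_statistic(cases=[0.8, 0.3], controls=[0.5, 0.2])
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which is why the tool's 0.709-0.714 is described as "good".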

  7. Is the Job Satisfaction Survey a good tool to measure job satisfaction amongst health workers in Nepal? Results of a validation analysis.

    Science.gov (United States)

    Batura, Neha; Skordis-Worrall, Jolene; Thapa, Rita; Basnyat, Regina; Morrison, Joanna

    2016-07-27

Job satisfaction is an important predictor of an individual's intention to leave the workplace. It is increasingly being used to consider the retention of health workers in low-income countries. However, the determinants of job satisfaction vary in different contexts, and it is important to use measurement methods that are contextually appropriate. We identified a measurement tool developed by Paul Spector, and used mixed methods to assess its validity and reliability in measuring job satisfaction among maternal and newborn health workers (MNHWs) in government facilities in rural Nepal. We administered the tool to 137 MNHWs and collected qualitative data from 78 MNHWs and from district- and central-level stakeholders to explore definitions of job satisfaction and factors that affected it. We calculated a job satisfaction index for all MNHWs using quantitative data and tested for validity, reliability and sensitivity. We conducted qualitative content analysis and compared the job satisfaction indices with the qualitative data. Results from the internal consistency tests offer encouraging evidence of the validity, reliability and sensitivity of the tool. Overall, the job satisfaction indices reflected the qualitative data. The tool was able to distinguish levels of job satisfaction among MNHWs. However, the work environment and promotion dimensions of the tool did not adequately reflect local conditions. Further, community fit was found to impact job satisfaction but was not captured by the tool. The relatively high incidence of missing responses may suggest that responding to some statements was perceived as risky. Our findings indicate that the adapted job satisfaction survey was able to measure job satisfaction in Nepal. However, it did not include key contextual factors affecting job satisfaction of MNHWs, and as such may have been less sensitive than a more inclusive measure. The findings suggest that this tool can be used in similar settings and populations, with the

  8. Developing a validation for environmental sustainability

    Science.gov (United States)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nawi, Mohd Nasrun Mohd; Aziz, Zulkifli

    2016-08-01

    One of the agendas for addressing environmental protection in construction is to reduce impacts and make construction activities more sustainable. This important consideration has generated several research interests within the construction industry, especially considering construction's damaging effects on the ecosystem, such as various forms of environmental pollution, resource depletion and biodiversity loss on a global scale. Using the Partial Least Squares-Structural Equation Modeling (PLS-SEM) technique, this study validates the environmental sustainability (ES) construct in the context of large construction firms in Malaysia. A cross-sectional survey was carried out in which data were collected from Malaysian large construction firms using a structured questionnaire. Results of this study revealed that business innovativeness and new technology are important in determining the environmental sustainability (ES) of Malaysian construction firms. The study also established an adequate level of internal consistency reliability, convergent validity and discriminant validity for each of its constructs. Based on these results, the indicators for the organisational innovativeness dimensions (business innovativeness and new technology) are useful for measuring these constructs in order to study construction firms' tendency to adopt environmental sustainability (ES) in their project execution.
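In PLS-SEM studies of this kind, convergent validity and internal consistency are conventionally summarized by the average variance extracted (AVE, usually required to be at least 0.5) and composite reliability (usually at least 0.7), both computed from the standardized outer loadings of a reflective construct. A minimal sketch of both computations, using illustrative loadings rather than the study's values:

```python
def convergent_validity(loadings):
    """AVE and composite reliability from the standardized outer
    loadings of one reflective construct."""
    ave = sum(l ** 2 for l in loadings) / len(loadings)          # average variance extracted
    s = sum(loadings)
    cr = s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))   # composite reliability
    return ave, cr

# Illustrative loadings for a four-indicator construct (not the study's values).
ave, cr = convergent_validity([0.82, 0.79, 0.88, 0.75])
```

Here the construct would pass both conventional thresholds (AVE > 0.5, CR > 0.7), which is the kind of evidence the abstract refers to as adequate convergent validity and internal consistency reliability.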

  9. INTRA - Maintenance and Validation. Final Report

    International Nuclear Information System (INIS)

    Edlund, Ove; Jahn, Hermann; Yitbarek, Z.

    2002-05-01

    The INTRA code is specified by the ITER Joint Central Team and the European Community as a reference code for safety analyses of Tokamak type fusion reactors. INTRA has been developed by GRS and Studsvik EcoSafe to analyse integrated behaviours such as pressurisation, chemical reactions and temperature transients inside the plasma chamber and adjacent rooms, following postulated accidents, e.g. ingress of coolant water or air. Important results of the ICE and EVITA experiments, which became available early 2001, were used to validate and improve specific INTRA models. Large efforts were spent on the behaviour of water and steam injection into low-pressure volumes at high temperature as well as on the modelling of boiling of water in contact with hot surfaces. As a result, a new version, INTRA/Mod4, was documented and issued. The work included implementation and validation of selected physical models in the code, maintaining code versions, preparation, review and distribution of code documents, and monitoring of the code-related activities being performed by the GRS under a separate contract. The INTRA/Mod4 Manual and Code Description is documented in four volumes: Volume 1 - Physical Modelling, Volume 2 - User's Manual, Volume 3 - Code Structure and Volume 4 - Validation

  10. Principles of validation of diagnostic assays for infectious diseases

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    Assay validation requires a series of inter-related processes. Assay validation is an experimental process: reagents and protocols are optimized by experimentation to detect the analyte with accuracy and precision. Assay validation is a relative process: its diagnostic sensitivity and diagnostic specificity are calculated relative to test results obtained from reference animal populations of known infection/exposure status. Assay validation is a conditional process: classification of animals in the target population as infected or uninfected is conditional upon how well the reference animal population used to validate the assay represents the target population; accurate predictions of the infection status of animals from test results (PV+ and PV-) are conditional upon the estimated prevalence of disease/infection in the target population. Assay validation is an incremental process: confidence in the validity of an assay increases over time when use confirms that it is robust as demonstrated by accurate and precise results; the assay may also achieve increasing levels of validity as it is upgraded and extended by adding reference populations of known infection status. Assay validation is a continuous process: the assay remains valid only insofar as it continues to provide accurate and precise results as proven through statistical verification. Therefore, the work required for validation of diagnostic assays for infectious diseases does not end with a time-limited series of experiments based on a few reference samples; rather, assuring valid test results from an assay requires constant vigilance and maintenance of the assay, along with reassessment of its performance characteristics for each unique population of animals to which it is applied. (author)
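The conditional dependence of PV+ and PV- on prevalence follows directly from Bayes' rule applied to diagnostic sensitivity and specificity. A small numerical sketch (the sensitivity, specificity and prevalence figures are illustrative, not from the paper):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PV+ and PV- of a diagnostic assay via Bayes' rule."""
    tp = sensitivity * prevalence                # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)    # false-positive fraction
    tn = specificity * (1 - prevalence)          # true-negative fraction
    fn = (1 - sensitivity) * prevalence          # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# The same assay gives very different PV+ in low- vs. high-prevalence populations.
ppv_low, npv_low = predictive_values(0.95, 0.98, 0.01)    # 1% prevalence
ppv_high, npv_high = predictive_values(0.95, 0.98, 0.30)  # 30% prevalence
```

With these numbers, a positive result is far more informative in the high-prevalence population, which is exactly why predictions are conditional on the estimated prevalence in the target population.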

  11. Construct validity of patient-reported outcome instruments in US adults with hemophilia: results from the Pain, Functional Impairment, and Quality of life (P-FiQ) study

    Directory of Open Access Journals (Sweden)

    Batt K

    2017-08-01

    Katharine Batt,1 Michael Recht,2 David L Cooper,3 Neeraj N Iyer,3 Christine L Kempton4 1Hematology and Oncology, Wake Forest School of Medicine, Winston-Salem, NC, 2The Hemophilia Center, Oregon Health & Science University, Portland, OR, 3Novo Nordisk Inc., Plainsboro, NJ, 4Departments of Pediatrics and Hematology and Medical Oncology, Emory University School of Medicine, Atlanta, GA, USA Background: People with hemophilia (PWH) experience frequent joint bleeding, resulting in pain and functional impairment. Generic and disease-specific patient-reported outcome (PRO) instruments have been used in clinical studies, but rarely in the comprehensive hemophilia care setting. Objective: The objective of this study was to assess construct validity of PRO instruments measuring pain, functional impairment, and health-related quality of life in US PWH with a history of joint pain/bleeding. Methods: Adult male PWH completed 4 PRO instruments (EQ-5D-5L with visual analog scale, Brief Pain Inventory v2 Short Form [BPI], SF-36v2, Hemophilia Activities List [HAL]) and underwent a musculoskeletal examination (Hemophilia Joint Health Score v2.1 [HJHS]). Construct validity between index and domain scores was evaluated by Pearson product-moment correlation coefficient. Results: A total of 381 PWH were enrolled. EQ-5D-5L Mobility correlated with BPI, SF-36v2, and HAL domains related to pain, physical function, and activity of the lower extremities. EQ-5D-5L Self-Care correlated only with HAL Self-Care. EQ-5D-5L Usual Activities correlated with BPI Pain Interference and domains within SF-36v2 and HAL related to pain and physical function/activities (particularly those involving the lower extremities). EQ-5D-5L Pain/Discomfort correlated with Bodily Pain and Physical Summary on SF-36v2, HAL Overall Activity, and all BPI pain domains. EQ-5D-5L Anxiety/Depression correlated with social/emotional/mental aspects of SF-36v2. On BPI, most pain domains correlated with Bodily
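Construct validity in this study was evaluated with Pearson product-moment correlations between instrument index and domain scores. A minimal implementation of that coefficient, with made-up scores standing in for the real domain data (the `mobility`/`activity` names are hypothetical stand-ins, not the study's variables):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired domain scores for six respondents.
mobility = [1, 2, 2, 3, 4, 5]
activity = [2, 1, 3, 4, 4, 6]
r = pearson_r(mobility, activity)
```

A strong positive r between two domains intended to capture related constructs (here, mobility and lower-extremity activity) is the pattern the abstract reports as evidence of convergent construct validity.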

  12. Wide Angle Imaging Lidar (WAIL): Theory of Operation and Results from Cross-Platform Validation at the ARM Southern Great Plains Site

    Science.gov (United States)

    Polonsky, I. N.; Davis, A. B.; Love, S. P.

    2004-05-01

    WAIL was designed to determine physical and geometrical characteristics of optically thick clouds using the off-beam component of the lidar return that can be accurately modeled within the 3D photon diffusion approximation. The theory shows that the WAIL signal depends not only on the cloud optical characteristics (phase function, extinction and scattering coefficients) but also on the outer thickness of the cloud layer. This makes it possible to estimate the mean optical and geometrical thicknesses of the cloud. The comparison with Monte Carlo simulation demonstrates the high accuracy of the diffusion approximation for moderately to very dense clouds. During operation WAIL is able to collect a complete data set from a cloud every few minutes, with averaging over a horizontal scale of a kilometer or so. In order to validate WAIL's ability to deliver cloud properties, the LANL instrument was deployed as a part of the THickness from Off-beam Returns (THOR) validation IOP. The goal was to probe clouds above the SGP CART site at night in March 2002 from below (WAIL and ARM instruments) and from NASA's P3 aircraft (carrying THOR, the GSFC counterpart of WAIL) flying above the clouds. The permanent cloud instruments we used to compare with the results obtained from WAIL were ARM's laser ceilometer, micro-pulse lidar (MPL), millimeter-wavelength cloud radar (MMCR), and micro-wave radiometer (MWR). The comparison shows that, in spite of an unusually low cloud ceiling, an unfavorable observation condition for WAIL's present configuration, cloud properties obtained from the new instrument are in good agreement with their counterparts obtained by other instruments. So WAIL can duplicate, at least for single-layer clouds, the cloud products of the MWR and MMCR together. But WAIL does this with green laser light, which is far more representative than microwaves of photon transport processes at work in the climate system.

  13. Validity and clinical utility of the DSM-5 severity specifier for bulimia nervosa: results from a multisite sample of patients who received evidence-based treatment.

    Science.gov (United States)

    Dakanalis, Antonios; Bartoli, Francesco; Caslini, Manuela; Crocamo, Cristina; Zanetti, Maria Assunta; Riva, Giuseppe; Clerici, Massimo; Carrà, Giuseppe

    2017-12-01

    A new "severity specifier" for bulimia nervosa (BN), based on the frequency of inappropriate weight compensatory behaviours (IWCBs), was added to the DSM-5 as a means of documenting heterogeneity and variability in the severity of the disorder. Yet, evidence for its validity in clinical populations, including prognostic significance for treatment outcome, is currently lacking. Existing data from 281 treatment-seeking patients with DSM-5 BN, who received the best available treatment for their disorder (manual-based cognitive behavioural therapy; CBT) in an outpatient setting, were re-analysed to examine whether these patients subgrouped based on the DSM-5 severity levels would show meaningful and consistent differences on (a) a range of clinical variables assessed at pre-treatment and (b) post-treatment abstinence from IWCBs. Results highlight that the mild, moderate, severe, and extreme severity groups were statistically distinguishable on 22 variables assessed at pre-treatment regarding eating disorder pathological features, maintenance factors of BN, associated (current) and lifetime psychopathology, social maladjustment and illness-specific functional impairment, and abstinence outcome. Mood intolerance, a maintenance factor of BN but external to eating disorder pathological features (typically addressed within CBT), emerged as the primary clinical variable distinguishing the severity groups showing a differential treatment response. Overall, the findings speak to the concurrent and predictive validity of the new DSM-5 severity criterion for BN and are important because a common benchmark informing patients, clinicians, and researchers about severity of the disorder and allowing severity fluctuation and patient's progress to be tracked does not exist so far. Implications for future research are outlined.

  14. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    Energy Technology Data Exchange (ETDEWEB)

    SEXTON, R.A.

    2000-03-13

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  15. Construct validity of the Individual Work Performance Questionnaire.

    OpenAIRE

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; Vet, H.C.W. de; Beek, A.J. van der

    2014-01-01

    Objective: To examine the construct validity of the Individual Work Performance Questionnaire (IWPQ). Methods: A total of 1424 Dutch workers from three occupational sectors (blue, pink, and white collar) participated in the study. First, IWPQ scores were correlated with related constructs (convergent validity). Second, differences between known groups were tested (discriminative validity). Results: First, IWPQ scores correlated weakly to moderately with absolute and relative presenteeism, and...

  16. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    International Nuclear Information System (INIS)

    SEXTON, R.A.

    2000-01-01

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation

  17. Validation of the Netherlands pacemaker patient registry

    NARCIS (Netherlands)

    Dijk, WA; Kingma, T; Hooijschuur, CAM; Dassen, WRM; Hoorntje, JCA; van Gelder, LM

    1997-01-01

    This paper deals with the validation of the information stored in the Netherlands central pacemaker patient database. At this moment the registry database contains information on more than 70500 patients, 85000 pacemakers and 90000 leads. The validation procedures consisted of an internal

  18. Ensuring validity in qualitative International Business Research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman

    2004-01-01

    The purpose of this paper is to provide an account of how the validity issue may be grasped within a qualitative approach to the IB field.

  19. 77 FR 27135 - HACCP Systems Validation

    Science.gov (United States)

    2012-05-09

    ... validation, the journal article should identify E.coli O157:H7 and other pathogens as the hazard that the..., or otherwise processes ground beef may determine that E. coli O157:H7 is not a hazard reasonably... specifications that require that the establishment's suppliers apply validated interventions to address E. coli...

  20. Validity in assessment of prior learning

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne; Aarkrog, Vibe

    2015-01-01

    The article discusses the need for specific criteria for assessment. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult education, vocational training, lifelong learning, validity.

  1. Validation of the Classroom Behavior Inventory

    Science.gov (United States)

    Blunden, Dale; And Others

    1974-01-01

    Factor-analytic methods were used to assess construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  2. DESIGN AND VALIDATION OF A CARDIORESPIRATORY ...

    African Journals Online (AJOL)

    UJA

    This study aimed to validate the 10x20m test for children aged 3 to 6 years in order ... obtained adequate parameters of reliability and validity in healthy children aged 3 ... and is a determinant of cardiovascular risk in preschool children (Bürgi et al., ... (Seca 222, Hamburg, Germany), and weight (kg) that was recorded with a ...

  3. DESCQA: Synthetic Sky Catalog Validation Framework

    Science.gov (United States)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  4. Structural Validation of the Holistic Wellness Assessment

    Science.gov (United States)

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  5. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss ... Results on a time series prediction benchmark demonstrate the potential of the linear unlearning technique.
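The replicated-training cost that motivates linear unlearning is easy to see in the naive scheme: one full refit per held-out example. A sketch of that baseline for a one-variable least-squares fit (this illustrates standard leave-one-out CV, not the paper's unlearning approximation):

```python
def loo_mse(xs, ys):
    """Leave-one-out cross-validation error for a one-variable
    least-squares fit, done the expensive way: one refit per held-out point."""
    errs = []
    for i in range(len(xs)):
        tx = [x for j, x in enumerate(xs) if j != i]   # training inputs
        ty = [y for j, y in enumerate(ys) if j != i]   # training targets
        n = len(tx)
        mx, my = sum(tx) / n, sum(ty) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(tx, ty))
                 / sum((a - mx) ** 2 for a in tx))
        intercept = my - slope * mx
        errs.append((ys[i] - (intercept + slope * xs[i])) ** 2)
    return sum(errs) / len(errs)

# Perfectly linear data: every held-out prediction is exact.
mse = loo_mse([0, 1, 2, 3, 4], [0, 1, 2, 3, 4])
```

For a model with expensive training (a neural network rather than this closed-form line), the loop body is what makes exact leave-one-out impractical, hence the appeal of approximating the retrained model by "unlearning" a single example.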

  6. Validation of self-reported erythema

    DEFF Research Database (Denmark)

    Petersen, B; Thieden, E; Lerche, C M

    2013-01-01

    Most epidemiological data of sunburn related to skin cancer have come from self-reporting in diaries and questionnaires. We thought it important to validate the reliability of such data.

  7. Validity of a Measure of Assertiveness

    Science.gov (United States)

    Galassi, John P.; Galassi, Merna D.

    1974-01-01

    This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self-and judges' ratings of assertiveness. (Author)

  8. Empirical Validation of Listening Proficiency Guidelines

    Science.gov (United States)

    Cox, Troy L.; Clifford, Ray

    2014-01-01

    Because listening has received little attention and the validation of ability scales describing multidimensional skills is always challenging, this study applied a multistage, criterion-referenced approach that used a framework of aligned audio passages and listening tasks to explore the validity of the ACTFL and related listening proficiency…

  9. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the Collision Module for the computer program DAMAGE.

  10. Is intercessory prayer valid nursing intervention?

    Science.gov (United States)

    Stang, Cecily Wellelr

    2011-01-01

    Is the use of intercessory prayer (IP) in modern nursing a valid practice? As discussed in current healthcare literature, IP is controversial, with authors offering support for and against the efficacy of the practice. This article reviews IP literature and research, concluding IP is a valid intervention for Christian nurses.

  11. Promoting Rigorous Validation Practice: An Applied Perspective

    Science.gov (United States)

    Mattern, Krista D.; Kobrin, Jennifer L.; Camara, Wayne J.

    2012-01-01

    As researchers at a testing organization concerned with the appropriate uses and validity evidence for our assessments, we provide an applied perspective related to the issues raised in the focus article. Newton's proposal for elaborating the consensus definition of validity is offered with the intention to reduce the risks of inadequate…

  12. Terminology, Emphasis, and Utility in Validation

    Science.gov (United States)

    Kane, Michael T.

    2008-01-01

    Lissitz and Samuelsen (2007) have proposed an operational definition of "validity" that shifts many of the questions traditionally considered under validity to a separate category associated with the utility of test use. Operational definitions support inferences about how well people perform some kind of task or how they respond to some kind of…

  13. Validating Measures of Mathematical Knowledge for Teaching

    Science.gov (United States)

    Kane, Michael

    2007-01-01

    According to Schilling, Blunk, and Hill, the set of papers presented in this journal issue had two main purposes: (1) to use an argument-based approach to evaluate the validity of the tests of mathematical knowledge for teaching (MKT), and (2) to critically assess the author's version of an argument-based approach to validation (Kane, 2001, 2004).…

  14. Validation of the German version of the insomnia severity index in adolescents, young adults and adult workers: results from three cross-sectional studies.

    Science.gov (United States)

    Gerber, Markus; Lang, Christin; Lemola, Sakari; Colledge, Flora; Kalak, Nadeem; Holsboer-Trachsler, Edith; Pühse, Uwe; Brand, Serge

    2016-05-31

    A variety of objective and subjective methods exist to assess insomnia. The Insomnia Severity Index (ISI) was developed to provide a brief self-report instrument useful to assess people's perception of sleep complaints. The ISI was developed in English, and has been translated into several languages including German. Surprisingly, the psychometric properties of the German version have not been evaluated, although the ISI is often used with German-speaking populations. The psychometric properties of the ISI are tested in three independent samples: 1475 adolescents, 862 university students, and 533 police and emergency response service officers. In all three studies, participants provide information about insomnia (ISI), sleep quality (Pittsburgh Sleep Quality Index), and psychological functioning (diverse instruments). Descriptive statistics, gender differences, homogeneity and internal consistency, convergent validity, and factorial validity (including measurement invariance across genders) are examined in each sample. The findings show that the German version of the ISI has generally acceptable psychometric properties and sufficient concurrent validity. Confirmatory factor analyses show that a 1-factor solution achieves good model fit. Furthermore, measurement invariance across gender is supported in all three samples. While the ISI has been widely used in German-speaking countries, this study is the first to provide empirical evidence that the German version of this instrument has good psychometric properties and satisfactory convergent and factorial validity across various age groups and both men and women. Thus, the German version of the ISI can be recommended as a brief screening measure in German-speaking populations.

  15. Italian Validation of Homophobia Scale (HS).

    Science.gov (United States)

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A

    2015-09-01

    The Homophobia Scale (HS) is a valid tool to assess homophobia. This self-report test is composed of 25 items and yields a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. The aim of this study was to validate the HS in the Italian context. An Italian translation of the HS was carried out by two bilingual people, after which a native English speaker translated the test back into English. A psychologist and sexologist checked the translated items from a clinical point of view. We recruited 100 subjects aged 18-65 for the Italian validation of the HS. The Pearson coefficient and Cronbach's α coefficient were used to test the test-retest reliability and internal consistency. A sociodemographic questionnaire covering age, geographic distribution, partnership status, education, religious orientation, and sexual orientation was administered together with the translated version of the HS. The analysis of internal consistency showed an overall Cronbach's α coefficient of 0.92. Across the domains, the Cronbach's α coefficient was 0.90 for behavior/negative affect, 0.94 for affect/behavioral aggression, and 0.92 for negative cognition, whereas for the total score it was 0.86. Test-retest reliability was r = 0.93 for the HS total score and r = 0.75 for negative cognition. The Italian validation of the HS showed that this self-report test has good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior.
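Internal consistency in validation studies like this one is typically Cronbach's α, computed from the per-item variances and the variance of the summed scale. A minimal sketch with fabricated item responses (three items, five respondents; not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (one list per item,
    aligned across the same respondents)."""
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    n_resp = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n_resp)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Fabricated responses: items that move together yield a high alpha.
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 2, 3, 4, 4]])
```

Values above roughly 0.9, as reported for the HS domains here, indicate that the items within a domain are measuring a common underlying construct.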

  16. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  17. Initial Verification and Validation Assessment for VERA

    Energy Technology Data Exchange (ETDEWEB)

    Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States); Athe, Paridhi [North Carolina State Univ., Raleigh, NC (United States); Jones, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hetzler, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sieger, Matt [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-04-01

    The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each attribute, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.

  18. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  19. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
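The point about conflicting univariate inferences can be illustrated with a squared Mahalanobis distance, a standard aggregate metric for correlated model-versus-experiment residuals (the residuals and covariance below are invented for illustration, not taken from the paper):

```python
def mahalanobis_sq(pred, obs, cov):
    """Squared Mahalanobis distance between a 2-vector of model predictions
    and the corresponding observations, given the 2x2 error covariance."""
    d = [obs[0] - pred[0], obs[1] - pred[1]]
    (a, b), (c, e) = cov
    det = a * e - b * c
    inv = [[e / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse
    t0 = inv[0][0] * d[0] + inv[0][1] * d[1]
    t1 = inv[1][0] * d[0] + inv[1][1] * d[1]
    return d[0] * t0 + d[1] * t1  # d^T cov^{-1} d

# Two residuals of 1.5 sigma each look unremarkable one at a time, but with
# strong positive correlation (0.9) their opposite signs are jointly extreme.
d2 = mahalanobis_sq((0.0, 0.0), (1.5, -1.5), [[1.0, 0.9], [0.9, 1.0]])
```

Compared against the 95% point of a chi-square distribution with 2 degrees of freedom (about 5.99), this joint distance flags a discrepancy that two separate univariate z-tests at 1.5 sigma would miss, which is why aggregate metrics are preferred for correlated quantities.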

  20. Validity of the Danish Prostate Symptom Score questionnaire in stroke

    DEFF Research Database (Denmark)

    Tibaek, S.; Dehlendorff, Christian

    2009-01-01

    Objective – To determine the content and face validity of the Danish Prostate Symptom Score (DAN-PSS-1) questionnaire in stroke patients. Materials and methods – Content validity was judged among an expert panel in neuro-urology. The judgement was measured by the content validity index (CVI). Face...... validity was indicated in a clinical sample of 482 stroke patients in a hospital-based, cross-sectional survey. Results – I-CVI was rated >0.78 (range 0.94–1.00) for 75% of symptom and bother items corresponding to adequate content validity. The expert panel rated the entire DAN-PSS-1 questionnaire highly...... questionnaire appears to be content and face valid for measuring lower urinary tract symptoms after stroke....
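
The item-level content validity index (I-CVI) reported above is simple to compute: it is the proportion of panel experts rating an item as relevant (3 or 4 on a 4-point scale), with 0.78 the adequacy threshold the study uses. A minimal sketch with invented expert ratings:

```python
def item_cvi(ratings):
    """Item-level content validity index: the proportion of experts who
    rate the item 3 or 4 on a 4-point relevance scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Ratings for one questionnaire item from a hypothetical 8-expert panel
ratings = [4, 4, 3, 4, 3, 4, 4, 3]
cvi = item_cvi(ratings)
adequate = cvi >= 0.78  # threshold for adequate content validity
```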

  1. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  2. Validation of Symptom Validity Tests Using a "Child-model" of Adult Cognitive Impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P. E. J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  3. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  4. Method validation in pharmaceutical analysis: from theory to practical optimization

    Directory of Open Access Journals (Sweden)

    Jaqueline Kaleian Eserian

    2015-01-01

    The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well-planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high-performance liquid chromatography (HPLC) analysis.

  5. Test validation of nuclear and fossil fuel control operators

    International Nuclear Information System (INIS)

    Moffie, D.J.

    1976-01-01

    To establish job relatedness, one must go through a procedure of concurrent and predictive validation. For concurrent validity a group of employees is tested and the test scores are related to performance concurrently or during the same time period. For predictive validity, individuals are tested but the results of these tests are not used at the time of employment. The tests are sealed and scored at a later date, and then related to job performance. Job performance data include ratings by supervisors, actual job performance indices, turnover, absenteeism, progress in training, etc. The testing guidelines also stipulate that content and construct validity can be used

  6. Worldwide Protein Data Bank validation information: usage and trends.

    Science.gov (United States)

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrends DB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  7. Student mathematical imagination instruments: construction, cultural adaptation and validity

    Science.gov (United States)

    Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.

    2018-03-01

    Imagination has an important role as the center of sensorimotor activity of the students. The purpose of this research is to construct an instrument for assessing students' mathematical imagination in understanding the concept of algebraic expressions. The researchers assessed validity using questionnaire and test techniques, with data analysed descriptively. The stages performed were: 1) constructing the embodiment of imagination; 2) selecting the learning style questionnaire; 3) constructing the instruments; 4) translating them into Indonesian and adapting the learning style questionnaire content to the students' culture; and 5) performing content validation. The results state that the constructed instrument is valid by both content validation and empirical validation, so it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics content experts. Empirical validation was done through a legibility test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire trial (86 students) was analysed using the point-biserial correlation technique, resulting in 16 valid items, with reliability assessed by KR-20 and meeting the medium-reliability criterion. The test instrument trial (32 students) found all items valid, with a KR-21 reliability of 0.62.
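
The KR-20 reliability quoted above is the Kuder-Richardson formula for dichotomously scored items, r = (k/(k-1))(1 - Σ p_i q_i / σ²), where p_i is the proportion answering item i correctly and σ² is the variance of total scores. A small sketch with invented 0/1 response data:

```python
def kr20(responses):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items.

    responses: list of per-student lists, one 0/1 entry per item.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n_students
    var = sum((t - mean) ** 2 for t in totals) / n_students  # population variance
    pq = 0.0
    for i in range(n_items):
        p = sum(r[i] for r in responses) / n_students  # proportion correct on item i
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var)

# Four students answering three items (invented data)
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(data)
```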

  8. Identification of seniors at risk (ISAR) screening tool in the emergency department: implementation using the plan-do-study-act model and validation results.

    Science.gov (United States)

    Asomaning, Nana; Loftus, Carla

    2014-07-01

    To better meet the needs of older adults in the emergency department, Senior Friendly care processes, such as high-risk screening, are recommended. The Identification of Seniors at Risk (ISAR) tool is a 6-item validated screening tool for identifying elderly patients at risk of adverse outcomes after an ED visit. This paper describes the implementation of the tool in the Mount Sinai Hospital emergency department using a Plan-Do-Study-Act model, and examines whether the tool predicts adverse outcomes. An observational study tracked tool implementation. A retrospective chart audit was completed to collect data about elderly ED patients during 2 time periods in 2010 and 2011. Data analysis compared the characteristics of patients with positive and negative screening tool results. The Identification of Seniors at Risk tool was completed for 51.6% of eligible patients, with 61.2% of patients having a positive result. Patients with positive screening results were more likely to be over age 79 (P = .003) and to be admitted to hospital. Implementation of the Identification of Seniors at Risk tool was challenged by problematic compliance with tool completion. Strategies to address this included tool adaptation, and providing staff with knowledge of ED and inpatient geriatric resources and feedback on completion rates. Positive screening results predicted adverse outcomes in elderly Mount Sinai Hospital ED patients. © 2014. Published by Elsevier Inc. All rights reserved.

  9. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  10. Paleoclimate validation of a numerical climate model

    International Nuclear Information System (INIS)

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-01-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long term future climate change on performance of the potential high level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented

  11. Transient Mixed Convection Validation for NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Barton [Utah State Univ., Logan, UT (United States); Schultz, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-10-19

    The results of this project are best described by the papers and dissertations that resulted from the work. They are included in their entirety in this document. They are: (1) Jeff Harris PhD dissertation (focused mainly on forced convection); (2) Blake Lance PhD dissertation (focused mainly on mixed and transient convection). This dissertation is in multi-paper format and includes the article currently submitted and one to be submitted shortly; and, (3) JFE paper on CFD Validation Benchmark for Forced Convection.

  12. Transient Mixed Convection Validation for NGNP

    International Nuclear Information System (INIS)

    Smith, Barton; Schultz, Richard

    2015-01-01

    The results of this project are best described by the papers and dissertations that resulted from the work. They are included in their entirety in this document. They are: (1) Jeff Harris PhD dissertation (focused mainly on forced convection); (2) Blake Lance PhD dissertation (focused mainly on mixed and transient convection). This dissertation is in multi-paper format and includes the article currently submitted and one to be submitted shortly; and, (3) JFE paper on CFD Validation Benchmark for Forced Convection.

  13. Comparison and validation of the results of the AZNHEX v.1.0 code with the MCNP code simulating the core of a fast reactor cooled with sodium

    International Nuclear Information System (INIS)

    Galicia A, J.; Francois L, J. L.; Bastida O, G. E.; Esquivel E, J.

    2016-09-01

    The development of the AZTLAN platform for the analysis and design of nuclear reactors is led by Instituto Nacional de Investigaciones Nucleares (ININ) and divided into four working groups, which have well-defined activities to achieve significant progress in this project individually and jointly. Within these working groups is the users group, whose main task is to use the codes that make up the AZTLAN platform to provide feedback to the developers, and in this way to make the final versions of the codes are efficient and at the same time reliable and easy to understand. In this paper we present the results provided by the AZNHEX v.1.0 code when simulating the core of a fast reactor cooled with sodium at steady state. The validation of these results is a fundamental part of the platform development and responsibility of the users group, so in this research the results obtained with AZNHEX are compared and analyzed with those provided by the Monte Carlo code MCNP-5, software worldwide used and recognized. A description of the methodology used with MCNP-5 is also presented for the calculation of the interest variables and the difference that is obtained with respect to the calculated with AZNHEX. (Author)

  14. Factorial Validity of the ADHD Adult Symptom Rating Scale in a French Community Sample: Results From the ChiP-ARD Study.

    Science.gov (United States)

    Morin, Alexandre J S; Tran, Antoine; Caci, Hervé

    2016-06-01

    Recent publications reported that a bifactor model better represented the underlying structure of ADHD than classical models, at least in youth. The Adult ADHD Symptoms Rating Scale (ASRS) has been translated into many languages, but a single study compared its structure in adults across Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) and International Classification of Diseases (ICD-10) classifications. We investigated the factor structure, reliability, and measurement invariance of the ASRS among a community sample of 1,171 adults. Results support a bifactor model, including one general ADHD factor and three specific Inattention, Hyperactivity, and Impulsivity factors corresponding to ICD-10, albeit the Impulsivity specific factor was weakly defined. Results also support the complete measurement invariance of this model across gender and age groups, and that men have higher scores than women on the ADHD G-factor but lower scores on all three S-factors. Results suggest that a total ASRS-ADHD score is meaningful, reliable, and valid in adults. (J. of Att. Dis. 2016; 20(6) 530-541). © The Author(s) 2013.

  15. Overview of SCIAMACHY validation: 2002–2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2005-08-01

    SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to address complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, the operational processors have been upgraded regularly, and some data products - level-1b spectra, level-2 O3, NO2, BrO and cloud data - have improved significantly. Validation results summarised in this paper conclude that for limited periods and geographical domains these products can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific use in other periods and domains. Untied to the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality.

  16. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  17. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  18. Validation of the IASI operational CH4 and N2O products using ground-based Fourier Transform Spectrometer: preliminary results at the Izaña Observatory (28°N, 17°W)

    Directory of Open Access Journals (Sweden)

    Omaira García

    2014-01-01

    Within the project VALIASI (VALidation of IASI level 2 products), the validation of the IASI operational atmospheric trace gas products (total column amounts of H2O, O3, CH4, N2O, CO2 and CO, as well as H2O and O3 profiles) will be carried out. Ground-based FTS (Fourier Transform Spectrometer) trace gas measurements made in the framework of NDACC (Network for the Detection of Atmospheric Composition Change) serve as the validation reference. In this work, we present the validation methodology developed for this project and show the first intercomparison results obtained for the Izaña Atmospheric Observatory between 2008 and 2012. As examples, we focus on two of the most important greenhouse gases, CH4 and N2O.

  19. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Directory of Open Access Journals (Sweden)

    Hiroo Hayashi

    2009-01-01

    The GPS radio occultation (RO) soundings by the FORMOSAT-3/COSMIC (Taiwan's Formosa Satellite Mission #3/Constellation Observing System for Meteorology, Ionosphere and Climate) satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde) observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. Using COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are performed for some tropical regions (Malaysia and Western Pacific islands), where moisture-rich air is expected in the lower troposphere, and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity profiles throughout the troposphere (1 - 1.5% difference at most), with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km), and a very small standard deviation (about 0.5% or less) for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches up to as much as 3.5 - 5% at 3 km, is seen in the tropics, while a much smaller standard deviation (1 - 2% at most) is evident throughout the polar troposphere.
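
The fractional-difference statistics cited above (mean bias and standard deviation in percent) reduce to a simple per-level computation over collocated pairs. A sketch with invented refractivity values, where the sonde profile is taken as the reference:

```python
def fractional_difference_stats(ro_values, sonde_values):
    """Mean bias and (population) standard deviation of fractional
    differences, in percent, between collocated RO and sonde
    refractivity values at one altitude level."""
    fd = [100.0 * (ro - s) / s for ro, s in zip(ro_values, sonde_values)]
    n = len(fd)
    mean = sum(fd) / n
    sd = (sum((x - mean) ** 2 for x in fd) / n) ** 0.5
    return mean, sd

# Invented collocated refractivity values (N-units) at one level
bias, spread = fractional_difference_stats(
    [101.0, 99.0, 102.0],   # RO retrievals
    [100.0, 100.0, 100.0],  # sonde reference
)
```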

  20. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus, models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation in regard to radionuclide transport. 5 refs

  1. Preliminary Validation of the Child Abuse Potential Inventory in Turkey

    Science.gov (United States)

    Kutsal, Ebru; Pasli, Figen; Isikli, Sedat; Sahin, Figen; Yilmaz, Gokce; Beyazova, Ufuk

    2011-01-01

    This study aims to provide preliminary findings on the validity of the Child Abuse Potential Inventory (CAP Inventory) in a Turkish sample of 23 abusive and 47 nonabusive parents. To investigate validity in the two groups, the Minnesota Multiphasic Personality Inventory (MMPI) Psychopathic Deviate (MMPI-PD) scale is also used along with the CAP. The results show…

  2. The Validity and Reliability of the Mobbing Scale (MS)

    Science.gov (United States)

    Yaman, Erkan

    2009-01-01

    The aim of this research is to develop the Mobbing Scale and examine its validity and reliability. The sample of the study consisted of 515 persons from Sakarya and Bursa. In this study, construct validity, internal consistency, test-retest reliability, and item analysis of the scale were examined. As a result of factor analysis for construct…

  3. Assessment of Irrational Beliefs: The Question of Discriminant Validity.

    Science.gov (United States)

    Smith, Timothy W.; Zurawski, Raymond M.

    1983-01-01

    Evaluated discriminant validity in frequently used measures of irrational beliefs relative to measures of trait anxiety in college students (N=142). Results showed discriminant validity in the Rational Behavior Inventory but not in the Irrational Beliefs Test and correlated cognitive rather than somatic aspects of trait anxiety with both measures.…

  4. Evaluating the Predictive Validity of Graduate Management Admission Test Scores

    Science.gov (United States)

    Sireci, Stephen G.; Talento-Miller, Eileen

    2006-01-01

    Admissions data and first-year grade point average (GPA) data from 11 graduate management schools were analyzed to evaluate the predictive validity of Graduate Management Admission Test[R] (GMAT[R]) scores and the extent to which predictive validity held across sex and race/ethnicity. The results indicated GMAT verbal and quantitative scores had…

  5. Study of the validity of a job-exposure matrix for psychosocial work factors: results from the national French SUMER survey.

    Science.gov (United States)

    Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres

    2008-10-01

    To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model, using nationally representative data of the French working population. National sample of 24,486 men and women who filled in Karasek's Job Content Questionnaire (JCQ), measuring the scores of psychological demands, decision latitude, and social support (individual scores), in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity, coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude, and support for each job title: JEM scores (the mean of the individual scores) and JEM binary exposures (the JEM score dichotomized at the median). Validity was assessed through the variance of the individual scores explained by occupations and economic activities, the correlation and agreement between individual and JEM measures, the sensitivity and specificity of JEM exposures, and the associations with self-reported health. These analyses showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude, compared with individual measures. The JEM measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEMs for psychosocial work factors.
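
The two JEM measures described above (a per-job-title mean score, then a binary exposure by dichotomizing those means at their median) can be sketched as follows; the job titles and scores are invented:

```python
from collections import defaultdict
from statistics import mean, median

def build_jem(records):
    """Build a job-exposure matrix from (job_title, individual_score) pairs.

    Returns per-job-title JEM scores (mean of individual scores) and JEM
    binary exposures (JEM score dichotomized at the median across titles).
    """
    by_job = defaultdict(list)
    for job, score in records:
        by_job[job].append(score)
    jem_scores = {job: mean(vals) for job, vals in by_job.items()}
    cut = median(jem_scores.values())
    jem_binary = {job: s >= cut for job, s in jem_scores.items()}
    return jem_scores, jem_binary

# Invented individual decision-latitude scores for three job titles
records = [("A", 1), ("A", 3), ("B", 5), ("B", 7), ("C", 2), ("C", 4)]
jem_scores, jem_binary = build_jem(records)
```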

  6. Surgery for the correction of hallux valgus: minimum five-year results with a validated patient-reported outcome tool and regression analysis.

    Science.gov (United States)

    Chong, A; Nazarian, N; Chandrananth, J; Tacey, M; Shepherd, D; Tran, P

    2015-02-01

    This study sought to determine the medium-term patient-reported and radiographic outcomes in patients undergoing surgery for hallux valgus. A total of 118 patients (162 feet) underwent surgery for hallux valgus between January 2008 and June 2009. The Manchester-Oxford Foot Questionnaire (MOXFQ), a validated tool for the assessment of outcome after surgery for hallux valgus, was used and patient satisfaction was sought. The medical records and radiographs were reviewed retrospectively. At a mean of 5.2 years (4.7 to 6.0) post-operatively, the median combined MOXFQ score was 7.8 (IQR:0 to 32.8). The median domain scores for pain, walking/standing, and social interaction were 10 (IQR: 0 to 45), 0 (IQR: 0 to 32.1) and 6.3 (IQR: 0 to 25) respectively. A total of 119 procedures (73.9%, in 90 patients) were reported as satisfactory but only 53 feet (32.7%, in 43 patients) were completely asymptomatic. The mean (SD) correction of hallux valgus, intermetatarsal, and distal metatarsal articular angles was 18.5° (8.8°), 5.7° (3.3°), and 16.6° (8.8°), respectively. Multivariable regression analysis identified that an American Association of Anesthesiologists grade of >1 (Incident Rate Ratio (IRR) = 1.67, p-value = 0.011) and recurrent deformity (IRR = 1.77, p-value = 0.003) were associated with significantly worse MOXFQ scores. No correlation was found between the severity of deformity, the type, or degree of surgical correction and the outcome. When using a validated outcome score for the assessment of outcome after surgery for hallux valgus, the long-term results are worse than expected when compared with the short- and mid-term outcomes, with 25.9% of patients dissatisfied at a mean follow-up of 5.2 years. ©2015 The British Editorial Society of Bone & Joint Surgery.

  7. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    Science.gov (United States)

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
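    The cut scores described above are typically chosen by jointly maximizing sensitivity and specificity, for example via Youden's J statistic. The sketch below illustrates that selection procedure on invented scores and criterion classifications; it is not the authors' actual analysis.

```python
def youden_optimal_cut(scores, invalid):
    """Pick the cut score maximizing Youden's J = sensitivity + specificity - 1.

    scores  -- validity-scale totals, one per respondent
    invalid -- True if the respondent met the external criterion for invalidity
    """
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        # classify as "invalid responding" when score >= cut
        tp = sum(1 for s, y in zip(scores, invalid) if s >= cut and y)
        fn = sum(1 for s, y in zip(scores, invalid) if s < cut and y)
        tn = sum(1 for s, y in zip(scores, invalid) if s < cut and not y)
        fp = sum(1 for s, y in zip(scores, invalid) if s >= cut and not y)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# invented scale scores and criterion classifications
scores = [1, 2, 2, 3, 7, 8, 9, 10]
invalid = [False, False, False, False, True, True, True, True]
print(youden_optimal_cut(scores, invalid))  # (7, 1.0): a cut of 7 separates perfectly
```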

  8. Validity and efficacy of the labor contract

    Directory of Open Access Journals (Sweden)

    Jorge Toyama

    2012-12-01

    Full Text Available The validity and efficacy of the labor contract as well as cases of nullity and defeasibility import an analysis of scopes of the supplementary application of Civil Code taking into account the peculiarities of Labor Law. Labor contract, while legal business has as regulatory framework to the regulations of Civil Code but it is necessary to determine, in each case, whether to apply fully this normative body, or modulate its supplemental application, or simply conclude that it doesn’t result compatible its regulation due to the special nature of labor relations. Specifically, this issue will be analyzed from cases of nullity and defeasibility of the labor contract.

  9. Validation of a semiquantitative food frequency questionnaire to assess folate status. Results discriminate a high-risk group of women residing on the Mexico-US border.

    Science.gov (United States)

    Bacardí-Gascón, Montserrat; Ley y de Góngora, Silvia; Castro-Vázquez, Brenda Yuniba; Jiménez-Cruz, Arturo

    2003-01-01

    The purpose of the study was to estimate dietary intake of folate in two groups of women from different economic backgrounds and to evaluate the validity of the 5-day-weighed food registry (5-d-WFR) and Food Frequency Questionnaire (FFQ) using biological markers. A cross-sectional study was conducted in two samples of urban Mexican women: one represented the middle socioeconomic status (middle SES) and the other, low socioeconomic status (low SES). Middle SES included 34 women recruited from 1998 to 1999. Participants were between the ages of 18 and 32 years and were employed in the banking industry (middle SES) in the US-Mexican border city of Tijuana, Baja California. Low SES included 70 women between the ages of 18 and 35 years recruited during the year 2000. These women were receiving care at a primary health care center in Ensenada, Baja California Norte State, Mexico (low SES). Pearson correlations were calculated between folate intake from the 5-day diet registry, the FFQ, and biochemical indices. FFQ reproducibility was assessed by Spearman correlation of each food item's daily and weekly intake. Average folate intake in the middle SES group from the 5-d-WFR was 210 ± 171 µg. Fifty-four percent of participants had intakes below recommended levels and were at risk of NTDs as a result of low folate intake and low serum folate and RBC folate concentrations.
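    Validity in FFQ studies of this kind is usually quantified as the correlation between instrument estimates and a reference method. A minimal Pearson correlation sketch with invented intake values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical folate intakes (µg/day): FFQ estimate vs. 5-day weighed food registry
ffq = [180, 220, 150, 300, 260]
wfr = [170, 240, 160, 280, 250]
print(round(pearson_r(ffq, wfr), 3))
```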

  10. Evaluation of recruitment and selection for specialty training in public health: interim results of a prospective cohort study to measure the predictive validity of the selection process.

    Science.gov (United States)

    Pashayan, Nora; Gray, Selena; Duff, Celia; Parkes, Julie; Williams, David; Patterson, Fiona; Koczwara, Anna; Fisher, Grant; Mason, Brendan W

    2016-06-01

    The recruitment process for public health specialty training includes an assessment centre (AC) with three components, the Rust Advanced Numerical Reasoning Appraisal (RANRA), the Watson-Glaser Critical Thinking Appraisal (WGCT) and a Situational Judgement Test (SJT), which determines invitation to a selection centre (SC). The scores are combined into a total recruitment (TR) score that determines the offers of appointment. This was a prospective cohort study using anonymous record linkage to investigate the association between applicants' scores in the recruitment process and registrars' progress through training, measured by results of Membership of the Faculty of Public Health (MFPH) examinations and outcomes of the Annual Review of Competence Progression (ARCP). Higher scores in RANRA, WGCT, AC, SC and TR were all significantly associated with higher adjusted odds of passing the Part A MFPH exam at the first attempt. Higher scores in AC, SC and TR were significantly associated with passing the Part B exam at the first attempt. Higher scores in SJT, AC and SC were significantly associated with satisfactory ARCP outcomes. The current UK national recruitment and selection process for public health specialty training has good predictive validity. The individual components of the process are testing different skills and abilities and together they are providing additive value. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  12. Assessing the construct validity of aberrant salience

    Directory of Open Access Journals (Sweden)

    Kristin Schmidt

    2009-12-01

    Full Text Available We sought to validate the psychometric properties of a recently developed paradigm that aims to measure salience attribution processes proposed to contribute to positive psychotic symptoms, the Salience Attribution Test (SAT). The “aberrant salience” measure from the SAT showed good face validity in previous results, with elevated scores both in high-schizotypy individuals and in patients with schizophrenia suffering from delusions. Exploring the construct validity of salience attribution variables derived from the SAT is important, since other factors, including latent inhibition/learned irrelevance, attention, probabilistic reward learning, sensitivity to probability, general cognitive ability and working memory could influence these measures. Fifty healthy participants completed schizotypy scales, the SAT, a learned irrelevance task, and a number of other cognitive tasks tapping into potentially confounding processes. Behavioural measures of interest from each task were entered into a principal components analysis, which yielded a five-factor structure accounting for ~75% of the variance in behaviour. Implicit aberrant salience was found to load onto its own factor, which was associated with elevated “Introvertive Anhedonia” schizotypy, replicating our previous finding. Learned irrelevance loaded onto a separate factor, which also included implicit adaptive salience, but was not associated with schizotypy. Explicit adaptive and aberrant salience, along with a measure of probabilistic learning, loaded onto a further factor, though this also did not correlate with schizotypy. These results suggest that the measures of learned irrelevance and implicit adaptive salience might be based on similar underlying processes, which are dissociable both from implicit aberrant salience and explicit measures of salience.

  13. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  14. The validation of Huffaz Intelligence Test (HIT)

    Science.gov (United States)

    Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain

    2017-08-01

    In general, a hafiz, a person who has memorized the Quran, has many strengths, especially with respect to academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a developed psychometric instrument, namely the Huffaz Intelligence Test (HIT). This paper presents the validation and reliability of the HIT for tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis used Partial Least Squares (PLS) to assess reliability and convergent and discriminant validity. The study validated nine intelligences. The findings also indicated that the composite reliabilities for the nine types of intelligences are greater than 0.8. Thus, the HIT is a valid and reliable instrument to measure the multiple intelligences of huffaz.
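    Composite reliability, the criterion reported above, is computed from standardized factor loadings, commonly as CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A sketch with hypothetical loadings (not values from the study):

```python
def composite_reliability(loadings):
    """Composite reliability from standardized loadings (assumes uncorrelated errors)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# hypothetical loadings for the items of one intelligence subscale
loadings = [0.72, 0.80, 0.75, 0.68]
print(round(composite_reliability(loadings), 3))  # 0.827, above the 0.8 criterion
```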

  15. Ensuring validity in qualitative international business research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Skaates, Maria Anne

    2002-01-01

    The purpose of this paper is to provide an account of how the validity issue related to qualitative research strategies within the IB field may be grasped from an at least partially subjectivist point of view. In section two, we will first assess via the aforementioned literature review the extent to which the validity issue has been treated in qualitative research contributions published in six leading English-language journals which publish IB research. Thereafter, in section three, we will discuss our findings and relate them to (a) various levels of a research project and (b) the existing literature on potential validity problems from a more subjectivist point of view. As a part of this step, we will demonstrate that the assumptions of objectivist and subjectivist ontologies and their corresponding epistemologies merit different canons for assessing research validity. In the subsequent…

  16. Convergent Validity of Four Innovativeness Scales.

    Science.gov (United States)

    Goldsmith, Ronald E.

    1986-01-01

    Four scales of innovativeness were administered to two samples of undergraduate students: the Open Processing Scale, Innovativeness Scale, innovation subscale of the Jackson Personality Inventory, and Kirton Adaption-Innovation Inventory. Intercorrelations indicated the scales generally exhibited convergent validity. (GDC)

  17. Validity of Sensory Systems as Distinct Constructs

    OpenAIRE

    Su, Chia-Ting; Parham, L. Diane

    2014-01-01

    Confirmatory factor analysis, using data from two age groups, tested whether sensory questionnaire items represented distinct sensory system constructs and found that such constructs can be measured validly using questionnaire data.

  18. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed at the Technical Research Centre of Finland (VTT) for VVER type reactors. This report describes the validation work of HEXTRAN, carried out with financing from the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for the calculation of accidents in which radially asymmetric phenomena are involved and both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes, and the models of these codes have been shown to function correctly within the HEXTRAN code as well. The main new model of HEXTRAN, the spatial neutron kinetics model, has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Connected with SMABRE, HEXTRAN can be reliably used for the calculation of transients including effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  19. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not been standardized. To confirm that the method is valid, it must be validated against various standard reference materials. In this work, validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method passes the tests of accuracy and precision for nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a. It can be concluded that this method is able to give valid results in the determination of elements in food product samples. (author)

  20. An information architecture for courseware validation

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    A lack of pedagogy in courseware can lead to learner rejection. It is therefore vital that pedagogy is a central concern of courseware construction. Courseware validation allows the course creator to specify pedagogical rules and principles which courseware must conform to. In this paper we investigate the information needed for courseware validation and propose an information architecture to be used as a basis for validation.

  1. Italian Validation of Homophobia Scale (HS)

    OpenAIRE

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L.; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A.

    2015-01-01

    Introduction: The Homophobia Scale (HS) is a valid tool to assess homophobia. The test is a self-report instrument composed of 25 items, which assess a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. Aim: The aim of this study was to validate the HS in the Italian context. Methods: An Italian translation of the HS was carried out by two bilingual people, after which a native English speaker translated the test back into English…

  2. VAlidation STandard antennas: Past, present and future

    DEFF Research Database (Denmark)

    Drioli, Luca Salghetti; Ostergaard, A; Paquay, M

    2011-01-01

    designed for validation campaigns of antenna measurement ranges. The driving requirements of VAST antennas are their mechanical stability over a given operational temperature range and with respect to any orientation of the gravity field. The mechanical design shall ensure extremely stable electrical… /V-band of telecom satellites. The paper will address requirements for future VASTs and possible architecture for multi-frequency Validation Standard antennas…

  3. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  4. Verification and Validation in Systems Engineering

    CERN Document Server

    Debbabi, Mourad; Jarraya, Yosr; Soeanu, Andrei; Alawneh, Luay

    2010-01-01

    "Verification and validation" represents an important process used for the quality assessment of engineered systems and their compliance with the requirements established at the beginning of or during the development cycle. Debbabi and his coauthors investigate methodologies and techniques that can be employed for the automatic verification and validation of systems engineering design models expressed in standardized modeling languages. Their presentation includes a bird's eye view of the most prominent modeling languages for software and systems engineering, namely the Unified Modeling Language (UML)…

  5. The Legality and Validity of Administrative Enforcement

    Directory of Open Access Journals (Sweden)

    Sergei V. Iarkovoi

    2018-01-01

    Full Text Available The article discusses the concept and content of the validity of legal acts adopted by executive authorities and other bodies of public administration, and of the legal actions they perform, as an important characteristic of law enforcement by these bodies. The author concludes that the validity of administrative law enforcement is not an independent requirement on it, but rather acts as an integral part of its legal requirements.

  6. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-based learning.
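    The cross-validation error that the theory analyzes can also be estimated empirically. Below is a minimal k-fold cross-validation sketch for simple least-squares regression on invented data; it illustrates the general procedure, not Turney's formal theory:

```python
def kfold_mse(xs, ys, k=5):
    """Estimate prediction error of least-squares y = a*x + b by k-fold CV."""
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    total, count = 0.0, 0
    for test_idx in folds:
        train = [i for i in range(n) if i not in test_idx]
        # fit slope a and intercept b on the training fold only
        mx = sum(xs[i] for i in train) / len(train)
        my = sum(ys[i] for i in train) / len(train)
        num = sum((xs[i] - mx) * (ys[i] - my) for i in train)
        den = sum((xs[i] - mx) ** 2 for i in train)
        a = num / den
        b = my - a * mx
        # accumulate squared error on the held-out fold
        for i in test_idx:
            total += (ys[i] - (a * xs[i] + b)) ** 2
            count += 1
    return total / count

# invented, near-linear data (y ≈ 2x with small noise)
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1, 18.0, 20.2]
print(kfold_mse(xs, ys))  # small error: the data are near-linear
```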

  7. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal models. A survey of current practices and techniques was undertaken and evaluated using these criteria, with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made.

  8. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam University; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  9. [French validation of the Frustration Discomfort Scale].

    Science.gov (United States)

    Chamayou, J-L; Tsenova, V; Gonthier, C; Blatier, C; Yahyaoui, A

    2016-08-01

    Rational emotive behavior therapy originally considered the concept of frustration intolerance in relation to different beliefs or cognitive patterns. Psychological disorders or, to some extent, certain affects such as frustration could result from irrational beliefs. Initially regarded as a unidimensional construct, recent literature considers those irrational beliefs as a multidimensional construct; such is the case for the phenomenon of frustration. In order to measure frustration intolerance, Harrington (2005) developed and validated the Frustration Discomfort Scale. The scale includes four dimensions of beliefs: emotional intolerance includes beliefs according to which emotional distress is intolerable and must be controlled or avoided as soon as possible. The intolerance of discomfort or demand for comfort is the second dimension, based on beliefs that life should be peaceful and comfortable and that any inconvenience, effort or hassle should be avoided. The third dimension is entitlement, which includes beliefs about personal goals, such as merit, fairness, respect and gratification, and that others must not frustrate those non-negotiable desires. The fourth dimension is achievement, which reflects demands for high expectations or standards. The aim of this study was to translate and validate in a French population the Frustration Discomfort Scale developed by Harrington (2005), assess its psychometric properties, highlight the four-factor structure of the scale, and examine the relationships between this concept and both emotion regulation and perceived stress. We translated the Frustration Discomfort Scale from English to French and back from French to English in order to ensure good quality of translation. We then submitted the scale to 289 students (239 females and 50 males) from the University of Savoy, in addition to the Cognitive Emotional Regulation Questionnaire and the Perceived Stress Scale. The results showed satisfactory psychometric properties.

  10. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
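    The c-statistic at the centre of this record is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event (ties counted as one half). A minimal implementation on invented predictions (this is the plain c-statistic, not the paper's permutation test):

```python
def c_statistic(preds, outcomes):
    """Concordance (c) statistic: P(pred_event > pred_nonevent), ties count 0.5."""
    events = [p for p, y in zip(preds, outcomes) if y == 1]
    nonevents = [p for p, y in zip(preds, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for ne in nonevents:
            pairs += 1
            if e > ne:
                concordant += 1
            elif e == ne:
                concordant += 0.5
    return concordant / pairs

# invented predicted risks and observed outcomes (1 = event)
preds = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0]
print(c_statistic(preds, outcomes))  # 8 of 9 event/non-event pairs are concordant
```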

  11. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost effective environmental data collection process should produce analytical data which meets regulatory and program specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested. The verification process determines whether all the requirements were met. Validation is more complicated than verification. It attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consists of a sample result with an associated error. Therefore, radiochemical validation is different and more quantitative than is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainty that can be statistically compared to identify significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish the differences between the two operations
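    The statistical comparison the passage describes, deciding whether two radiochemical results differ given their reported uncertainties, can be sketched as a simple z-test on independent measurements. The nuclide label and values below are invented for illustration:

```python
import math

def differ_significantly(x1, u1, x2, u2, k=1.96):
    """Test whether two measurements with 1-sigma uncertainties differ
    at roughly 95% confidence (independent, approximately normal errors assumed)."""
    z = abs(x1 - x2) / math.sqrt(u1 ** 2 + u2 ** 2)
    return z > k

# hypothetical Cs-137 results (Bq/kg) from a sample and its duplicate
print(differ_significantly(12.4, 0.8, 9.8, 0.9))   # True: difference exceeds uncertainty
print(differ_significantly(12.4, 0.8, 12.0, 0.9))  # False: consistent within uncertainty
```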

  12. The Perceived Leadership Communication Questionnaire (PLCQ): Development and Validation.

    Science.gov (United States)

    Schneider, Frank M; Maier, Michaela; Lovrekovic, Sara; Retzbach, Andrea

    2015-01-01

    The Perceived Leadership Communication Questionnaire (PLCQ) is a short, reliable, and valid instrument for measuring leadership communication from both perspectives of the leader and the follower. Drawing on a communication-based approach to leadership and following a theoretical framework of interpersonal communication processes in organizations, this article describes the development and validation of a one-dimensional 6-item scale in four studies (total N = 604). Results from Study 1 and 2 provide evidence for the internal consistency and factorial validity of the PLCQ's self-rating version (PLCQ-SR)-a version for measuring how leaders perceive their own communication with their followers. Results from Study 3 and 4 show internal consistency, construct validity, and criterion validity of the PLCQ's other-rating version (PLCQ-OR)-a version for measuring how followers perceive the communication of their leaders. Cronbach's α had an average of .80 over the four studies. All confirmatory factor analyses yielded good to excellent model fit indices. Convergent validity was established by average positive correlations of .69 with subdimensions of transformational leadership and leader-member exchange scales. Furthermore, nonsignificant correlations with socially desirable responding indicated discriminant validity. Last, criterion validity was supported by a moderately positive correlation with job satisfaction (r = .31).
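    Cronbach's α, reported above as averaging .80, is computed from item variances and the variance of the total score: α = k/(k − 1) · (1 − Σσᵢ²/σₜ²). A sketch with hypothetical Likert responses (not PLCQ data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of item-score lists (one per item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score for each of the n respondents
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# hypothetical responses: 6 items x 5 respondents, 1-5 Likert scale
items = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 2, 4],
    [3, 4, 3, 1, 4],
    [4, 5, 3, 2, 3],
    [4, 4, 2, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.96: items covary strongly
```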

  13. Validating presupposed versus focused text information.

    Science.gov (United States)

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  14. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  15. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through the attribution of scores to every item in the research instrument. In today's globalized economic scenario, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. Regarding this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer. The Chief Security Officer, in doing so, strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. Therefore, for such a policy to be effective, it has to be entirely embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of Metropolitan São Paulo. This sampling was based on the representativeness of such industries, particularly with regard to each one's implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of the security policy, namely: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. On analyzing the Cronbach's alpha structure of these security items, the results not only attest that the capacity of those industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.
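    As a side note on the reliability statistic used in this record: Cronbach's alpha can be computed directly from an item-score matrix. A minimal Python sketch, with purely illustrative data (none of the study's items or scores):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 3 Likert items
scores = np.array([[4, 4, 5],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3],
                   [1, 2, 1]])
print(round(cronbach_alpha(scores), 3))  # → 0.954
```

    Values near 1 indicate that the items covary strongly, i.e. they plausibly measure the same construct.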

  16. Validation of the vaccine conspiracy beliefs scale

    Directory of Open Access Journals (Sweden)

    Gilla K. Shapiro

    2016-12-01

    Full Text Available Background: Parents’ vaccine attitudes influence their decision regarding child vaccination. To date, no study has evaluated the impact of vaccine conspiracy beliefs on human papillomavirus vaccine acceptance. The authors assessed the validity of a Vaccine Conspiracy Beliefs Scale (VCBS) and determined whether this scale was associated with parents’ willingness to vaccinate their son with the HPV vaccine. Methods: Canadian parents completed a 24-min online survey in 2014. Measures included socio-demographic variables, HPV knowledge, health care provider recommendation, the Conspiracy Mentality Questionnaire (CMQ), the seven-item VCBS, and parents’ willingness to vaccinate their son at two price points. Results: A total of 1427 Canadian parents completed the survey in English (61.2%) or French (38.8%). A factor analysis revealed that the VCBS is one-dimensional and has high internal consistency (α=0.937). The construct validity of the VCBS was supported by a moderate relationship with the CMQ (r=0.44, p<0.001). Hierarchical regression analyses found the VCBS is negatively related to parents’ willingness to vaccinate their son with the HPV vaccine at both price points (‘free’ or ‘$300’) after controlling for gender, age, household income, education level, HPV knowledge, and health care provider recommendation. Conclusions: The VCBS is a brief, valid scale that will be useful in further elucidating the correlates of vaccine hesitancy. Future research could use the VCBS to evaluate the impact of vaccine conspiracy beliefs on vaccine uptake and how concerns about vaccination may be challenged and reversed. Keywords: Cancer prevention, Conspiracy beliefs, Human papillomavirus, Vaccine hesitancy, Vaccines, Vaccine Conspiracy Belief Scale

  17. A PHYSICAL ACTIVITY QUESTIONNAIRE: REPRODUCIBILITY AND VALIDITY

    Directory of Open Access Journals (Sweden)

    Nicolas Barbosa

    2007-12-01

    Full Text Available This study evaluates the reproducibility and validity of the Quantification de L'Activite Physique en Altitude chez les Enfants (QAPACE) supervised self-administered questionnaire for estimating the mean daily energy expenditure (DEE) of Bogotá schoolchildren. Comprehension was assessed in 324 students, whereas reproducibility was studied in a different random sample of 162 who were exposed to the questionnaire twice. Reproducibility was assessed using both the Bland-Altman plot and the intra-class correlation coefficient (ICC). Validity was studied in a randomly selected sample of 18 girls and 18 boys, who completed the test-retest study. The DEE derived from the questionnaire was compared with the laboratory measurements of peak oxygen uptake (Peak VO2) from ergo-spirometry and the Leger Test. The reproducibility ICC was 0.96 (95% C.I. 0.95-0.97); by age categories: 8-10, 0.94 (0.89-0.97); 11-13, 0.98 (0.96-0.99); 14-16, 0.95 (0.91-0.98). The ICC between mean DEE as estimated by the questionnaire and the direct and indirect Peak VO2 was 0.76 (0.66) (p<0.01); by age categories 8-10, 11-13, and 14-16 it was 0.89 (0.87), 0.76 (0.78) and 0.88 (0.80), respectively. The QAPACE questionnaire is reproducible and valid for estimating PA and showed a high correlation with Peak VO2 uptake.
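    The Bland-Altman analysis used in this reproducibility study reduces to the bias (mean difference between paired measurements) and its 95% limits of agreement. A minimal sketch, assuming approximately normal differences; the numbers are illustrative, not the QAPACE data:

```python
import numpy as np

def bland_altman_limits(m1, m2):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    # 95% of differences are expected to fall within bias ± 1.96 SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeated DEE estimates (kcal/day) for the same subjects
first  = [2100, 1950, 2300, 2050]
second = [2080, 2000, 2260, 2070]
bias, lower, upper = bland_altman_limits(first, second)
```

    Plotting each pair's difference against its mean, together with these three horizontal lines, gives the usual Bland-Altman plot.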

  18. Validation of evaluated neutron standard cross sections

    International Nuclear Information System (INIS)

    Badikov, S.; Golashvili, T.

    2008-01-01

    Some steps of the validation and verification of the new version of the evaluated neutron standard cross sections were carried out. In particular: -) the evaluated covariance data were checked for physical consistency; -) the energy-dependent evaluated cross sections were tested in the most important neutron benchmark field, the 252Cf spontaneous fission neutron field; -) a procedure for folding differential standard neutron data into a group representation, for the preparation of specialized libraries of the neutron standards, was verified. The results of the validation and verification of the neutron standards can be summarized as follows: a) the covariance data of the evaluated neutron standards are physically consistent, since all the covariance matrices of the evaluated cross sections are positive definite; b) the 252Cf spectrum-averaged standard cross sections are in agreement with the evaluated integral data (except for the 197Au(n,γ) reaction); c) a procedure for folding differential standard neutron data into a group representation was tested; as a result, a specialized library of neutron standards in the ABBN 28-group structure was prepared for use in reactor applications. (authors)

  19. RESEM-CA: Validation and testing

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Vineeta; Carroll, William L.; Bourassa, Norman

    2002-09-01

    This report documents the results of an extended comparison of RESEM-CA energy and economic performance predictions with the recognized benchmark tool DOE2.1E to determine the validity and effectiveness of this tool for retrofit design and analysis. The analysis was a two part comparison of patterns of (1) monthly and annual energy consumption of a simple base-case building and controlled variations in it to explore the predictions of load components of each program, and (2) a simplified life-cycle cost analysis of the predicted effects of selected Energy Conservation Measures (ECMs). The study tries to analyze and/or explain the differences that were observed. On the whole, this validation study indicates that RESEM is a promising tool for retrofit analysis. As a result of this study some factors (incident solar radiation, outside air film coefficient, IR radiation) have been identified where there is a possibility of algorithmic improvements. These would have to be made in a way that does not sacrifice the speed of the tool, necessary for extensive parametric search of optimum ECM measures.

  20. Validation testing of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Hang Bae; Han, Jae Bok

    1995-01-01

    A software engineering process has been developed for the design of safety-critical software for the Wolsung 2/3/4 project to satisfy the requirements of the regulatory body. Within this process, this paper describes in detail the validation testing performed to ensure that the software, together with its hardware, developed by the design group satisfies the requirements of the functional specification prepared by the independent functional group. To perform the tests, a test facility and test software were developed and the actual safety system computer was connected. Three kinds of test cases, i.e., functional tests, performance tests and self-check tests, were programmed and run to verify each functional specification. Test failures were fed back to the design group to revise the software, and test results were analyzed and documented in a report submitted to the regulatory body. The test methodology and procedure were efficient and well suited to systematic, automated testing. The test results were also acceptable and successful in verifying that the software acts as specified in the program functional specification. This methodology can be applied to the validation of other safety-critical software. 2 figs., 2 tabs., 14 refs. (Author)

  1. Validation of the Rotation Ratios Method

    International Nuclear Information System (INIS)

    Foss, O.A.; Klaksvik, J.; Benum, P.; Anda, S.

    2007-01-01

    Background: The rotation ratios method describes rotations between pairs of sequential pelvic radiographs. The method seems promising but has not been validated. Purpose: To validate the accuracy of the rotation ratios method. Material and Methods: Known pelvic rotations between 165 radiographs obtained from five skeletal pelvises in an experimental material were compared with the corresponding calculated rotations to describe the accuracy of the method. The results from a clinical material of 262 pelvic radiographs from 46 patients defined the ranges of rotational differences compared. Repeated analyses, both on the experimental and the clinical material, were performed using the selected reference points to describe the robustness and the repeatability of the method. Results: The reference points were easy to identify and barely influenced by pelvic rotations. The mean differences between calculated and real pelvic rotations were 0.0 deg (SD 0.6) for vertical rotations and 0.1 deg (SD 0.7) for transversal rotations in the experimental material. The intra- and interobserver repeatability of the method was good. Conclusion: The accuracy of the method was reasonably high, and the method may prove to be clinically useful

  2. COVERS Neonatal Pain Scale: Development and Validation

    Directory of Open Access Journals (Sweden)

    Ivan L. Hand

    2010-01-01

    Full Text Available Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile [PIPP], and the Neonatal Infant Pain Scale [NIPS]), as well as the COVERS Scale, depending upon gestational age. Premature infant testing resulted in similar pain assessments using the COVERS and PIPP scales with an r=0.84. For the full-term infants, the COVERS scale and NIPS scale resulted in similar pain assessments with an r=0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.

  3. Overview of SCIAMACHY validation: 2002–2004

    Directory of Open Access Journals (Sweden)

    A. J. M. Piters

    2006-01-01

    Full Text Available SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. Since provisional releases of limited data sets in summer 2002, operational SCIAMACHY processors established at DLR on behalf of ESA were upgraded regularly and some data products – level-1b spectra, level-2 O3, NO2, BrO and clouds data – have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products – O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles

  4. Overview of SCIAMACHY validation: 2002-2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2006-01-01

    SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. Since provisional releases of limited data sets in summer 2002, operational SCIAMACHY processors established at DLR on behalf of ESA were upgraded regularly and some data products - level-1b spectra, level-2 O3, NO2, BrO and clouds data - have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. 
Most of the UV-visible data products - O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles - already have acceptable

  5. The predictive validity of safety climate.

    Science.gov (United States)

    Johnson, Stephen E

    2007-01-01

    -level climates. Journal of Applied Psychology, 90(4), 616-628]. In addition, safety behavior and accident experience data were collected for 5 months following the survey and were statistically analyzed (structural equation modeling, confirmatory factor analysis, exploratory factor analysis, etc.) to identify correlations, associations, internal consistency, and factorial structures. Results revealed that the ZSCQ: (a) was psychometrically reliable and valid, (b) served as an effective predictor of safety-related outcomes (behavior and accident experience), and (c) could be trimmed to an 11 item survey with little loss of explanatory power. Practitioners and researchers can use the ZSCQ with reasonable certainty of the questionnaire's reliability and validity. This provides a solid foundation for the development of meaningful organizational interventions and/or continued research into social factors affecting industrial accident experience.

  6. Development of the safety evaluation system in the respects of organizational factors and workers' consciousness. Pt. 1. Study of validities of functions for necessary evaluation and results obtained

    International Nuclear Information System (INIS)

    Takano, Kenichi; Tsuge, Tadafumi; Hasegawa, Naoko; Hirose, Ayako; Sasou, Kunihide

    2002-01-01

    CRIEPI decided to develop a safety evaluation system to investigate the safety level of industrial sites using worker questionnaires on organizational climate, safety management, and workers' safety consciousness. This report describes the questionnaire survey applied to domestic nuclear power plants, whose results serve as fundamental data for constructing the safety evaluation system. This system will be used for promoting a safety culture in the organizations of nuclear power plants. The questionnaire survey was conducted at 14 nuclear power stations to understand the present status of safety issues. The questionnaire involves 122 items classified into the following three categories: (1) safety awareness and behavior of plant personnel; (2) safety management; (3) organizational climate, based on a model of the factor groups contributing to safety culture. The obtained results were analyzed by statistical methods to prepare evaluation functions. Additionally, by applying multivariate analysis, it was possible to extract several crucial factors influencing safety performance and to find a comprehensive safety indicator representing the total organizational safety level. Significant relations were identified between accident rates (both labor accidents and facility failures) and the above comprehensive safety indicator. Next, the 122 questionnaire items were classified into 20 major safety factors to grasp the safety profile of each site. This profile is considered to indicate the features of each site and also the direction of progress for improving the safety situation at the site. These findings can be reflected in developing the safety evaluation system, by confirming the validity of the evaluation method and providing specific functions. (author)

  7. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program: validation of results in a large multireader, multicase study.

    Science.gov (United States)

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-12-01

    Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience. All women referred from the Dutch breast cancer screening program were eligible for CESM. 199 consecutive cases were viewed by ten radiologists. Four had extensive CESM experience, three had no CESM experience but were experienced breast radiologists, and three were residents. All readers provided a BI-RADS score for the low-energy CESM images first, after which the score could be adjusted when viewing the entire CESM exam. BI-RADS 1-3 were considered benign and BI-RADS 4-5 malignant. With this cutoff, we calculated sensitivity, specificity and area under the ROC curve. CESM increased diagnostic accuracy in all readers. The performance for all readers using CESM was: sensitivity 96.9 % (+3.9 %), specificity 69.7 % (+33.8 %) and area under the ROC curve 0.833 (+0.188). CESM is superior to conventional mammography, with excellent problem-solving capabilities in women referred from the breast cancer screening program. Previous results were confirmed even in a larger panel of readers with varying CESM experience. • CESM is consistently superior to conventional mammography • CESM increases diagnostic accuracy regardless of a reader's experience • CESM is an excellent problem-solving tool in recalls from screening programs.
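    The cutoff used in this study (BI-RADS 1-3 read as benign, 4-5 as malignant) maps directly onto a confusion matrix from which sensitivity and specificity follow. A minimal Python sketch with made-up scores, not the study's data:

```python
def birads_sens_spec(scores, truth_malignant, cutoff=4):
    """Sensitivity and specificity when BI-RADS >= cutoff is called malignant."""
    tp = fp = tn = fn = 0
    for score, malignant in zip(scores, truth_malignant):
        positive = score >= cutoff        # reader calls the lesion malignant
        if positive and malignant:
            tp += 1
        elif positive and not malignant:
            fp += 1
        elif malignant:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reads: BI-RADS scores with pathology-confirmed ground truth
sens, spec = birads_sens_spec([5, 4, 4, 3, 2, 1],
                              [True, True, False, False, False, False])
print(sens, spec)  # → 1.0 0.75
```

    Repeating the computation on the low-energy images alone versus the full CESM exam gives the per-reader before/after comparison reported above.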

  8. A cross-validation trial of an Internet-based prevention program for alcohol and cannabis: Preliminary results from a cluster randomised controlled trial.

    Science.gov (United States)

    Champion, Katrina E; Newton, Nicola C; Stapinski, Lexine; Slade, Tim; Barrett, Emma L; Teesson, Maree

    2016-01-01

    Replication is an important step in evaluating evidence-based preventive interventions and is crucial for establishing the generalizability and wider impact of a program. Despite this, few replications have occurred in the prevention science field. This study aims to fill this gap by conducting a cross-validation trial of the Climate Schools: Alcohol and Cannabis course, an Internet-based prevention program, among a new cohort of Australian students. A cluster randomized controlled trial was conducted among 1103 students (Mage: 13.25 years) from 13 schools in Australia in 2012. Six schools received the Climate Schools course and 7 schools were randomized to a control group (health education as usual). All students completed a self-report survey at baseline and immediately post-intervention. Mixed-effects regressions were conducted for all outcome variables. Outcomes assessed included alcohol and cannabis use, knowledge and intentions to use these substances. Compared to the control group, immediately post-intervention the intervention group reported significantly greater alcohol (d = 0.67) and cannabis knowledge (d = 0.72), were less likely to have consumed any alcohol (even a sip or taste) in the past 6 months (odds ratio = 0.69) and were less likely to intend on using alcohol in the future (odds ratio = 0.62). However, there were no effects for binge drinking, cannabis use or intentions to use cannabis. These preliminary results provide some support for the Internet-based Climate Schools: Alcohol and Cannabis course as a feasible way of delivering alcohol and cannabis prevention. Intervention effects for alcohol and cannabis knowledge were consistent with results from the original trial; however, analyses of longer-term follow-up data are needed to provide a clearer indication of the efficacy of the intervention, particularly in relation to behavioral changes. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  9. Reconceptualising the external validity of discrete choice experiments.

    Science.gov (United States)

    Lancsar, Emily; Swait, Joffre

    2014-10-01

    External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.

  10. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
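    The permutation testing recommended here can be illustrated with a stand-alone sketch: shuffle the outcome labels to build a null distribution for the model's AUC. This is an illustrative implementation, not the authors' code, and it tests only the simple null hypothesis that model scores carry no information about outcomes:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney) pairwise formula."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # fraction of (positive, negative) pairs the positive case outscores; ties count 0.5
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def permutation_pvalue(scores, labels, n_perm=2000, seed=0):
    """P-value under H0: scores are unrelated to labels (true AUC = 0.5)."""
    rng = np.random.default_rng(seed)
    observed = auc(scores, labels)
    labels = np.asarray(labels, bool)
    hits = sum(auc(scores, rng.permutation(labels)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)   # add-one correction avoids p = 0

# Hypothetical NTCP-style predictions against observed complications
preds = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
events = [True, True, False, False, True, False]
p = permutation_pvalue(preds, events, n_perm=999)
```

    In a full validation, the AUC would itself come from (double) cross-validation, and the label shuffling would be repeated around that entire fitting procedure.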

  11. Elaboration and Validation of the Medication Prescription Safety Checklist 1

    Science.gov (United States)

    Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena

    2017-01-01

    ABSTRACT Objective: to elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: the analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent the compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: the Medication Prescription Safety Checklist showed to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128
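    The interrater agreement statistic reported here (K=1.00) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with hypothetical checklist judgements:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements.

    Undefined (division by zero) when both raters use a single category.
    """
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from the raters' marginal category frequencies
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical compliant/noncompliant ratings of 8 prescriptions by two raters
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
print(cohens_kappa(a, b))  # → 1.0 (perfect agreement)
```

    A kappa of 1.00, as in this study, means the raters agreed on every item beyond what chance alone would produce.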

  12. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is at the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  13. Validation of One-Dimensional Module of MARS-KS1.2 Computer Code By Comparison with the RELAP5/MOD3.3/patch3 Developmental Assessment Results

    International Nuclear Information System (INIS)

    Bae, S. W.; Chung, B. D.

    2010-07-01

    This report records the results of the code validation for the one-dimensional module of the MARS-KS thermal hydraulics analysis code by means of result-comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 Code Developmental Assessment Problem, which consists of 22 simulation problems in 3 categories, have been selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS code and the RELAP5/MOD3.3 code are essentially the same code. This is expected as the two codes have basically the same set of field equations, constitutive equations and main thermal hydraulic models. The result suggests that the high level of code validity of the RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module

  14. US-APWR human systems interface system verification and validation results. Application of the Mitsubishi advanced design to the US market

    International Nuclear Information System (INIS)

    Hall, Robert E.; Easter, James; Roth, Emilie; Kabana, Leonard; Takahashi, Koichi; Clouser, Timothy

    2009-01-01

    The US-APWR, under Design Certification Review by the US Nuclear Regulatory Commission, is a four loop evolutionary pressurized water reactor with a four train active safety system by Mitsubishi Heavy Industries and Instrumentation and Control System (I and C)/Human Systems Interface (HSI) platform applied by Mitsubishi Electric Corporation. This design is currently being applied to the latest Japanese PWR plant under construction and to the nuclear power plant I and C modernization program in Japan. The US-APWR's fully digital I and C system and HSI platform utilizes computerized systems, including computer based procedures and alarm prioritization, relying principally on an HSI system with soft controls, console based video display units and a large overview wall display panel. Conventional hard controls are limited to Safety System level manual actions and a Diverse Actuation System. The overall design philosophy is based on the concept that operator performance will be enhanced through the integration of safety- and non-safety display and control systems in a robust digital environment. This philosophy is augmented, for diversity, by the application of independent safety-only soft displays and controls. As with all advanced designs, the digital systems resolve many long-standing issues of human and system performance while opening a number of new, less understood, questions. This paper discusses a testing program that begins to address these new questions and specifically explores the needs of moving a mature design into the US market with minimum changes from its original design. Details for the program took shape during 2007 and early 2008, resulting in an eight-week testing program during the months of July and August 2008. This extensive verification and validation program on the advanced design was undertaken with the objective of assessing United States operators' performance in this digital design environment. 
This testing program included analyses that

  15. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming at increasing their strength and reinforcing confidence in the results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis to a dataset resulting from the aggregation of data from a quasi-systematic literature review combined with ad-hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerned with SBS in SE according to Cook and Campbell’s categories. In addition, 12 verification and validation procedures applicable to SBS were analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines for the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies add different threats to validity when compared with traditional studies. These threats are not well observed, and therefore it is not easy to identify and mitigate all of them without explicit guidance, such as that depicted in this paper.

  16. The development and validation of the Male Genital Self-Image Scale: results from a nationally representative probability sample of men in the United States.

    Science.gov (United States)

    Herbenick, Debby; Schick, Vanessa; Reece, Michael; Sanders, Stephanie A; Fortenberry, J Dennis

    2013-06-01

    Numerous factors may affect men's sexual experiences, including their health status, past trauma or abuse, medication use, relationships, mood, anxiety, and body image. Little research has assessed the influence of men's genital self-image on their sexual function or behaviors, and none has done so in a nationally representative sample. The purpose of this study was to assess, in a nationally representative probability sample of men ages 18 to 60, the reliability and validity of the Male Genital Self-Image Scale (MGSIS), and to examine the relationship between scores on the MGSIS and men's scores on the International Index of Erectile Function (IIEF). The MGSIS was developed in two stages. Phase One involved a review of the literature and an analysis of cross-sectional survey data. Phase Two involved administration of the scale items to a nationally representative sample of men in the United States ages 18 to 60. Measures included demographic items, the IIEF, and the MGSIS. Overall, most men felt positively about their genitals. However, 24.6% of men expressed some discomfort letting a healthcare provider examine their genitals and about 20% reported dissatisfaction with their genital size. The MGSIS was found to be reliable and valid, with the MGSIS-5 (consisting of five items) being the best fit to the data. In addition, men's scores on the MGSIS-5 were found to be positively related to men's scores on the IIEF. © 2013 International Society for Sexual Medicine.

  17. A Visual Analog Scale to assess anxiety in children during anesthesia induction (VAS-I): Results supporting its validity in a sample of day care surgery patients.

    Science.gov (United States)

    Berghmans, Johan M; Poley, Marten J; van der Ende, Jan; Weber, Frank; Van de Velde, Marc; Adriaenssens, Peter; Himpe, Dirk; Verhulst, Frank C; Utens, Elisabeth

    2017-09-01

    The modified Yale Preoperative Anxiety Scale is widely used to assess children's anxiety during induction of anesthesia, but requires training and its administration is time-consuming. A Visual Analog Scale, in contrast, requires no training, is easy-to-use and quickly completed. The aim of this study was to evaluate a Visual Analog Scale as a tool to assess anxiety during induction of anesthesia and to determine cut-offs to distinguish between anxious and nonanxious children. Four hundred and one children (1.5-16 years) scheduled for daytime surgery were included. Children's anxiety during induction was rated by parents and anesthesiologists on a Visual Analog Scale and by a trained observer on the modified Yale Preoperative Anxiety Scale. Psychometric properties assessed were: (i) concurrent validity (correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores); (ii) construct validity (differences between subgroups according to the children's age and the parents' anxiety as assessed by the State-Trait Anxiety Inventory); (iii) cross-informant agreement using Bland-Altman analysis; (iv) cut-offs to distinguish between anxious and nonanxious children (reference: modified Yale Preoperative Anxiety Scale ≥30). Correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores were strong (0.68 and 0.73, respectively). Visual Analog Scale scores were higher for children ≤5 years compared to children aged ≥6. Visual Analog Scale scores of children of high-anxious parents were higher than those of low-anxious parents. The mean difference between parents' and anesthesiologists' Visual Analog Scale scores was 3.6, with 95% limits of agreement (-56.1 to 63.3). To classify anxious children, cut-offs for parents (≥37 mm) and anesthesiologists (≥30 mm) were established. The present data provide preliminary data for the validity of a Visual
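
The Bland-Altman agreement figures quoted above (mean difference 3.6 mm, 95% limits of agreement -56.1 to 63.3) follow from a simple calculation: the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch with invented paired VAS scores (the data and variable names are illustrative, not the study's):

```python
from statistics import mean, stdev

def bland_altman(rater_a, rater_b):
    """Mean difference and 95% limits of agreement between two paired raters."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    md = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return md, md - 1.96 * sd, md + 1.96 * sd

# Illustrative parent vs. anesthesiologist VAS scores (0-100 mm)
parent = [10, 40, 55, 70, 25]
anesthesiologist = [5, 45, 50, 80, 20]
md, lower, upper = bland_altman(parent, anesthesiologist)
```

A wide interval between `lower` and `upper`, as in the study, signals limited cross-informant agreement even when the mean difference is small.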

  18. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident
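
The construct-validity criticism above targets the conventional risk priority number, RPN = severity × probability × detectability, each factor scored on an ordinal scale (typically 1-10). A minimal sketch (the failure modes and scores are invented for illustration) shows one way the arithmetic misleads: very different risk profiles collapse to the same RPN.

```python
def rpn(severity, probability, detectability):
    """Conventional FMEA risk priority number (each score typically 1-10)."""
    return severity * probability * detectability

# Two hypothetical failure modes with very different risk profiles...
catastrophic_but_rare = rpn(severity=10, probability=1, detectability=6)
minor_but_frequent = rpn(severity=2, probability=6, detectability=5)

# ...receive identical rankings (both 60), even though multiplying
# ordinal scores has no defensible mathematical meaning to begin with.
print(catastrophic_but_rare, minor_but_frequent)
```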

  19. STAR-CCM+ Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general-purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light Water Reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  20. NDE reliability and advanced NDE technology validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Hutton, P.H.; Reid, L.D.; Simonen, F.A.; Spanner, J.C.; Vo, T.V.

    1989-01-01

    This paper reports on progress for three programs: (1) evaluation and improvement of nondestructive examination reliability for inservice inspection of light water reactors (LWR) (NDE Reliability Program), (2) field validation, acceptance, and training for advanced NDE technology, and (3) evaluation of computer-based NDE techniques and regional support of inspection activities. The NDE Reliability Program objectives are to quantify the reliability of inservice inspection techniques for LWR primary system components through independent research and to establish means for obtaining improvements in the reliability of inservice inspections. The areas of significant progress will be described concerning ASME Code activities, re-analysis of the PISC-II data, the equipment interaction matrix study, new inspection criteria, and PISC-III. The objectives of the second program are to develop field procedures for the AE and SAFT-UT techniques, perform field validation testing of these techniques, provide training in the techniques for NRC headquarters and regional staff, and work with the ASME Code for the use of these advanced technologies. The final program's objective is to evaluate the reliability and accuracy of interpretation of results from computer-based ultrasonic inservice inspection systems, and to develop guidelines for NRC staff to monitor and evaluate the effectiveness of inservice inspections conducted on nuclear power reactors. This program started in the last quarter of FY89, and the extent of the program was to prepare a work plan for presentation to and approval from a technical advisory group of NRC staff.

  1. Range of validity of transport equations

    International Nuclear Information System (INIS)

    Berges, Juergen; Borsanyi, Szabolcs

    2006-01-01

    Transport equations can be derived from quantum field theory assuming a loss of information about the details of the initial state and a gradient expansion. While the latter can be systematically improved, the assumption about a memory loss is not known to be controlled by a small expansion parameter. We determine the range of validity of transport equations for the example of a scalar g²Φ⁴ theory. We solve the nonequilibrium time evolution using the three-loop 2PI effective action. The approximation includes off-shell and memory effects and assumes no gradient expansion. This is compared to transport equations to lowest order (LO) and beyond (NLO). We find that the earliest time for the validity of transport equations is set by the characteristic relaxation time scale t_damp = -2ω/Σ_ρ^(eq), where -Σ_ρ^(eq)/2 denotes the on-shell imaginary part of the self-energy. This time scale agrees with the characteristic time for partial memory loss, but is much shorter than thermal equilibration times. For times larger than about t_damp the gradient expansion to NLO is found to describe the full results rather well for g² ≲ 1.
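
In conventional notation, the relaxation-time criterion quoted in the abstract reads (a typeset restatement only; no content beyond the abstract is assumed):

```latex
t_{\mathrm{damp}} \;=\; -\,\frac{2\omega}{\Sigma_\rho^{\mathrm{(eq)}}},
\qquad\text{where}\quad
-\frac{\Sigma_\rho^{\mathrm{(eq)}}}{2}
\;\text{is the on-shell imaginary part of the self-energy,}
```

and the gradient expansion to NLO tracks the full 2PI evolution for $t \gtrsim t_{\mathrm{damp}}$ when $g^2 \lesssim 1$.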

  2. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  3. PRA (Probabilistic Risk Assessments) Participation versus Validation

    Science.gov (United States)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems, and failure data. This methodology basically validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate, or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process in which the PRA model is used to determine whether a mitigation technique is effective in reducing risk, which can result in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information that can positively influence final system and equipment design, the PRA tool can also participate in design development, contributing to a safe and cost-effective product.

  4. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  5. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat à l'Énergie Atomique, France) with the support of the Institut de Radioprotection et de Sûreté Nucléaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% ²³⁵U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and modeling. The HTC data applicability to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  6. Validation Tools for ATLAS Muon Spectrometer Commissioning

    International Nuclear Information System (INIS)

    Benekos, N.Chr.; Dedes, G.; Laporte, J.F.; Nicolaidou, R.; Ouraou, A.

    2008-01-01

    The ATLAS Muon Spectrometer (MS), currently being installed at CERN, is designed to measure final state muons of 14 TeV proton-proton interactions at the Large Hadron Collider (LHC) with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, taking into account the high-level background environment, the inhomogeneous magnetic field, and the large size of the apparatus (24 m diameter by 44 m length). The MS layout of the ATLAS detector is made of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking and dedicated fast detectors for the first-level trigger, and is organized in eight Large and eight Small sectors. All the detectors of the barrel toroid have been installed and the commissioning has started with cosmic rays. In order to validate the MS performance using cosmic events, a Muon Commissioning Validation package has been developed and its results are presented in this paper. Integration with the rest of the ATLAS sub-detectors is now being done in the ATLAS cavern.

  7. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, permits booting any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)

  8. Quality of Life After Palliative Radiation Therapy for Patients With Painful Bone Metastases: Results of an International Study Validating the EORTC QLQ-BM22

    Energy Technology Data Exchange (ETDEWEB)

    Zeng Liang [Department of Radiation Oncology, Odette Cancer Centre, University of Toronto, Toronto, Ontario (Canada); Chow, Edward, E-mail: edward.chow@sunnybrook.ca [Department of Radiation Oncology, Odette Cancer Centre, University of Toronto, Toronto, Ontario (Canada); Bedard, Gillian; Zhang, Liying [Department of Radiation Oncology, Odette Cancer Centre, University of Toronto, Toronto, Ontario (Canada); Fairchild, Alysa [Department of Radiation Oncology, Cross Cancer Institute, Edmonton, Alberta (Canada); Vassiliou, Vassilios [Department of Radiation Oncology, Bank of Cyprus Oncology Centre, Nicosia (Cyprus); Alm El-Din, Mohamed A. [Department of Clinical Oncology, Tanta University Hospital, Tanta Faculty of Medicine, Tanta (Egypt); Jesus-Garcia, Reynaldo [Department of Orthopedic Oncology, Federal University of Sao Paulo, Sao Paulo (Brazil); Kumar, Aswin [Division of Gynaecology and Genitourinary Oncology, Department of Radiation Oncology, Regional Cancer Center, Trivandrum (India); Forges, Fabien [Inserm CIE3, Saint Etienne University Hospital, Saint-Etienne (France); Unit of Clinical Research, Innovation, and Pharmacology, Saint Etienne University Hospital, Saint-Etienne (France); Tseng, Ling-Ming [Department of Surgery, Taipei Veterans General Hospital, National Yang-Ming University, Taipei, Taiwan (China); Hou, Ming-Feng [Department of Gastroenterologic Surgery, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan (China); Chie, Wei-Chu [Department of Public Health and Institute of Epidemiology and Preventative Medicine, National Taiwan University, Taipei, Taiwan (China); Bottomley, Andrew [European Organisation for Research and Treatment of Cancer, EORTC Headquarters, Brussels (Belgium)

    2012-11-01

    Purpose: Radiation therapy (RT) is an effective method of palliating painful bone metastases and can improve function and reduce analgesic requirements. In advanced cancer patients, quality of life (QOL) is the primary outcome of interest over traditional endpoints such as survival. The purpose of our study was to compare bone metastasis-specific QOL scores among patients who responded differently to palliative RT. Methods and Materials: Patients receiving RT for bone metastases across 6 countries were prospectively enrolled from March 2010-January 2011 in a trial validating the QLQ-BM22 and completed the QLQ-BM22 and the core measure (QLQ-C30) at baseline and after 1 month. Pain scores and analgesic intake were recorded, and response to RT was determined according to the latest published guidelines. The Kruskal-Wallis nonparametric and Wilcoxon rank sum tests compared changes in QOL among response groups. A Bonferroni-adjusted P<.003 indicated statistical significance. Results: Of 79 patients who received palliative RT, 59 were assessable. Partial response, pain progression, and indeterminate response were observed in 22, 8, and 29 patients, respectively; there were no patients with a complete response. Patients across all groups had similar baseline QOL scores apart from physical functioning (patients who progressed had better initial functioning). One month after RT, patients who responded had significant improvements in 3 of 4 QLQ-BM22 domains (painful site, P<.0001; painful characteristic, P<.0001; and functional interference, P<.0001) and 3 QLQ-C30 domains (physical functioning, P=.0006; role functioning, P=.0026; and pain, P<.0001). Patients with progression in pain had significantly worse functional interference (P=.0007) and pain (P=.0019). Conclusions: Patients who report pain relief after palliative RT also have better QOL with respect to bone metastasis-specific issues. The QLQ-BM22 and QLQ-C30 are able to discriminate among patients with varying

  9. Adjustments for drink size and ethanol content: New results from a self-report diary and transdermal sensor validation study

    Science.gov (United States)

    Bond, J. C.; Greenfield, T. K.; Patterson, D.; Kerr, W.C.

    2014-01-01

    Background Prior studies adjusting self-reported measures of alcohol intake for drink size and ethanol content have relied on single-point assessments. Methods A prospective 28-day diary study investigated magnitudes of drink ethanol adjustments and factors associated with these adjustments. Transdermal alcohol sensor (TAS) readings and prediction of alcohol-related problems by number of drinks versus ethanol-adjusted intake were used to validate drink ethanol adjustments. Self-completed event diaries listed up to 4 beverage types and 4 drinking events/day. Eligible volunteers had ≥ weekly drinking and ≥ 3+ drinks per occasion with ≥ 26 reported days and pre- and post-summary measures (n = 220). Event reports included drink types, sizes, brands or spirits contents, venues, drinks consumed and drinking duration. Results Wine drinks averaged 1.19, beer 1.09, and spirits 1.54 US standard drinks (14 g ethanol). Mean adjusted alcohol intake was 22% larger using drink size and strength (brand/ethanol concentration) data. Adjusted drink levels were larger than “raw” drinks in all quantity ranges. Individual-level drink ethanol adjustment ratios (ethanol adjusted/unadjusted amounts) averaged across all days drinking ranged from 0.73-3.33 (mean 1.22). The adjustment ratio was only marginally (and not significantly) positively related to usual quantity, frequency, and heavy drinking (all ps > .05), but was related to alcohol dependence symptoms (p<.01) and number of consequences (p<.05). In 30 respondents with sufficiently high-quality TAS readings, higher correlations (p=.04) were found between the adjusted vs. the raw drinks/event and TAS areas under the curve. Conclusions Absent drink size and strength data, intake assessments are downward biased by at least 20%. Between-subject variation in typical drink content and pour sizes should be addressed in treatment and epidemiological research. PMID:25581661
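
The ~22% upward adjustment reported above is the effect of converting each reported drink to US standard drinks (14 g ethanol, as stated in the abstract) using its actual size and strength. A minimal sketch of that arithmetic (the beverage values are illustrative; 0.789 g/mL is the density of ethanol):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789
US_STANDARD_DRINK_G = 14.0  # grams of ethanol, as defined in the study

def standard_drinks(volume_ml, abv):
    """Ethanol-adjusted US standard drinks for a single reported drink."""
    ethanol_g = volume_ml * abv * ETHANOL_DENSITY_G_PER_ML
    return ethanol_g / US_STANDARD_DRINK_G

# A 473 mL (16 oz) pint of 5% ABV beer is about 1.33 standard drinks,
# so counting it as one "raw" drink understates intake by a third.
pint = standard_drinks(volume_ml=473, abv=0.05)
```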

  10. Quality of Life After Palliative Radiation Therapy for Patients With Painful Bone Metastases: Results of an International Study Validating the EORTC QLQ-BM22

    International Nuclear Information System (INIS)

    Zeng Liang; Chow, Edward; Bedard, Gillian; Zhang, Liying; Fairchild, Alysa; Vassiliou, Vassilios; Alm El-Din, Mohamed A.; Jesus-Garcia, Reynaldo; Kumar, Aswin; Forges, Fabien; Tseng, Ling-Ming; Hou, Ming-Feng; Chie, Wei-Chu; Bottomley, Andrew

    2012-01-01

    Purpose: Radiation therapy (RT) is an effective method of palliating painful bone metastases and can improve function and reduce analgesic requirements. In advanced cancer patients, quality of life (QOL) is the primary outcome of interest over traditional endpoints such as survival. The purpose of our study was to compare bone metastasis-specific QOL scores among patients who responded differently to palliative RT. Methods and Materials: Patients receiving RT for bone metastases across 6 countries were prospectively enrolled from March 2010-January 2011 in a trial validating the QLQ-BM22 and completed the QLQ-BM22 and the core measure (QLQ-C30) at baseline and after 1 month. Pain scores and analgesic intake were recorded, and response to RT was determined according to the latest published guidelines. The Kruskal-Wallis nonparametric and Wilcoxon rank sum tests compared changes in QOL among response groups. A Bonferroni-adjusted P<.003 indicated statistical significance. Results: Of 79 patients who received palliative RT, 59 were assessable. Partial response, pain progression, and indeterminate response were observed in 22, 8, and 29 patients, respectively; there were no patients with a complete response. Patients across all groups had similar baseline QOL scores apart from physical functioning (patients who progressed had better initial functioning). One month after RT, patients who responded had significant improvements in 3 of 4 QLQ-BM22 domains (painful site, P<.0001; painful characteristic, P<.0001; and functional interference, P<.0001) and 3 QLQ-C30 domains (physical functioning, P=.0006; role functioning, P=.0026; and pain, P<.0001). Patients with progression in pain had significantly worse functional interference (P=.0007) and pain (P=.0019). Conclusions: Patients who report pain relief after palliative RT also have better QOL with respect to bone metastasis-specific issues. The QLQ-BM22 and QLQ-C30 are able to discriminate among patients with varying

  11. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    Directory of Open Access Journals (Sweden)

    Hazel Ekin Akmaz

    2018-05-01

    Full Text Available Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross-sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items (20) examined for validity and reliability in the study. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine the factor structure for construct validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance
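
The internal consistency coefficient reported above (0.94) is Cronbach's α, computed as α = k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch with toy data (not the study's; two perfectly consistent items give α = 1):

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows, one score per item."""
    k = len(rows[0])
    item_vars = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents x 2 items that agree perfectly -> alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]])
```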

  12. CONCURRENT VALIDITY OF THE STUDENT TEACHER PROFESSIONAL IDENTITY SCALE

    Directory of Open Access Journals (Sweden)

    Predrag Živković

    2018-04-01

    The main purpose of the study was to examine the concurrent validity of the Student Teacher Professional Identity Scale (STPIS; Fisherman and Abbot, 1998), used here for the first time in Serbia. Concurrent validity was established by correlation with student teachers' self-reported well-being, self-esteem, burnout, stress, and resilience. Based on the results, we can conclude that the STPIS meets the criterion of concurrent validity. The implications of these results are important for researchers and decision makers in teacher education.

  13. Detailed validation in PCDDF analysis. ISO17025 data from Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Kernick Carvalhaes, G.; Azevedo, J.A.; Azevedo, G.; Machado, M.; Brooks, P. [Analytical Solutions, Rio de Janeiro (Brazil)

    2004-09-15

    When defining validation we can refer to ISO 8402, according to which 'validation' is the 'confirmation by the examination and supplying of objective evidence that the particular requirements for a specific intended use are fulfilled'. This concept is extremely important for guaranteeing the quality of results. Method validation is based on the combined use of different validation procedures, but in this selection we have to analyze the cost-benefit conditions. We must focus on the critical elements, and these critical factors must be the essential elements for providing good properties and results. If we have a solid validation methodology and a study of the sources of uncertainty of our analytical method, we can generate results with confidence and veracity. When examining these two topics, method validation and uncertainty calculation, we found that there are very few articles and papers on these subjects, and it is even more difficult to find such material on dioxins and furans. This short paper describes a validation and uncertainty calculation methodology using traditional studies with a few adaptations, and it presents a new idea of the recovery study as a source of uncertainty.

  14. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that agreement between the model and a set of experimental data is maximized. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code is important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty.

  15. A validated battery of vocal emotional expressions

    Directory of Open Access Journals (Sweden)

    Pierre Maurage

    2007-11-01

    For a long time, the exploration of emotions focused on facial expression, and the vocal expression of emotion has only recently received interest. However, no validated battery of emotional vocal expressions had been published and made available to the research community. This paper aims at validating and providing such material. Twenty actors (10 men) recorded sounds (words and interjections) expressing six basic emotions (anger, disgust, fear, happiness, neutral and sadness). These stimuli were then submitted to a double validation phase: (1) preselection by experts; (2) quantitative and qualitative validation by 70 participants. 195 stimuli were selected for the final battery, each one depicting a precise emotion. The ratings provide a complete measure of intensity and specificity for each stimulus. This paper provides, to our knowledge, the first validated, freely available and highly standardized battery of emotional vocal expressions (words and intonations). This battery could constitute an interesting tool for the exploration of prosody processing among normal and pathological populations, in neuropsychology as well as psychiatry. Further work is nevertheless needed to complement the present material.

  16. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the testing of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  17. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
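The validation steps described above (within-year variance estimates, a weighted linear regression on year, and a 95% empirical confidence interval for the trend) can be sketched schematically. The weighting choice and the resampling-based interval below are illustrative assumptions, not the authors' exact procedure, and the data are synthetic:

```python
import numpy as np

def wls_trend(years, var_est, pev):
    """Weighted least-squares slope of variance estimates on year.

    Weights are taken as inverse prediction-error variances (a schematic
    choice). Returns the per-year trend in the variance estimates."""
    w = 1.0 / np.asarray(pev)
    X = np.column_stack([np.ones_like(years, dtype=float), years])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ var_est)
    return beta[1]

def bootstrap_ci(years, var_est, pev, n_boot=2000, seed=1):
    """95% empirical confidence interval for the trend via resampling."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(years))
    slopes = [wls_trend(years[s], var_est[s], pev[s])
              for s in (rng.choice(idx, size=len(idx)) for _ in range(n_boot))]
    return np.percentile(slopes, [2.5, 97.5])

# Synthetic 15-year series with a 2%-per-year drift in the variance estimates.
years = np.arange(2000, 2015, dtype=float)
noise = 0.01 * np.random.default_rng(2).normal(size=15)
var_est = 1.0 + 0.02 * (years - 2000) + noise
pev = np.full(15, 0.01)

slope = wls_trend(years, var_est, pev)
lo, hi = bootstrap_ci(years, var_est, pev)
```

A trend is flagged when the empirical interval `[lo, hi]` excludes zero; for the drifting series above it does, mirroring the simulated 2% scenario in the abstract.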

  18. Italian version of Dyspnoea-12: cultural-linguistic validation, quantitative and qualitative content validity study.

    Science.gov (United States)

    Caruso, Rosario; Arrigoni, Cristina; Groppelli, Katia; Magon, Arianna; Dellafiore, Federica; Pittella, Francesco; Grugnetti, Anna Maria; Chessa, Massimo; Yorke, Janelle

    2018-01-16

    Dyspnoea-12 is a valid and reliable scale for assessing dyspnoea, considering its severity and its physical and emotional components. However, an Italian version was not available, as it had not yet been translated and validated. For this reason, the aim of this study was to develop an Italian version of Dyspnoea-12, providing a cultural and linguistic validation supported by quantitative and qualitative content validity. This was a methodological study, divided into two phases: phase one concerned the cultural and linguistic validation; phase two tested the quantitative and qualitative content validity. Linguistic validation followed a standardized translation process. Quantitative content validity was assessed by computing the content validity ratio (CVR) and indices (I-CVIs and S-CVI) from the expert panellists' responses. Qualitative content validity was assessed by narrative analysis of the answers to three open-ended questions posed to the expert panellists, aimed at investigating the clarity and pertinence of the Italian items. The translation process found good agreement on the clarity of the items both among the six bilingual expert translators involved and among the ten voluntarily involved patients. CVR, I-CVIs and S-CVI were satisfactory for all the translated items. This study represents a pivotal step toward using Dyspnoea-12 amongst Italian patients. Future research is needed to investigate in depth the construct validity and reliability of the Italian version of Dyspnoea-12, and to describe how the dyspnoea components (i.e. physical and emotional) impact the life of patients with cardiorespiratory diseases.
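The content validity indices mentioned above have standard definitions (Lawshe's CVR; the item-level I-CVI as the share of experts rating an item relevant; the scale-level S-CVI as the average of the I-CVIs). A small sketch with a hypothetical expert panel:

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(ratings) -> float:
    """Item-level CVI: share of experts rating the item 3 or 4 on a 1-4 scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def s_cvi_ave(all_ratings) -> float:
    """Scale-level CVI, averaging approach: mean of the item I-CVIs."""
    item_cvis = [i_cvi(item) for item in all_ratings]
    return sum(item_cvis) / len(item_cvis)

# Hypothetical panel of 6 experts rating 3 items on a 1-4 relevance scale.
panel = [
    [4, 4, 3, 4, 3, 4],   # item 1: all experts rate it relevant
    [3, 4, 4, 3, 4, 2],   # item 2: one expert rates it not relevant
    [4, 3, 4, 4, 4, 4],   # item 3
]
print(cvr(n_essential=6, n_experts=6))
print([i_cvi(item) for item in panel])
print(s_cvi_ave(panel))
```

Common acceptance rules (e.g. I-CVI ≥ 0.78 with 6+ experts, S-CVI/Ave ≥ 0.90) are then applied to decide whether items are retained or revised.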

  19. SWMM LID Module Validation Study

    Science.gov (United States)

    EPA’s Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructu...

  20. Solution Validation for a Double Façade Prototype

    Directory of Open Access Journals (Sweden)

    Pau Fonseca i Casas

    2017-12-01

    A Solution Validation involves comparing data obtained from the system implemented following the model recommendations against the model results. This paper presents a Solution Validation performed with the aim of certifying that a set of computer-optimized designs for a double façade is consistent with reality. To validate the results obtained through simulation models, based on dynamic thermal calculation and using Computational Fluid Dynamics techniques, a comparison with data obtained by monitoring a real implemented prototype has been carried out. The new validated model can be used to describe the system's thermal behavior in different climatic zones without having to build a new prototype. The good performance of the proposed double façade solution is confirmed, since the validation shows a considerable energy saving while preserving and even improving interior comfort. This work details the whole Solution Validation process, describing some of the problems we faced, and represents an example of a kind of validation that is often not considered in a simulation project.
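Comparisons of simulated series against monitored prototype data, as described above, are commonly summarized with error metrics such as the root-mean-square error and the mean bias. A minimal sketch on hypothetical temperature series (the values below are illustrative, not the paper's measurements):

```python
import numpy as np

def rmse(sim, obs) -> float:
    """Root-mean-square error between simulated and measured series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def mbe(sim, obs) -> float:
    """Mean bias error: positive when the model over-predicts."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.mean(sim - obs))

simulated = [21.0, 22.4, 23.1, 24.0]   # hypothetical cavity temperatures, deg C
measured  = [20.6, 22.0, 23.5, 23.8]   # hypothetical monitored values

print(rmse(simulated, measured))
print(mbe(simulated, measured))
```

A model is typically accepted for reuse in other climatic zones when such errors stay within a pre-agreed tolerance over the monitoring period.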

  1. Validation of the Danish language Injustice Experience Questionnaire

    DEFF Research Database (Denmark)

    la Cour, Peter; Schultz, Rikke; Smith, Anne Agerskov

    2017-01-01

    /somatoform symptoms. These patients also completed questionnaires concerning sociodemographics, anxiety and depression, subjective well-being, and overall physical and mental functioning. Our results showed satisfactory interpretability and face validity, and high internal consistency (Cronbach's alpha = .90...

  2. Experimental validation of a topology optimized acoustic cavity

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole; Fernandez Grande, Efren

    2015-01-01

    This paper presents the experimental validation of an acoustic cavity designed using topology optimization with the goal of minimizing the sound pressure locally for monochromatic excitation. The presented results show good agreement between simulations and measurements. The effect of damping...

  3. Simulation Validation for Societal Systems

    Science.gov (United States)

    2006-09-01

    system. Rather than assuming the existence of an expert experienced in diagnosing a problem, model-based approaches assume the existence of a system... system behavior is required, the method is capable of diagnosing faults that have never occurred before... BioWar has hundreds of parameters. The resulting parameter space is gigantic. Suppose that the Response Surface Methodology or RSM (Myers and Montgomery

  4. Validation of the prosthetic esthetic index

    DEFF Research Database (Denmark)

    Özhayat, Esben B; Dannemand, Katrine

    2014-01-01

    OBJECTIVES: In order to diagnose impaired esthetics and evaluate treatments for these, it is crucial to evaluate all aspects of oral and prosthetic esthetics. No professionally administered index currently exists that sufficiently encompasses comprehensive prosthetic esthetics. This study aimed...... to validate a new comprehensive index, the Prosthetic Esthetic Index (PEI), for professional evaluation of esthetics in prosthodontic patients. MATERIAL AND METHODS: The content, criterion, and construct validity; the test-retest, inter-rater, and internal consistency reliability; and the sensitivity...... furthermore distinguish between participants and controls, indicating sufficient sensitivity. CONCLUSION: The PEI is considered a valid and reliable instrument involving sufficient aspects for assessment of the professionally evaluated esthetics in prosthodontic patients. CLINICAL RELEVANCE...

  5. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept of and criteria for value validity as a means of determining whether an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that complies fully with the value-validity requirements, and its statistical inference procedure is discussed.
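For concreteness, S and its relative equivalent S* can be computed directly. The Euclidean-distance-based measure below is an illustrative form only, since the paper's exact definition is not reproduced here:

```python
import numpy as np

def shannon_entropy(p) -> float:
    """Boltzmann-Shannon entropy S = -sum p_i log p_i (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 log 0 is taken as 0
    return float(-np.sum(p * np.log(p)))

def relative_entropy(p) -> float:
    """S* = S / log(n): S rescaled to [0, 1] for an n-category distribution."""
    return shannon_entropy(p) / float(np.log(len(p)))

def euclidean_uniformity(p) -> float:
    """An illustrative distance-based measure: 1 minus the normalized
    Euclidean distance from p to the uniform distribution. It equals 1 for
    the uniform distribution and 0 for a degenerate one."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    u = np.full(n, 1.0 / n)
    d_max = np.sqrt(1.0 - 1.0 / n)  # distance from a degenerate p to uniform
    return 1.0 - float(np.linalg.norm(p - u)) / d_max

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.85, 0.05, 0.05, 0.05]
print(relative_entropy(uniform), relative_entropy(skewed))
print(euclidean_uniformity(uniform), euclidean_uniformity(skewed))
```

Both measures rank the uniform distribution highest, but they scale differently between the extremes, which is exactly the kind of behavior the value-validity criteria are meant to discriminate.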

  6. Orthorexia nervosa: validation of a diagnosis questionnaire.

    Science.gov (United States)

    Donini, L M; Marsili, D; Graziani, M P; Imbriale, M; Cannella, C

    2005-06-01

    To validate a questionnaire for the diagnosis of orthorexia nervosa, an eating disorder defined as "maniacal obsession for healthy food", 525 subjects were enrolled. They were then randomized into two samples (a sample of 404 subjects for the construction of the diagnostic test, ORTO-15, and a sample of 121 subjects for the validation of the test). The ORTO-15 questionnaire, validated for the diagnosis of orthorexia, is made up of 15 multiple-choice items. The test we propose for the diagnosis of orthorexia (ORTO-15) showed a good predictive capability at a threshold value of 40 (efficacy 73.8%, sensitivity 55.6% and specificity 75.8%), also on verification with a control sample. However, it has a limit in identifying the obsessive disorder. For this reason we maintain that further investigation is necessary and that new questions useful for the evaluation of obsessive-compulsive behavior should be added to the ORTO-15 questionnaire.
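The reported threshold-based performance figures (efficacy, sensitivity, specificity) follow from a simple confusion-matrix calculation against a reference diagnosis. The scores and diagnoses below are hypothetical, chosen only to show the arithmetic:

```python
def screen(scores, diagnoses, threshold=40):
    """Classify ORTO-15-style scores: a score below the threshold counts as
    a positive screen. `diagnoses` is the reference standard (True = case)."""
    tp = sum(s < threshold and d for s, d in zip(scores, diagnoses))
    tn = sum(s >= threshold and not d for s, d in zip(scores, diagnoses))
    fp = sum(s < threshold and not d for s, d in zip(scores, diagnoses))
    fn = sum(s >= threshold and d for s, d in zip(scores, diagnoses))
    sensitivity = tp / (tp + fn)   # cases correctly screened positive
    specificity = tn / (tn + fp)   # non-cases correctly screened negative
    efficacy = (tp + tn) / len(scores)  # overall accuracy
    return sensitivity, specificity, efficacy

# Hypothetical scores and reference diagnoses for 8 subjects.
scores = [35, 38, 42, 45, 33, 47, 39, 50]
diag   = [True, True, False, False, True, False, False, False]
sens, spec, acc = screen(scores, diag)
print(sens, spec, acc)
```

Sweeping `threshold` over the score range and repeating this calculation is how a cutoff such as 40 is selected in the first place.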

  7. Validation of Visual Caries Activity Assessment

    DEFF Research Database (Denmark)

    Guedes, R S; Piovesan, C; Ardenghi, T M

    2014-01-01

    We evaluated the predictive and construct validity of a caries activity assessment system associated with the International Caries Detection and Assessment System (ICDAS) in primary teeth. A total of 469 children were reexamined: participants of a caries survey performed 2 yr before (follow-up rate...... of 73.4%). At baseline, children (12-59 mo old) were examined with the ICDAS and a caries activity assessment system. The predictive validity was assessed by evaluating the risk of active caries lesion progression to more severe conditions in the follow-up, compared with inactive lesions. We also...... assessed if children with a higher number of active caries lesions were more likely to develop new lesions (construct validity). Noncavitated active caries lesions at occlusal surfaces presented higher risk of progression than inactive ones. Children with a higher number of active lesions and with higher...

  8. Validity Examination of EFQM’s Results by DEA Models = Examen de la validez de los resultados de EFQM mediante modelos DEA

    Directory of Open Access Journals (Sweden)

    Ben Mustafa, Adli

    2008-01-01

    The European Foundation for Quality Management (EFQM) model is one of the models for assessing the functioning of an organization, using self-assessment to measure concepts, some of which are increasingly qualitative. Consequently, complete understanding and correct usage of this model in an organization depend on comprehensive knowledge of the model and of the different strategies of self-assessment. The self-assessment process based on this model requires experienced auditors, which helps reduce the likelihood of incorrect scoring of the criteria and subcriteria. In this paper, some of the weaknesses of the EFQM model are first studied; then, using the input-output structure governing the model together with Data Envelopment Analysis, a method is offered to detect a lack of proportion between the Enablers and the Results of an organization, which may occur due to problems and obstacles hidden in the heart of the organization.

  9. Features to validate cerebral toxoplasmosis

    Directory of Open Access Journals (Sweden)

    Carolina da Cunha Correia

    2013-06-01

    Introduction: Neurotoxoplasmosis (NT) sometimes manifests unusual characteristics. Methods: We analyzed 85 patients with NT and AIDS according to clinical, cerebrospinal fluid, cranial magnetic resonance, and polymerase chain reaction (PCR) characteristics. Results: In 8.5% of patients, focal neurological deficits were absent, and 16.4% had single cerebral lesions. Increased sensitivity of PCR for Toxoplasma gondii DNA in the central nervous system was associated with pleocytosis and the presence of >4 encephalic lesions. Conclusions: Patients with NT may present without focal neurological deficit, and NT may occur with a single cerebral lesion. Greater numbers of lesions and greater cellularity in cerebrospinal fluid improve the sensitivity of PCR for T. gondii.

  10. Empirical Validation of Building Simulation Software : Modeling of Double Facades

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA), Task 34/43: Testing and validation of building energy simulation tools experts group.

  11. Predictive validity of neurotic disorders

    DEFF Research Database (Denmark)

    Jepsen, Peter Winning; Butler, Birgitte; Rasmussen, Stig

    2014-01-01

    behaviour, including committed suicide, and with regard to symptom profile. MATERIAL AND METHODS: A total of 112 patients were followed on the Danish Central Psychiatric Research Register and the Danish Cause of Death Register with regard to their diagnostic behaviour. In a subset of the sample (n = 24......), the patients were assessed using the Hopkins Symptom Checklist (SCL)-90. RESULTS: Both at the diagnostic level, including suicide rate, and at the level of symptom severity (SCL-90), anxiety neurosis and obsessive-compulsive neurosis were similar, in contrast to hysterical neurosis which had no more...... association with the other two categories of neurosis than would be expected by chance. CONCLUSION: Anxiety neurosis and obsessive-compulsive neurosis are more severe disorders than hysterical neurosis, both in terms of symptom profile and depression, including suicidal behaviour. The identified suicides were...

  12. Validation of protein carbonyl measurement

    DEFF Research Database (Denmark)

    Augustyniak, Edyta; Adam, Aisha; Wojdyla, Katarzyna

    2015-01-01

    Protein carbonyls are widely analysed as a measure of protein oxidation. Several different methods exist for their determination. A previous study had described orders of magnitude variance that existed when protein carbonyls were analysed in a single laboratory by ELISA using different commercial...... protein carbonyl analysis across Europe. ELISA and Western blotting techniques detected an increase in protein carbonyl formation between 0 and 5min of UV irradiation irrespective of method used. After irradiation for 15min, less oxidation was detected by half of the laboratories than after 5min...... irradiation. Three of the four ELISA carbonyl results fell within 95% confidence intervals. Likely errors in calculating absolute carbonyl values may be attributed to differences in standardisation. Out of up to 88 proteins identified as containing carbonyl groups after tryptic cleavage of irradiated...

  13. Development and validation of the Stirling Eating Disorder Scales.

    Science.gov (United States)

    Williams, G J; Power, K G; Miller, H R; Freeman, C P; Yellowlees, A; Dowds, T; Walker, M; Parry-Jones, W L

    1994-07-01

    The development and reliability/validity check of an 80-item, 8-scale measure for use with eating disorder patients is presented. The Stirling Eating Disorder Scales (SEDS) assess anorexic dietary behavior, anorexic dietary cognitions, bulimic dietary behavior, bulimic dietary cognitions, high perceived external control, low assertiveness, low self-esteem, and self-directed hostility. The SEDS were administered to 82 eating disorder patients and 85 controls. Results indicate that the SEDS are acceptable in terms of internal consistency, reliability, group validity, and concurrent validity.

  14. Development and validation of sodium fire analysis code ASSCOPS

    International Nuclear Information System (INIS)

    Ohno, Shuji

    2001-01-01

    Version 2.1 of the ASSCOPS sodium fire analysis code was developed to evaluate the thermal consequences of a sodium leak and consequent fire in LMFBRs. This report describes the computational models and the validation studies performed with the code. ASSCOPS calculates sodium droplet and pool fires and the consequent heat and mass transfer behavior. Analyses of sodium pool and spray fire experiments confirmed that this code and the parameters used in the validation studies give valid results on the thermal consequences of sodium leaks and fires. (author)

  15. Physics validation of detector simulation tools for LHC

    International Nuclear Information System (INIS)

    Beringer, J.

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data are available. We give an overview of these validation activities with emphasis on the latest results.

  16. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, A.; Bleyer, A.

    2011-01-01

    The ASTEC code (jointly developed by IRSN and GRS, i.e. Gesellschaft für Anlagen- und Reaktorsicherheit mbH) is aimed at providing an integral code for the simulation of the whole course of severe accidents in Light-Water Reactors. ASTEC is a complex system of codes for reactor safety assessment. In this benchmark, only the CPA (Containment Part of ASTEC) module is used. CPA is a lumped-parameter module able to represent a multi-compartment containment. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. In this paper, three tests selected from the PANDA SETH benchmark (9, 9bis and 25) are considered to investigate the impact of injection velocity and steam condensation on the plume shape and on gas distribution. Coarse and fine meshes were developed by modeling the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The numerical results obtained are analyzed and compared to the experiments. The comparison shows good agreement between experiments and calculations. (author)

  17. Uranium Detection - Technique Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Colletti, Lisa Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division; Garduno, Katherine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division; Lujan, Elmer J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division; Mechler-Hickson, Alexandra Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division; Univ. of Wisconsin, Madison, WI (United States); May, Iain [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division; Reilly, Sean Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Chemistry Division

    2016-04-14

    As a LANL activity for DOE/NNSA in support of SHINE Medical Technologies™ ‘Accelerator Technology’, we have been investigating the application of UV-vis spectroscopy for uranium analysis in solution. While the technique has been developed specifically for sulfate solutions, the proposed SHINE target solutions, it can be adapted to a range of different solution matrices. The FY15 work scope incorporated technical development that would improve accuracy, specificity, linearity and range, precision and ruggedness, and comparative analysis. Significant progress was achieved throughout FY15 in addressing these technical challenges, as summarized in this report. In addition, comparative analysis of unknown samples using the Davies-Gray titration technique highlighted the importance of controlling temperature during analysis (impacting both technique accuracy and linearity/range). To fully understand the impact of temperature, additional experimentation and data analyses were performed during FY16. The results from this FY15/FY16 work were presented in a detailed presentation, LA-UR-16-21310, and an update of that presentation is included with this short report summarizing the key findings. The technique is based on analysis of the most intense U(VI) absorbance band in the visible region of the uranium spectrum in 1 M H2SO4, at λmax = 419.5 nm.
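Quantitation by UV-vis absorbance of this kind typically rests on the Beer-Lambert law. A sketch of the back-calculation from absorbance to concentration; the molar absorptivity value below is a placeholder for illustration, not a measured constant:

```python
def concentration_from_absorbance(absorbance: float,
                                  epsilon: float,
                                  path_cm: float = 1.0) -> float:
    """Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l).

    epsilon is the molar absorptivity (L mol^-1 cm^-1) at the analytical
    wavelength (419.5 nm for the U(VI) band discussed above), and path_cm
    is the cuvette path length in cm."""
    return absorbance / (epsilon * path_cm)

EPSILON_419_5 = 8.0   # placeholder molar absorptivity, assumed for illustration
a = 0.40              # hypothetical measured absorbance
c = concentration_from_absorbance(a, EPSILON_419_5)  # mol/L of U(VI)
print(c)
```

In practice epsilon is obtained by calibrating against standards of known concentration, and a temperature dependence of epsilon would show up directly as the accuracy and linearity effects the report describes.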

  18. RELAP-7 Software Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Choi, Yong-Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support

    2014-09-25

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  19. Presentation of valid correlations in some morphological

    Directory of Open Access Journals (Sweden)

    Florian Miftari

    2018-05-01

    This study deals with young students of both sexes, aged 13-14, who, besides attending physical education and sports classes, also train in basketball schools in the city of Pristina. The experiment comprises a total of 7 morphological variables, four tests of basic motor skills, and seven variables of specific motor skills. The study verifies and analyzes the correlations between morphological characteristics and basic and situational motor skills in both groups (boys and girls). Based on the results obtained, valid correlations with high coefficients are presented between several variables, while correlations with optimal values are presented among the remaining variables. The experiment includes 80 subjects of both sexes: a group of 40 boys and another group of 40 girls, all of whom underwent the tests for this study.

  20. MAAP4 model and validation status

    International Nuclear Information System (INIS)

    Plys, M.G.; Paik, C.Y.; Henry, R.E.; Wu, Chunder; Suh, K.Y.; Sung Jin Lee; McCartney, M.A.; Wang, Zhe

    1993-01-01

    The MAAP 4 code for integrated severe accident analysis is intended to be used for Level 1 and Level 2 probabilistic safety assessment and severe accident management evaluations for current and advanced light water reactors. MAAP 4 can be used to determine which accidents lead to fuel damage and which are successfully terminated before or after fuel damage (a Level 1 application). It can also be used to determine which sequences result in fission product release to the environment and to provide the time history of such releases (a Level 2 application). The MAAP 4 thermal-hydraulic and fission product models and their validation are discussed here. This code is the newest version of MAAP offered by the Electric Power Research Institute (EPRI), and it contains substantial mechanistic improvements over its predecessor, MAAP 3.0B.

  1. Towards Seamless Validation of Land Cover Data

    Science.gov (United States)

    Chuprikova, Ekaterina; Liebel, Lukas; Meng, Liqiu

    2018-05-01

    This article demonstrates the ability of the Bayesian Network analysis for the recognition of uncertainty patterns associated with the fusion of various land cover data sets including GlobeLand30, CORINE (CLC2006, Germany) and land cover data derived from Volunteered Geographic Information (VGI) such as Open Street Map (OSM). The results of recognition are expressed as probability and uncertainty maps which can be regarded as a by-product of the GlobeLand30 data. The uncertainty information may guide the quality improvement of GlobeLand30 by involving the ground truth data, information with superior quality, the know-how of experts and the crowd intelligence. Such an endeavor aims to pave a way towards a seamless validation of global land cover data on the one hand and a targeted knowledge discovery in areas with higher uncertainty values on the other hand.
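
    As a toy illustration of the underlying idea (not the article's actual Bayesian Network), per-pixel agreement between independent land cover sources can be fused into a class probability with Bayes' rule; the source accuracies below are invented:

```python
def posterior_class_prob(prior, votes, accuracies):
    """Fuse independent per-pixel 'votes' about one land cover class.
    votes[i] is True if source i labels the pixel with the class;
    accuracies[i] is that source's assumed per-pixel accuracy."""
    p_class, p_other = prior, 1.0 - prior
    for vote, acc in zip(votes, accuracies):
        p_class *= acc if vote else 1.0 - acc
        p_other *= (1.0 - acc) if vote else acc
    return p_class / (p_class + p_other)

# Two sources agree on the class, a third disagrees (accuracies invented):
p = posterior_class_prob(0.5, [True, True, False], [0.8, 0.75, 0.7])
print(round(p, 3))  # 0.837
```

    Mapping 1 - p per pixel would give exactly the kind of uncertainty map the article describes, highlighting areas where the fused sources disagree.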

  2. Validation of human factor engineering integrated system

    International Nuclear Information System (INIS)

    Fang Zhou

    2013-01-01

    Apart from hundreds of thousands of human-machine interface resources, the control room of a nuclear power plant is a complex system integrating many factors, such as procedures, operators, environment, organization, and management. In the design stage, these factors are considered separately by different organizations. However, whether these factors cooperate well in operation, and whether their human factors engineering (HFE) design avoids human error, should be answered in the validation of the HFE integrated system before delivery of the plant. This paper addresses the research and implementation of integrated system validation (ISV) technology based on a case study. After an introduction to the background, process, and methodology of ISV, the results of the test are discussed. Finally, lessons learned from this research are summarized. (authors)

  3. Installation and validation of MCNP-4A

    International Nuclear Information System (INIS)

    Marks, N.A.

    1997-01-01

    MCNP-4A is a multi-purpose Monte Carlo program suitable for the modelling of neutron, photon, and electron transport problems. It is a particularly useful technique when studying systems containing irregular shapes. MCNP has been developed over the last 25 years by Los Alamos, and is distributed internationally via RSIC at Oak Ridge. This document describes the installation of MCNP-4A (henceforth referred to as MCNP) on the Silicon Graphics workstation (bluey.ansto.gov.au). A limited number of benchmarks pertaining to fast and thermal systems were performed to check the installation and validate the code. The results are compared to deterministic calculations performed using the AUS neutronics code system developed at ANSTO. (author)

  4. Human Factors methods concerning integrated validation of nuclear power plant control rooms; Metodutveckling foer integrerad validering [Method development for integrated validation]

    Energy Technology Data Exchange (ETDEWEB)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was held with the representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed included, among other things, for whom an integrated validation (IV) is performed and for what purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which measurement methods should be used, and how the methods are affected by changes in the control room. The report raises various questions concerning the validation process. Supplementary measurement methods for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for the identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations regarding the responsibility of external participants in the validation process. The authors propose 12 measures for addressing the identified problems.

  5. Further Validation of the IDAS: Evidence of Convergent, Discriminant, Criterion, and Incremental Validity

    Science.gov (United States)

    Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott

    2008-01-01

    The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…

  6. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    Science.gov (United States)

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  7. Marketing Plan for Demonstration and Validation Assets

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-05-30

    The National Security Preparedness Project (NSPP) is to be sustained by various programs, including technology demonstration and validation (DEMVAL). This project assists companies in developing technologies under the National Security Technology Incubator program (NSTI) through demonstration and validation of technologies applicable to national security created by incubators and other sources. The NSPP will also support the creation of an integrated demonstration and validation environment. This report documents the DEMVAL marketing and visibility plan, which will focus on collecting information about, and expanding the visibility of, DEMVAL assets serving businesses with national security technology applications in southern New Mexico.

  8. Italian Validation of Homophobia Scale (HS)

    Directory of Open Access Journals (Sweden)

    Giacomo Ciocca, PsyD, PhD

    2015-09-01

    Conclusions: The Italian validation of the HS revealed that this self-report test has good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior. Ciocca G, Capuano N, Tuziak B, Mollaioli D, Limoncin E, Valsecchi D, Carosa E, Gravina GL, Gianfrilli D, Lenzi A, and Jannini EA. Italian validation of Homophobia Scale (HS). Sex Med 2015;3:213–218.

  9. Validating Acquisition IS Integration Readiness with Drills

    DEFF Research Database (Denmark)

    Wynne, Peter J.

    2017-01-01

    To companies, mergers and acquisitions are important strategic tools, yet they often fail to deliver their expected value. Studies have shown the integration of information systems is a significant roadblock to the realisation of acquisition benefits, and for an IT department to be ready......), to understand how an IT department can use them to validate their integration plans. The paper presents a case study of two drills used to validate an IT department’s readiness to carry out acquisition IS integration, and suggests seven acquisition IS integration drill characteristics others could utilise when...

  10. Plant monitoring and signal validation at HFIR

    International Nuclear Information System (INIS)

    Mullens, J.A.

    1991-01-01

    This paper describes a monitoring system for the Oak Ridge National Laboratory's (ORNL's) High Flux Isotope Reactor (HFIR). HFIR is an 85 MW pressurized water reactor designed to produce isotopes and intense neutron beams. The monitoring system is described with respect to plant signals and the computer system; monitoring overview; data acquisition, logging, and network distribution; signal validation; status displays; reactor condition monitoring; and reactor operator aids. Future work will include the addition of more plant signals, more signal validation and diagnostic capabilities, improved status displays, integration of the system with the RELAP plant simulation and graphical interface, improved operator aids, and an alarm filtering system. 8 refs., 7 figs. (MB)

  11. Static validation of licence conformance policies

    DEFF Research Database (Denmark)

    Hansen, Rene Rydhof; Nielson, Flemming; Nielson, Hanne Riis

    2008-01-01

    Policy conformance is a security property gaining importance due to commercial interest like Digital Rights Management. It is well known that static analysis can be used to validate a number of more classical security policies, such as discretionary and mandatory access control policies, as well...... as communication protocols using symmetric and asymmetric cryptography. In this work we show how to develop a Flow Logic for validating the conformance of client software with respect to a licence conformance policy. Our approach is sufficiently flexible that it extends to fully open systems that can admit new...

  12. Validation of Housing Standards Addressing Accessibility

    DEFF Research Database (Denmark)

    Helle, Tina

    2013-01-01

    The aim was to explore the use of an activity-based approach to determine the validity of a set of housing standards addressing accessibility. This included examination of the frequency and the extent of accessibility problems among older people with physical functional limitations who used...... participant groups were examined. Performing well-known kitchen activities was associated with accessibility problems for all three participant groups, in particular those using a wheelchair. The overall validity of the housing standards examined was poor. Observing older people interacting with realistic...... environments while performing real everyday activities seems to be an appropriate method for assessing accessibility problems....

  13. Validity of Linder Hypothesis in Bric Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

    In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to determine whether it is consistent with this hypothesis. Using data for the period 1996-2010, the study applies panel data analysis to provide evidence regarding the empirical validity of the Linder hypothesis for the BRIC countries' international trade. The empirical findings show that trade between the BRIC countries supports the Linder hypothesis.

  14. Serial album validation for promotion of infant body weight control

    Directory of Open Access Journals (Sweden)

    Nathalia Costa Gonzaga Saraiva

    2018-05-01

    Objective: to validate the content and appearance of a serial album addressing the prevention and control of body weight for children aged 7 to 10 years. Method: methodological study of a descriptive nature. The validation process involved 33 specialists in educational technologies and/or excess weight in childhood. An agreement index of 80% was the minimum considered necessary to guarantee the validation of the material. Results: most of the specialists had a doctoral degree and a graduate degree in nursing. Regarding content, illustrations, layout, and relevance, all items were validated, and 69.7% of the experts rated the album as excellent. The overall agreement validation index for the educational technology was 0.88. Only script-sheet 3 did not reach the cut-off point of the content validation index. Changes were made to the material, such as a title change, inclusion of the school context, and insertion of a nutritionist and a physical educator in the story narrated in the album. Conclusion: the proposed serial album was considered valid by the experts regarding content and appearance, suggesting that this technology has the potential to contribute to health education by promoting healthy weight in the 7 to 10 year age group.
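
    An item-level content validity index of the kind reported here is simply the proportion of experts rating an item as relevant; a minimal sketch under the common 4-point-scale convention (the study's exact scoring rule is not stated in the abstract), with hypothetical ratings:

```python
def content_validity_index(ratings, relevant=(3, 4)):
    """Item-level CVI: fraction of experts rating the item 3 or 4 on a
    4-point relevance scale (a common convention; hypothetical here)."""
    return sum(1 for r in ratings if r in relevant) / len(ratings)

# Hypothetical ratings from ten experts for one script-sheet:
cvi = content_validity_index([4, 4, 3, 4, 2, 3, 4, 3, 4, 4])
print(cvi)  # 0.9, above the 0.80 cut-off used in the study
```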

  15. Trait sexual motivation questionnaire: concept and validation.

    Science.gov (United States)

    Stark, Rudolf; Kagerer, Sabine; Walter, Bertram; Vaitl, Dieter; Klucken, Tim; Wehrum-Osinsky, Sina

    2015-04-01

    Trait sexual motivation defines a psychological construct that reflects the long-lasting degree of motivation for sexual activities, which is assumed to be the result of biological and sociocultural influences. With this definition, it shares commonalities with other sexuality-related constructs such as sexual desire, sexual drive, sexual needs, and sexual compulsivity. The Trait Sexual Motivation Questionnaire (TSMQ) was developed in order to measure trait sexual motivation with its different facets. Several steps were conducted. First, items were composed assessing sexual desire, the effort made to gain sex, and specific sexual behaviors, and a factor analysis of the data of a first sample (n = 256) was conducted. Second, the factor solution was verified by a confirmatory factor analysis in a second sample (n = 498), and construct validity was demonstrated. Third, the temporal stability of the TSMQ was tested in a third study (n = 59). All measures were questionnaire data. The exploratory and confirmatory factor analyses revealed that trait sexual motivation is best characterized by four subscales: Solitary Sexuality, Importance of Sex, Seeking Sexual Encounters, and Comparison with Others. It was shown that the test quality of the questionnaire is high. Most importantly for the trait concept, the retest reliability after 1 year was r = 0.87. Our results indicate that the TSMQ is indeed a suitable tool for measuring long-lasting sexual motivation with high test quality and high construct validity. A future differentiation between trait and state sexual motivation might be helpful for clinical as well as forensic research. © 2015 International Society for Sexual Medicine.
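
    The reported one-year retest reliability (r = 0.87) is a Pearson correlation between test and retest scores; a self-contained sketch with hypothetical scores:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired scores,
    the statistic behind a test-retest reliability coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical questionnaire totals at test and one-year retest:
r = pearson_r([10, 14, 9, 20, 17, 12], [11, 13, 10, 19, 18, 11])
# r close to 1 indicates a stable, trait-like measure
```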

  16. Validation of dose calculation programmes for recycling

    International Nuclear Information System (INIS)

    Menon, Shankar; Brun-Yaba, Christine; Yu, Charley; Cheng, Jing-Jy; Williams, Alexander

    2002-12-01

    This report contains the results from an international project initiated by the SSI in 1999. The primary purpose of the project was to validate some of the computer codes that are used to estimate radiation doses due to the recycling of scrap metal. The secondary purpose of the validation project was to give a quantification of the level of conservatism in clearance levels based on these codes. Specifically, the computer codes RESRAD-RECYCLE and CERISE were used to calculate radiation doses to individuals during the processing of slightly contaminated material, mainly in Studsvik, Sweden. Calculated external doses were compared with measured data from different steps of the process. The comparison of calculations and measurements shows that the computer code calculations resulted in both overestimations and underestimations of the external doses for different recycling activities. The SSI draws the conclusion that the accuracy is within one order of magnitude when experienced modellers use their programmes to calculate external radiation doses for a recycling process involving material that is mainly contaminated with cobalt-60. No errors in the codes themselves were found. Instead, the inaccuracy seems to depend mainly on the choice of some modelling parameters related to the receptor (e.g., distance, time, etc.) and simplifications made to facilitate modelling with the codes (e.g., object geometry). Clearance levels are often based on studies on enveloping scenarios that are designed to cover all realistic exposure pathways. It is obvious that for most practical cases, this gives a margin to the individual dose constraint (in the order of 10 micro sievert per year within the EC). This may be accentuated by the use of conservative assumptions when modelling the enveloping scenarios. Since there can obviously be a fairly large inaccuracy in the calculations, it seems reasonable to consider some degree of conservatism when establishing clearance levels based on
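
    The "within one order of magnitude" criterion quoted above can be checked mechanically as the ratio of calculated to measured dose, covering both over- and underestimation; the dose values below are invented for illustration:

```python
def within_order_of_magnitude(calculated, measured):
    """True if the calculated dose is within a factor of 10 of the
    measured dose, in either direction (over- or underestimation)."""
    ratio = calculated / measured
    return 0.1 <= ratio <= 10.0

# Hypothetical external dose rates (microsievert/h) for two work steps:
print(within_order_of_magnitude(4.2, 1.1))   # True: ~3.8x overestimate
print(within_order_of_magnitude(0.05, 1.1))  # False: >20x underestimate
```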

  17. Validation of dose calculation programmes for recycling

    Energy Technology Data Exchange (ETDEWEB)

    Menon, Shankar [Menon Consulting, Nykoeping (Sweden); Brun-Yaba, Christine [Inst. de Radioprotection et Securite Nucleaire (France); Yu, Charley; Cheng, Jing-Jy [Argonne National Laboratory, IL (United States). Environmental Assessment Div.; Bjerler, Jan [Studsvik Stensand, Nykoeping (Sweden); Williams, Alexander [Dept. of Energy (United States). Office of Environmental Management

    2002-12-01

    This report contains the results from an international project initiated by the SSI in 1999. The primary purpose of the project was to validate some of the computer codes that are used to estimate radiation doses due to the recycling of scrap metal. The secondary purpose of the validation project was to give a quantification of the level of conservatism in clearance levels based on these codes. Specifically, the computer codes RESRAD-RECYCLE and CERISE were used to calculate radiation doses to individuals during the processing of slightly contaminated material, mainly in Studsvik, Sweden. Calculated external doses were compared with measured data from different steps of the process. The comparison of calculations and measurements shows that the computer code calculations resulted in both overestimations and underestimations of the external doses for different recycling activities. The SSI draws the conclusion that the accuracy is within one order of magnitude when experienced modellers use their programmes to calculate external radiation doses for a recycling process involving material that is mainly contaminated with cobalt-60. No errors in the codes themselves were found. Instead, the inaccuracy seems to depend mainly on the choice of some modelling parameters related to the receptor (e.g., distance, time, etc.) and simplifications made to facilitate modelling with the codes (e.g., object geometry). Clearance levels are often based on studies on enveloping scenarios that are designed to cover all realistic exposure pathways. It is obvious that for most practical cases, this gives a margin to the individual dose constraint (in the order of 10 micro sievert per year within the EC). This may be accentuated by the use of conservative assumptions when modelling the enveloping scenarios. Since there can obviously be a fairly large inaccuracy in the calculations, it seems reasonable to consider some degree of conservatism when establishing clearance levels based on

  18. Validation and ease of use of a new pen device for self-administration of recombinant human growth hormone: results from a two-center usability study

    Directory of Open Access Journals (Sweden)

    Rapaport R

    2013-09-01

    Robert Rapaport,1 Paul Saenger,2 Heinrich Schmidt,3 Yukihiro Hasegawa,4 Michel Colle,5 Sandro Loche,6 Sandra Marcantonio,7 Walter Bonfig,8 Markus Zabransky,9 Fima Lifshitz10 1Division of Pediatric Endocrinology and Diabetes, Mount Sinai School of Medicine, 2Winthrop University Hospital, Mineola, NY, USA; 3University Children's Hospital, Division of Endocrinology and Diabetology, Munich, Germany; 4Department of Endocrinology and Metabolism, Tokyo Metropolitan Children's Medical Center, Tokyo, Japan; 525 rue Boudet, Bordeaux, France; 6Servizio di Endocrinologia Pediatrica, Ospedale Microcitemico ASL Cagliari, Cagliari, Italy; 7Clinica de Endocrinologia Pediátrica, Londrina, Brazil; 8Division of Pediatric Endocrinology, Technical University München, Munich, Germany; 9Sandoz International GmbH, Holzkirchen, Germany; 10Pediatric Sunshine Academics, Inc, Santa Barbara, CA, USA Abstract: Close adherence to the recommended treatment regimen is important for the success of recombinant human growth hormone therapy, although nonadherence can be common. Ease of use and safety during use/storage are among several important factors in the design of a growth hormone injection device intended for long-term use. This study was performed to validate the usability and assess the ease of use of a new pen device (SurePal™) that has been developed to support daily administration of the recombinant human growth hormone product, Omnitrope® (somatropin). The primary objectives of the study were to assess if study participants, representing intended users of the pen in clinical practice, were able to perform an injection procedure into an injection pad effectively and safely and disassemble the pen without receiving a needlestick injury. A total of 106 participants (61 adults and 45 children/adolescents) were enrolled at two study centers (one in the US, one in Germany). Results for both primary usability tasks met the predefined acceptance criteria, with >85% of

  19. Test of Gross Motor Development: expert validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-12-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children's level of motor development. The objective of this study was to translate the TGMD-2, to have experts verify the clarity and pertinence of its items, and to assess the confirmatory factorial validity and internal consistency of the Portuguese TGMD-2 by means of test-retest. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children from 27 schools (kindergarten and elementary), aged 3 to 10 years (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; it demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92; Tucker and Lewis's Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.
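
    The "χ2/gl" figure reported in this abstract is the relative chi-square, i.e. the model chi-square divided by its degrees of freedom; the χ2 and df values below are invented to reproduce the reported ratio:

```python
def relative_chi_square(chi2, df):
    """Relative chi-square ('chi2/gl'): chi-square divided by its
    degrees of freedom; values below roughly 3-5 are conventionally
    read as acceptable model fit."""
    return chi2 / df

# Invented chi2 and df chosen to reproduce the reported ratio of 3.38:
print(round(relative_chi_square(169, 50), 2))  # 3.38
```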

  20. Test of Gross Motor Development: expert validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-01-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children's level of motor development. The objective of this study was to translate the TGMD-2, to have experts verify the clarity and pertinence of its items, and to assess the confirmatory factorial validity and internal consistency of the Portuguese TGMD-2 by means of test-retest. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children from 27 schools (kindergarten and elementary), aged 3 to 10 years (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; it demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92; Tucker and Lewis's Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.