WorldWideScience

Sample records for validation results

  1. ValidatorDB: database of up-to-date validation results for ligands and non-standard residues from the Protein Data Bank.

    Science.gov (United States)

    Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav

    2015-01-01

    Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those of the intended application, with each experiment testing only part of the physics relevant to the application. The purpose of the present work is to develop a methodology to roll up validation results to an application, and to assess the impact the design of the validation hierarchy has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.
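    The roll-up idea in this abstract can be sketched very roughly in code: fit a meta-model to the discrepancy between measurements and simulation over the validation experiments, then extrapolate that discrepancy to the application conditions. This is a minimal illustration with invented numbers and a simple linear discrepancy model, not the methodology of the report itself.

```python
import numpy as np

# Hypothetical validation data: experiment conditions x, measured responses,
# and the simulation code's predictions at those same conditions.
x_exp = np.array([0.5, 1.0, 1.5, 2.0])          # experiment conditions
measured = np.array([10.2, 19.8, 30.9, 40.1])   # measurements
simulated = np.array([10.0, 20.0, 30.0, 40.0])  # code predictions

# Meta-model: fit the model discrepancy (measured - simulated) as a linear
# function of the condition, then extrapolate it to the application
# condition to correct the code's prediction for the application.
discrepancy = measured - simulated
A = np.vstack([np.ones_like(x_exp), x_exp]).T
coeffs, *_ = np.linalg.lstsq(A, discrepancy, rcond=None)

x_app = 3.0    # application condition (outside the tested range)
sim_app = 60.0 # code prediction for the application
corrected = sim_app + coeffs[0] + coeffs[1] * x_app
print(f"estimated application response: {corrected:.2f}")
```

    A real roll-up would also propagate the uncertainty in the fitted discrepancy, which grows as the application condition moves away from the experiments; the point estimate above is only the simplest piece of that machinery.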

  3. 42 CFR 476.84 - Changes as a result of DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO... in DRG assignment as a result of QIO validation activities. ...

  4. Results from the First Validation Phase of CAP code

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The second stage of Safety Analysis Code Development for Nuclear Power Plants was launched in April 2010 and is scheduled to run through 2012; its scope of work covers everything from code validation to licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation work is organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests in small, medium, and large facilities and International Standard Problems; and 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting the experimental data related to containment phenomena and constructing the corresponding database is one of the major tasks of the second stage of this project. From the validation of fundamental phenomena, the current capabilities and needed future improvements of the CAP code can be revealed. For this purpose, simple but significant problems with exact analytical solutions were selected and calculated for validation of fundamental phenomena. In this paper, some results of the validation problems for the selected fundamental phenomena are summarized and briefly discussed.

  5. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...

  6. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set are presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  7. Validation Test Results for Orthogonal Probe Eddy Current Thruster Inspection System

    Science.gov (United States)

    Wincheski, Russell A.

    2007-01-01

    Recent nondestructive evaluation efforts within NASA have focused on an inspection system for the detection of intergranular cracking originating in the relief radius of Primary Reaction Control System (PRCS) thrusters. Of particular concern is deep cracking in this area, which could lead to combustion leakage in the event of through-wall cracking from the relief radius into an acoustic cavity of the combustion chamber. In order to reliably detect such defects while ensuring minimal false positives during inspection, the Orthogonal Probe Eddy Current (OPEC) system has been developed and an extensive validation study performed. This report describes the validation procedure, sample set, and inspection results, as well as comparing validation flaws with the response from naturally occurring damage.

  8. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  9. Validation results of satellite mock-up capturing experiment using nets

    Science.gov (United States)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine the initial conditions accurately and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly…

  10. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  11. CosmoQuest: Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.
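    The pre-solved-data idea above — scoring a user's markings against an image whose answers are already known — can be sketched in a few lines. The coordinates, tolerance, and function name below are invented for illustration; they are not CosmoQuest's actual data or API.

```python
import math

# Hypothetical gold-standard crater centers (pre-solved data) and one
# user's markings, in pixel coordinates.
gold = [(10.0, 12.0), (45.5, 80.2), (60.0, 33.3)]
user = [(10.4, 11.8), (59.1, 34.0), (90.0, 90.0)]
TOL = 2.0  # max distance (pixels) for a mark to count as a match

def match_rate(user_marks, gold_marks, tol):
    """Fraction of gold craters the user found within `tol` pixels."""
    hits = 0
    for gx, gy in gold_marks:
        if any(math.hypot(ux - gx, uy - gy) <= tol for ux, uy in user_marks):
            hits += 1
    return hits / len(gold_marks)

print(f"user recall on pre-solved image: {match_rate(user, gold, TOL):.2f}")
```

    A score like this can drive the real-time feedback the abstract describes: a user whose recall drops on pre-solved images can be shown the missed features immediately rather than after the project ends.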

  12. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and the extent to which the set of items represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of the 13 which documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 were retained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
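    The item-level CVI procedure in this abstract — each expert rates an item on a 4-point scale, the CVI is the proportion of experts rating it 3 or 4, and items with CVI over 0.75 are retained — is easy to make concrete. The item names and ratings below are invented for illustration.

```python
# Hypothetical expert ratings on a 4-point relevance scale (1-4),
# one list of ratings (one per expert) for each candidate item.
ratings = {
    "item_01": [4, 3, 4, 4],
    "item_02": [2, 1, 3, 2],
    "item_03": [3, 4, 4, 3],
}

def item_cvi(expert_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    relevant = sum(1 for r in expert_ratings if r >= 3)
    return relevant / len(expert_ratings)

# Retention rule from the abstract: keep items with CVI over 0.75.
retained = [item for item, r in ratings.items() if item_cvi(r) > 0.75]
print(retained)
```

    With four experts, an item survives only if at least four of four rate it relevant (3/4 = 0.75 is not over the cutoff), which is why strict cutoffs shrink a 38-item pool quickly.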

  13. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  14. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation…
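    The core bookkeeping step in the abstract above — deciding how many stochastic realizations conform to the validation data — can be illustrated with a toy acceptance count. The synthetic "field data," noise level, and RMSE cutoff below are invented; the actual approach uses five measures and a decision tree, not a single threshold.

```python
import math
import random

random.seed(0)

# Hypothetical validation measurements at a few monitoring points, and an
# ensemble of stochastic model realizations (here: data plus Gaussian noise).
field_data = [1.2, 0.8, 1.5, 1.1]
realizations = [[v + random.gauss(0, 0.3) for v in field_data]
                for _ in range(200)]

def rmse(pred, obs):
    """Root-mean-square error between a realization and the field data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

CUTOFF = 0.35  # illustrative conformity threshold
accepted = sum(1 for r in realizations if rmse(r, field_data) <= CUTOFF)
print(f"{accepted} of {len(realizations)} realizations conform to the data")
```

    The validation question is then whether the accepted fraction is sufficient; the hierarchical approach answers that with several such metrics evaluated jointly rather than a single pass/fail count.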

  15. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  16. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  17. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)

  18. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    Full Text Available The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA 1) in an external comparison (to other methods), and 2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  19. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  20. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    Science.gov (United States)

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.

  1. Comparative Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The scope of this subtask is to perform a comparative validation of the building simulation software for buildings with the double skin façade. The outline of the results in the comparative validation identifies the areas where no correspondence is achieved, i.e. calculation of the air flow… The conclusion is that the comparative validation can be regarded as the main argument to continue the validation of the building simulation software for buildings with the double skin façade with the empirical validation test cases.

  2. The Mistra experiment for field containment code validation first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large scale experiment, designed for the purpose of thermal-hydraulics multi-D codes validation. A short description of the facility, the set up of the instrumentation and the test program are presented. Then, the first experimental results, studying helium injection in the containment and their calculations are detailed. (author)

  3. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal-mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The discussion focuses on the validation plan for the code FACTAR, for application in assessing fuel channel integrity safety concerns during a large-break loss of coolant accident (LOCA). (author)

  4. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  5. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, the California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Bloom's 2 and deductive reasoning, and between total ACT score and deductive reasoning, were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher-quality medical professionals.
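    The statistical core of this study — correlating paired total scores from a custom exam and an external instrument — reduces to a Pearson correlation coefficient. The scores below are invented for illustration (the study's actual correlations were small, around 0.22-0.24); the point is only to show the calculation.

```python
import math

# Hypothetical paired scores, one pair per student: a custom
# critical-thinking exam (ACT) and the external CCTST.
act   = [72, 65, 80, 58, 90, 77, 61, 84]
cctst = [20, 17, 22, 18, 25, 19, 16, 23]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(act, cctst)
print(f"r = {r:.2f}")
```

    In a validation study the coefficient would be paired with a significance test against the null hypothesis r = 0; a small, non-significant r, as in the abstract, is what signals that the custom assessment and the external instrument are not measuring the same construct.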

  6. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Science.gov (United States)

    2010-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Science.gov (United States)

    2010-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  9. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Disclosure of accreditation, State and CMS... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a) Accreditation organization inspection results. CMS may disclose accreditation organization inspection results to...

  10. [Validation of the IBS-SSS].

    Science.gov (United States)

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain, associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of severity of the disease. One of these is the irritable bowel syndrome severity scoring system (IBS-SSS). The IBS-SSS was validated in 1997 in its English version. In the present study, the IBS-SSS was validated in German. To do this, a cohort of 60 patients with IBS according to the Rome III criteria was compared with a control group of healthy individuals (n = 38). We studied the sensitivity and reproducibility of the score, as well as its sensitivity to changes in symptom severity. The results of the German validation largely reflect those of the English validation. The German version of the IBS-SSS is likewise a valid, meaningful and reproducible questionnaire with a high sensitivity to changes in symptom severity, especially in IBS patients with moderate symptoms. It remains unclear whether the IBS-SSS is also valid in IBS patients with severe symptoms, because this group of patients was not studied. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Validity of proposed DSM-5 diagnostic criteria for nicotine use disorder: results from 734 Israeli lifetime smokers

    Science.gov (United States)

    Shmulewitz, D.; Wall, M.M.; Aharonovich, E.; Spivak, B.; Weizman, A.; Frisch, A.; Grant, B. F.; Hasin, D.

    2013-01-01

    Background: The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) proposes aligning nicotine use disorder (NUD) criteria with those for other substances, by including the current DSM fourth edition (DSM-IV) nicotine dependence (ND) criteria, three abuse criteria (neglect roles, hazardous use, interpersonal problems) and craving. Although NUD criteria indicate one latent trait, evidence is lacking on: (1) validity of each criterion; (2) validity of the criteria as a set; (3) comparative validity between DSM-5 NUD and DSM-IV ND criterion sets; and (4) NUD prevalence. Method: Nicotine criteria (DSM-IV ND, abuse and craving) and external validators (e.g. smoking soon after awakening, number of cigarettes per day) were assessed with a structured interview in 734 lifetime smokers from an Israeli household sample. Regression analysis evaluated the association between validators and each criterion. Receiver operating characteristic analysis assessed the association of the validators with the DSM-5 NUD set (number of criteria endorsed) and tested whether DSM-5 or DSM-IV provided the most discriminating criterion set. Changes in prevalence were examined. Results: Each DSM-5 NUD criterion was significantly associated with the validators, with strength of associations similar across the criteria. As a set, DSM-5 criteria were significantly associated with the validators, were significantly more discriminating than DSM-IV ND criteria, and led to increased prevalence of binary NUD (two or more criteria) over ND. Conclusions: All findings address previous concerns about the DSM-IV nicotine diagnosis and its criteria and support the proposed changes for DSM-5 NUD, which should result in improved diagnosis of nicotine disorders. PMID:23312475
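
For a discrete score such as a count of endorsed criteria, the receiver operating characteristic analysis mentioned above reduces to the Mann-Whitney interpretation of the area under the curve: the probability that a randomly chosen positive case outscores a randomly chosen negative case, with ties counted one half. A minimal sketch, with entirely hypothetical criterion counts and validator groups (not data from the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties contribute half
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical data: number of criteria endorsed by smokers for whom an
# external validator is present (positives) or absent (negatives).
criteria_validator_present = [5, 7, 4, 6, 3]
criteria_validator_absent = [1, 2, 0, 3, 2]
print(auc(criteria_validator_present, criteria_validator_absent))  # → 0.98
```

Comparing the AUCs of the DSM-5 and DSM-IV criterion counts against the same validator is one way to test which set is more discriminating.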

  12. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    Science.gov (United States)

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  14. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  15. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  16. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time-series forecasting, and modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight...... is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  17. NVN 5694 intra laboratory validation. Feasibility study for interlaboratory- validation

    International Nuclear Information System (INIS)

    Voors, P.I.; Baard, J.H.

    1998-11-01

    Within the NORMSTAR 2 project, a number of Dutch prenormative protocols have been defined for radioactivity measurements. Some of these protocols, e.g. the Dutch prenormative protocol NVN 5694, titled 'Methods for radiochemical determination of polonium-210 and lead-210', have not been validated, either by intralaboratory or by interlaboratory studies. Validation studies are conducted within the framework of the programme 'Normalisatie en Validatie van Milieumethoden 1993-1997' (Standardization and Validation of test methods for environmental parameters) of the Dutch Ministry of Housing, Physical Planning and the Environment (VROM). The aims of this study were (a) a critical evaluation of the protocol, (b) investigation of the feasibility of an interlaboratory study, and (c) the interlaboratory validation of NVN 5694. The evaluation of the protocol resulted in a list of deficiencies varying from missing references to incorrect formulae. From the survey by interview it appeared that for each type of material there are 4 to 7 laboratories willing to participate in an interlaboratory validation study. This reflects the situation in 1997. Consequently, if 4 or 6 (the minimal number) laboratories participate and each laboratory analyses 3 subsamples, the uncertainty in the repeatability standard deviation is 49 or 40%, respectively. If the ratio of the reproducibility standard deviation to the repeatability standard deviation is equal to 1 or 2, then the uncertainty in the reproducibility standard deviation increases from 42 to 67% and from 34 to 52% for 4 or 6 laboratories, respectively. The intralaboratory validation was established on four different types of materials. Three types of materials (milk powder, condensate and filter) were prepared in the laboratory using the raw material and certified Pb-210 solutions, and one (sediment) was obtained from the IAEA. The ECN-prepared reference materials were used after testing for homogeneity.
The pre-normative protocol can

  18. Transient FDTD simulation validation

    OpenAIRE

    Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando

    2010-01-01

    In computational electromagnetic simulations, most validation methods developed to date operate in the frequency domain. However, frequency-domain EMC analysis of a system is often insufficient to evaluate the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing rapid and objective quantification of simulation results.

  19. Urban roughness mapping validation techniques and some first results

    NARCIS (Netherlands)

    Bottema, M; Mestayer, PG

    1998-01-01

    Because of measuring problems related to evaluation of urban roughness parameters, a new approach using a roughness mapping tool has been tested: evaluation of roughness length z(o) and zero displacement z(d) from cadastral databases. Special attention needs to be given to the validation of the

  20. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO 2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  1. Principles of validation of diagnostic assays for infectious diseases

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    Assay validation requires a series of inter-related processes. Assay validation is an experimental process: reagents and protocols are optimized by experimentation to detect the analyte with accuracy and precision. Assay validation is a relative process: its diagnostic sensitivity and diagnostic specificity are calculated relative to test results obtained from reference animal populations of known infection/exposure status. Assay validation is a conditional process: classification of animals in the target population as infected or uninfected is conditional upon how well the reference animal population used to validate the assay represents the target population; accurate predictions of the infection status of animals from test results (PV+ and PV-) are conditional upon the estimated prevalence of disease/infection in the target population. Assay validation is an incremental process: confidence in the validity of an assay increases over time when use confirms that it is robust, as demonstrated by accurate and precise results; the assay may also achieve increasing levels of validity as it is upgraded and extended by adding reference populations of known infection status. Assay validation is a continuous process: the assay remains valid only insofar as it continues to provide accurate and precise results as proven through statistical verification. Therefore, the work required for validation of diagnostic assays for infectious diseases does not end with a time-limited series of experiments based on a few reference samples; rather, assuring valid test results from an assay requires constant vigilance and maintenance of the assay, along with reassessment of its performance characteristics for each unique population of animals to which it is applied. (author)

  2. The measurement of instrumental ADL: content validity and construct validity

    DEFF Research Database (Denmark)

    Avlund, K; Schultz-Larsen, K; Kreiner, S

    1993-01-01

    do not depend on help. It is also possible to add the items in a valid way. However, to obtain valid IADL-scales, we omitted items that were highly relevant to especially elderly women, such as house-work items. We conclude that the criteria employed for this IADL-measure are somewhat contradictory....... showed that 14 items could be combined into two qualitatively different additive scales. The IADL-measure complies with demands for content validity, distinguishes between what the elderly actually do, and what they are capable of doing, and is a good discriminator among the group of elderly persons who...

  3. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  4. Italian version of Dyspnoea-12: cultural-linguistic validation, quantitative and qualitative content validity study.

    Science.gov (United States)

    Caruso, Rosario; Arrigoni, Cristina; Groppelli, Katia; Magon, Arianna; Dellafiore, Federica; Pittella, Francesco; Grugnetti, Anna Maria; Chessa, Massimo; Yorke, Janelle

    2018-01-16

    Dyspnoea-12 is a valid and reliable scale for assessing dyspnoea, considering its severity and its physical and emotional components. However, an Italian version was not available because the scale had not yet been translated and validated. The aim of this study was therefore to develop an Italian version of Dyspnoea-12, providing cultural and linguistic validation supported by quantitative and qualitative content validity. This was a methodological study divided into two phases: phase one covered the cultural and linguistic validation; phase two tested the quantitative and qualitative content validity. Linguistic validation followed a standardized translation process. Quantitative content validity was assessed by computing the content validity ratio (CVR) and indices (I-CVIs and S-CVI) from the expert panellists' responses. Qualitative content validity was assessed by narrative analysis of the panellists' answers to three open-ended questions investigating the clarity and pertinence of the Italian items. The translation process found good agreement that the items were clear, both among the six bilingual expert translators involved and among the ten voluntarily involved patients. CVR, I-CVIs and S-CVI were satisfactory for all the translated items. This study represents a pivotal step towards using Dyspnoea-12 among Italian patients. Future research is needed to investigate the construct validity and reliability of the Italian version of Dyspnoea-12, and to describe how the dyspnoea components (i.e. physical and emotional) impact the lives of patients with cardiorespiratory diseases.
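
The quantitative indices named above are simple proportions over panel ratings. A minimal sketch, assuming Lawshe's CVR formula and the conventional content validity index definitions (I-CVI = proportion of experts rating an item 3 or 4 on a 4-point relevance scale; S-CVI/Ave = mean of the I-CVIs); the panel ratings below are hypothetical, not the study's data:

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: proportion of experts
    rating the item relevant (3 or 4 on a 4-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical 4-point relevance ratings from a 6-expert panel.
panel = {
    "item1": [4, 4, 3, 4, 3, 4],
    "item2": [3, 4, 4, 2, 4, 3],
    "item3": [4, 3, 4, 4, 4, 4],
}
item_cvis = {item: i_cvi(r) for item, r in panel.items()}
s_cvi_ave = sum(item_cvis.values()) / len(item_cvis)  # scale-level CVI (average)
print({k: round(v, 2) for k, v in item_cvis.items()}, round(s_cvi_ave, 2))
```

A common rule of thumb is to regard I-CVI ≥ 0.78 and S-CVI/Ave ≥ 0.90 as satisfactory, though acceptance thresholds vary by study.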

  5. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  6. Validation of Symptom Validity Tests Using a "Child-model" of Adult Cognitive Impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P. E. J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  7. Validation of the Danish PAROLE lexicon (upubliceret)

    DEFF Research Database (Denmark)

    Møller, Margrethe; Christoffersen, Ellen

    2000-01-01

    This validation is based on the Danish PAROLE lexicon dated June 20, 1998, downloaded on March 16, 1999. Subsequently, the developers of the lexicon have informed us that they have been revising the lexicon, in particular the morphological level. Morphological entries were originally generated automatically from a machine-readable version of the Official Danish Spelling Dictionary (Retskrivningsordbogen 1986, in the following RO86), and this resulted in some overgeneration, which the developers started eliminating after submitting the Danish PAROLE lexicon for validation. The present validation is, however, based on the January 1997 version of the lexicon. The validation as such complies with the specifications described in the ELRA validation manuals for lexical data, i.e. Underwood and Navaretta: "A Draft Manual for the Validation of Lexica, Final Report" [Underwood & Navaretta1997] and Braasch: "A...

  8. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  9. Validation of EAF-2005 data

    International Nuclear Information System (INIS)

    Kopecky, J.

    2005-01-01

    Full text: Validation procedures applied to the EAF-2003 starter file, which led to the production of the EAF-2005 library, are described. The results, in terms of reactions with assigned quality scores in EAF-2005, are given. Further, the extensive validation against recent integral data is discussed, together with the status of the final report 'Validation of EASY-2005 using integral measurements'. Finally, the novel 'cross section trend analysis' is presented with some examples of its use. This action will lead to the release of the improved library EAF-2005.1 at the end of 2005, which shall be used as the starter file for EAF-2007. (author)

  10. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    Science.gov (United States)

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
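
Selecting a cut score that maximizes sensitivity and specificity, as described above, can be sketched in a few lines. The Validity-10 scores, the cut score, and the external invalidity flags below are hypothetical illustrations, not values from the study:

```python
def sens_spec(scores, invalid_flags, cut):
    """Sensitivity and specificity of the rule 'score >= cut' for flagging
    invalid responding, judged against an external criterion."""
    tp = sum(s >= cut and f for s, f in zip(scores, invalid_flags))
    fn = sum(s < cut and f for s, f in zip(scores, invalid_flags))
    tn = sum(s < cut and not f for s, f in zip(scores, invalid_flags))
    fp = sum(s >= cut and not f for s, f in zip(scores, invalid_flags))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical validity-scale scores and an external invalidity criterion
# (e.g., an atypical-symptom score above some threshold).
scores = [22, 3, 15, 8, 25, 5, 19, 2, 11, 27]
invalid = [True, False, True, False, True, False, False, False, True, True]
sensitivity, specificity = sens_spec(scores, invalid, cut=13)
print(sensitivity, specificity)  # → 0.8 0.8
```

Sweeping `cut` over the observed score range and keeping the value that best balances the two rates is the usual way such classification cut scores are chosen.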

  11. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

  12. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  13. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    … to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions. … The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included …

  14. Worldwide Protein Data Bank validation information: usage and trends.

    Science.gov (United States)

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrends DB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.
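The abstract above describes combining validation metrics into an overall score used to rank structures in search results. A hedged sketch of one way such a combination could work (an illustration only, not the actual wwPDB or ValTrends DB algorithm): each metric is percentile-ranked across the corpus, with lower raw values assumed better (as for clashscore), and the percentiles are then averaged.

```python
# Illustrative ranking-score sketch; metric names and the averaging scheme
# are assumptions, not the wwPDB pipeline's actual method.

def percentile_ranks(values):
    """Percentile rank in [0, 1] for each value; lower raw value ranks higher.

    Ties share the rank of their first occurrence in sorted order.
    """
    order = sorted(values)
    n = len(values)
    if n == 1:
        return [1.0]
    return [1.0 - order.index(v) / (n - 1) for v in values]

def overall_scores(metric_table):
    """metric_table: dict metric_name -> list of raw values (one per structure).

    Returns one combined score per structure: the mean of its per-metric
    percentile ranks (1.0 = best in corpus on every metric).
    """
    per_metric = [percentile_ranks(vals) for vals in metric_table.values()]
    return [sum(col) / len(per_metric) for col in zip(*per_metric)]
```

For example, with `{"clashscore": [10, 20, 30], "rfree": [0.20, 0.25, 0.30]}` the first structure is best on both metrics and receives the top combined score.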

  15. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
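To make the validation problem concrete, here is a minimal sketch of the two structural rules most FASTA parsers enforce: every record starts with a `>` header line, and every record carries at least one sequence line of legal characters. This is a hypothetical illustration restricted to nucleotide codes, not the FastaValidator API.

```python
# Minimal FASTA structure check (illustrative; FastaValidator itself is a
# Java library with its own, richer rule set).

# IUPAC nucleotide codes plus gap/stop symbols; protein FASTA would need
# a wider alphabet.
IUPAC_CHARS = set("ACGTURYKMSWBDHVNacgturykmswbdhvn-*")

def validate_fasta(text):
    """Return a list of (line_number, error) tuples; an empty list means valid."""
    errors = []
    current_header = None
    has_sequence = False
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if current_header is not None and not has_sequence:
                errors.append((lineno, "header without sequence: " + current_header))
            current_header = line
            has_sequence = False
        elif current_header is None:
            errors.append((lineno, "sequence data before first header"))
        else:
            bad = set(line) - IUPAC_CHARS
            if bad:
                errors.append((lineno, "illegal characters: " + "".join(sorted(bad))))
            else:
                has_sequence = True
    if current_header is not None and not has_sequence:
        errors.append((0, "final header without sequence: " + current_header))
    return errors
```

A high-throughput validator would stream the file rather than hold it in memory, which is exactly the scalability concern the abstract raises about the Bio*-frameworks.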

  16. Validation Results for LEWICE 3.0

    Science.gov (United States)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers, and an empirical model for the supercooled large droplet (SLD) regime. An extensive comparison of the results, in a quantifiable manner, against the database of ice shapes and collection efficiency generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper will show the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report will first describe the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first-order physical effects of large droplet phenomena into the icing software. Comparisons are then made to every single-element, two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing, with 21-bin drop size distributions and a lift correction (angle-of-attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.

  17. Validating MEDIQUAL Constructs

    Science.gov (United States)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate the MEDIQUAL constructs across different media users in help desk service. In previous research, only two end-user constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, which are based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, the constructs have relatively high correlations between measures of the same construct using different methods, and low correlations between measures of constructs that are expected to differ; and 2) the five MEDIQUAL constructs are statistically significant predictors of media users' satisfaction in help desk service in regression analysis.

  18. Validation and results of a questionnaire for functional bowel disease in out-patients

    Directory of Open Access Journals (Sweden)

    Skordilis Panagiotis

    2002-05-01

    Background: The aim was to evaluate and validate a bowel disease questionnaire in patients attending an out-patient gastroenterology clinic in Greece. Methods: This was a prospective study. Diagnosis was based on detailed clinical and laboratory evaluation. The questionnaire was tested on a pilot group of patients. An interviewer-administration technique was used. One hundred and forty consecutive patients attending the out-patient clinic for the first time and fifty randomly selected healthy controls participated in the study. Reliability (kappa statistic) and validity of the questionnaire were tested. We used logistic regression models and binary recursive partitioning to assess the ability to distinguish among irritable bowel syndrome (IBS), functional dyspepsia and organic disease patients. Results: Mean time for questionnaire completion was 18 min. The test-retest procedure showed good agreement (kappa statistic 0.82). There were 55 patients diagnosed as having IBS, 18 with functional dyspepsia (Rome I criteria), and 38 with organic disease. Location of pain was a significant distinguishing factor, patients with functional dyspepsia having no lower abdominal pain (p …). Conclusions: This questionnaire for functional bowel disease is a valid and reliable instrument that can distinguish satisfactorily between organic and functional disease in an out-patient setting.
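The test-retest agreement above is summarized by a kappa statistic. For reference, a minimal Cohen's kappa for two sets of categorical ratings (an illustration; a real analysis would use a statistics package):

```python
# Cohen's kappa: chance-corrected agreement between two raters (or between
# test and retest). Inputs are equal-length sequences of category labels.
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # Expected agreement under independence of the two ratings.
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0.82, as reported here, is conventionally read as very good agreement; 1.0 is perfect agreement and 0 is agreement no better than chance.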

  19. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.
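Vertical-accuracy assessments of the kind listed above typically compare product elevations against surveyed ground control points, with root-mean-square error (RMSE) as the headline statistic. A minimal sketch (illustrative; the project's actual procedure is not detailed in this record):

```python
# RMSE of product elevations against ground-control elevations, both in the
# same vertical datum and units (e.g. metres).
import math

def vertical_rmse(product_z, control_z):
    residuals = [p - c for p, c in zip(product_z, control_z)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

In practice a full assessment would also report bias (mean residual) and a percentile error such as LE90, since RMSE alone hides systematic offsets.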

  20. On Line Validation Exercise (OLIVE): A Web Based Service for the Validation of Medium Resolution Land Products. Application to FAPAR Products

    Directory of Open Access Journals (Sweden)

    Marie Weiss

    2014-05-01

    The OLIVE (On Line Interactive Validation Exercise) platform is dedicated to the validation of global biophysical products such as LAI (Leaf Area Index) and FAPAR (Fraction of Absorbed Photosynthetically Active Radiation). It was developed under the framework of the CEOS (Committee on Earth Observation Satellites) Land Product Validation (LPV) sub-group. OLIVE has three main objectives: (i) to provide consistent and centralized information on the definition of the biophysical variables, as well as a description of the main available products and their performances; (ii) to provide transparency and traceability through an online validation procedure compliant with the CEOS LPV and QA4EO (Quality Assurance for Earth Observation) recommendations; and (iii) to provide a tool to benchmark new products, update product validation results and host new ground measurement sites for accuracy assessment. The functionalities and algorithms of OLIVE are described to provide full transparency of its procedures to the community. The validation process and typical results are illustrated for three FAPAR products: GEOV1 (VEGETATION sensor), MGVIo (MERIS sensor) and MODIS collection 5 FPAR. OLIVE is available on the European Space Agency CAL/VAL portal, including full documentation, validation exercise results, and product extracts.

  1. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  2. Validering av vattenkraftmodeller i ARISTO [Validation of hydropower models in ARISTO]

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master thesis was made to validate hydropower models of a turbine governor, a Kaplan turbine and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model with data from a real hydropower plant. The comparison was made to see how well the models could generate simulation results ...

  3. Evaluation of convergent and discriminant validity of the Russian version of MMPI-2: First results

    Directory of Open Access Journals (Sweden)

    Emma I. Mescheriakova

    2015-06-01

    The paper presents the results of construct validity testing for a new version of the MMPI-2 (Minnesota Multiphasic Personality Inventory), whose restandardization started in 1982 (J.N. Butcher, W.G. Dahlstrom, J.R. Graham, A. Tellegen, B. Kaemmer) and is still going on. The professional community's interest in this new version of the Inventory is determined by its advantage over the previous one in restructuring the inventory and adding new items which offer additional opportunities for psychodiagnostics and personality assessment. The construct validity testing was carried out using three up-to-date techniques, namely the Quality of Life and Satisfaction with Life questionnaire (a short version of Ritsner's instrument adapted by E.I. Rasskazova), Janoff-Bulman's World Assumptions Scale (adapted by O. Kravtsova), and the Character Strengths Assessment questionnaire developed by E. Osin based on Peterson and Seligman's Values in Action Inventory of Strengths. These psychodiagnostic techniques were selected in line with the current trends in psychology, such as its orientation to positive phenomena as well as its interpretation of subjectivity potential as the need for self-determined, self-organized, self-realized and self-controlled behavior and the ability to accomplish it. The procedure of construct validity testing involved the «norm» group respondents, with the total sample including 205 people (62% female, 32% male). It was focused on the MMPI-2 additional and expanded scales (FI, BF, FP, S and K) and six of its ten basic ones (D, Pd, Pa, Pt, Sc, Si). The results obtained confirmed the construct validity of the scales concerned, and this allows the MMPI-2 to be applied to examining one's personal potential instead of a set of questionnaires, facilitating, in turn, the personality researchers' objectives. The paper discusses the first stage of this construct validity testing, the further stage highlighting the factor …

  4. Construct validity of adolescents' self-reported big five personality traits: importance of conceptual breadth and initial validation of a short measure.

    Science.gov (United States)

    Morizot, Julien

    2014-10-01

    While there are a number of short personality trait measures that have been validated for use with adults, few are specifically validated for use with adolescents. To trust such measures, it must be demonstrated that they have adequate construct validity. According to the view of construct validity as a unifying form of validity requiring the integration of different complementary sources of information, this article reports the evaluation of content, factor, convergent, and criterion validities as well as reliability of adolescents' self-reported personality traits. Moreover, this study sought to address an inherent potential limitation of short personality trait measures, namely their limited conceptual breadth. In this study, starting with items from a known measure, after the language-level was adjusted for use with adolescents, items tapping fundamental primary traits were added to determine the impact of added conceptual breadth on the psychometric properties of the scales. The resulting new measure was named the Big Five Personality Trait Short Questionnaire (BFPTSQ). A group of expert judges considered the items to have adequate content validity. Using data from a community sample of early adolescents, the results confirmed the factor validity of the Big Five structure in adolescence as well as its measurement invariance across genders. More important, the added items did improve the convergent and criterion validities of the scales, but did not negatively affect their reliability. This study supports the construct validity of adolescents' self-reported personality traits and points to the importance of conceptual breadth in short personality measures. © The Author(s) 2014.

  5. Validity in Qualitative Evaluation

    OpenAIRE

    Vasco Lub

    2015-01-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of con...

  6. Empirical Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA) Task 34/43: Testing and Validation of Building Energy Simulation Tools experts group.

  7. The development of a self-administered dementia checklist: the examination of concurrent validity and discriminant validity.

    Science.gov (United States)

    Miyamae, Fumiko; Ura, Chiaki; Sakuma, Naoko; Niikawa, Hirotoshi; Inagaki, Hiroki; Ijuin, Mutsuo; Okamura, Tsuyoshi; Sugiyama, Mika; Awata, Shuichi

    2016-01-01

    The present study aims to develop a self-administered dementia checklist to enable community-residing older adults to realize their declining functions and start using necessary services. A previous study confirmed the factorial validity and internal reliability of the checklist. The present study examined its concurrent validity and discriminant validity. The authors conducted a 3-step study (a self-administered survey including the checklist, interviews by nurses, and interviews by doctors and psychologists) of 7,682 community-residing individuals who were over 65 years of age. The authors calculated Spearman's correlation coefficients between the scores of the checklist and the results of a psychological test to examine the concurrent validity. They also compared the average total scores of the checklist between groups with different Clinical Dementia Rating (CDR) scores to examine discriminant validity and conducted a receiver operating characteristic analysis to examine the discriminative power for dementia. The authors analyzed the data of 131 respondents who completed all 3 steps. The checklist scores were significantly correlated with the respondents' Mini-Mental State Examination and Frontal Assessment Battery scores. The checklist also significantly discriminated the patients with dementia (CDR = 1+) from those without dementia (CDR = 0 or 0.5). The optimal cut-off point for the two groups was 17/18 (sensitivity, 72.0%; specificity, 69.2%; positive predictive value, 69.2%; negative predictive value, 72.0%). This study confirmed the concurrent validity and discriminant validity of the self-administered dementia checklist. However, due to its insufficient discriminative power as a screening tool for older people with declining cognitive functions, the checklist is only recommended as an educational and public awareness tool.
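The cut-off analysis above reports sensitivity, specificity, and positive and negative predictive values; all four follow directly from a 2x2 confusion matrix. The counts in the example below are hypothetical, chosen only because they reproduce the reported percentages, and are not the study's actual cell counts.

```python
# Standard screening-test metrics from a 2x2 confusion matrix.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # detected cases / all true cases
        "specificity": tn / (tn + fp),  # correct negatives / all non-cases
        "ppv": tp / (tp + fp),          # chance a positive screen is a true case
        "npv": tn / (tn + fn),          # chance a negative screen is truly negative
    }

# Hypothetical counts consistent with the reported 72.0% / 69.2% figures:
m = diagnostic_metrics(tp=18, fp=8, tn=18, fn=7)
```

Note the trade-off the abstract draws: a 17/18 cut-off balances sensitivity and specificity, but at roughly 70% each the checklist misclassifies too many people to serve as a stand-alone screening tool, which is why the authors recommend it only for education and public awareness.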

  8. Congruent Validity of the Rathus Assertiveness Schedule.

    Science.gov (United States)

    Harris, Thomas L.; Brown, Nina W.

    1979-01-01

    The validity of the Rathus Assertiveness Schedule (RAS) was investigated by correlating it with the six Class I scales of the California Psychological Inventory on a sample of undergraduate students. Results supported the validity of the RAS. (JKS)

  9. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL during 1975 to 1977. Individual laboratories used their models to calculate concentrations over daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations.

  10. Validity in Qualitative Evaluation

    Directory of Open Access Journals (Sweden)

    Vasco Lub

    2015-12-01

    This article provides a discussion of the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often a subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and by Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of connecting them with aspects of evaluation in social policy. It argues that different purposes of qualitative evaluations can be linked with different scientific paradigms and perspectives, thus transcending unproductive paradigmatic divisions as well as providing a flexible yet rigorous validity framework for researchers and reviewers of qualitative evaluations.

  11. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    We present a novel two-stage probabilistic learning task that examines participants' ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information, and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency of the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher-validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.
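The task structure described above can be sketched in a few lines. In this hypothetical simulation (the parameters and learner are illustrative, not the study's design), one of three cues predicts the stage-two outcome with probability `validity` while the others are at chance; a learner that simply tracks each cue's empirical accuracy will, given enough trials and sufficiently high validity, come to prefer the valid cue.

```python
# Toy simulation of the two-stage cue-validity task; cue 0 is the valid cue.
import random

def run_task(validity=0.9, trials=3000, seed=1):
    """Return the index of the cue with the highest observed accuracy."""
    rng = random.Random(seed)
    correct = [0, 0, 0]
    sampled = [0, 0, 0]
    for _ in range(trials):
        cue = rng.randrange(3)                    # explore cues uniformly
        sampled[cue] += 1
        p_correct = validity if cue == 0 else 0.5  # invalid cues are at chance
        if rng.random() < p_correct:
            correct[cue] += 1
    accuracy = [c / max(s, 1) for c, s in zip(correct, sampled)]
    return accuracy.index(max(accuracy))
```

Lowering `validity` toward 0.5 shrinks the gap between the valid and invalid cues, which mirrors the finding that learning degrades at weaker levels of probabilistic feedback.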

  12. Student mathematical imagination instruments: construction, cultural adaptation and validity

    Science.gov (United States)

    Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.

    2018-03-01

    Imagination has an important role as the center of sensorimotor activity of the students. The purpose of this research is to construct an instrument of students' mathematical imagination in understanding the concept of algebraic expressions. The researchers performed validation using questionnaire and test techniques, and data analysis using a descriptive method. Stages performed include: 1) construction of the embodiment of the imagination; 2) determining the learning style questionnaire; 3) constructing the instruments; 4) translating to Indonesian, as well as adapting the learning style questionnaire content to student culture; 5) performing content validation. The results state that the constructed instrument is valid by content validation and empirical validation, so that it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics material experts. Empirical validation was done through a readability test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire test (86 students) was analyzed using a point-biserial correlation technique, resulting in 16 valid items, with a KR-20 reliability test giving medium reliability. The test instrument trial (32 students) found all items valid, with a KR-21 reliability of 0.62.
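The reliability figures above use the Kuder-Richardson formulas for dichotomous (0/1 scored) items. A minimal sketch of both (illustrative; rows are respondents, columns are items, and total-score variance must be nonzero):

```python
# KR-20 uses each item's pass rate p_j; KR-21 approximates it using only the
# mean total score, so KR-21 <= KR-20 when item difficulties vary.

def kr20(data):
    k = len(data[0])                       # number of items
    n = len(data)                          # number of respondents
    totals = [sum(row) for row in data]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in data) / n   # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

def kr21(data):
    k = len(data[0])
    n = len(data)
    totals = [sum(row) for row in data]
    m = sum(totals) / n
    var_t = sum((t - m) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var_t))
```

A KR-21 of 0.62, as reported for the 32-student trial, is conventionally read as moderate internal consistency, adequate for group-level research but low for decisions about individual students.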

  13. An information architecture for validating courseware

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with the courseware specif...

  14. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components introduces questions of validation and acceptance of the analysis results. We may ask ourselves, "How valid are the answers?" These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter: the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the necessary information to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that rather than asking how valid, or correct, the analytical predictions are, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  15. Lesson 6: Signature Validation

    Science.gov (United States)

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  16. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  17. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  18. Regulatory perspectives on human factors validation

    International Nuclear Information System (INIS)

    Harrison, F.; Staples, L.

    2001-01-01

    Validation is an important avenue for controlling the genesis of human error, and thus managing loss, in a human-machine system. Since there are many ways in which error may intrude upon system operation, it is necessary to consider the performance-shaping factors that could introduce error and compromise system effectiveness. Validation works to this end by examining, through objective testing and measurement, the newly developed system, procedure or staffing level, in order to identify and eliminate those factors which may negatively influence human performance. It is essential that validation be done in a high-fidelity setting, in an objective and systematic manner, using appropriate measures, if meaningful results are to be obtained. In addition, inclusion of validation work in any design process can be seen as contributing to a good safety culture, since such activity allows licensees to eliminate elements which may negatively impact on human behaviour. (author)

  19. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  20. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on radioactive high-level waste disposal and is generally understood to relate to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable and unsuitable for setting goals for safety analyses. In a second class, validation is associated with split-sample or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculation. The paper proposes either abandoning the use of this term or agreeing on a definition common to all. (J.S.). 29 refs

  1. Reconceptualising the external validity of discrete choice experiments.

    Science.gov (United States)

    Lancsar, Emily; Swait, Joffre

    2014-10-01

    External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.

  2. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

    Full Text Available The article presents a review of English-language publications concerning the history and current state of differential psychological assessment of the validity of testimonies produced by child and adolescent victims of crime. The topicality of the problem in Russia is high due to the tendency of Russian specialists to use methodical means and instruments developed abroad for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA) by means of Criteria-Based Content Analysis (CBCA) and a Validity Checklist is described. The results of laboratory and field studies of the validity of CBCA criteria for child and adult witnesses are discussed. The data display a good differentiating capacity of the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. New promising developments in methods for differentiating witness statements based on real versus fictional experience are noted. The conclusion is drawn that empirical studies and special work on the adaptation and development of new approaches should precede their implementation in Russian criminal investigation and forensic assessment practice.

  3. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our current definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  4. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

    Full Text Available Both qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The norms of rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data are plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to take responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers' judgments to the investigators themselves. There are different opinions on validity: some suggest that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  5. Test of Gross Motor Development : Expert Validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-12-01

    Full Text Available The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children's level of motor development. The objective of this study was to translate the TGMD-2 and to verify the clarity and pertinence of its items by experts, as well as the confirmatory factorial validity and the internal consistence by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-Fit Index = 0.95; Adjusted Goodness-of-Fit Index = 0.92; Tucker-Lewis Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  6. Test of Gross Motor Development: expert validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-01-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children's level of motor development. The objective of this study was to translate and verify the clarity and pertinence of the TGMD-2 items by experts and the confirmatory factorial validity and the internal consistence by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-Fit Index = 0.95; Adjusted Goodness-of-Fit Index = 0.92; Tucker-Lewis Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  7. Solution Validation for a Double Façade Prototype

    Directory of Open Access Journals (Sweden)

    Pau Fonseca i Casas

    2017-12-01

    Full Text Available A Solution Validation involves comparing the data obtained from a system implemented following the model recommendations with the results of the model itself. This paper presents a Solution Validation performed with the aim of certifying that a set of computer-optimized designs for a double façade is consistent with reality. To validate the results obtained through simulation models, based on dynamic thermal calculation and using Computational Fluid Dynamics techniques, a comparison with data obtained by monitoring a real implemented prototype has been carried out. The new validated model can be used to describe the system's thermal behavior in different climatic zones without having to build a new prototype. The good performance of the proposed double façade solution is confirmed, since the validation shows a considerable energy saving while preserving, and even improving, interior comfort. This work presents all the processes of the Solution Validation, describes some of the problems faced, and represents an example of a kind of validation that is often not considered in a simulation project.

  8. Construct Validity of Neuropsychological Tests in Schizophrenia.

    Science.gov (United States)

    Allen, Daniel N.; Aldarondo, Felito; Goldstein, Gerald; Huegel, Stephen G.; Gilbertson, Mark; van Kammen, Daniel P.

    1998-01-01

    The construct validity of neuropsychological tests in patients with schizophrenia was studied with 39 patients who were evaluated with a battery of six tests assessing attention, memory, and abstract reasoning abilities. Results support the construct validity of the neuropsychological tests in patients with schizophrenia. (SLD)

  9. SHIELD verification and validation report

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    This document outlines the verification and validation effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system code. Along with its predecessors, SHIELD has been in use at the Savannah River Site (SRS) for more than ten years. During this time the code has been extensively tested and a variety of validation documents have been issued. The primary function of this report is to specify the features and capabilities for which SHIELD is to be considered validated, and to reference the documents that establish the validation

  10. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    Science.gov (United States)

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D (RATIO)) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
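
    The decomposition described in this record can be illustrated with the unsteady Bernoulli equation, dP = 0.5*rho*(v2^2 - v1^2) + M*dv/dt: given a measured transmitral gradient and the Doppler-derived convective term, the inertial component is the remainder. The sketch below is ours, not the study's code; the density value and unit constants are standard physical assumptions.

```python
import numpy as np

RHO = 1060.0          # assumed blood density, kg/m^3
PA_PER_MMHG = 133.322  # unit conversion, Pa per mmHg

def inertial_component(gradient_mmhg, v_prox, v_dist):
    """Estimate the inertial term M*dv/dt (in mmHg) of a transmitral
    pressure gradient by subtracting the convective Bernoulli term:
        dP = 0.5*rho*(v_dist**2 - v_prox**2) + M*dv/dt
    Velocities are in m/s, the measured gradient in mmHg."""
    convective_pa = 0.5 * RHO * (v_dist**2 - v_prox**2)
    convective_mmhg = convective_pa / PA_PER_MMHG
    return gradient_mmhg - convective_mmhg
```

    For example, a 5 mmHg catheter gradient with Doppler velocities of 0.2 and 0.8 m/s attributes roughly half of the gradient to inertia, consistent with the record's finding that TM gradients are mainly inertial.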

  11. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
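
    As a rough illustration of the fragment-matching idea (not the authors' exact algorithm, which additionally uses a Harris-SIFT filter, fragment weighting, and displacement constraints), a normalized color histogram per fragment compared with a Bhattacharyya similarity can be sketched as follows; all function names are ours.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Normalized intensity histogram of an image fragment (2-D array)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms; 1.0 means identical."""
    return float(np.sum(np.sqrt(h1 * h2)))

def best_fragment_match(template_hist, candidate_patches):
    """Return (index, score) of the candidate patch whose histogram
    best matches the template fragment's histogram."""
    scores = [bhattacharyya(template_hist, color_histogram(p))
              for p in candidate_patches]
    return int(np.argmax(scores)), max(scores)
```

    In a full tracker, a score like this would be computed per valid fragment and combined with the displacement constraint before updating the template.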

  12. Design and validation of a comprehensive fecal incontinence questionnaire.

    Science.gov (United States)

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and a dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability testing were undertaken. Construct validity comprised factor analysis and internal consistency of the quality of life scale. The validity of known groups was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently, supporting known-groups validity. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.

  13. Assessment of validity with polytrauma Veteran populations.

    Science.gov (United States)

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  14. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    Science.gov (United States)

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  15. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
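
    The permutation-testing step described in this record can be sketched in a few lines. The snippet below is a generic label-permutation test for an AUC-type performance score (numpy only, synthetic inputs), not the authors' LASSO/NTCP pipeline; function names and the permutation count are ours.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    Assumes continuous scores (no ties) and binary labels in {0, 1}."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def permutation_pvalue(scores, labels, n_perm=2000, seed=0):
    """Compare the observed AUC against its distribution under random
    label permutations; returns (observed AUC, one-sided p-value)."""
    rng = np.random.default_rng(seed)
    observed = auc(scores, labels)
    null = [auc(scores, rng.permutation(labels)) for _ in range(n_perm)]
    exceed = sum(s >= observed for s in null)
    return observed, (1 + exceed) / (n_perm + 1)
```

    In the study's setting the "score" would come from a cross-validated NTCP model rather than a fixed vector, so the permutation loop would wrap the whole model-fitting procedure.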

  16. Construct validity of the Individual Work Performance Questionnaire.

    OpenAIRE

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; Vet, H.C.W. de; Beek, A.J. van der

    2014-01-01

    Objective: To examine the construct validity of the Individual Work Performance Questionnaire (IWPQ). Methods: A total of 1424 Dutch workers from three occupational sectors (blue, pink, and white collar) participated in the study. First, IWPQ scores were correlated with related constructs (convergent validity). Second, differences between known groups were tested (discriminative validity). Results: First, IWPQ scores correlated weakly to moderately with absolute and relative presenteeism, and...

  17. Internal Validity: A Must in Research Designs

    Science.gov (United States)

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  18. The Perceived Leadership Communication Questionnaire (PLCQ): Development and Validation.

    Science.gov (United States)

    Schneider, Frank M; Maier, Michaela; Lovrekovic, Sara; Retzbach, Andrea

    2015-01-01

    The Perceived Leadership Communication Questionnaire (PLCQ) is a short, reliable, and valid instrument for measuring leadership communication from both perspectives of the leader and the follower. Drawing on a communication-based approach to leadership and following a theoretical framework of interpersonal communication processes in organizations, this article describes the development and validation of a one-dimensional 6-item scale in four studies (total N = 604). Results from Study 1 and 2 provide evidence for the internal consistency and factorial validity of the PLCQ's self-rating version (PLCQ-SR), a version for measuring how leaders perceive their own communication with their followers. Results from Study 3 and 4 show internal consistency, construct validity, and criterion validity of the PLCQ's other-rating version (PLCQ-OR), a version for measuring how followers perceive the communication of their leaders. Cronbach's α had an average of .80 over the four studies. All confirmatory factor analyses yielded good to excellent model fit indices. Convergent validity was established by average positive correlations of .69 with subdimensions of transformational leadership and leader-member exchange scales. Furthermore, nonsignificant correlations with socially desirable responding indicated discriminant validity. Last, criterion validity was supported by a moderately positive correlation with job satisfaction (r = .31).
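
    Cronbach's α, the internal-consistency statistic reported in this and several neighboring records, is straightforward to compute from a respondents-by-items score matrix: α = k/(k-1) * (1 - Σ item variances / variance of total scores). A minimal sketch (function and variable names are ours):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.
    Uses sample variances (ddof=1) for both items and total scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

    Perfectly parallel items yield α = 1.0; values around .80, as reported here, indicate good internal consistency for a short scale.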

  19. Isotopic and criticality validation for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Fuentes, E.; Lancaster, D.; Rahimi, M.

    1997-01-01

    The techniques used for actinide-only burnup credit isotopic validation and criticality validation are presented and discussed. Trending analyses have been incorporated into both methodologies, requiring biases and uncertainties to be treated as a function of the trending parameters. The isotopic validation is demonstrated using the SAS2H module of SCALE 4.2, with the 27BURNUPLIB cross section library; correction factors are presented for each of the actinides in the burnup credit methodology. For the criticality validation, the demonstration is performed with the CSAS module of SCALE 4.2 and the 27BURNUPLIB, resulting in a validated upper safety limit
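
    The trending analyses mentioned in this record treat biases and uncertainties as functions of a trending parameter (such as enrichment or burnup). The sketch below fits a linear bias trend and subtracts a simple uncertainty band to give a conservative limit; it is a toy illustration under our own assumptions (names, the margin value, and the residual-based band), not the statistical tolerance-limit treatment used in actual licensing analyses.

```python
import numpy as np

def trended_bias(param, keff, confidence_margin=0.01):
    """Fit calculated k_eff bias as a linear function of a trending
    parameter and return a callable giving a conservative (lowered)
    bias estimate at any parameter value.

    The uncertainty band here (residual std + fixed margin) is a
    simplified placeholder for a proper tolerance-band analysis."""
    slope, intercept = np.polyfit(param, keff, 1)
    resid = keff - (slope * param + intercept)
    uncertainty = resid.std(ddof=2) + confidence_margin
    def bias_limit(x):
        return slope * x + intercept - uncertainty
    return bias_limit
```

    Evaluating the returned function across the validated parameter range gives a trend-dependent upper safety limit instead of a single flat bias.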

  20. The Treatment Validity of Autism Screening Instruments

    Science.gov (United States)

    Livanis, Andrew; Mouzakitis, Angela

    2010-01-01

    Treatment validity is a frequently neglected topic of screening instruments used to identify autism spectrum disorders. Treatment validity, however, should represent an important aspect of these instruments to link the resulting data to the selection of interventions as well as make decisions about treatment length and intensity. Research…

  1. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  2. Cultural adaptation and validation of an instrument on barriers for the use of research results.

    Science.gov (United States)

    Ferreira, Maria Beatriz Guimarães; Haas, Vanderlei José; Dantas, Rosana Aparecida Spadoti; Felix, Márcia Marques Dos Santos; Galvão, Cristina Maria

    2017-03-02

    To culturally adapt The Barriers to Research Utilization Scale and to analyze the metric validity and reliability properties of its Brazilian Portuguese version. Methodological research was conducted by means of the cultural adaptation process (translation and back-translation), face and content validity, construct validity (dimensionality and known groups) and reliability analysis (internal consistency and test-retest). The sample consisted of 335 nurses, of whom 43 participated in the retest phase. The validity of the adapted version of the instrument was confirmed. The scale investigates the barriers to the use of research results in clinical practice. Confirmatory factorial analysis demonstrated that the Brazilian Portuguese version of the instrument is adequately adjusted to the dimensional structure the scale authors originally proposed. Statistically significant differences were observed among the nurses holding a Master's or Doctoral degree, with characteristics favorable to Evidence-Based Practice, and working at an institution with an organizational culture directed toward this approach. The reliability showed a strong correlation (r ranging between 0.77 and 0.84, p < 0.001) and the internal consistency was adequate (Cronbach's alpha ranging between 0.77 and 0.82). The Brazilian Portuguese version of The Barriers Scale proved valid and reliable in the group studied.

  3. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    International Nuclear Information System (INIS)

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, the requirements of measuring processes impose complete control and mastery of the operating process and of the associated quality management system. The aim of this (collective) book is to guide the analyst through the entire measurement validation procedure and to help guarantee mastery of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measuring, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability, and data covariance (with the practical determination of the accuracy profile). The most recent terminology is used in the book, and numerous examples and illustrations are given to promote better understanding and to help in the elaboration of method validation documents.

  4. Verification and validation of RADMODL Version 1.0

    International Nuclear Information System (INIS)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident

  5. Verification and validation of RADMODL Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  6. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    Full Text Available The application of games for behavioral change has seen a surge in popularity, but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value, whereas most quantitative findings from randomized controlled trials (RCTs) are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we apply this model to develop guidelines for setting up validation methods for serious games. In this way, we offer game designers and researchers practical guidance on developing tailor-made validation methods.

  7. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators used for surgical skills training were validated at the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. In this way, adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  8. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and in-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of PWR, BWR, CANDU and VVER reactors. It also provides an overview of ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test, along with a test description.

  9. Satisfaction with information provided to Danish cancer patients: validation and survey results.

    Science.gov (United States)

    Ross, Lone; Petersen, Morten Aagaard; Johnsen, Anna Thit; Lundstrøm, Louise Hyldborg; Groenvold, Mogens

    2013-11-01

    To validate five items (CPWQ-inf) regarding satisfaction with information provided to cancer patients from health care staff, assess the prevalence of dissatisfaction with this information, and identify factors predicting dissatisfaction. The questionnaire was validated by patient-observer agreement and cognitive interviews. The prevalence of dissatisfaction was assessed in a cross-sectional sample of all cancer patients in contact with hospitals during the past year in three Danish counties. The validation showed that the CPWQ performed well. Between 3 and 23% of the 1490 participating patients were dissatisfied with each of the measured aspects of information. The highest level of dissatisfaction was reported regarding the guidance, support and help provided when the diagnosis was given. Younger patients were consistently more dissatisfied than older patients. The brief CPWQ performs well for survey purposes. The survey depicts the heterogeneous patient population encountered by hospital staff and showed that younger patients probably had higher expectations or a higher need for information and that those with more severe diagnoses/prognoses require extra care in providing information. Four brief questions can efficiently assess information needs. With increasing demands for information, a wide range of innovative initiatives is needed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators used for surgical skills training were validated at the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity.
Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation.

  11. The validity of the 4-Skills Scan: A double validation study.

    Science.gov (United States)

    van Kernebeek, W G; de Kroon, M L A; Savelsbergh, G J P; Toussaint, H M

    2018-06-01

    Adequate gross motor skills are an essential aspect of a child's healthy development. Where physical education (PE) is part of the primary school curriculum, a strong curriculum-based emphasis on evaluation and support of motor skill development in PE is apparent. Monitoring motor development is then a task for the PE teacher. In order to fulfil this task, teachers need adequate tools. The 4-Skills Scan is a quick and easily manageable gross motor skill instrument; however, its validity has never been assessed. Therefore, the purpose of this study is to assess the construct and concurrent validity of both 4-Skills Scans (version 2007 and version 2015). A total of 212 primary school children (6-12 years old) were asked to participate in both versions of the 4-Skills Scan. To assess construct validity, children completed an obstacle course that was video-recorded for observation by an expert panel. For concurrent validity, a comparison was made with the MABC-2, by calculating Pearson correlations. Multivariable linear regression analyses were performed to determine the contribution of each subscale to the construct of gross motor skills, according to the MABC-2 and the expert panel. Correlations between the 4-Skills Scans and expert valuations were moderate, with coefficients of .47 (version 2007) and .46 (version 2015). Correlations between the 4-Skills Scans and the MABC-2 (gross) were moderate (.56) for version 2007 and high (.64) for version 2015. It is concluded that both versions of the 4-Skills Scan are satisfactorily valid instruments for assessing gross motor skills during PE lessons. This article is protected by copyright. All rights reserved.
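The concurrent-validity coefficients reported above are Pearson correlations between two instruments' scores for the same children. As an illustration of how such a coefficient is computed, here is a minimal sketch; the score arrays below are invented for illustration and are not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented scores: 4-Skills Scan vs. MABC-2 gross motor score for eight children
scan = [12, 15, 9, 18, 11, 14, 16, 10]
mabc = [30, 34, 22, 40, 28, 33, 37, 25]
r = pearson_r(scan, mabc)  # correlation coefficient, always in [-1, 1]
```

Library routines such as `scipy.stats.pearsonr` additionally return a p-value; the hand-rolled version above is only meant to make the formula explicit.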

  12. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    Science.gov (United States)

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, at or near chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator of malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why performance validity testing (PVT) may be a better term than SVT are reviewed. Advances in neuroimaging techniques may be key in better understanding the meaning of border-zone SVT failure. The review demonstrates the problems of interpreting established cut-scores too rigidly. A better understanding is needed of how certain types of neurological, neuropsychiatric and/or even test conditions may affect SVT performance.

  13. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

    James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  14. Planck intermediate results: IV. the XMM-Newton validation programme for new Planck galaxy clusters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Delabrouille, J.; Ganga, K.

    2013-01-01

    We present the final results from the XMM-Newton validation follow-up of new Planck galaxy cluster candidates. We observed 15 new candidates, detected with signal-to-noise ratios between 4.0 and 6.1 in the 15.5-month nominal Planck survey. The candidates were selected using ancillary data flags d...

  15. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  16. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
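The recognition validity that this record turns on is defined as the proportion of pairs, among those in which exactly one object is recognized, where the recognized object has the higher criterion value. A small sketch with invented objects and criterion values (not the experiments' materials):

```python
from itertools import combinations

def recognition_validity(objects):
    """objects: dict mapping name -> (recognized: bool, criterion_value: float).
    Returns the fraction of mixed pairs (exactly one object recognized)
    in which the recognized object has the higher criterion value."""
    correct = total = 0
    for (_, (rec_a, val_a)), (_, (rec_b, val_b)) in combinations(objects.items(), 2):
        if rec_a == rec_b:
            continue  # only pairs with exactly one recognized object count
        total += 1
        recognized_val = val_a if rec_a else val_b
        unrecognized_val = val_b if rec_a else val_a
        if recognized_val > unrecognized_val:
            correct += 1
    return correct / total if total else float("nan")

# Invented city-size-style domain: recognized objects tend to have larger values
cities = {"A": (True, 3.4), "B": (False, 0.5), "C": (True, 1.2), "D": (False, 2.0)}
alpha = recognition_validity(cities)  # 3 of 4 mixed pairs correct -> 0.75
```

A validity well above 0.5 is what makes the heuristic useful; at 0.5 recognition carries no information about the criterion.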

  17. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Full Text Available Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  18. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)
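The dose-mapping step described above reduces, once doses have been measured at the mapped positions, to checking that the minimum dose reaches the required dose (e.g. the sterilization dose) and the maximum stays below the product's damage limit. A minimal sketch with invented numbers:

```python
def dose_mapping_ok(measured_doses_kGy, required_min_kGy, allowed_max_kGy):
    """True if every mapped position received a dose inside the specified window."""
    dmin, dmax = min(measured_doses_kGy), max(measured_doses_kGy)
    return required_min_kGy <= dmin and dmax <= allowed_max_kGy

# Invented dose map (kGy) for one product load, with a 25 kGy sterilization
# minimum and a 40 kGy material-degradation limit
ok = dose_mapping_ok([26.1, 27.3, 29.8, 31.0], 25.0, 40.0)  # → True
```

In practice the dose window, dosimeter placement and acceptance criteria come from the applicable standard (e.g. ISO 11137 for radiation sterilization); the check above only illustrates the min/max logic.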

  19. CIPS Validation Data Plan

    International Nuclear Information System (INIS)

    Dinh, Nam

    2012-01-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  20. Detailed validation in PCDDF analysis. ISO17025 data from Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Kernick Carvalhaes, G.; Azevedo, J.A.; Azevedo, G.; Machado, M.; Brooks, P. [Analytical Solutions, Rio de Janeiro (Brazil)

    2004-09-15

    When defining method validation we can refer to ISO 8402, according to which 'validation' is the 'confirmation by examination and supplying of objective evidence that the particular requirements for a specific intended use are fulfilled'. This concept is extremely important to guarantee the quality of results. Method validation is based on the combined use of different validation procedures, but in this selection we have to analyze cost-benefit conditions. We must focus on the critical elements, and these critical factors must be the essential elements for providing good properties and results. If we have a solid validation methodology and a study of the sources of uncertainty of our analytical method, we can generate results with confidence and veracity. Considering these two topics, method validation and uncertainty calculation, we found that there are very few articles and papers on these subjects, and it is even more difficult to find such materials on dioxins and furans. This short paper describes a validation and uncertainty calculation methodology using traditional studies with a few adaptations, and it presents a new idea: the recovery study as a source of uncertainty.

  1. Contextual Validity in Hybrid Logic

    DEFF Research Database (Denmark)

    Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin

    2013-01-01

    interpretations. Moreover, such indexicals give rise to a special kind of validity—contextual validity—that interacts with ordinary logical validity in interesting and often unexpected ways. In this paper we model these interactions by combining standard techniques from hybrid logic with insights from the work of Hans Kamp and David Kaplan. We introduce a simple proof rule, which we call the Kamp Rule, and first we show that it is all we need to take us from logical validities involving now to contextual validities involving now too. We then go on to show that this deductive bridge is strong enough to carry us to contextual validities involving yesterday, today and tomorrow as well.

  2. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  3. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results are still valid with succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments also a comparison to experimental data is carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results show also a very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. 
Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing procedure.
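The regression-testing procedure described in this record (the same input deck run with two successive code versions, then compared on physical parameters deemed characteristic for the test-case) can be sketched in a few lines. The parameter names, values and tolerance below are invented for illustration; they are not ASTEC's actual interface:

```python
def regression_check(old_results, new_results, rel_tol=0.02):
    """Compare characteristic parameters from two code versions.
    Returns the names of parameters whose relative deviation exceeds rel_tol."""
    deviations = []
    for name, old_val in old_results.items():
        new_val = new_results[name]
        denom = abs(old_val) if old_val != 0 else 1.0
        if abs(new_val - old_val) / denom > rel_tol:
            deviations.append(name)
    return deviations

# Invented characteristic parameters for one test-case run with two versions
v1 = {"peak_pressure_bar": 2.45, "peak_temp_K": 391.0, "h2_mass_kg": 12.1}
v2 = {"peak_pressure_bar": 2.47, "peak_temp_K": 390.5, "h2_mass_kg": 13.0}
flagged = regression_check(v1, v2)  # only h2_mass_kg deviates by more than 2%
```

A flagged parameter does not by itself say which version is wrong; as in the record, the comparison to experimental data is what resolves that.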

  4. CFD validation experiments for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.

  5. Human Factors methods concerning integrated validation of nuclear power plant control rooms; Metodutveckling för integrerad validering (Method development for integrated validation)

    Energy Technology Data Exchange (ETDEWEB)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was held with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed were, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report raises a number of questions for discussion concerning the validation process. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations for the responsibility of external participants in the validation process. The authors propose 12 measures for taking care of the identified problems.

  6. MARS Validation Plan and Status

    International Nuclear Information System (INIS)

    Ahn, Seung-hoon; Cho, Yong-jin

    2008-01-01

    The KINS Reactor Thermal-hydraulic Analysis System (KINS-RETAS) under development is directed toward a realistic analysis approach of best-estimate (BE) codes and realistic assumptions. In this system, MARS is pivotal in providing the BE thermal-hydraulic (T-H) response of the core and reactor coolant system to various operational transients and accident conditions. As required for other BE codes, qualification is essential to ensure reliable and reasonable accuracy for a targeted MARS application. Validation is a key element of code qualification, and determines the capability of a computer code in predicting the major phenomena expected to occur. The MARS validation was made by its developer KAERI, on the basic premise that its backbone code RELAP5/MOD3.2 is well qualified against analytical solutions and test or operational data. A screening was made to select the test data for MARS validation; some models transplanted from RELAP5, if already validated and found to be acceptable, were screened out from assessment. This seems reasonable, but does not demonstrate whether code adequacy complies with the software QA guidelines. In particular, there may be much difficulty in validating life-cycle products such as code updates or modifications. This paper presents the plan for MARS validation and the current implementation status.

  7. Validation studies of nursing diagnoses in neonatology

    Directory of Open Access Journals (Sweden)

    Pavlína Rabasová

    2016-03-01

    Aim: The objective of the review was the analysis of Czech and foreign literature sources and professional periodicals to obtain a relevant, comprehensive overview of validation studies of nursing diagnoses in neonatology. Design: Review. Methods: The selection criterion was studies concerning the validation of nursing diagnoses in neonatology. To obtain data from relevant sources, the licensed professional databases EBSCO, Web of Science and Scopus were utilized. The search criteria were: date of publication - unlimited; academic periodicals - full text; peer-reviewed periodicals; search language - English, Czech and Slovak. Results: A total of 788 studies were found. Only 5 studies were eligible for content analysis, dealing specifically with validation of nursing diagnoses in neonatology. The analysis of the retrieved studies suggests that authors are most often concerned with identifying the defining characteristics of nursing diagnoses applicable to both the mother (or parents) and the newborn. The diagnoses were validated in the domains Role Relationship; Coping/Stress Tolerance; Activity/Rest; and Elimination and Exchange, and covered dysfunctional physical needs as well as psychosocial and spiritual needs. The diagnoses were as follows: Parental role conflict (00064); Impaired parenting (00056); Grieving (00136); Ineffective breathing pattern (00032); Impaired gas exchange (00030); and Impaired spontaneous ventilation (00033). Conclusion: Validation studies enable effective planning of interventions with measurable results and support clinical nursing practice.

  8. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Despite a possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming at increasing their strength and reinforcing confidence in results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis to a dataset resulting from the aggregation of data from a quasi-systematic literature review combined with ad hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerning SBS in SE, according to Cook and Campbell's categories. In addition, 12 verification and validation procedures applicable to SBS were analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies add different threats to validity when compared with traditional studies. They are not well observed, and therefore it is not easy to identify and mitigate all of them without explicit guidance, such as that depicted in this paper.

  9. Validation of NAA Method for Urban Particulate Matter

    International Nuclear Information System (INIS)

    Woro Yatu Niken Syahfitri; Muhayatun; Diah Dwiana Lestiani; Natalia Adventini

    2009-01-01

    Nuclear analytical techniques have been applied in many countries for determination of environmental pollutants. NAA (neutron activation analysis) is a nuclear analytical technique with low detection limits, high specificity, high precision and accuracy for the large majority of naturally occurring elements, the ability of non-destructive and simultaneous multi-element determination, and the capacity to handle small sample sizes (< 1 mg). To ensure the quality and reliability of the method, validation needs to be done. A standard reference material, SRM NIST 1648 Urban Particulate Matter, has been used to validate the NAA method, with accuracy and precision tests used as validation parameters. The particulate matter was validated for 18 elements: Ti, I, V, Br, Mn, Na, K, Cl, Cu, Al, As, Fe, Co, Zn, Ag, La, Cr, and Sm. The results showed that the percent relative standard deviation of the measured elemental concentrations ranged from 2 to 14.8% for most of the elements analyzed, whereas the Horrat values were in the range 0.3-1.3. Accuracy test results showed that the relative bias ranged from -11.1 to 3.6%. Based on these validation results, it can be stated that the NAA method is reliable for characterization of particulate matter and other samples of similar matrix to support air quality monitoring. (author)
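
The two acceptance statistics used in this record, the relative standard deviation and the Horwitz ratio (Horrat), are simple to compute. The sketch below is illustrative only: the replicate values and the element are hypothetical, not taken from the SRM data.

```python
import math

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

def horrat(observed_rsd_percent, mass_fraction):
    """Horwitz ratio: observed RSD over the Horwitz-predicted RSD.

    PRSD(%) = 2 ** (1 - 0.5 * log10(C)), where C is a dimensionless
    mass fraction (e.g. 1 mg/kg -> 1e-6).
    """
    predicted = 2 ** (1 - 0.5 * math.log10(mass_fraction))
    return observed_rsd_percent / predicted

# Hypothetical replicate Zn concentrations (mg/kg) from an SRM run:
zn = [4740, 4620, 4810, 4700]
r = rsd_percent(zn)
h = horrat(r, 0.00472)  # mean Zn content as mass fraction (~0.47%)
```

A Horrat between roughly 0.3 and 1.3, as reported above, indicates precision consistent with the Horwitz expectation.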

  10. Results and validity of renal blood flow measurements using Xenon 133

    International Nuclear Information System (INIS)

    Serres, P.; Danet, B.; Guiraud, R.; Durand, D.; Ader, J.L.

    1975-01-01

    The renal blood flow was measured by external recording of the xenon-133 excretion curve. The study involved 45 patients with permanent high blood pressure and 7 transplant patients. The validity of the method was checked on 10 dogs. From the results it seems that the cortical blood flow, its fraction and the mean flow rate are the parameters most representative of renal haemodynamics, from which the repercussions of blood pressure on kidney vascularisation may be established. Experiments are in progress on animals to check the compartment concept by comparing injections into the renal artery and into various kidney tissues in situ.

  11. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs) is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine, based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  12. Reliability and validity in a nutshell.

    Science.gov (United States)

    Bannigan, Katrina; Watson, Roger

    2009-12-01

    To explore and explain the different concepts of reliability and validity as they relate to measurement instruments in social science and health care. There are different concepts contained in the terms reliability and validity, and these are often explained poorly, with frequent confusion between them. To develop some clarity about reliability and validity, a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability contains the concepts of internal consistency, stability and equivalence. Validity contains the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant validity. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.

  13. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed and of the result obtained, i.e., the converted system. This document provides an

  14. Cross-Cultural Adaptation and Validation of SNOT-20 in Portuguese

    Science.gov (United States)

    Bezerra, Thiago Freire Pinto; Piccirillo, Jay F.; Fornazieri, Marco Aurélio; Pilan, Renata R. de M.; Abdo, Tatiana Regina Teles; Pinna, Fabio de Rezende; Padua, Francini Grecco de Melo; Voegels, Richard Louis

    2011-01-01

    Introduction. Chronic rhinosinusitis is a highly prevalent disease, so it is necessary to create valid instruments to assess the quality of life of these patients. The SNOT-20 questionnaire was developed for this purpose as a specific test to evaluate quality of life related to chronic rhinosinusitis. It was validated in the English language, and it has been used in most studies on this subject. Currently, there is no validated instrument for assessing this disease in Portuguese. Objective. Cross-cultural adaptation and validation of the SNOT-20 in Portuguese. Patients and Methods. The SNOT-20 questionnaire underwent a meticulous process of cross-cultural adaptation and was evaluated by assessing its sensitivity, reliability, and validity. Results. The process resulted in an intelligible version of the questionnaire, the SNOT-20p, with high internal consistency (Cronbach's alpha = 0.91). Conclusion. The study achieved cross-cultural adaptation and validation of the SNOT-20 questionnaire into Portuguese. PMID:21799671

  15. Brazilian Portuguese version of the Revised Fibromyalgia Impact Questionnaire (FIQR-Br): cross-cultural validation, reliability, and construct and structural validation.

    Science.gov (United States)

    Lupi, Jaqueline Basilio; Carvalho de Abreu, Daniela Cristina; Ferreira, Mariana Candido; Oliveira, Renê Donizeti Ribeiro de; Chaves, Thais Cristina

    2017-08-01

    This study aimed to culturally adapt and validate the Revised Fibromyalgia Impact Questionnaire (FIQR) in Brazilian Portuguese, using analysis of internal consistency, reliability, and construct and structural validity. A total of 100 female patients with fibromyalgia participated in the validation process of the Brazilian Portuguese version of the FIQR (FIQR-Br). The intraclass correlation coefficient (ICC) was used for statistical analysis of reliability (test-retest), Cronbach's alpha for internal consistency, Pearson's rank correlation for construct validity, and confirmatory factor analysis (CFA) for structural validity. Excellent levels of reliability were verified, with ICC greater than 0.75 for all questions and domains of the FIQR-Br. For internal consistency, alpha values greater than 0.70 for the items and domains of the questionnaire were observed. Moderate correlations (0.40-0.70) were observed for the scores of domains and total score between the FIQR-Br and FIQ-Br. The structure of the three domains of the FIQR-Br was confirmed by CFA. The results of this study suggest that the FIQR-Br is a reliable and valid instrument for assessing fibromyalgia-related impact, and support its use in clinical settings and research. Implications for Rehabilitation: Fibromyalgia is a chronic musculoskeletal disorder characterized by widespread and diffuse pain, fatigue, sleep disturbances, and depression. The disease significantly impairs patients' quality of life and can be highly disabling. To be used in multicenter research efforts, the Revised Fibromyalgia Impact Questionnaire (FIQR) must be cross-culturally validated and psychometrically tested. This paper makes available a new version of the FIQR-Br, since another version already exists but there are concerns about its measurement properties. The availability of an instrument adapted to and validated for Brazilian Portuguese
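
Cronbach's alpha, the internal-consistency statistic reported in this record, can be computed directly from item scores. A minimal sketch with hypothetical data (not the FIQR-Br study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns
    (items[i][j] = score of respondent j on item i)."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of individual item variances vs. variance of respondent totals:
    item_vars = sum(var(col) for col in items)
    n = len(items[0])
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Hypothetical 3-item questionnaire answered by 5 respondents:
scores = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
alpha = cronbach_alpha(scores)
```

Values above 0.70, as reported for the FIQR-Br, are conventionally taken as acceptable internal consistency.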

  16. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  17. Validity of the Danish Prostate Symptom Score questionnaire in stroke

    DEFF Research Database (Denmark)

    Tibaek, S.; Dehlendorff, Christian

    2009-01-01

    Objective – To determine the content and face validity of the Danish Prostate Symptom Score (DAN-PSS-1) questionnaire in stroke patients. Materials and methods – Content validity was judged by an expert panel in neuro-urology, with the judgement measured by the content validity index (CVI). Face validity was indicated in a clinical sample of 482 stroke patients in a hospital-based, cross-sectional survey. Results – I-CVI was rated >0.78 (range 0.94–1.00) for 75% of symptom and bother items, corresponding to adequate content validity. The expert panel rated the entire DAN-PSS-1 questionnaire highly... The questionnaire appears to be content and face valid for measuring lower urinary tract symptoms after stroke.

  18. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors, aligned with the EURO-CORDEX experiment, and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted...
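
The cross-validation design described here uses consecutive 6-year blocks rather than randomly shuffled years, which preserves temporal structure. A minimal sketch of such a blocked split (illustrative only, not the VALUE implementation):

```python
def blocked_folds(years, k):
    """Split a list of consecutive years into k contiguous folds."""
    size = len(years) // k
    return [years[i * size:(i + 1) * size] for i in range(k)]

years = list(range(1979, 2009))   # 1979..2008, 30 years
folds = blocked_folds(years, 5)   # five consecutive 6-year blocks

# Each cross-validation round trains on 4 blocks and tests on the
# held-out one:
splits = [
    (sum((f for j, f in enumerate(folds) if j != i), []), folds[i])
    for i in range(5)
]
```

Keeping each test block contiguous means a downscaling method is always evaluated on years it has never seen, including their serial correlation.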

  19. A validated RP-HPLC method for the determination of Irinotecan hydrochloride residues for cleaning validation in production area

    Directory of Open Access Journals (Sweden)

    Sunil Reddy

    2013-03-01

    Introduction: Cleaning validation is an integral part of current good manufacturing practices in the pharmaceutical industry. The main purpose of cleaning validation is to prove the effectiveness and consistency of cleaning of given pharmaceutical production equipment, to prevent cross-contamination and adulteration of the drug product with another active ingredient. Objective: A rapid, sensitive and specific reverse-phase HPLC method was developed and validated for the quantitative determination of irinotecan hydrochloride in cleaning validation swab samples. Method: The method was validated using a Waters Symmetry Shield RP-18 (250 mm x 4.6 mm, 5 µm) column with an isocratic mobile phase containing a mixture of 0.02 M potassium dihydrogen orthophosphate (pH adjusted to 3.5 with orthophosphoric acid), methanol and acetonitrile (60:20:20 v/v/v). The flow rate of the mobile phase was 1.0 mL/min, with a column temperature of 25°C and detection wavelength at 220 nm. The sample injection volume was 100 µL. Results: The calibration curve was linear over a concentration range from 0.024 to 0.143 µg/mL with a correlation coefficient of 0.997. The intra-day and inter-day precision, expressed as relative standard deviation, were below 3.2%. The recoveries obtained from stainless steel, PCGI, epoxy, glass and decron cloth surfaces were more than 85%, and there was no interference from the cotton swab. The detection limit (DL) and quantitation limit (QL) were 0.008 and 0.023 µg/mL, respectively. Conclusion: The developed method was validated with respect to specificity, linearity, limits of detection and quantification, accuracy, precision and solution stability. The overall procedure can be used as part of a cleaning validation program in the pharmaceutical manufacture of irinotecan hydrochloride.
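
One common convention (e.g. ICH Q2) derives the detection and quantitation limits from the calibration line as DL = 3.3σ/S and QL = 10σ/S, where σ is the residual standard deviation and S the slope. The sketch below uses hypothetical calibration points, not the paper's data, and does not claim this is the exact procedure the authors followed.

```python
import math

def fit_line(x, y):
    """Ordinary least-squares slope, intercept, and residual SD."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid_sd = math.sqrt(
        sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
        / (n - 2)
    )
    return slope, intercept, resid_sd

# Hypothetical calibration: concentration (ug/mL) vs. peak area
conc = [0.024, 0.048, 0.072, 0.096, 0.120, 0.143]
area = [1210, 2440, 3630, 4840, 6080, 7190]
slope, intercept, sigma = fit_line(conc, area)

dl = 3.3 * sigma / slope   # detection limit (ICH Q2 convention)
ql = 10.0 * sigma / slope  # quantitation limit
```

With this convention QL is always about three times DL, consistent with the 0.008 / 0.023 µg/mL pair reported above.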

  20. CIPS Validation Data Plan

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2012-03-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  1. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper's view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  2. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  3. DTU PMU Laboratory Development - Testing and Validation

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into the IEEE standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested in the authors' previous efforts, where the response can be expected to follow known patterns, providing confirmation about the test system and confirming the design and settings. In a nutshell, having two PMUs that observe the same signals provides validation of the operation and flags questionable results with more certainty. Moreover, the performance and accuracy of the DTU-PMU were tested, acquiring good and precise results when compared with a commercial phasor measurement device, PMU-1.

  4. Further Validation of the IDAS: Evidence of Convergent, Discriminant, Criterion, and Incremental Validity

    Science.gov (United States)

    Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott

    2008-01-01

    The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…

  5. User's guide for signal validation software: Final report

    International Nuclear Information System (INIS)

    Swisher, V.I.

    1987-09-01

    Northeast Utilities has implemented a real-time signal validation program into the safety parameter display systems (SPDS) at Millstone Units 2 and 3. Signal validation has been incorporated to improve the reliability of the information being used in the SPDS. Signal validation uses Parity Space Vector Analysis to process SPDS sensor data. The Parity Space algorithm determines consistency among independent, redundant input measurements. This information is then used to calculate a validated estimate of that parameter. Additional logic is incorporated to compare partially redundant measurement data. In both plants the SPDS has been designed to monitor the status of critical safety functions (CSFs) and provide information that can be used with plant-specific emergency operating procedures (EOPs). However, the CSF logic, EOPs, and complement of plant sensors vary for these plants due to their different design characteristics (MP2 - 870 MWe Combustion Engineering PWR; MP3 - 1150 MWe Westinghouse PWR). These differences in plant design and information requirements result in a variety of signal validation applications.
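
The core idea of redundant-sensor validation, checking consistency among independent measurements before forming a validated estimate, can be illustrated with a toy consistency check. This sketch is not Northeast Utilities' actual Parity Space Vector Analysis algorithm; the readings and tolerance are hypothetical.

```python
def validate_redundant(readings, tolerance):
    """Toy consistency check over redundant sensor readings.

    Keeps the largest subset whose members all lie within `tolerance`
    of some reading, and returns (validated_estimate, consistent,
    rejected). A real parity-space scheme works in the parity
    (residual) space instead of on raw pairwise differences.
    """
    best = []
    for r in readings:
        group = [s for s in readings if abs(s - r) <= tolerance]
        if len(group) > len(best):
            best = group
    rejected = [s for s in readings if s not in best]
    estimate = sum(best) / len(best) if best else None
    return estimate, best, rejected

# Three redundant pressure channels; one has drifted (hypothetical):
est, ok, bad = validate_redundant([2155.0, 2148.0, 1890.0], tolerance=25.0)
```

Here the two consistent channels are averaged into the validated estimate and the drifted channel is flagged for exclusion.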

  6. The Role of Generalizability in Validity.

    Science.gov (United States)

    Kane, Michael

    The relationship between generalizability and validity is explained, making four important points. The first is that generalizability coefficients provide upper bounds on validity. The second point is that generalization is one step in most interpretive arguments, and therefore, generalizability is a necessary condition for the validity of these…

  7. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    Energy Technology Data Exchange (ETDEWEB)

    SEXTON, R.A.

    2000-03-13

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  8. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    International Nuclear Information System (INIS)

    SEXTON, R.A.

    2000-01-01

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  9. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation.
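
Cross-validation, the alternative to a separate validation group discussed here, reuses every patient for both fitting and testing. A minimal leave-one-out sketch for a one-predictor linear model (a hypothetical setup; real limited sampling models typically combine several sampling times):

```python
def loo_predictions(x, y):
    """Leave-one-out cross-validation of a one-predictor linear model:
    each observation is predicted from a line fitted to all the others."""
    preds = []
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(y) if j != i]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                 / sum((a - mx) ** 2 for a in xs))
        intercept = my - slope * mx
        preds.append(intercept + slope * x[i])
    return preds

# Hypothetical data: drug concentration at one sampling time (x)
# versus the observed AUC (y); here y is exactly linear in x.
conc = [1.0, 2.0, 3.0, 4.0, 5.0]
auc_obs = [3.0, 5.0, 7.0, 9.0, 11.0]
auc_pred = loo_predictions(conc, auc_obs)
```

Because each prediction comes from a model that never saw that patient, the prediction errors estimate how the LSM would perform on new patients without sacrificing any data to a hold-out group.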

  10. CONCURRENT VALIDITY OF THE STUDENT TEACHER PROFESSIONAL IDENTITY SCALE

    Directory of Open Access Journals (Sweden)

    Predrag Živković

    2018-04-01

    The main purpose of the study was to examine the concurrent validity of the Student Teachers Professional Identity Scale (STPIS; Fisherman and Abbot, 1998), which was used for the first time in Serbia. Indicators of concurrent validity were established by correlation with student teacher self-reported well-being, self-esteem, burnout stress, and resilience. Based on the results, we can conclude that the STPIS meets the criterion of concurrent validity. The implications of these results are important for researchers and decision makers in teacher education.

  11. Construct Validity: Advances in Theory and Methodology

    OpenAIRE

    Strauss, Milton E.; Smith, Gregory T.

    2009-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review f...

  12. Simulation Validation for Societal Systems

    National Research Council Canada - National Science Library

    Yahja, Alex

    2006-01-01

    ... There are, however, substantial obstacles to validation. The nature of modeling means that there are implicit model assumptions, a complex model space and interactions, emergent behaviors, and uncodified and inoperable simulation and validation knowledge...

  13. Failure mode and effects analysis outputs: are they valid?

    Directory of Open Access Journals (Sweden)

    Shebl Nada

    2012-06-01

    Background: Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods: Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported in the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN). Results: Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion: There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies...
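
The RPN criticised in this abstract is simply the product of three ordinal ratings. The sketch below shows the conventional calculation and one reason ranking by RPN alone is questioned: two failure modes with very different severity profiles can share the same RPN. The failure modes and ratings are invented for illustration.

```python
def rpn(severity, probability, detectability):
    """Classic FMEA risk priority number: the product of three ordinal
    1-10 ratings. Widely used, but mathematically dubious, since
    multiplying ordinal scales treats them as if they were interval
    scales."""
    for score in (severity, probability, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * probability * detectability

# Two hypothetical failure modes with identical RPNs:
wrong_drug = rpn(9, 2, 4)   # severe, but rare and fairly detectable
late_dose = rpn(4, 9, 2)    # mild, but frequent and hard to detect
```

Both products equal 72, yet a team would almost certainly not treat these two failure modes as equally important, which is the kind of construct-validity problem the study describes.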

  14. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system to classify dengue viral infection severity in patients in different settings. Methods: The scoring system, developed from 777 patients at three tertiary-care hospitals, was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. Percentages of correct classification, underestimation, and overestimation were compared. The score's discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data were different from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. 
Its impact when used in routine
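As an illustration of the classification statistics reported in this abstract, the sketch below (with invented data, not the study's 400 validation patients) shows how correct-prediction, underestimation and overestimation percentages can be computed for ordered severity levels.

```python
# Hypothetical sketch: summarising a severity score's classification performance
# for ordered levels DF < DHF < DSS. The data are invented for illustration.
LEVELS = {"DF": 0, "DHF": 1, "DSS": 2}

def classification_summary(predicted, observed):
    n = len(predicted)
    correct = sum(p == o for p, o in zip(predicted, observed))
    under = sum(LEVELS[p] < LEVELS[o] for p, o in zip(predicted, observed))
    over = sum(LEVELS[p] > LEVELS[o] for p, o in zip(predicted, observed))
    return {"correct": correct / n, "under": under / n, "over": over / n}

pred = ["DF", "DHF", "DF", "DSS", "DHF"]
obs  = ["DF", "DSS", "DHF", "DSS", "DF"]
print(classification_summary(pred, obs))  # 40% correct, 40% under, 20% over
```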

  15. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity property of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented, along with an example that demonstrates the application of the approach.
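As a hedged illustration of the kind of analysis such an approach involves (the scenario and payoffs below are invented, not taken from the paper), a minimal attacker-defender game can be solved in closed form for the defender's optimal mixed strategy.

```python
# Minimal sketch: a 2x2 zero-sum attacker-defender game with no saddle point,
# solved with the standard closed-form mixed-strategy solution.

def solve_2x2_zero_sum(a, b, c, d):
    """Payoff matrix [[a, b], [c, d]] to the row player (defender).
    Assumes no saddle point, so both players mix."""
    denom = a - b - c + d
    p = (d - c) / denom              # probability defender plays row 1
    value = (a * d - b * c) / denom  # expected payoff at equilibrium
    return p, value

# Rows: defend node A / defend node B; columns: attack A / attack B (invented).
p, v = solve_2x2_zero_sum(2, -1, -1, 1)
print(f"defend A with prob {p:.2f}; game value {v:.2f}")  # 0.40, 0.20
```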

  16. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  17. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. 
A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different
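One widely used verification calculation of the kind this review covers is the grid-convergence estimate of the observed order of accuracy; the sketch below uses synthetic solutions and is not code from the paper.

```python
import math

# Sketch of a standard code-verification step: the observed order of accuracy
# from solutions on three systematically refined grids (Richardson-style).

def observed_order(f_fine, f_med, f_coarse, r):
    """Observed order of accuracy from three grids with constant refinement
    ratio r; f_fine is the solution on the finest grid."""
    return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

# Synthetic solutions behaving like f(h) = 1.0 + 0.5 * h**2:
f = lambda h: 1.0 + 0.5 * h**2
p = observed_order(f(0.1), f(0.2), f(0.4), r=2)
print(round(p, 6))  # ~2.0, consistent with a second-order discretization
```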

  18. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not yet been standardized. To affirm and confirm that the method is valid, it must be validated against various standard reference materials. In this work, validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method passes the tests of accuracy and precision for nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a. It can be concluded that this method is able to give valid results in the determination of elements in food product samples. (author)
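A common acceptance test in this kind of validation against a certified reference material is the zeta-score; the sketch below is illustrative only (the values are invented, not the NIST certificate values).

```python
import math

# Sketch of a zeta-score acceptance test against a certified reference value.
# Measured and certified values below are made up for illustration.

def zeta_score(measured, u_measured, certified, u_certified):
    """Difference from the certified value in units of combined uncertainty."""
    return (measured - certified) / math.sqrt(u_measured**2 + u_certified**2)

# e.g. an element in a flour SRM: measured 12.1 +/- 0.4 mg/kg against a
# certified 11.6 +/- 0.3 mg/kg:
z = zeta_score(12.1, 0.4, 11.6, 0.3)
print(abs(z) <= 2)  # True; |zeta| <= 2 is a typical pass criterion
```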

  19. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident

  20. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Science.gov (United States)

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  1. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in the design of the SAGE-III instrument, resulting in material changes to control injurious neutron production; in the study of Space Shuttle single event upsets; and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results of laboratory and space flight validation.

  2. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool for increasing drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one shows the best results and therefore increases safety for the users of the databases and for patients. So far, there has been no validation of German drug interaction databases. The objective was therefore to validate German drug interaction databases with regard to the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and the severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54 and the ABDA-database 49 points. Based on this validation, MediQ seems to be the most suitable database for the field of psychopharmacotherapy. MediQ achieved the best results in this comparison, but this database also needs improvement with respect to the hit rate so that users can rely on the results and thereby increase drug therapy safety.

  3. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues, and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.
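Vertical accuracy "at the 95% confidence level" is conventionally related to the vertical RMSE by assuming normally distributed errors; the sketch below illustrates that conversion with an assumed RMSE, not the GDEM1 validation data.

```python
# Sketch of the standard RMSE-to-LE95 conversion for vertical DEM error,
# assuming zero-mean, normally distributed errors. The 10 m RMSE input is
# an assumption chosen to show how an accuracy near 20 m at 95% can arise.

def le95_from_rmse(rmse):
    """Linear error at 95% confidence: 1.96 x RMSE under the normal model."""
    return 1.96 * rmse

print(le95_from_rmse(10.0))  # 19.6 (meters)
```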

  4. Hanford Environmental Restoration data validation process for chemical and radiochemical analyses

    International Nuclear Information System (INIS)

    Adams, M.R.; Bechtold, R.A.; Clark, D.E.; Angelos, K.M.; Winter, S.M.

    1993-10-01

    Detailed procedures for validation of chemical and radiochemical data are used to assure consistent application of validation principles and support a uniform database of quality environmental data. During application of these procedures, it was determined that laboratory data packages were frequently missing certain types of documentation causing subsequent delays in meeting critical milestones in the completion of validation activities. A quality improvement team was assembled to address the problems caused by missing documentation and streamline the entire process. The result was the development of a separate data package verification procedure and revisions to the data validation procedures. This has resulted in a system whereby deficient data packages are immediately identified and corrected prior to validation and revised validation procedures which more closely match the common analytical reporting practices of laboratory service vendors

  5. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
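One simple aggregate metric of the kind motivated here is the Mahalanobis distance between predicted and observed response vectors, which accounts for the correlation that separate univariate comparisons ignore. The sketch below is an illustration with invented numbers, not the paper's implementation.

```python
import numpy as np

# Sketch of an aggregate multivariate validation metric: the squared
# Mahalanobis distance between prediction and observation, using the
# (invented) covariance of the measurement uncertainties.

def mahalanobis_sq(prediction, observation, covariance):
    d = np.asarray(observation, float) - np.asarray(prediction, float)
    return float(d @ np.linalg.solve(np.asarray(covariance, float), d))

pred = [1.0, 2.0]
obs  = [1.2, 2.3]
cov  = [[0.04, 0.01],
        [0.01, 0.09]]
d2 = mahalanobis_sq(pred, obs, cov)
# d2 can be compared with a chi-square quantile with 2 degrees of freedom.
print(round(d2, 2))  # 1.71
```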

  6. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  7. Reliability and validity of risk analysis

    International Nuclear Information System (INIS)

    Aven, Terje; Heide, Bjornar

    2009-01-01

    In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis, relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways and consequently the meaning of the concepts reliability and validity are not the same.

  8. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    Within a German-funded project, Ruhr-Universitaet Bochum performed validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable at higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data

  9. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  10. Practical procedure for method validation in INAA- A tutorial

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2015-01-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainties budget) are outlined in an easy, fast and convenient way. The paper presents step by step how to calculate the required method performance characteristics in a process of method validation: the procedures, the adopted strategies, and the acceptance criteria for the results; that is, how to perform method validation in INAA. In order to exemplify the methodology, results are presented for the validation of the method for mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue. It was concluded that the methodology applied for validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025, and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)
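As an example of one of the performance characteristics listed above, the limit of detection for a counting measurement is often estimated with the Currie formula; the background counts below are invented for illustration, and this is a sketch, not the laboratory's procedure.

```python
import math

# Sketch of the Currie detection limit for a counting measurement, one common
# way of establishing the limit of detection in activation analysis.

def currie_detection_limit(background_counts):
    """L_D = 2.71 + 4.65 * sqrt(B): net counts needed for ~95% detection
    probability at ~5% false-positive risk (Currie, paired-blank case)."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

print(round(currie_detection_limit(400.0), 2))  # 95.71 net counts
```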

  11. Practical procedure for method validation in INAA- A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: robsonpetroni@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainties budget) are outlined in an easy, fast and convenient way. The paper presents step by step how to calculate the required method performance characteristics in a process of method validation: the procedures, the adopted strategies, and the acceptance criteria for the results; that is, how to perform method validation in INAA. In order to exemplify the methodology, results are presented for the validation of the method for mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue. It was concluded that the methodology applied for validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025, and thereby generating satisfactory results for the studies carried at LAN, IPEN - CNEN/SP. (author)

  12. The Chimera of Validity

    Science.gov (United States)

    Baker, Eva L.

    2013-01-01

    Background/Context: Education policy over the past 40 years has focused on the importance of accountability in school improvement. Although much of the scholarly discourse around testing and assessment is technical and statistical, understanding of validity by a non-specialist audience is essential as long as test results drive our educational…

  13. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost effective environmental data collection process should produce analytical data which meets regulatory and program specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested. The verification process determines whether all the requirements were met. Validation is more complicated than verification. It attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consists of a sample result with an associated error. Therefore, radiochemical validation is different and more quantitative than is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainty that can be statistically compared to identify significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish the differences between the two operations
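The statistical comparison this abstract alludes to can be sketched as follows (an illustration under a normality assumption, not a procedure quoted from the paper): two results differ significantly at roughly 95% confidence when their difference exceeds 1.96 times its combined standard uncertainty.

```python
import math

# Sketch of a significance test between two measured results, each reported
# with a 1-sigma uncertainty, as is typical for radiochemical data.

def differ_significantly(x1, u1, x2, u2, k=1.96):
    """True when the difference exceeds k times its combined uncertainty."""
    return abs(x1 - x2) > k * math.hypot(u1, u2)

# Sample vs background activity concentration (made-up values, Bq/kg):
print(differ_significantly(5.2, 0.4, 4.1, 0.3))  # True: 1.1 > 1.96 * 0.5
```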

  14. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose: The present paper assessed the validity of single-item life satisfaction measures by comparing them to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Methods: Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the German Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results: Consistent across the three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62–0.64; disattenuated r = 0.78–0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001–0.005), and the average absolute difference in the magnitudes of the correlations produced by the two measures was very small (average absolute difference = 0.015–0.042). Conclusions: Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827
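The disattenuated correlations reported above follow from the standard correction for attenuation; in the sketch below the reliability values are assumptions chosen for illustration, not figures taken from the paper.

```python
import math

# Sketch of the correction for attenuation: the correlation between two
# measures, corrected for their (assumed) reliabilities.

def disattenuate(r_xy, rel_x, rel_y):
    """r_corrected = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# A zero-order r of 0.63 with assumed reliabilities of 0.70 and 0.90:
print(round(disattenuate(0.63, 0.70, 0.90), 2))  # 0.79
```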

  15. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology, according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified, and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis of experimental neurobiology.

  16. Test validation of nuclear and fossil fuel control operators

    International Nuclear Information System (INIS)

    Moffie, D.J.

    1976-01-01

    To establish job relatedness, one must go through a procedure of concurrent and predictive validation. For concurrent validity a group of employees is tested and the test scores are related to performance concurrently or during the same time period. For predictive validity, individuals are tested but the results of these tests are not used at the time of employment. The tests are sealed and scored at a later date, and then related to job performance. Job performance data include ratings by supervisors, actual job performance indices, turnover, absenteeism, progress in training, etc. The testing guidelines also stipulate that content and construct validity can be used

  17. How Mathematicians Determine if an Argument Is a Valid Proof

    Science.gov (United States)

    Weber, Keith

    2008-01-01

    The purpose of this article is to investigate the mathematical practice of proof validation--that is, the act of determining whether an argument constitutes a valid proof. The results of a study with 8 mathematicians are reported. The mathematicians were observed as they read purported mathematical proofs and made judgments about their validity;…

  18. Item validity vs. item discrimination index: a redundancy?

    Science.gov (United States)

    Panjaitan, R. L.; Irawati, R.; Sujana, A.; Hanifah, N.; Djuanda, D.

    2018-03-01

    In several literatures about evaluation and test analysis, it is common to find calculations of item validity as well as of an item discrimination index (D), with a different formula for each. Meanwhile, other resources state that the item discrimination index can be obtained by calculating the correlation between the testee’s score on a particular item and the testee’s score on the overall test, which is actually the same concept as item validity. Some research reports, especially undergraduate theses, tend to include both item validity and item discrimination index in the instrument analysis. These concepts seem to overlap, as both reflect the test's quality in measuring the examinees’ ability. In this paper, examples of results of data processing for item validity and item discrimination index are compared. We discuss whether item validity and the item discrimination index can be represented by one of them only, or whether it is better to present both calculations in a simple test analysis, especially in undergraduate theses where test analyses are included.

  19. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is in the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  20. Design description and validation results for the IFMIF High Flux Test Module as outcome of the EVEDA phase

    Directory of Open Access Journals (Sweden)

    F. Arbeiter

    2016-12-01

    Full Text Available During the Engineering Validation and Engineering Design Activities (EVEDA) phase (2007-2014) of the International Fusion Materials Irradiation Facility (IFMIF), an advanced engineering design of the High Flux Test Module (HFTM) has been developed with the objective to facilitate the controlled irradiation of steel samples in the high flux area directly behind the IFMIF neutron source. The development process addressed manufacturing techniques, CAD, neutronic, thermal-hydraulic and mechanical analyses, complemented by a series of validation activities. Validation included manufacturing of 1:1 parts and mockups, tests of prototypes in the FLEX and HELOKA-LP helium loops of KIT for verification of the thermal and mechanical properties, and irradiation of specimen-filled capsule prototypes in the BR2 test reactor. The prototyping activities were backed by several R&D studies addressing focused issues such as the handling of liquid NaK (as filling medium) and the insertion of Small Specimen Test Technique (SSTT) specimens into the irradiation capsules. This paper provides an up-to-date design description of the HFTM irradiation device, and reports on the achieved performance criteria relative to the requirements. Results of the validation activities are accounted for, and the most important issues for further development are identified.

  1. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
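
    The expense that motivates LULOO is the one retraining per held-out example in exact leave-one-out cross-validation. A numpy-only sketch of that exact baseline is below, using ridge regression on synthetic data rather than the paper's neural-network setting; the model, data, and regularization value are illustrative assumptions.

    ```python
    # Exact leave-one-out cross-validation for ridge regression (numpy only).
    # This is the expensive baseline that linear-unlearning schemes such as
    # LULOO approximate without retraining; the data here is synthetic.
    import numpy as np

    def ridge_fit(X, y, lam=1e-2):
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    def loo_mse(X, y, lam=1e-2):
        errs = []
        for i in range(len(y)):          # one retraining per held-out example
            mask = np.arange(len(y)) != i
            w = ridge_fit(X[mask], y[mask], lam)
            errs.append((X[i] @ w - y[i]) ** 2)
        return float(np.mean(errs))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
    print(loo_mse(X, y))  # generalization estimate from 40 retrainings
    ```

    For a network trained for many epochs, each of those 40 retrainings becomes a full training session, which is what makes an approximation like LULOO attractive.
    
    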

  2. Validation of the Adolescent Meta-cognition Questionnaire Version

    Directory of Open Access Journals (Sweden)

    Kazem Khoramdel

    2012-03-01

    Full Text Available Background: The role and importance of meta-cognitive beliefs in creating and maintaining anxiety disorders were initially explained in meta-cognitive theory. The purpose of this study was to validate the Meta-cognitions Questionnaire-Adolescent version (MCQ-A) in a normal Iranian population and to compare meta-cognitive beliefs between adolescents with anxiety disorders and normal individuals. Materials and Method: This was a standardization study. First, the original version was translated into Persian and then administered to 204 adolescents (101 boys and 103 girls) aged 13 through 17 years, selected by random cluster sampling from schools in Isfahan, together with the Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. To assess reliability, internal consistency (Cronbach's alpha) and split-half coefficients were used; to assess validity, convergent validity, criterion validity and confirmatory factor analysis were used. Results: The correlation coefficients for convergent validity showed a relation between the total score of the MCQ-A and its components with anxiety and depression, except for cognitive self-consciousness. The data indicated appropriate Cronbach's alpha and split-half reliability coefficients for the MCQ-A and the extracted factors. Factor analysis by principal components analysis with varimax rotation yielded 5 factors that account for 45% of the variance. Conclusion: The MCQ-A has satisfactory psychometric properties in the Iranian population

  3. Physics validation of detector simulation tools for LHC

    International Nuclear Information System (INIS)

    Beringer, J.

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data is available. We give an overview of these validation activities with emphasis on the latest results

  4. Validating presupposed versus focused text information.

    Science.gov (United States)

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  5. Integrated Validation System for a Thermal-hydraulic System Code, TASS/SMR-S

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hee-Kyung; Kim, Hyungjun; Kim, Soo Hyoung; Hwang, Young-Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyeon-Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-10-15

    Development, including enhancement and modification, of a thermal-hydraulic system computer code is indispensable for a new reactor such as SMART. Usually, thermal-hydraulic system code validation is achieved by comparison with the results of corresponding physical effect tests. In the reactor safety field, a similar concept, referred to as separate effect tests, has been used for a long time. However, because many separate effect tests and integral effect tests are required for code validation, there is a large amount of test data to compare against, and it is not easy for a code developer to re-validate the computer code whenever a modification is made. IVS automatically produces graphs that compare the code calculation results with the corresponding test results. IVS was developed for the validation of the TASS/SMR-S code: the code validation is achieved by comparing code calculation results with the corresponding test results, and the comparison is presented as a graph for convenience. IVS is useful before the release of a new code version, since the code developer can validate code results easily using it. Even during code development, IVS can be used to validate code modifications; the developer can gain confidence in a modification easily and quickly, and is freed from tedious and lengthy validation work. The popular software used in IVS provides good usability and portability.

  6. In-Flight Validation of Mid and Thermal Infrared Remotely Sensed Data Using the Lake Tahoe and Salton Sea Automated Validation Sites

    Science.gov (United States)

    Hook, Simon J.

    2008-01-01

    The presentation includes an introduction, Lake Tahoe site layout and measurements, Salton Sea site layout and measurements, field instrument calibration and cross-calculations, data reduction methodology and error budgets, and example results for MODIS. Summary and conclusions are: 1) Lake Tahoe CA/NV automated validation site was established in 1999 to assess radiometric accuracy of satellite and airborne mid and thermal infrared data and products. Water surface temperatures range from 4-25C. 2) Salton Sea CA automated validation site was established in 2008 to broaden the range of available water surface temperatures and atmospheric water vapor test cases. Water surface temperatures range from 15-35C. 3) Sites provide all information necessary for validation every 2 mins (bulk temperature, skin temperature, air temperature, wind speed, wind direction, net radiation, relative humidity). 4) Sites have been used to validate mid and thermal infrared data and products from: ASTER, AATSR, ATSR2, MODIS-Terra, MODIS-Aqua, Landsat 5, Landsat 7, MTI, TES, MASTER, MAS. 5) Approximately 10 years of data are available to help validate AVHRR.

  7. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  8. The validation of an infrared simulation system

    CSIR Research Space (South Africa)

    De Waal, A

    2013-08-01

    Full Text Available theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques...

  9. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  10. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  11. Method validation in pharmaceutical analysis: from theory to practical optimization

    Directory of Open Access Journals (Sweden)

    Jaqueline Kaleian Eserian

    2015-01-01

    Full Text Available The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure the product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high performance liquid chromatography (HPLC) analysis. Type: Commentary

  12. Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents: Methodology validation of total organic carbon

    International Nuclear Information System (INIS)

    Porto, Luciana Valeria Ferrari Machado

    2015-01-01

    (repeatability and intermediate precision), and accuracy (recovery), and they were defined as follows: 4% acidifying reagent, 2.5 ml oxidizing reagent, 4.5 minutes integration curve time, 3 minutes sparge time, and linearity in the 40-1000 μg L-1 range, with correlation coefficient (r) and coefficient of determination (r2) greater than 0.99. DL and QL for NPOC were 14.25 ppb and 47.52 ppb respectively, repeatability between 0.11 and 4.47%, intermediate precision between 0.59 and 3.80%, and accuracy between 97.05 and 102.90%. The analytical curve for Mibi was linear in the 100-800 μg L-1 range with r and r2 greater than 0.99, presenting parameters similar to those of the NPOC analytical curves. The results obtained in this study demonstrated that the worst-case approach to cleaning validation is a simple and effective way to reduce the complexity and slowness of the validation process, and provides a reduction of the costs involved in these activities. All results obtained in the NPOC method validation assays met the requirements and specifications recommended by the RE 899/2003 Resolution from ANVISA to consider the method validated. (author)
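
    The linearity, DL and QL figures above come from a calibration-curve analysis. A hedged numpy sketch follows, using the standard ICH-style formulas DL = 3.3σ/S and QL = 10σ/S (slope S, residual standard deviation σ); the concentration and response values are invented, and the actual method in the study may have derived DL/QL differently.

    ```python
    # Calibration-curve linearity check and detection/quantitation limits
    # using the ICH-style formulas DL = 3.3*sigma/S and QL = 10*sigma/S,
    # where S is the slope and sigma the residual standard deviation.
    # The data points below are synthetic, for illustration only.
    import numpy as np

    conc = np.array([40, 200, 400, 600, 800, 1000], dtype=float)  # ug/L
    resp = np.array([0.85, 4.1, 8.0, 12.2, 16.1, 19.9])           # signal

    slope, intercept = np.polyfit(conc, resp, 1)
    pred = slope * conc + intercept
    sigma = np.sqrt(np.sum((resp - pred) ** 2) / (len(conc) - 2))  # residual SD
    r = np.corrcoef(conc, resp)[0, 1]

    dl = 3.3 * sigma / slope   # detection limit
    ql = 10 * sigma / slope    # quantitation limit
    print(f"r={r:.4f}  DL={dl:.1f} ug/L  QL={ql:.1f} ug/L")
    ```

    With these formulas, QL is always 10/3.3 ≈ 3× DL, which matches the rough ratio between the reported NPOC limits (14.25 and 47.52 ppb).
    
    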

  13. Cleaning Validation of Fermentation Tanks

    DEFF Research Database (Denmark)

    Salo, Satu; Friis, Alan; Wirtanen, Gun

    2008-01-01

    Reliable test methods for checking cleanliness are needed to evaluate and validate the cleaning process of fermentation tanks. Pilot scale tanks were used to test the applicability of various methods for this purpose. The methods found to be suitable for validation of the cleanliness were visual…

  14. Toward valid and reliable brain imaging results in eating disorders.

    Science.gov (United States)

    Frank, Guido K W; Favaro, Angela; Marsh, Rachel; Ehrlich, Stefan; Lawson, Elizabeth A

    2018-03-01

    Human brain imaging can help improve our understanding of mechanisms underlying brain function and how they drive behavior in health and disease. Such knowledge may eventually help us to devise better treatments for psychiatric disorders. However, the brain imaging literature in psychiatry and especially eating disorders has been inconsistent, and studies are often difficult to replicate. The extent or severity of extremes of eating and state of illness, which are often associated with differences in, for instance hormonal status, comorbidity, and medication use, commonly differ between studies and likely add to variation across study results. Those effects are in addition to the well-described problems arising from differences in task designs, data quality control procedures, image data preprocessing and analysis or statistical thresholds applied across studies. Which of those factors are most relevant to improve reproducibility is still a question for debate and further research. Here we propose guidelines for brain imaging research in eating disorders to acquire valid results that are more reliable and clinically useful. © 2018 Wiley Periodicals, Inc.

  15. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ Nuclear Power Plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the scenario of in-vessel retention, a large experimental program and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the melt accumulation in the lower head using different cooling conditions. A new European computer code, ASTEC, is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel retention of corium is its validation against the LIVE-L1 experimental results. Details of the experiment are reported. Results of the ASTEC (module DIVA) application to the analysis of the test are presented. (author)

  16. The proportion valid effect in covert orienting: strategic control or implicit learning?

    Science.gov (United States)

    Risko, Evan F; Stolz, Jennifer A

    2010-03-01

    It is well known that the difference in performance between valid and invalid trials in the covert orienting paradigm (i.e., the cueing effect) increases as the proportion of valid trials increases. This proportion valid effect is widely assumed to reflect "strategic" control over the distribution of attention. In the present experiments we determine if this effect results from an explicit strategy or implicit learning by probing participants' awareness of the proportion of valid trials. Results support the idea that the proportion valid effect in the covert orienting paradigm reflects implicit learning, not an explicit strategy.

  17. Internal Cluster Validation on Earthquake Data in the Province of Bengkulu

    Science.gov (United States)

    Rini, D. S.; Novianti, P.; Fransiska, H.

    2018-04-01

    The K-means method is an algorithm for clustering n objects, based on their attributes, into k partitions, where k < n. A deficiency of the algorithm is that the k initial points are chosen randomly before it is executed, so the resulting clustering can differ between runs; if the random initialization is not good, the clustering is less than optimal. Cluster validation is a technique to determine the optimum number of clusters without prior information about the data. There are two types of cluster validation: internal cluster validation and external cluster validation. This study aims to examine and apply several internal cluster validation indices, including the Calinski-Harabasz (CH) index, Silhouette (S) index, Davies-Bouldin (DB) index, Dunn (D) index, and S-Dbw index, to earthquake data from Bengkulu Province. The optimum cluster based on internal cluster validation is k = 2 for the CH, S, and S-Dbw indices, k = 6 for the DB index, and k = 15 for the D index. The optimum cluster (k = 6) based on the DB index gives good results for clustering earthquakes in Bengkulu Province.
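
    The workflow the abstract describes, running K-means for several k and scoring each partition with an internal index, can be sketched with numpy alone. The sketch below uses only the Silhouette index (one of the five indices named above) on toy 2-D data, not the study's earthquake data or its full index set.

    ```python
    # Internal cluster validation sketch: run k-means for several k and pick
    # the k with the highest mean Silhouette index (numpy only, toy 2-D data).
    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]  # random init
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels

    def silhouette(X, labels):
        d = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))  # pairwise dists
        n, s = len(X), []
        for i in range(n):
            same = (labels == labels[i]) & (np.arange(n) != i)
            a = d[i][same].mean() if same.any() else 0.0      # cohesion
            b = min(d[i][labels == j].mean()
                    for j in set(labels.tolist()) if j != labels[i])  # separation
            s.append((b - a) / max(a, b))
        return float(np.mean(s))

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ((0, 0), (4, 4))])
    scores = {k: silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
    best_k = max(scores, key=scores.get)
    print(best_k)  # the two well-separated blobs favor k = 2
    ```

    Because the initialization is random, in practice each k would be run with several seeds, which is exactly the instability the abstract points out.
    
    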

  18. Development and validation of the Stirling Eating Disorder Scales.

    Science.gov (United States)

    Williams, G J; Power, K G; Miller, H R; Freeman, C P; Yellowlees, A; Dowds, T; Walker, M; Parry-Jones, W L

    1994-07-01

    The development and reliability/validity check of an 80-item, 8-scale measure for use with eating disorder patients is presented. The Stirling Eating Disorder Scales (SEDS) assess anorexic dietary behavior, anorexic dietary cognitions, bulimic dietary behavior, bulimic dietary cognitions, high perceived external control, low assertiveness, low self-esteem, and self-directed hostility. The SEDS were administered to 82 eating disorder patients and 85 controls. Results indicate that the SEDS are acceptable in terms of internal consistency, reliability, group validity, and concurrent validity.
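
    The internal-consistency figures reported for scales like the SEDS typically rest on Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal numpy sketch with an invented response matrix (not SEDS data):

    ```python
    # Internal-consistency sketch: Cronbach's alpha for a multi-item scale,
    # alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)).
    # The response matrix below is made up for illustration.
    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)       # variance of totals
        return (k / (k - 1)) * (1 - item_var / total_var)

    responses = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [1, 2, 1, 1],
        [3, 3, 4, 3],
    ])
    print(round(cronbach_alpha(responses), 3))
    ```

    Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is the sense in which the SEDS scales are reported as "acceptable".
    
    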

  19. Developing a validation for environmental sustainability

    Science.gov (United States)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nawi, Mohd Nasrun Mohd; Aziz, Zulkifli

    2016-08-01

    One of the agendas for addressing environmental protection in construction is to reduce impacts and make construction activities more sustainable. This important consideration has generated several research interests within the construction industry, especially given construction's damaging effects on the ecosystem, such as various forms of environmental pollution, resource depletion and biodiversity loss on a global scale. Using the Partial Least Squares-Structural Equation Modeling technique, this study validates the environmental sustainability (ES) construct in the context of large construction firms in Malaysia. A cross-sectional survey was carried out in which data were collected from Malaysian large construction firms using a structured questionnaire. Results of this study revealed that business innovativeness and new technology are important in determining the environmental sustainability (ES) of Malaysian construction firms. The study also established an adequate level of internal consistency reliability, convergent validity and discriminant validity for each of its constructs. Based on these results, it could be suggested that the indicators for the organisational innovativeness dimensions (business innovativeness and new technology) are useful for measuring these constructs in order to study construction firms' tendency to adopt environmental sustainability (ES) in their project execution.

  20. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
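
    The paper's central point, that the prediction-error estimate varies with the random split and so the split should be repeated, can be sketched as follows. This is a numpy-only illustration of repeated V-fold cross-validation with a ridge model on synthetic data, not the authors' QSAR pipeline or their full nested-CV algorithm.

    ```python
    # Repeated V-fold cross-validation sketch: the error estimate depends on
    # the random split, so the split is repeated and the spread is inspected
    # (numpy-only ridge regression on synthetic data).
    import numpy as np

    def vfold_mse(X, y, lam, v, rng):
        idx = rng.permutation(len(y))            # one random split
        folds = np.array_split(idx, v)
        errs = []
        for f in folds:
            train = np.setdiff1d(idx, f)
            w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                                X[train].T @ y[train])
            errs.append(np.mean((X[f] @ w - y[f]) ** 2))
        return float(np.mean(errs))

    def repeated_cv(X, y, lam, v=5, repeats=20, seed=0):
        rng = np.random.default_rng(seed)
        return [vfold_mse(X, y, lam, v, rng) for _ in range(repeats)]

    rng = np.random.default_rng(2)
    X = rng.normal(size=(60, 4))
    y = X @ np.array([1.0, 0.5, -1.0, 2.0]) + 0.2 * rng.normal(size=60)
    cv_scores = repeated_cv(X, y, lam=0.1)
    print(f"mean={np.mean(cv_scores):.4f}  spread={np.std(cv_scores):.4f}")
    ```

    In a grid search over lam, the selection would be based on the mean over repeats rather than on a single split, and a further outer (nested) loop would be added to assess the selected model, as the paper argues.
    
    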

  1. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    Science.gov (United States)

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.

  2. Validation of the Physician Teaching Motivation Questionnaire (PTMQ).

    Science.gov (United States)

    Dybowski, Christoph; Harendza, Sigrid

    2015-10-02

    Physicians play a major role as teachers in undergraduate medical education. Studies indicate that different forms and degrees of motivation can influence work performance in general and that teachers' motivation to teach can influence students' academic achievements in particular. Therefore, the aim of this study was to develop and to validate an instrument measuring teaching motivations in hospital-based physicians. We chose self-determination theory as a theoretical framework for item and scale development. It distinguishes between different dimensions of motivation depending on the amount of self-regulation and autonomy involved and its empirical evidence has been demonstrated in other areas of research. To validate the new instrument (PTMQ = Physician Teaching Motivation Questionnaire), we used data from a sample of 247 physicians from internal medicine and surgery at six German medical faculties. Structural equation modelling was conducted to confirm the factorial structure, correlation analyses and linear regressions were performed to examine concurrent and incremental validity. Structural equation modelling confirmed a good global fit for the factorial structure of the final instrument (RMSEA = .050, TLI = .957, SRMR = .055, CFI = .966). Cronbach's alphas indicated good internal consistencies for all scales (α = .75 - .89) except for the identified teaching motivation subscale with an acceptable internal consistency (α = .65). Tests of concurrent validity with global work motivation, perceived teaching competence, perceived teaching involvement and voluntariness of lesson allocation delivered theory-consistent results with slight deviations for some scales. Incremental validity over global work motivation in predicting perceived teaching involvement was also confirmed. Our results indicate that the PTMQ is a reliable, valid and therefore suitable instrument for assessing physicians' teaching motivation.

  3. Rapid Robot Design Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies will create a comprehensive software infrastructure for rapid validation of robot designs. The software will support push-button validation...

  4. Predictive validity of the Slovene Matura

    Directory of Open Access Journals (Sweden)

    Valentin Bucik

    2001-09-01

    Full Text Available Passing the Matura is the last step of secondary school graduation, but it is also the entrance ticket to university. Besides, the summary score of the Matura exam plays a part in the selection process for particular university studies in the case of 'numerus clausus'. In discussing either aim of the Matura, important dilemmas arise: namely, is the Matura examination a sufficiently exact and rightful procedure, firstly, to use its results for setting starting study conditions and, secondly, to select validly, reliably and sensibly the best candidates for university studies. There are some questions concerning the predictive validity of the Matura that should be answered, e.g. (i) does the Matura as an enrollment procedure add to the quality of the study; (ii) is it a better selection tool than the entrance examinations formerly used in different faculties in the case of 'numerus clausus'; and (iii) is it reasonable to expect high predictive validity of Matura results for success at the university at all. Recent results show that in the last few years the dropout rate is lower than before, the pass rate between the first and the second year is higher and the average duration of study per student is shorter. It is clear, however, that it is not possible to simply predict study success from Matura results; there are too many factors influencing success in university studies. In most examined study programs the correlation between Matura results and study success is positive but moderate, therefore it cannot be said categorically that only candidates accepted according to Matura results are (or will be) the best students. Yet it has been shown that the Matura is a standardized procedure, comparable across different candidates entering university, and that, compared with the entrance examinations, it is a more objective, reliable, and hence more valid and fair procedure. In addition, comparable procedures of university recruiting and selection can be

  5. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation.
The intensive validation activities planned for the first year of the Terra

  6. A CFD validation roadmap for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given and gaps are identified where future experiments would provide the needed validation data.

  7. The Consequences of Consequential Validity.

    Science.gov (United States)

    Mehrens, William A.

    1997-01-01

    There is no agreement at present about the importance or meaning of the term "consequential validity." It is important that the authors of revisions to the "Standards for Educational and Psychological Testing" recognize the debate and relegate discussion of consequences to a context separate from the discussion of validity.…

  8. Validity in SSM: neglected areas

    NARCIS (Netherlands)

    Pala, O.; Vennix, J.A.M.; Mullekom, T.L. van

    2003-01-01

    Contrary to the prevailing notion in hard OR, in soft system methodology (SSM), validity seems to play a minor role. The primary reason for this is that SSM models are of a different type, they are not would-be descriptions of real-world situations. Therefore, establishing their validity, that is

  9. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  10. Validity and reliability of the NAB Naming Test.

    Science.gov (United States)

    Sachs, Bonnie C; Rush, Beth K; Pedraza, Otto

    2016-05-01

    Confrontation naming is commonly assessed in neuropsychological practice, but few standardized measures of naming exist and those that do are susceptible to the effects of education and culture. The Neuropsychological Assessment Battery (NAB) Naming Test is a 31-item measure used to assess confrontation naming. Despite adequate psychometric information provided by the test publisher, there has been limited independent validation of the test. In this study, we investigated the convergent and discriminant validity, internal consistency, and alternate forms reliability of the NAB Naming Test in a sample of adults (Form 1: n = 247, Form 2: n = 151) clinically referred for neuropsychological evaluation. Results indicate adequate-to-good internal consistency and alternate forms reliability. We also found strong convergent validity as demonstrated by relationships with other neurocognitive measures. We found preliminary evidence that the NAB Naming Test demonstrates a more pronounced ceiling effect than other commonly used measures of naming. To our knowledge, this represents the largest published independent validation study of the NAB Naming Test in a clinical sample. Our findings suggest that the NAB Naming Test demonstrates adequate validity and reliability and merits consideration in the test arsenal of clinical neuropsychologists.

  11. Validation of geotechnical software for repository performance assessment

    International Nuclear Information System (INIS)

    LeGore, T.; Hoover, J.D.; Khaleel, R.; Thornton, E.C.; Anantatmula, R.P.; Lanigan, D.C.

    1989-01-01

    An important step in the characterization of a high-level nuclear waste repository is to demonstrate that the geotechnical software used in performance assessment correctly models the relevant physical processes; this is termed performance validation. There is another type of validation, called software validation. It is based on meeting the requirements of specifications documents (e.g., IEEE specifications) and does not directly address the correctness of the specifications. The process of comparing physical experimental results with the predicted results should incorporate an objective measure of the level of confidence regarding correctness. This paper reports on a methodology developed that allows the experimental uncertainties to be explicitly included in the comparison process. The methodology also allows objective confidence levels to be associated with the software. In the event of a poor comparison, the method also lays the foundation for improving the software.

  12. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD with 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multiple groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  13. Validating the JobFit system functional assessment method

    Energy Technology Data Exchange (ETDEWEB)

    Jenny Legge; Robin Burgess-Limerick

    2007-05-15

    Workplace injuries are costing the Australian coal mining industry and its communities $410 million a year. This ACARP study aims to address this by developing a safe, reliable and valid pre-employment functional assessment tool. All JobFit System Pre-Employment Functional Assessments (PEFAs) consist of a musculoskeletal screen, balance test, aerobic fitness test and job-specific postural tolerances and material handling tasks. The results of each component are compared to the applicant's job demands and an overall PEFA score between 1 and 4 is given, with 1 being the best score. The reliability study and validity study were conducted concurrently. The reliability study examined test-retest, intra-tester and inter-tester reliability of the JobFit System Functional Assessment Method. Overall, good to excellent reliability was found, which was sufficient to be used for comparison with injury data for determining the validity of the assessment. The overall assessment score and material handling tasks had the greatest reliability. The validity study compared the assessment results of 336 records from a Queensland underground and open cut coal mine with their injury records. A predictive relationship was found between PEFA score and the risk of a back/trunk/shoulder injury from manual handling. An association was also found between a PEFA score of 1 and increased length of employment. Lower aerobic fitness test results had an inverse relationship with injury rates. The study found that underground workers, regardless of PEFA score, were more likely to have an injury when compared to other departments. No relationship was found between age and risk of injury. These results confirm the validity of the JobFit System Functional Assessment method.

  14. Validation for chromatographic and electrophoretic methods

    OpenAIRE

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  15. Validation of Calculations in a Digital Thermometer Firmware

    Science.gov (United States)

    Batagelj, V.; Miklavec, A.; Bojkovski, J.

    2014-04-01

    State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for converting the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials and logarithms, and must be evaluated using both standard coefficients and probe-specific calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. For the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating these numerical procedures performed in the firmware of the instrument is required. A software architecture which allows a simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
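The black-box validation scheme described above can be sketched in a few lines: query the instrument's exposed conversion function with known inputs and compare against an independently computed reference. The sketch below uses the standard IEC 60751 Callendar-Van Dusen relation for a Pt100 resistance thermometer above 0 °C; the function names and the 1 mK tolerance are illustrative assumptions, not details from the paper.

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a Pt100 (t >= 0 degC).
# A real thermometer would also apply probe-specific calibration coefficients.
R0 = 100.0          # resistance at 0 degC, in ohm
A = 3.9083e-3
B = -5.775e-7

def resistance_to_temperature(r_ohm):
    """Reference conversion: invert R(t) = R0*(1 + A*t + B*t^2) for t >= 0 degC."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohm / R0))) / (2.0 * B)

def validate_firmware(firmware_convert, test_resistances, tol_mk=1.0):
    """Black-box check: compare the instrument's exposed conversion function
    against the reference, requiring agreement well below the thermometer's
    effective resolution. Returns (passed, worst error in millikelvin)."""
    worst = 0.0
    for r in test_resistances:
        err_mk = abs(firmware_convert(r) - resistance_to_temperature(r)) * 1000.0
        worst = max(worst, err_mk)
    return worst <= tol_mk, worst

# Stand-in for the function the instrument would expose over its interface.
ok, worst_mk = validate_firmware(resistance_to_temperature, [100.0, 119.4, 138.5])
```

In practice `firmware_convert` would be a wrapper that sends a resistance value over the communication interface and reads back the instrument's computed temperature.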

  16. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. 
During the

  17. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  18. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than they do the chosen reference data. In aggregate, the simulations of land-surface latent and
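The root-mean-square error statistic used above to aggregate differences between a simulated field and a reference data set can be sketched generically. Global fields live on a latitude-longitude grid, so grid rows are conventionally weighted by the cosine of latitude; the toy fields and variable names here are illustrative, not AMIP II data.

```python
import math

def area_weighted_rmse(sim, ref, lats):
    """RMSE between a simulated and a reference field on a lat-lon grid,
    weighted by cos(latitude) so that polar rows do not dominate.
    sim, ref: lists of rows (one row per latitude); lats: latitudes in degrees."""
    num = 0.0
    den = 0.0
    for row_s, row_r, lat in zip(sim, ref, lats):
        w = math.cos(math.radians(lat))
        for s, r in zip(row_s, row_r):
            num += w * (s - r) ** 2
            den += w
    return math.sqrt(num / den)

# Toy example with hypothetical surface air temperatures (K) on a 2x2 grid.
sim = [[288.0, 289.0], [290.0, 291.0]]
ref = [[287.5, 289.5], [290.0, 290.0]]
rmse = area_weighted_rmse(sim, ref, lats=[0.0, 45.0])
```

The same function applied with an alternative validation data set in place of `sim` would give the observational-uncertainty estimate described in the abstract.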

  19. Development and validation of sodium fire analysis code ASSCOPS

    International Nuclear Information System (INIS)

    Ohno, Shuji

    2001-01-01

    A version 2.1 of the ASSCOPS sodium fire analysis code was developed to evaluate the thermal consequences of a sodium leak and consequent fire in LMFBRs. This report describes the computational models and the validation studies using the code. The ASSCOPS calculates sodium droplet and pool fire, and consequential heat/mass transfer behavior. Analyses of sodium pool or spray fire experiments confirmed that this code and parameters used in the validation studies gave valid results on the thermal consequences of sodium leaks and fires. (author)

  20. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss...... time series prediction benchmark demonstrate the potential of the linear unlearning technique...

  1. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
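The "combined" cross-validation design, pooling held-out samples across the cross-validation loops and computing a single statistic on the pooled set rather than averaging per-fold scores, can be sketched generically. The survival-specific peeling criteria (hazard ratio, log-rank statistic) are omitted here; all names and the toy model are illustrative assumptions.

```python
import random

def combined_cv_estimate(data, fit, score, k=5, seed=0):
    """K-fold cross-validation in which held-out predictions are pooled
    ("combined") across folds and a single score is computed on the pooled
    set, instead of averaging one score per fold."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    pooled_true, pooled_pred = [], []
    for fold in folds:
        held_out = set(fold)
        train = [data[i] for i in idx if i not in held_out]
        model = fit(train)                      # fit on the k-1 training folds
        for i in fold:                          # predict on the held-out fold
            x, y = data[i]
            pooled_true.append(y)
            pooled_pred.append(model(x))
    return score(pooled_true, pooled_pred)      # one statistic on pooled samples

# Toy usage: the "model" is just the training-set mean of y,
# scored by mean absolute error on the pooled held-out samples.
data = [(x, 2.0 * x) for x in range(20)]
fit = lambda train: (lambda x, m=sum(y for _, y in train) / len(train): m)
mae = combined_cv_estimate(
    data, fit, lambda t, p: sum(abs(a - b) for a, b in zip(t, p)) / len(t))
```

Pooling matters for statistics such as the log-rank test that are unstable on small per-fold samples, which is the motivation given in the abstract.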

  2. Validation of the Reflux Disease Questionnaire into Greek

    Directory of Open Access Journals (Sweden)

    Eirini Oikonomidou

    2012-09-01

    Full Text Available Primary care physicians face challenges in diagnosing and managing gastroesophageal reflux disease (GERD). The Reflux Disease Questionnaire (RDQ) meets the standards of validity, reliability, and practicability. This paper reports on the validation of the Greek translation of the RDQ. The RDQ is a condition-specific instrument. For the validation of the questionnaire, the internal consistency of its items was established using Cronbach's alpha coefficient. The reproducibility (test-retest reliability) was measured by the kappa correlation coefficient, and criterion validity was calculated against the diagnosis of another questionnaire already translated and validated into Greek (IDGP) using the kappa correlation coefficient. A factor analysis was also performed. The Greek RDQ showed a high overall internal consistency (alpha value: 0.91) for individual comparison. All 8 items regarding heartburn and regurgitation (GERD) had good reproducibility (Cohen's κ = 0.60-0.79), while the remaining 4 items about dyspepsia had moderate reproducibility (Cohen's κ = 0.40-0.59). The kappa coefficient for criterion validity for GERD was rather poor (0.20, 95% CI: 0.04, 0.36) and the overall agreement between the results of the RDQ questionnaire and those based on the IDGP questionnaire was 70.5%. Factor analysis indicated 3 factors with eigenvalues over 1.0, together responsible for 76.91% of the variance. Regurgitation items correlated more strongly with the third component, but pain behind the sternum and upper stomach pain correlated with the second component. The Greek version of the RDQ seems to be a reliable and valid instrument following the pattern of the original questionnaire, and could be used in primary care research in Greece.
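For reference, Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from item-level scores. The sketch below is a generic implementation with made-up data, not the study's questionnaire responses.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from item_scores: a list of items, each a list of
    the respondents' scores on that item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def var(xs):
        # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(item) for item in item_scores) / var(totals))

# Hypothetical 3-item, 4-respondent example (not data from the study).
alpha = cronbach_alpha([[3, 4, 5, 2], [3, 5, 4, 2], [2, 4, 5, 3]])
```

Values near the 0.91 reported in the abstract indicate that the items measure a common construct consistently enough for individual-level comparison.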

  3. Guidelines for the verification and validation of expert system software and conventional software: Validation scenarios. Volume 6

    International Nuclear Information System (INIS)

    Mirsky, S.M.; Hayes, J.E.; Miller, L.A.

    1995-03-01

    This report is the sixth volume in a series of reports describing the results of the Expert System Verification and Validation (V&V) project which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity was concerned with the development of a methodology for selecting validation scenarios and subsequently applying it to two expert systems used for nuclear utility applications. Validation scenarios were defined and classified into five categories: PLANT, TEST, BASICS, CODE, and LICENSING. A sixth type, REGRESSION, is a composite of the others and refers to the practice of using trusted scenarios to ensure that modifications to software did not change unmodified functions. Rationale was developed for preferring scenarios selected from the categories in the order listed and for determining under what conditions to select scenarios from other types. A procedure incorporating all of the recommendations was developed as a generalized method for generating validation scenarios. The procedure was subsequently applied to two expert systems used in the nuclear industry and was found to be effective, given that an experienced nuclear engineer made the final scenario selections. A method for generating scenarios directly from the knowledge base component was suggested.

  4. Guidelines for the verification and validation of expert system software and conventional software: Validation scenarios. Volume 6

    Energy Technology Data Exchange (ETDEWEB)

    Mirsky, S.M.; Hayes, J.E.; Miller, L.A. [Science Applications International Corp., McLean, VA (United States)

    1995-03-01

    This report is the sixth volume in a series of reports describing the results of the Expert System Verification and Validation (V&V) project which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity was concerned with the development of a methodology for selecting validation scenarios and subsequently applying it to two expert systems used for nuclear utility applications. Validation scenarios were defined and classified into five categories: PLANT, TEST, BASICS, CODE, and LICENSING. A sixth type, REGRESSION, is a composite of the others and refers to the practice of using trusted scenarios to ensure that modifications to software did not change unmodified functions. Rationale was developed for preferring scenarios selected from the categories in the order listed and for determining under what conditions to select scenarios from other types. A procedure incorporating all of the recommendations was developed as a generalized method for generating validation scenarios. The procedure was subsequently applied to two expert systems used in the nuclear industry and was found to be effective, given that an experienced nuclear engineer made the final scenario selections. A method for generating scenarios directly from the knowledge base component was suggested.

  5. Site characterization and validation - Inflow to the validation drift

    International Nuclear Information System (INIS)

    Harding, W.G.C.; Black, J.H.

    1992-01-01

    Hydrogeological experiments have had an essential role in the characterization of the drift site on the Stripa project. This report focuses on the methods employed and the results obtained from inflow experiments performed on the excavated drift in stage 5 of the SCV programme. Inflows were collected in sumps on the floor, in plastic sheeting on the upper walls and ceiling, and measured by means of differential humidity of ventilated air at the bulkhead. Detailed evaporation experiments were also undertaken on uncovered areas of the excavated drift. The inflow distribution was determined on the basis of a system of roughly equal sized grid rectangles. The results have highlighted the overriding importance of fractures in the supply of water to the drift site. The validation drift experiment has revealed that in excess of 99% of inflow comes from a 5 m section corresponding to the 'H' zone, and that as much as 57% was observed coming from a single grid square (267). There was considerable heterogeneity even within the 'H' zone, with 38% of such sample areas yielding no flow at all. Model predictions in stage 4 underestimated the very substantial declines in inflow observed in the validation drift when compared to the SDE; this was especially so in the 'good' rock areas. Increased drawdowns in the drift have generated less flow and reduced head responses in nearby boreholes by a similar proportion. This behaviour has been the focus for considerable study in the latter part of the SCV project, and a number of potential processes have been proposed. These include 'transience', stress redistribution resulting from the creation of the drift, chemical precipitation, blast-induced dynamic unloading and related gas intrusion, and degassing. (au)

  6. DTU PMU Laboratory Development - Testing and Validation

    OpenAIRE

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.; Nielsen, Arne Hejde; Østergaard, Jacob

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested from the authors' previous efforts, where the response can be expected to foll...

  7. Validating estimates of problematic drug use in England

    Directory of Open Access Journals (Sweden)

    Heatlie Heath

    2007-10-01

    Full Text Available Abstract Background UK Government expenditure on combatting drug abuse is based on estimates of illicit drug users, yet the validity of these estimates is unknown. This study aims to assess the face validity of problematic drug use (PDU) and injecting drug use (IDU) estimates for all English Drug Action Teams (DATs) in 2001. The estimates were derived from a statistical model using the Multiple Indicator Method (MIM). Methods Questionnaire study, in which the 149 English Drug Action Teams were asked to evaluate the MIM estimates for their DAT. Results The response rate was 60% and there were no indications of selection bias. Of responding DATs, 64% thought the PDU estimates were about right or did not dispute them, while 27% had estimates that were too low and 9% too high. The figures for the IDU estimates were 52% (about right), 44% (too low) and 3% (too high). Conclusion This is the first UK study to determine the validity of estimates of problematic and injecting drug misuse. The results of this paper highlight the need to consider criterion and face validity when evaluating estimates of the number of drug users.

  8. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public... ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates specified by the organization responsible for maintaining that code set. ...

  9. Validation of the TEXSAN thermal-hydraulic analysis program

    International Nuclear Information System (INIS)

    Burns, S.P.; Klein, D.E.

    1992-01-01

The TEXSAN thermal-hydraulic analysis program has been developed by the University of Texas at Austin (UT) to simulate buoyancy-driven fluid flow and heat transfer in spent fuel and high level nuclear waste (HLW) shipping applications. As part of the TEXSAN software quality assurance program, the software has been subjected to a series of test cases intended to validate its capabilities. The validation tests include many physical phenomena which arise in spent fuel and HLW shipping applications. This paper describes some of the principal results of the TEXSAN validation tests and compares them to solutions available in the open literature. The TEXSAN validation effort has shown that the TEXSAN program is stable and consistent under a range of operating conditions and provides accuracy comparable with other heat transfer programs and evaluation techniques. The modeling capabilities and the interactive user interface employed by the TEXSAN program should make it a useful tool in HLW transportation analysis.

  10. A cross-validation package driving Netica with python

    Science.gov (United States)

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications for prediction versus description are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
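The trade-off the abstract describes, in which more complex models fit the training data better but predict held-out data worse, can be sketched with a generic k-fold cross-validation loop. The example below uses NumPy and a polynomial model as a stand-in for a Bayesian network, since Netica itself is proprietary; none of the names here come from CVNetica's API.

```python
import numpy as np

def kfold_cv_mse(x, y, degree, k=5, seed=0):
    """Mean out-of-fold squared error for a polynomial model of a given degree."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coef, x[test])                # predict the held-out fold
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

# Noisy linear data: a degree-9 model tracks the noise, so its
# out-of-fold error exceeds that of the simple degree-1 model.
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0.0, 0.1, 30)
print(kfold_cv_mse(x, y, degree=1), kfold_cv_mse(x, y, degree=9))
```

The same pattern (train on all folds but one, score on the held-out fold, average) applies whatever the model class; only the fit and predict calls change.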

  11. Further Validation of the Coach Identity Prominence Scale

    Science.gov (United States)

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  12. Automated ensemble assembly and validation of microbial genomes

    Science.gov (United States)

    2014-01-01

    Background The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often impossible or unfeasible. Results To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Conclusions Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to

  13. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set than in the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
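A minimal sketch of the quantities involved: the c-statistic is the fraction of event/non-event pairs that the model ranks correctly, and a permutation test shuffles records between the development and validation sets to judge whether an observed drop in the c-statistic exceeds chance. This is a generic illustration of the idea, not the exact test evaluated in the paper.

```python
import numpy as np

def c_statistic(score, y):
    """Concordance (AUC): fraction of event/non-event pairs ranked correctly."""
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    # ties count as half a correct pair
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

def perm_test_c(score_dev, y_dev, score_val, y_val, n_perm=2000, seed=0):
    """Permutation p-value for the observed drop in c-statistic between settings.

    Assumes both outcome classes occur in each permuted split.
    """
    rng = np.random.default_rng(seed)
    observed = c_statistic(score_dev, y_dev) - c_statistic(score_val, y_val)
    score = np.concatenate([score_dev, score_val])
    y = np.concatenate([y_dev, y_val])
    n_dev, count = len(y_dev), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(y))
        d, v = idx[:n_dev], idx[n_dev:]
        if c_statistic(score[d], y[d]) - c_statistic(score[v], y[v]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

The paper's point can be read off this construction: permuting records mixes the two case-mixes together, so the test detects any difference between settings, not specifically miscalibrated coefficients.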

  14. Validation of non-formal and informal learning from a European perspective – linking validation arrangements with national qualifications frameworks

    Directory of Open Access Journals (Sweden)

    Borut Mikulec

    2015-12-01

Full Text Available The paper analyses European policy on the validation of non-formal and informal learning, which is presented as a “salvation narrative” that can improve the functioning of the labour market, provide a way out of unemployment and strengthen the competitiveness of the economy. Taking as our starting point recent findings in adult education theory on the validation of non-formal and informal learning, we aim to support the thesis that European validation policy promotes above all an economic purpose, and that it establishes a “Credential/Credit-exchange” model of validation of non-formal and informal learning. We proceed to examine the effect of European VNIL policy in selected European countries where validation arrangements are linked to the qualifications framework. We find that the “Credential/Credit-exchange” validation model was first established in a few individual European countries and then transferred, as a “successful” model, to the level of common European VNIL policy.

  15. Validation of self-reported erythema

    DEFF Research Database (Denmark)

    Petersen, B; Thieden, E; Lerche, C M

    2013-01-01

Most epidemiological data of sunburn related to skin cancer have come from self-reporting in diaries and questionnaires. We thought it important to validate the reliability of such data.

  16. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    Science.gov (United States)

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
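The leave-one-out idea behind the validation statistic can be illustrated with a simple fixed-effect (inverse-variance) meta-analysis, in which each study is checked against the pooled estimate of the remaining studies. The exact definition and null distribution of Vn are given in the paper; the sketch below shows only the cross-validation step, under a fixed-effect assumption.

```python
import numpy as np

def pool(est, var):
    """Inverse-variance pooled estimate and its variance (fixed-effect model)."""
    w = 1.0 / var
    return float((w * est).sum() / w.sum()), float(1.0 / w.sum())

def loo_z(est, var):
    """Leave-one-out z-scores: each study vs. the pooled estimate of the others."""
    z = []
    for i in range(len(est)):
        mask = np.arange(len(est)) != i
        mu, v = pool(est[mask], var[mask])
        # standardize by the left-out study's variance plus the pool's variance
        z.append((est[i] - mu) / np.sqrt(var[i] + v))
    return np.array(z)
```

In a perfectly homogeneous meta-analysis these z-scores are all zero; large values flag studies whose left-out estimate is not predicted by the rest, which is the sense of "statistical validity" the abstract links to homogeneity.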

  17. Development and validation of the short-form Adolescent Health Promotion Scale.

    Science.gov (United States)

    Chen, Mei-Yen; Lai, Li-Ju; Chen, Hsiu-Chih; Gaete, Jorge

    2014-10-26

    Health-promoting lifestyle choices of adolescents are closely related to current and subsequent health status. However, parsimonious yet reliable and valid screening tools are scarce. The original 40-item adolescent health promotion (AHP) scale was developed by our research team and has been applied to measure adolescent health-promoting behaviors worldwide. The aim of our study was to examine the psychometric properties of a newly developed short-form version of the AHP (AHP-SF) including tests of its reliability and validity. The study was conducted in nine middle and high schools in southern Taiwan. Participants were 814 adolescents randomly divided into two subgroups with equal size and homogeneity of baseline characteristics. The first subsample (calibration sample) was used to modify and shorten the factorial model while the second subsample (validation sample) was utilized to validate the result obtained from the first one. The psychometric testing of the AHP-SF included internal reliability of McDonald's omega and Cronbach's alpha, convergent validity, discriminant validity, and construct validity with confirmatory factor analysis (CFA). The results of the CFA supported a six-factor model and 21 items were retained in the AHP-SF with acceptable model fit. For the discriminant validity test, results indicated that adolescents with lower AHP-SF scores were more likely to be overweight or obese, skip breakfast, and spend more time watching TV and playing computer games. The AHP-SF also showed excellent internal consistency with a McDonald's omega of 0.904 (Cronbach's alpha 0.905) in the calibration group. The current findings suggest that the AHP-SF is a valid and reliable instrument for the evaluation of adolescent health-promoting behaviors. Primary health care providers and clinicians can use the AHP-SF to assess these behaviors and evaluate the outcome of health promotion programs in the adolescent population.

  18. Active Transportation Demand Management (ATDM) Trajectory Level Validation

    Data.gov (United States)

    Department of Transportation — The ATDM Trajectory Validation project developed a validation framework and a trajectory computational engine to compare and validate simulated and observed vehicle...

  19. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot reliably measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for sequential sampling of metamodels
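Kriging is Gaussian-process regression, so the mean and variance of the predicted response that such a validation criterion relies on are available in closed form. The following is a minimal noise-free GP predictor with an RBF kernel; it illustrates the quantities involved, not the authors' specific validation technique or sampling scheme.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(xtr, ytr, xte, ell=1.0, jitter=1e-8):
    """Predictive mean and standard deviation of a noise-free GP (kriging model)."""
    K = rbf(xtr, xtr, ell) + jitter * np.eye(len(xtr))  # jitter for stability
    Ks = rbf(xte, xtr, ell)
    Kss = rbf(xte, xte, ell)
    alpha = np.linalg.solve(K, ytr)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Predictive mean interpolates the samples; predictive variance collapses
# to (near) zero at sampled points and grows away from them.
mean, sd = gp_predict(np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0, 4.0]),
                      np.array([0.5, 1.5]))
print(mean, sd)
```

The collapse of the predictive variance at sampled points is the property a sequential sampling criterion exploits: new samples are placed where the predicted variance of the response remains large.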

  20. Validity Semantics in Educational and Psychological Assessment

    Science.gov (United States)

    Hathcoat, John D.

    2013-01-01

The semantics, or meaning, of validity is a fluid concept in educational and psychological testing. Contemporary controversies surrounding this concept appear to stem from disagreement over the proper location of validity. Under one view, validity is a property of score-based inferences and entailed uses of test scores. This view is challenged by the…

  1. Validation of the Child Sport Cohesion Questionnaire

    Science.gov (United States)

    Martin, Luc J.; Carron, Albert V.; Eys, Mark A.; Loughead, Todd

    2013-01-01

    The purpose of the present study was to test the validity evidence of the Child Sport Cohesion Questionnaire (CSCQ). To accomplish this task, convergent, discriminant, and known-group difference validity were examined, along with factorial validity via confirmatory factor analysis (CFA). Child athletes (N = 290, M[subscript age] = 10.73 plus or…

  2. Establishing construct validity for the thyroid-specific patient reported outcome measure (ThyPRO)

    DEFF Research Database (Denmark)

    Watt, Torquil; Bjorner, Jakob Bue; Groenvold, Mogens

    2009-01-01

, evaluating lack of convergent validity (item-own scale polyserial correlation higher than item-own scale correlation) of the hypothesized scale structure. Analyses were repeated in clinical and sociodemographic subgroups and with Pearson...... complete convergent validity and only two instances of lack of discriminant validity. Pearson correlations yielded similar results. Across all subgroups, convergent validity was complete, and discriminant validity was found in 99.2% of tests. Lack of discriminant validity was mainly between physical...... correlations. Reliability was estimated by Cronbach's alpha, both conventionally and with polychoric correlations. RESULTS: In total, 904 patients (69%) responded. Initial multitrait scaling analysis identified 25 scaling errors. Twelve items were omitted from the scale structure, and a re-analysis showed...

  3. Validation of asthma recording in electronic health records: a systematic review

    Directory of Open Access Journals (Sweden)

    Nissen F

    2017-12-01

Full Text Available Francis Nissen,1 Jennifer K Quint,2 Samantha Wilkinson,1 Hana Mullerova,3 Liam Smeeth,1 Ian J Douglas1 1Department of Non-Communicable Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK; 2National Heart and Lung Institute, Imperial College, London, UK; 3RWD & Epidemiology, GSK R&D, Uxbridge, UK Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential if these databases are to be used for credible epidemiological asthma research. Methods: We searched the EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data, including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]), were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated high validity using at least one case definition (PPV >80%). Ten studies used manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%.
Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation
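The validation statistics tabulated in the review reduce to simple ratios over a two-by-two table of database case definitions against the reference standard; a minimal helper (the function name and example counts are illustrative, not from the review):

```python
def validation_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 validation table.

    tp: records flagged by the case definition and confirmed by the reference
    standard; fp: flagged but not confirmed; fn: true cases the definition
    missed; tn: correctly unflagged non-cases.
    """
    return {
        "sensitivity": tp / (tp + fn),  # true cases the case definition catches
        "specificity": tn / (tn + fp),  # non-cases correctly left unflagged
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records that are true non-cases
    }

# e.g. 80 confirmed cases out of 100 flagged records meets the
# PPV >80% threshold discussed in the review (at the boundary)
print(validation_stats(tp=80, fp=20, fn=10, tn=90)["ppv"])  # prints 0.8
```

Note that PPV depends on how common asthma is in the database population, which is one reason the same case definition can validate differently across data sources.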

  4. Validation of the Social Inclusion Scale with Students

    Directory of Open Access Journals (Sweden)

    Ceri Wilson

    2015-07-01

Full Text Available Interventions (such as participatory arts projects) aimed at increasing social inclusion are increasingly in operation, as social inclusion is proving to play a key role in recovery from mental ill health and the promotion of mental wellbeing. These interventions require evaluation with a systematically developed and validated measure of social inclusion; however, a “gold-standard” measure does not yet exist. The Social Inclusion Scale (SIS) has three subscales measuring social isolation, relations and acceptance. This scale has been partially validated with arts and mental health project users, demonstrating good internal consistency. However, test-retest reliability and construct validity require assessment, along with validation in the general population. The present study aimed to validate the SIS in a sample of university students. Test-retest reliability, internal consistency, and convergent validity (one aspect of construct validity) were assessed by comparing SIS scores with scores on other measures of social inclusion and related concepts. Participants completed the measures at two time-points seven to 14 days apart. The SIS demonstrated high internal consistency and test-retest reliability, although convergent validity was less well established, and possible reasons for this are discussed. This systematic validation of the SIS represents a further step towards the establishment of a “gold-standard” measure of social inclusion.

  5. Validation of WIMS-AECL/(MULTICELL)/RFSP system by the results of phase-B test at Wolsung-II unit

    Energy Technology Data Exchange (ETDEWEB)

    Hong, In Seob; Min, Byung Joo; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

The object of this study is the validation of the WIMS-AECL lattice code, which has been proposed as a substitute for the POWDERPUFS-V (PPV) code. For the validation of this code, the WIMS-AECL/(MULTICELL)/RFSP (lattice calculation/(incremental cross-section calculation)/core calculation) code system has been used for the post-simulation of the Phase-B physics test at the Wolsong-II unit. This code system had been used for the Wolsong-I and Point Lepreau reactors, but after a few modifications of the WIMS-AECL input values for Wolsong-II, the results of the WIMS-AECL/RFSP calculations are much improved over the old ones. Most of the results show good agreement, except for the moderator temperature coefficient test; verification of that result remains as further work. 6 figs., 15 tabs. (Author)

  6. Assessment of Irrational Beliefs: The Question of Discriminant Validity.

    Science.gov (United States)

    Smith, Timothy W.; Zurawski, Raymond M.

    1983-01-01

    Evaluated discriminant validity in frequently used measures of irrational beliefs relative to measures of trait anxiety in college students (N=142). Results showed discriminant validity in the Rational Behavior Inventory but not in the Irrational Beliefs Test and correlated cognitive rather than somatic aspects of trait anxiety with both measures.…

  7. All Validity Is Construct Validity. Or Is It?

    Science.gov (United States)

    Kane, Michael

    2012-01-01

    Paul E. Newton's article on the consensus definition of validity tackles a number of big issues and makes a number of strong claims. I agreed with much of what he said, and I disagreed with a number of his claims, but I found his article to be consistently interesting and thought provoking (whether I agreed or not). I will focus on three general…

  8. INTRA - Maintenance and Validation. Final Report

    International Nuclear Information System (INIS)

    Edlund, Ove; Jahn, Hermann; Yitbarek, Z.

    2002-05-01

The INTRA code is specified by the ITER Joint Central Team and the European Community as a reference code for safety analyses of Tokamak-type fusion reactors. INTRA has been developed by GRS and Studsvik EcoSafe to analyse integrated behaviours such as pressurisation, chemical reactions and temperature transients inside the plasma chamber and adjacent rooms following postulated accidents, e.g. ingress of coolant water or air. Important results of the ICE and EVITA experiments, which became available in early 2001, were used to validate and improve specific INTRA models. Large efforts were spent on the behaviour of water and steam injection into low-pressure volumes at high temperature, as well as on the modelling of boiling of water in contact with hot surfaces. As a result, a new version, INTRA/Mod4, was documented and issued. The work included implementation and validation of selected physical models in the code, maintaining code versions, preparation, review and distribution of code documents, and monitoring of the code-related activities being performed by GRS under a separate contract. The INTRA/Mod4 Manual and Code Description is documented in four volumes: Volume 1 - Physical Modelling, Volume 2 - User's Manual, Volume 3 - Code Structure and Volume 4 - Validation

  9. Validity and Reliability in Social Science Research

    Science.gov (United States)

    Drost, Ellen A.

    2011-01-01

    In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…

  10. Development and Initial Validation of the Need Satisfaction and Need Support at Work Scales: A Validity-Focused Approach

    Directory of Open Access Journals (Sweden)

    Susanne Tafvelin

    2018-01-01

Full Text Available Although the relevance of employee need satisfaction and manager need support have been examined, the integration of self-determination theory (SDT) into work and organizational psychology has been hampered by the lack of validated measures. The purpose of the current study was to develop and validate measures of employees’ perception of need satisfaction (NSa-WS) and need support (NSu-WS) at work that were grounded in SDT. We used three Swedish samples (total N = 1,430) to develop and validate our scales. We used a confirmatory approach including expert panels to assess item content relevance, confirmatory factor analysis for factorial validity, and associations with theoretically warranted outcomes to assess criterion-related validity. Scale reliability was also assessed. We found evidence of content, factorial, and criterion-related validity of our two scales of need satisfaction and need support at work. Further, the scales demonstrated high internal consistency. Our newly developed scales may be used in research and practice to further our understanding of how satisfaction and support of employees’ basic needs influence employee motivation, performance, and well-being. Our study contributes to the current literature by providing (1) scales that are specifically designed for the work context, (2) an example of how expert panels can be used to assess content validity, and (3) tests of theoretically derived hypotheses that, although SDT is built on them, have not been examined before.
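The internal consistency reported for such scales is conventionally computed as Cronbach's alpha over the respondent-by-item score matrix; a minimal sketch (the scale items themselves are not reproduced, and the example matrix is invented):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return float(k / (k - 1) * (1.0 - item_vars / total_var))

# Perfectly parallel items (every respondent gives the same answer to
# each item) yield the maximum alpha of 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Alpha rises when item scores covary strongly relative to their individual variances, which is the sense in which it indexes internal consistency.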

  11. Assessing students' communication skills: validation of a global rating.

    Science.gov (United States)

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences its importance should be emphasised by performance-based assessment. As detailed checklists have been shown to be not well suited for the assessment of communication skills for different reasons, this study aimed to validate a global rating scale. A Canadian instrument was translated to German and adapted to assess students' communication skills during an end-of-semester-OSCE. Subjects were second and third year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: Judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.

  12. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, permits booting any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)

  13. Content validation applied to job simulation and written examinations

    International Nuclear Information System (INIS)

    Saari, L.M.; McCutchen, M.A.; White, A.S.; Huenefeld, J.C.

    1984-08-01

The application of content validation strategies in work settings has become increasingly popular over the last few years, perhaps spurred by an acknowledgment in the courts of content validation as a method for validating employee selection procedures (e.g., Bridgeport Guardians v. Bridgeport Police Dept., 1977). Since criterion-related validation is often difficult to conduct, content validation methods should be investigated as an alternative for determining job-related selection procedures. However, there is not yet consensus among scientists and professionals concerning how content validation should be conducted, perhaps because there is a lack of clear-cut operations for conducting content validation for different types of selection procedures. The purpose of this paper is to discuss two content validation approaches being used for the development of a licensing examination that involves a job simulation exam and a written exam. These represent variations in methods for applying content validation. 12 references

  14. Validation of psychoanalytic theories: towards a conceptualization of references.

    Science.gov (United States)

    Zachrisson, Anders; Zachrisson, Henrik Daae

    2005-10-01

    The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.

  15. Geostatistical validation and cross-validation of magnetometric measurements of soil pollution with Potentially Toxic Elements in problematic areas

    Science.gov (United States)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2016-04-01

    Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices for measuring soil magnetic susceptibility at the soil surface is the Bartington MS2D. A single reading of soil magnetic susceptibility with the MS2D takes little time, but it is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Consequently, measured values of soil magnetic susceptibility usually have to be validated against more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze methods of validating magnetometric measurements against chemical analyses of element concentrations in soil. Additionally, surface measurements of soil magnetic susceptibility were validated using selected parameters of the distribution of magnetic susceptibility in the soil profile. Validation was performed using selected geostatistical measures of cross-correlation, and the geostatistical approach was compared with validation based on classical statistics. Measurements were performed at selected areas located in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas soil magnetic susceptibility was measured on the soil surface with an MS2D Bartington device and in the soil profile with an MS2C Bartington device; additionally, soil samples were taken for chemical analysis. Acknowledgment: The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.

  16. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case, and it has a residual character: the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model and can also serve as a guide in the design of subsequent experiments. Three steps can be well differentiated. (1) Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) or MCSA (Monte-Carlo sensitivity analysis). (2) Search for the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find these domains. (3) Residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a test cell of LECE at CIEMAT, Spain. (Author) 17 refs
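
    The Monte-Carlo sensitivity analysis (MCSA) step above can be sketched in a few lines: sample the uncertain inputs, run the model, and rank the inputs by their correlation with the output. The toy building-energy model and parameter ranges below are hypothetical stand-ins, not the CIEMAT test-cell model.

```python
import random

def pearson(a, b):
    # Sample Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def model(u_wall, solar_gain):
    # Toy thermal model (hypothetical): heating demand rises with the
    # wall U-value and falls with solar gains.
    return 10.0 * u_wall - 2.0 * solar_gain + 0.5 * u_wall * solar_gain

def mc_sensitivity(n=5000, seed=1):
    # Sample inputs uniformly over their assumed domains, run the model,
    # and report each input's correlation with the output.
    rng = random.Random(seed)
    u = [rng.uniform(0.2, 2.0) for _ in range(n)]   # wall U-value, W/m2K
    g = [rng.uniform(0.0, 5.0) for _ in range(n)]   # solar gain, kWh/day
    y = [model(ui, gi) for ui, gi in zip(u, g)]
    return {"u_wall": pearson(u, y), "solar_gain": pearson(g, y)}

sens = mc_sensitivity()
```

    Here the U-value dominates the output variance, so its correlation magnitude is the larger of the two; a DSA would instead perturb one input at a time around a base case.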

  17. Validity of a Measure of Assertiveness

    Science.gov (United States)

    Galassi, John P.; Galassi, Merna D.

    1974-01-01

    This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self-and judges' ratings of assertiveness. (Author)

  18. Thermal-hydraulic codes validation for safety analysis of NPPs with RBMK

    International Nuclear Information System (INIS)

    Brus, N.A.; Ioussoupov, O.E.

    2000-01-01

    This work is devoted to the validation of western thermal-hydraulic codes (RELAP5/MOD3.2 and ATHLET 1.1 Cycle C) as applied to Russian-designed light water reactors. Such validation is needed because of the features of the RBMK reactor design and its thermal-hydraulics in comparison with the PWR and BWR reactors for which these codes were developed and validated. The validation studies conclude with a comparison of the calculation results obtained with the thermal-hydraulic codes against experimental data obtained earlier at thermal-hydraulic test facilities. (authors)

  19. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded and transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
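
    The Content Validity Index reported above is straightforward to compute. A rough sketch follows, with hypothetical ratings and using the common convention that ratings of 3 or 4 on a 4-point relevance scale count as agreement:

```python
def item_cvi(ratings):
    # Item-level CVI: proportion of experts rating the item 3 ("relevant")
    # or 4 ("highly relevant") on the 4-point scale.
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(matrix):
    # Scale-level CVI (averaging method): mean of the item-level CVIs.
    items = list(zip(*matrix))   # transpose experts x items -> items x experts
    cvis = [item_cvi(it) for it in items]
    return cvis, sum(cvis) / len(cvis)

# Hypothetical ratings: 5 experts x 4 items, each on the 1-4 scale.
ratings = [
    [4, 3, 2, 4],
    [4, 4, 3, 4],
    [3, 4, 2, 4],
    [4, 3, 3, 3],
    [4, 4, 1, 4],
]
item_scores, overall = scale_cvi(ratings)
```

    An item-level CVI below a chosen cut-off (0.8 is a common choice) flags the item for revision, which is how low-scoring algorithm items would be identified.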

  20. Validity and Reliability Testing of an e-learning Questionnaire for Chemistry Instruction

    Science.gov (United States)

    Guspatni, G.; Kurniawati, Y.

    2018-04-01

    The aim of this paper is to examine the validity and reliability of a questionnaire used to evaluate e-learning implementation in chemistry instruction. Questionnaires were completed by 48 students who had studied chemistry through an e-learning system. The questionnaire consisted of 20 indicators evaluating students' perception of using e-learning. Parametric tests were used, as the data were assumed to follow a normal distribution. Item validity was examined through item-total correlation using Pearson's formula, while reliability was assessed with Cronbach's alpha. Moreover, convergent validity was assessed to see whether the indicators building a factor had theoretically the same underlying construct. Validity testing revealed 19 valid indicators, and reliability testing yielded a Cronbach's alpha of .886. Factor analysis showed that the questionnaire consisted of five factors, each with indicators building the same construct. This article shows the importance of factor analysis for obtaining a construct-valid questionnaire before it is used as a research instrument.
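
    The two statistics named above, item-total correlation and Cronbach's alpha, can be sketched directly from their textbook formulas; the response matrix below is hypothetical, not the study's data:

```python
def pearson(a, b):
    # Sample Pearson correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def variance(xs):
    # Unbiased sample variance.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    # where items is one list of scores per item (same respondent order).
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def item_total_correlations(items):
    # Pearson correlation of each item with the respondents' total scores.
    totals = [sum(scores) for scores in zip(*items)]
    return [pearson(i, totals) for i in items]

# Hypothetical responses: 3 items x 6 respondents on a 5-point scale.
items = [
    [5, 4, 3, 4, 2, 5],
    [4, 4, 2, 5, 1, 5],
    [5, 3, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
rits = item_total_correlations(items)
```

    Items whose item-total correlation falls below a chosen threshold would be dropped before reporting the scale's alpha, mirroring the 19-of-20 indicator result above.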

  1. Development and validation of a premature ejaculation diagnostic tool.

    Science.gov (United States)

    Symonds, Tara; Perelman, Michael A; Althof, Stanley; Giuliano, François; Martin, Mona; May, Kathryn; Abraham, Lucy; Crossland, Anna; Morris, Mark

    2007-08-01

    Diagnosis of premature ejaculation (PE) for clinical trial purposes has typically relied on intravaginal ejaculation latency time (IELT) for entry, but this parameter does not capture the multidimensional nature of PE. The aim was therefore to develop a brief, multidimensional, psychometrically validated instrument for diagnosing PE status. The questionnaire development involved three stages: (1) five focus groups and six individual interviews were conducted to develop the content; (2) psychometric validation using three different groups of men; and (3) generation of a scoring system. For psychometric validation and development of the scoring system, data were collected from (1) men with PE based on clinician diagnosis using DSM-IV-TR, who also had IELTs at or below a defined cut-off, and (2) men without PE; on the resulting scoring system, total scores of > or = 11 indicate PE. The development and validation of this new PE diagnostic tool has resulted in a new, user-friendly, and brief self-report questionnaire for use in clinical trials to diagnose PE.

  2. Reliability and validity of the McDonald Play Inventory.

    Science.gov (United States)

    McDonald, Ann E; Vigen, Cheryl

    2012-01-01

    This study examined the ability of a two-part self-report instrument, the McDonald Play Inventory, to reliably and validly measure the play activities and play styles of 7- to 11-yr-old children and to discriminate between the play of neurotypical children and children with known learning and developmental disabilities. A total of 124 children ages 7-11 recruited from a sample of convenience and a subsample of 17 parents participated in this study. Reliability estimates yielded moderate correlations for internal consistency, total test intercorrelations, and test-retest reliability. Validity estimates were established for content and construct validity. The results suggest that a self-report instrument yields reliable and valid measures of a child's perceived play performance and discriminates between the play of children with and without disabilities. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  3. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    Science.gov (United States)

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose To evaluate factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis using de-identified data from employees who completed an annual Health Assessment between the years 2009-2015 tested research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations between the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables while weak associations were observed between the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.

  4. Serial album validation for promotion of infant body weight control

    Directory of Open Access Journals (Sweden)

    Nathalia Costa Gonzaga Saraiva

    2018-05-01

    Full Text Available ABSTRACT Objective: to validate the content and appearance of a serial album addressing the prevention and control of body weight for children aged 7 to 10 years. Method: a descriptive methodological study. Thirty-three specialists in educational technologies and/or excess weight in childhood took part in the validation process. An agreement index of 80% was the minimum required to consider the material validated. Results: most of the specialists had a doctoral degree and a graduate degree in nursing. Regarding content, illustrations, layout and relevance, all items were validated, and 69.7% of the experts rated the album as excellent. The overall agreement index for the educational technology was 0.88. Only script-sheet 3 did not reach the cut-off point of the content validity index. Changes were made to the material, such as a title change, inclusion of the school context, and insertion of a nutritionist and a physical educator into the story narrated in the album. Conclusion: the proposed serial album was considered valid by the experts regarding content and appearance, suggesting that this technology has the potential to contribute to health education by promoting healthy weight in the 7 to 10 years age group.

  5. On the validation of risk analysis-A commentary

    International Nuclear Information System (INIS)

    Rosqvist, Tony

    2010-01-01

    Aven and Heide (2009) [1] provided interesting views on the reliability and validation of risk analysis. The four validation criteria they present are contrasted with modelling features of the relative frequency-based and Bayesian approaches to risk analysis. In this commentary I bring forth some issues on validation that partly confirm, and partly suggest changes in, the interpretation of the introduced validation criteria, especially in the context of low-probability, high-consequence systems. The mental model of an expert in assessing probabilities is argued to be a key notion in understanding the validation of a risk analysis.

  6. Validation of KENO-based criticality calculations at Rocky Flats

    International Nuclear Information System (INIS)

    Felsher, P.D.; McKamy, J.N.; Monahan, S.P.

    1992-01-01

    In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum k-eff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation.

  7. Validation of the Vanderbilt Holistic Face Processing Test.

    Science.gov (United States)

    Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  8. Validation of the Vanderbilt Holistic Face Processing Test.

    Directory of Open Access Journals (Sweden)

    Chao-Chih Wang

    2016-11-01

    Full Text Available The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  9. [MusiQol: international questionnaire investigating quality of life in multiple sclerosis: validation results for the German subpopulation in an international comparison].

    Science.gov (United States)

    Flachenecker, P; Vogel, U; Simeoni, M C; Auquier, P; Rieckmann, P

    2011-10-01

    The existing health-related quality of life questionnaires on multiple sclerosis (MS) only partially reflect the patient's point of view on the reduction of activities of daily living, and their development and validation were not performed in different languages. This prompted the development of the Multiple Sclerosis International Quality of Life (MusiQoL) questionnaire as an international multidimensional measurement instrument. This paper presents this new development and the results of the German subgroup compared with the total international sample. A total of 1,992 MS patients from 15 countries, including 209 German patients, took part in the study between January 2004 and February 2005. The patients completed the MusiQoL at baseline and after 21±7 days, together with a symptom-related checklist and the SF-36 short form survey. Demographic, history, and MS classification data were also collected. Reproducibility, sensitivity, and convergent and discriminant validity were analysed. Convergent and discriminant validity and reproducibility were satisfactory for all dimensions of the MusiQoL. The dimensional scores correlated moderately but significantly with the SF-36 scores, but showed a discriminant validity in terms of gender, socioeconomic status and health status that was more pronounced in the overall population than in the German subpopulation. The highest correlations were observed between the MusiQoL dimension of activities of daily living and the Expanded Disability Status Scale (EDSS). The results of this study confirm the validity and reliability of the MusiQoL as an instrument for measuring the quality of life of German and international MS patients.

  10. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, part 1: performed experiments, results and evaluation

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

    This report is the second of the two reports describing the tracer migration experiment in which water and tracer flow has been monitored in a drift at the 385 m level in the Stripa experimental mine. The tracer migration experiment is one of a large number of experiments performed within the Site Characterization and Validation (SCV) project. The upper part of the 50 m long validation drift was covered with approximately 150 plastic sheets, in which the emerging water was collected. The water emerging into the lower part of the drift was collected in short boreholes, sumpholes. Six different tracer mixtures were injected at distances between 10 and 25 m from the drift. The flowrate and tracer monitoring continued for ten months. Tracer breakthrough curves and flowrate distributions were used to study flow paths, velocities, hydraulic conductivities, dispersivities, interaction with the rock matrix and channelling effects within the rock. The present report describes the structure of the observations, the flowrate measurements and estimated hydraulic conductivities. The main part of this report addresses the interpretation of the tracer movement in fractured rock. The tracer movement as measured by the more than 150 individual tracer curves has been analysed with the traditional advection-dispersion model and a subset of the curves with the advection-dispersion-diffusion model. The tracer experiments have permitted the flow porosity, dispersion and interaction with the rock matrix to be studied. (57 refs.)
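
    The advection-dispersion model used to analyse such breakthrough curves has, in its simplest one-dimensional form with a step tracer input and no matrix diffusion, a closed-form solution: C/C0 = 1/2 erfc((x - vt) / (2 sqrt(D_L t))), the leading term of the Ogata-Banks solution. A minimal sketch follows; the path length, velocity, and dispersion coefficient are illustrative, not Stripa values.

```python
import math

def breakthrough(x, t, v, dl):
    # Relative concentration C/C0 at distance x and time t for the 1-D
    # advection-dispersion equation with a step tracer input (leading term
    # of the Ogata-Banks solution; matrix diffusion ignored).
    if t <= 0:
        return 0.0
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(dl * t)))

# Illustrative flow path: 15 m travel distance, 0.05 m/h advection velocity,
# 0.1 m^2/h longitudinal dispersion coefficient.
curve = [breakthrough(15.0, t, 0.05, 0.1) for t in (100.0, 300.0, 600.0, 1200.0)]
```

    Fitting v and D_L of this curve to a measured breakthrough gives the velocity and dispersivity estimates; the advection-dispersion-diffusion model adds a matrix-diffusion term that this sketch omits.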

  11. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
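
    The contrast between a cumulative metric and a median-based metric can be illustrated with a toy computation: a single badly-modelled point dominates the mean relative error but leaves the median untouched. The cross-section values below are hypothetical, and these formulas are simplified stand-ins for the paper's actual uncertainty metrics.

```python
def relative_errors(model, experiment):
    # Pointwise relative difference between model predictions and
    # experimental values (hypothetical stand-ins for cross sections).
    return [abs(m - e) / abs(e) for m, e in zip(model, experiment)]

def cumulative_metric(errors):
    # Aggregate accuracy over the whole database: mean relative error.
    return sum(errors) / len(errors)

def median_metric(errors):
    # Median relative error: robust to the few badly-modelled points
    # that can dominate a sparse database.
    s = sorted(errors)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# Hypothetical cross sections (mb): model vs. experiment; the last point
# is deliberately poorly modelled.
exp_xs   = [100.0, 80.0, 60.0, 40.0, 20.0]
model_xs = [110.0, 76.0, 63.0, 42.0, 35.0]
errs = relative_errors(model_xs, exp_xs)
```

    Here one outlier pushes the mean relative error to 0.20 while the median stays at 0.05, which is why a median-based metric is useful when analyzing subsets of a sparse database.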

  12. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Planet Candidate Validation in K2 Crowded Fields

    Science.gov (United States)

    Rampalli, Rayna; Vanderburg, Andrew; Latham, David; Quinn, Samuel

    2018-01-01

    In just three years, the K2 mission has yielded some remarkable outcomes, with the discovery of over 100 confirmed planets and 500 reported planet candidates awaiting validation. One challenge for this mission is the search for planets located in star-crowded regions. Campaign 13 is one such example, located towards the galactic plane in the constellation of Taurus. We subject the potential planetary candidates to a validation process involving spectroscopy to derive stellar parameters. Seeing-limited on/off imaging follow-up is also used to rule out false positives due to nearby eclipsing binaries. Using Markov chain Monte Carlo analysis, best-fit parameters for each candidate are generated; these are suitable for finding a candidate's false-positive probability through methods such as feeding the parameters into the Validation of Exoplanet Signals using a Probabilistic Algorithm (VESPA). These techniques and results serve as important tools for candidate validation and follow-up observations for space-based missions such as the upcoming TESS mission, since TESS's large camera pixels resemble K2's star-crowded fields.

  14. Independent validation of the MMPI-2-RF Somatic/Cognitive and Validity scales in TBI Litigants tested for effort.

    Science.gov (United States)

    Youngjohn, James R; Wershba, Rebecca; Stevenson, Matthew; Sturgeon, John; Thomas, Michael L

    2011-04-01

    The MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) is replacing the MMPI-2 as the most widely used personality test in neuropsychological assessment, but additional validation studies are needed. Our study examines MMPI-2-RF Validity scales and the newly created Somatic/Cognitive scales in a recently reported sample of 82 traumatic brain injury (TBI) litigants who either passed or failed effort tests (Thomas & Youngjohn, 2009). The restructured Validity scales FBS-r (restructured symptom validity), F-r (restructured infrequent responses), and the newly created Fs (infrequent somatic responses) were not significant predictors of TBI severity. FBS-r was significantly related to passing or failing effort tests, and Fs and F-r showed non-significant trends in the same direction. Elevations on the Somatic/Cognitive scales profile (MLS-malaise, GIC-gastrointestinal complaints, HPC-head pain complaints, NUC-neurological complaints, and COG-cognitive complaints) were significant predictors of effort test failure. Additionally, HPC had the anticipated paradoxical inverse relationship with head injury severity. The Somatic/Cognitive scales as a group were better predictors of effort test failure than the RF Validity scales, which was an unexpected finding. MLS arose as the single best predictor of effort test failure of all RF Validity and Somatic/Cognitive scales. Item overlap analysis revealed that all MLS items are included in the original MMPI-2 Hy scale, making MLS essentially a subscale of Hy. This study validates the MMPI-2-RF as an effective tool for use in neuropsychological assessment of TBI litigants.

  15. The Validity and Reliability of the Mobbing Scale (MS)

    Science.gov (United States)

    Yaman, Erkan

    2009-01-01

    The aim of this research is to develop the Mobbing Scale and examine its validity and reliability. The sample of the study consisted of 515 persons from Sakarya and Bursa. In this study, construct validity, internal consistency, test-retest reliability, and item analysis of the scale were examined. As a result of factor analysis for construct…

  16. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observations and a computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information of the model output. Therefore, model validation of the conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship between conditional expectations and model output. • An improved approach to parameter calibration updates the computational models. • Validation and calibration are applied at a single site and at multiple sites. • Validation and calibration show superiority over existing methods
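
    A common formulation of the area validation metric used above is the area between the empirical CDFs of model output and observations. A minimal sketch, with illustrative samples rather than the paper's examples:

```python
def ecdf(sample):
    # Empirical CDF of a sample, returned as a step function.
    s = sorted(sample)
    def f(x):
        return sum(1 for v in s if v <= x) / len(s)
    return f

def area_metric(model_sample, data_sample):
    # Area between the two empirical CDFs, integrated piecewise over the
    # union of their step breakpoints.
    fm, fd = ecdf(model_sample), ecdf(data_sample)
    pts = sorted(set(model_sample) | set(data_sample))
    return sum(abs(fm(a) - fd(a)) * (b - a) for a, b in zip(pts, pts[1:]))

# A pure shift: model output is data + 1, so the metric equals the shift.
d = [1.0, 2.0, 3.0, 4.0]
m = [2.0, 3.0, 4.0, 5.0]
shift_area = area_metric(m, d)
```

    Calibration then amounts to adjusting model parameters so that this area, computed for the conditional expectations, shrinks.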

  17. Assessing attitude toward same-sex marriage: scale development and validation.

    Science.gov (United States)

    Lannutti, Pamela J; Lachlan, Kenneth A

    2007-01-01

    This paper reports the results of three studies conducted to develop, refine, and validate a scale which assessed heterosexual adults' attitudes toward same-sex marriage, the Attitude Toward Same-Sex Marriage Scale (ASSMS). The need for such a scale is evidenced in the increasing importance of same-sex marriage in the political arena of the United States and other nations, as well as the growing body of empirical research examining same-sex marriage and related issues (e.g., Lannutti, 2005; Solomon, Rothblum, & Balsam, 2004). The results demonstrate strong reliability, convergent validity, and predictive validity for the ASSMS and suggest that the ASSMS may be adapted to measure attitudes toward civil unions and other forms of relational recognition for same-sex couples. Gender comparisons using the validated scale showed that in college and non-college samples, women had a significantly more positive attitude toward same-sex marriage than did men.

  18. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    Science.gov (United States)

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.
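
    The cutoff-derivation logic described above, lowering a cutoff until specificity in a known-valid group reaches at least 90%, can be sketched as follows; the scores are hypothetical, not the study's data:

```python
def cutoff_for_specificity(valid_scores, min_specificity=0.90):
    # Highest fail-cutoff on a Digit Span-type validity indicator such that
    # at least `min_specificity` of known-valid responders score above it
    # (false-positive rate <= 1 - min_specificity). Scores at or below the
    # cutoff would be flagged as possibly invalid.
    for cut in sorted(set(valid_scores), reverse=True):
        flagged = sum(1 for s in valid_scores if s <= cut)
        specificity = 1 - flagged / len(valid_scores)
        if specificity >= min_specificity:
            return cut, specificity
    return None, None   # no observed cutoff achieves the target specificity

# Hypothetical validity-indicator scores from 20 valid-effort patients.
scores = [6, 7, 7, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 11, 11, 12, 12, 13, 14, 15]
cut, spec = cutoff_for_specificity(scores, 0.90)
```

    With a left-shifted score distribution, as in patients with psychotic-spectrum disorders, this procedure naturally yields a lower cutoff than the one derived in other populations, which is the study's central point.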

  19. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, Part 2: breakthrough curves in the validation drift appendices 5-9

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

Flowrate curves for the 53 sampling areas in the validation drift with measurable flowrates are given. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. The total flowrate for these three sampling areas is given in a separate plot. The flowrates are given in ml/h. The time is given in hours since April 27 00:00, 1990. Disturbances in flowrates are observed after 8500 hours due to the opening of boreholes C1 and W1. Results from flowrate measurements after 8500 hours are therefore excluded. The tracer breakthrough curves for 38 sampling areas in the validation drift are given as concentration values versus time. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. This gives a total of 40 breakthrough curves for each tracer. (au)

  20. Validation of the Classroom Behavior Inventory

    Science.gov (United States)

    Blunden, Dale; And Others

    1974-01-01

Factor-analytic methods were used to assess construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  1. Validation of the Oxford Participation and Activities Questionnaire

    Directory of Open Access Journals (Sweden)

    Morley D

    2016-06-01

Full Text Available David Morley, Sarah Dummett, Laura Kelly, Jill Dawson, Ray Fitzpatrick, Crispin Jenkinson Health Services Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, UK Purpose: There is growing interest in the management of long-term conditions and in keeping people active and participating in the community. Testing the effectiveness of interventions that aim to affect activities and participation can be challenging without a well-developed, valid, and reliable instrument. This study therefore aims to develop a patient-reported outcome measure, the Oxford Participation and Activities Questionnaire (Ox-PAQ), which is theoretically grounded in the World Health Organization's International Classification of Functioning, Disability, and Health (ICF) and fully compliant with current best practice guidelines. Methods: Questionnaire items generated from patient interviews and based on the nine chapters of the ICF were administered by postal survey to 386 people with three neurological conditions: motor neuron disease, multiple sclerosis, and Parkinson's disease. Participants also completed the Medical Outcomes Study (MOS) 36-Item Short Form Health Survey (SF-36) and the EQ-5D-5L. Results: In total, 334 participants completed the survey, a response rate of 86.5%. Factor analysis techniques identified three Ox-PAQ domains, consisting of 23 items, accounting for 72.8% of variance. Internal reliability for the three domains was high (Cronbach's α: 0.81–0.96), as was test–retest reliability (intraclass correlation: 0.83–0.92). Concurrent validity was demonstrated through highly significant relationships with relevant domains of the MOS SF-36 and the EQ-5D-5L. Assessment of known-groups validity identified significant differences in Ox-PAQ scores among the three conditions included in the survey. Conclusion: Results suggest that the Ox-PAQ is a valid and reliable measure of participation and activity. The measure will now be validated in

  2. Elaboration and Validation of the Medication Prescription Safety Checklist 1

    Science.gov (United States)

    Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena

    2017-01-01

    ABSTRACT Objective: to elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: the analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent the compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: the Medication Prescription Safety Checklist showed to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128
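The interrater-agreement statistic reported above (K = 1.00 for perfect agreement) can be illustrated with a minimal Cohen's kappa computation; the ratings below are invented examples, not the study's 27 prescriptions.

```python
# Minimal Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance from each rater's marginal frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    if expected == 1.0:  # both raters constant and identical
        return 1.0
    return (observed - expected) / (1 - expected)

# Perfect agreement on 27 hypothetical prescriptions (compliant / non-compliant):
a = ["yes"] * 20 + ["no"] * 7
print(cohens_kappa(a, list(a)))  # → 1.0
```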

  3. [Comparison of the Wechsler Memory Scale-III and the Spain-Complutense Verbal Learning Test in acquired brain injury: construct validity and ecological validity].

    Science.gov (United States)

    Luna-Lario, P; Pena, J; Ojeda, N

    2017-04-16

    To perform an in-depth examination of the construct validity and the ecological validity of the Wechsler Memory Scale-III (WMS-III) and the Spain-Complutense Verbal Learning Test (TAVEC). The sample consists of 106 adults with acquired brain injury who were treated in the Area of Neuropsychology and Neuropsychiatry of the Complejo Hospitalario de Navarra and displayed memory deficit as the main sequela, measured by means of specific memory tests. The construct validity is determined by examining the tasks required in each test over the basic theoretical models, comparing the performance according to the parameters offered by the tests, contrasting the severity indices of each test and analysing their convergence. The external validity is explored through the correlation between the tests and by using regression models. According to the results obtained, both the WMS-III and the TAVEC have construct validity. The TAVEC is more sensitive and captures not only the deficits in mnemonic consolidation, but also in the executive functions involved in memory. The working memory index of the WMS-III is useful for predicting the return to work at two years after the acquired brain injury, but none of the instruments anticipates the disability and dependence at least six months after the injury. We reflect upon the construct validity of the tests and their insufficient capacity to predict functionality when the sequelae become chronic.

  4. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    International Nuclear Information System (INIS)

    Apostolakis, J; Burkhardt, H; Ivanchenko, V N; Asai, M; Bagulya, A; Grichine, V; Brown, J M C; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Jacquemier, J; Guatelli, S; Incerti, S; Kadri, O; Maire, M; Urban, L; Pandola, L; Sawkey, D; Toshito, T; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed. (paper)

  5. Reliable and Valid Assessment of Point-of-care Ultrasonography

    DEFF Research Database (Denmark)

    Todsen, Tobias; Tolsgaard, Martin Grønnebæk; Olsen, Beth Härstedt

    2015-01-01

OBJECTIVE: To explore the reliability and validity of the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale for point-of-care ultrasonography (POC US) performance. BACKGROUND: POC US is increasingly used by clinicians and is an essential part of the management of acute surgical conditions. However, the quality of performance is highly operator-dependent. Therefore, reliable and valid assessment of trainees' ultrasonography competence is needed to ensure patient safety. METHODS: Twenty-four physicians, representing novices, intermediates, and experts in POC US, scanned 4 different… physicians' OSAUS scores with diagnostic accuracy. RESULTS: The generalizability coefficient was high (0.81) and a D-study demonstrated that 1 assessor and 5 cases would result in similar reliability. The construct validity of the OSAUS scale was supported by a significant difference in the mean scores…

  6. Validity and Reliability of the Upper Extremity Work Demands Scale.

    Science.gov (United States)

    Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K

    2017-12-01

    Purpose To evaluate validity and reliability of the upper extremity work demands (UEWD) scale. Methods Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished with Cronbach's alpha of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise to investigate other types of validity, such as construct validity, in further research.
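Internal-consistency figures like the Cronbach's alpha values above (.79 and .84) come from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with invented item scores:

```python
def cronbach_alpha(items):
    """items: one list of scores per item; each list holds one score per
    respondent. Classic alpha: k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three invented 5-point items answered by five respondents:
items = [[1, 2, 3, 4, 5], [2, 2, 3, 4, 5], [1, 3, 3, 5, 5]]
alpha = cronbach_alpha(items)  # items agree closely, so alpha is near 1
```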

  7. Safe pediatric surgery: development and validation of preoperative interventions checklist

    Directory of Open Access Journals (Sweden)

    Maria Paula de Oliveira Pires

    2013-09-01

    Full Text Available OBJECTIVES: this study was aimed at developing and validating a checklist of preoperative pediatric interventions related to the safety of surgical patients. METHOD: methodological study concerning the construction and validation of an instrument with safe preoperative care indicators. The checklist was subject to validation through the Delphi technique, establishing a consensus level of 80%. RESULTS: five professional specialists in the area conducted the validation and a consensus on the content and the construct was reached after two applications of the Delphi technique. CONCLUSION: the "Safe Pediatric Surgery Checklist", simulating the preoperative trajectory of children, is an instrument capable of contributing to the preparation and promotion of safe surgery, as it identifies the presence or absence of measures required to promote patient safety.

  8. Mollusc reproductive toxicity tests - Development and validation of test guidelines

    DEFF Research Database (Denmark)

    Ducrot, Virginie; Holbech, Henrik; Kinnberg, Karin Lund

The Organisation for Economic Cooperation and Development is promoting the development and validation of mollusc toxicity tests within its test guidelines programme, eventually aiming for the standardization of mollusc apical toxicity tests. Through collaborative work between academia, industry and stakeholders, this study aims to develop innovative partial life-cycle tests on the reproduction of the freshwater gastropods Potamopyrgus antipodarum and Lymnaea stagnalis, which are relevant candidate species for the standardization of mollusc apical toxicity tests assessing reprotoxic effects of chemicals… Draft standard operating procedures (SOPs) have been designed based upon literature and expert knowledge from project partners. Pre-validation studies have been implemented to validate the proposed test conditions and identify issues in performing the SOPs and analyzing test results. Pre-validation work…

  9. Pragmatic controlled clinical trials in primary care: the struggle between external and internal validity

    Directory of Open Access Journals (Sweden)

    Birtwhistle Richard

    2003-12-01

Full Text Available Abstract Background Controlled clinical trials of health care interventions are either explanatory or pragmatic. Explanatory trials test whether an intervention is efficacious; that is, whether it can have a beneficial effect in an ideal situation. Pragmatic trials measure effectiveness; they measure the degree of beneficial effect in real clinical practice. In pragmatic trials, a balance between external validity (generalizability of the results) and internal validity (reliability or accuracy of the results) needs to be achieved. The explanatory trial seeks to maximize the internal validity by assuring rigorous control of all variables other than the intervention. The pragmatic trial seeks to maximize external validity to ensure that the results can be generalized. However, the danger of pragmatic trials is that internal validity may be overly compromised in the effort to ensure generalizability. We are conducting two pragmatic randomized controlled trials on interventions in the management of hypertension in primary care. We describe the design of the trials and the steps taken to deal with the competing demands of external and internal validity. Discussion External validity is maximized by having few exclusion criteria and by allowing flexibility in the interpretation of the intervention and in management decisions. Internal validity is maximized by decreasing contamination bias through cluster randomization, and by decreasing observer and assessment bias in these non-blinded trials through baseline data collection prior to randomization, automating the outcomes assessment with 24-hour ambulatory blood pressure monitors, and blinding the data analysis. Summary Clinical trials conducted in community practices present investigators with difficult methodological choices related to maintaining a balance between internal validity (reliability of the results) and external validity (generalizability). The attempt to achieve methodological purity can
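The contamination-reducing step mentioned above, cluster randomization, assigns whole practices rather than individual patients to trial arms. A minimal sketch (practice names and seed are invented for illustration):

```python
import random

def cluster_randomize(clusters, seed=2003):
    """Randomly split whole clusters (e.g., primary care practices) between
    two arms, so all patients in a practice share one arm and within-practice
    contamination between arms is avoided."""
    rng = random.Random(seed)  # fixed seed keeps the allocation reproducible
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

practices = [f"practice_{i:02d}" for i in range(1, 11)]
arms = cluster_randomize(practices)
```

In a real trial the allocation would typically be stratified and concealed; this only shows the cluster-level unit of randomization.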

  10. French Translation and Validation of Three Scales Evaluating Stigma in Mental Health

    Directory of Open Access Journals (Sweden)

    Carla Garcia

    2017-12-01

Full Text Available Objective: The concept of stigma refers to problems of knowledge (ignorance), attitudes (prejudice), and behavior (discrimination). Stigma may hinder access to care, housing, and work. In the context of implementation of programs such as "housing first" or "individual placement and support" in French-speaking regions, validated instruments measuring stigma are necessary. "Attitudes to Mental Illness 2011" is a questionnaire that includes three scales measuring stigma through these three dimensions. This study aimed to translate, adapt, and validate these three scales in French. Methods: The "Attitudes to Mental Illness 2011" questionnaire was translated into French and back-translated into English by an expert. Two hundred and sixty-eight nursing students completed the questionnaire. Content validity, face validity, internal validity, and convergent validity were assessed. Long-term reliability was also estimated over a three-month period. Results: Experts and participants found that the questionnaire's content validity and face validity were appropriate. The internal validities of the three scales were also considered adequate. Convergent validity indicated that the scales did indeed measure what they were supposed to. Long-term stability estimates were moderate; this pattern of results suggested that the construct targeted by the three scales is adequately measured but does not necessarily represent stable and enduring traits. Conclusion: Because of their good psychometric properties, these three scales can be used in French, either separately, to measure one specific dimension of stigma, or together, to assess stigma in its three dimensions. This would seem of paramount importance in evaluating campaigns against stigma since it allows measures to be adapted according to campaign goals and the target population.

  11. The Selective Mutism Questionnaire: Measurement Structure and Validity

    Science.gov (United States)

    Letamendi, Andrea M.; Chavira, Denise A.; Hitchcock, Carla A.; Roesch, Scott C.; Shipon-Blum, Elisa; Stein, Murray B.; Roesch, Scott C.

    2010-01-01

    Objective To evaluate the factor structure, reliability, and validity of the 17-item Selective Mutism Questionnaire. Method Diagnostic interviews were administered via telephone to 102 parents of children identified with selective mutism (SM) and 43 parents of children without SM from varying U.S. geographic regions. Children were between the ages of 3 and 11 inclusive and comprised 58% girls and 42% boys. SM diagnoses were determined using the Anxiety Disorders Interview Schedule for Children - Parent Version (ADIS-C/P); SM severity was assessed using the 17-item Selective Mutism Questionnaire (SMQ); and behavioral and affective symptoms were assessed using the Child Behavior Checklist (CBCL). An exploratory factor analysis (EFA) was conducted to investigate the dimensionality of the SMQ and a modified parallel analysis procedure was used to confirm EFA results. Internal consistency, construct validity, and incremental validity were also examined. Results The EFA yielded a 13-item solution consisting of three factors: a) Social Situations Outside of School, b) School Situations, and c) Home and Family Situations. Internal consistency of SMQ factors and total scale ranged from moderate to high. Convergent and incremental validity were also well supported. Conclusions Measure structure findings are consistent with the 3-factor solution found in a previous psychometric evaluation of the SMQ. Results also suggest that the SMQ provides useful and unique information in the prediction of SM phenomenon beyond other child anxiety measures. PMID:18698268

  12. Validating a dance-specific screening test for balance: preliminary results from multisite testing.

    Science.gov (United States)

    Batson, Glenna

    2010-09-01

Few dance-specific screening tools adequately capture balance. The aim of this study was to administer and modify the Star Excursion Balance Test (oSEBT) to examine its utility as a balance screen for dancers. The oSEBT involves standing on one leg while lightly targeting with the opposite foot to the farthest distance along eight spokes of a star-shaped grid. This task simulates dance in the spatial pattern and movement quality of the gesturing limb. The oSEBT was validated for distance on athletes with a history of ankle sprain. Thirty-three dancers (age 20.1 +/- 1.4 yrs) participated from two contemporary dance conservatories (UK and US), with or without a history of lower extremity injury. Dancers were verbally instructed (without physical demonstration) to execute the oSEBT and four modifications (mSEBT): timed (speed), timed with cognitive interference (answering questions aloud), and sensory disadvantaging (foam mat). Stepping strategies were tracked and performance strategies video-recorded. Unlike the oSEBT results, distances reached were not statistically significant (p = 0.05) or descriptively different (i.e., shorter) for either group. Performance styles varied widely, despite sample homogeneity and instructions to control for strategy. Descriptive analysis of the mSEBT showed an increased number of near-falls and decreased timing on the injured limb. Dancers appeared to employ variable strategies to keep balance during this test. Quantitative analysis is warranted to define balance strategies for further validation of SEBT modifications and to determine its utility as a balance screening tool.

  13. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of it over the past two decades. Acquisition of temporally and spatially continuous distributions of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with well-characterized certainty is required but difficult. As a result, it is indispensable to develop validation methods that quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also showed good results for monthly and daily RS_ET at 30 m and 1 km resolutions compared with multi-scale evapotranspiration measurements from the EC and LAS, respectively, using a footprint model over three typical landscapes. Although some

  14. RESEM-CA: Validation and testing

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Vineeta; Carroll, William L.; Bourassa, Norman

    2002-09-01

This report documents the results of an extended comparison of RESEM-CA energy and economic performance predictions with the recognized benchmark tool DOE2.1E, to determine the validity and effectiveness of RESEM-CA for retrofit design and analysis. The analysis was a two-part comparison: (1) patterns of monthly and annual energy consumption of a simple base-case building and controlled variations on it, to explore each program's load-component predictions; and (2) a simplified life-cycle cost analysis of the predicted effects of selected Energy Conservation Measures (ECMs). The study analyzes and, where possible, explains the differences that were observed. On the whole, this validation study indicates that RESEM is a promising tool for retrofit analysis. As a result of this study, some factors (incident solar radiation, outside air film coefficient, IR radiation) have been identified where there is a possibility of algorithmic improvement. Any such improvements would have to be made in a way that does not sacrifice the speed of the tool, which is necessary for extensive parametric searches for optimal ECM measures.

  15. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen, based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen… matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade

  16. Software aspects of the Geant4 validation repository

    Science.gov (United States)

    Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto

    2017-10-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  17. Software Aspects of the Geant4 Validation Repository

    CERN Document Server

    Dotti, Andrea; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto

    2016-01-01

The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  18. The fish sexual development test: an OECD test guideline proposal with possible relevance for environmental risk assessment. Results from the validation programme

    DEFF Research Database (Denmark)

    Holbech, Henrik; Brande-Lavridsen, Nanna; Kinnberg, Karin Lund

    2010-01-01

The Fish Sexual Development Test (FSDT) has gone through two validations as an OECD test guideline for the detection of endocrine active chemicals with different modes of action. The validation has been finalized on four species: Zebrafish (Danio rerio), Japanese medaka (Oryzias latipes), three s… for histology. For all three methods, the fish parts were numbered and histology could therefore be linked to the vitellogenin concentration in individual fish. The two core endocrine relevant endpoints were vitellogenin concentrations and phenotypic sex ratio. Change in the sex ratio is presented as a population relevant endpoint, and the results of the two validation rounds will be discussed in relation to environmental risk assessment and species selection.

  19. Validity in assessment of prior learning

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne; Aarkrog, Vibe

    2015-01-01

… the article discusses the need for specific criteria for assessment. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult education, vocational training, lifelong learning, validity

  20. Empirical Validation of Listening Proficiency Guidelines

    Science.gov (United States)

    Cox, Troy L.; Clifford, Ray

    2014-01-01

    Because listening has received little attention and the validation of ability scales describing multidimensional skills is always challenging, this study applied a multistage, criterion-referenced approach that used a framework of aligned audio passages and listening tasks to explore the validity of the ACTFL and related listening proficiency…

  1. Validation and Design Science Research in Information Systems

    NARCIS (Netherlands)

    Sol, H G; Gonzalez, Rafael A.; Mora, Manuel

    2012-01-01

    Validation within design science research in Information Systems (DSRIS) is much debated. The relationship of validation to artifact evaluation is still not clear. This chapter aims at elucidating several components of DSRIS in relation to validation. The role of theory and theorizing are an

  2. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    Science.gov (United States)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One advantage of analytical method validation is that it translates into a level of confidence in the measurement results reported for a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used over extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed that includes systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
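An uncertainty model of the kind mentioned above typically combines random (type A) and systematic (type B) standard uncertainty components in quadrature and reports an expanded uncertainty with a coverage factor. The component values below are invented for illustration, not the paper's budget.

```python
import math

def combined_uncertainty(random_components, systematic_components):
    """Root-sum-of-squares combination of standard uncertainty components."""
    return math.sqrt(sum(u ** 2 for u in random_components + systematic_components))

def expanded_uncertainty(uc, k=2):
    """Expanded uncertainty U = k * uc (k = 2 gives ~95% coverage)."""
    return k * uc

# Hypothetical components for a mass-fraction result (all in wt.%):
u_repeatability = 0.12   # type A, from replicate WDS measurements
u_reference = 0.08       # type B, from the SRM certificate
u_calibration = 0.05     # type B, instrument calibration
uc = combined_uncertainty([u_repeatability], [u_reference, u_calibration])
U = expanded_uncertainty(uc)
```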

  3. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    Science.gov (United States)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

Several documents deal with software validation. Nevertheless, most are too complex to apply to spreadsheets, surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be applied directly to validate spreadsheets. It includes a systematic way to document requirements, operational aspects of validation, and a simple method to keep records of validation results and modification history. This method is currently used in an accredited calibration laboratory, where it has proven practical and efficient.
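One simple way to keep the kind of validation record the method calls for is a table of test cases comparing the value the spreadsheet displayed against an independently computed reference. Everything below (case names, values, tolerance) is an invented illustration, not the paper's procedure.

```python
def validate_cases(cases, tol=1e-9):
    """cases: list of (description, sheet_value, reference_value).
    Returns one (description, got, expected, passed) record per case."""
    results = []
    for desc, got, expected in cases:
        results.append((desc, got, expected, abs(got - expected) <= tol))
    return results

# Hypothetical checks of two spreadsheet cells against hand-computed values:
cases = [
    ("mean of two calibration readings", 10.25, (10.1 + 10.4) / 2),
    ("expanded uncertainty, k=2", 0.30, 2 * 0.15),
]
report = validate_cases(cases)
```

The resulting records can be archived alongside the spreadsheet to document both the validation results and the modification history.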

  4. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    Science.gov (United States)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study were used, involving 429 respondents, for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, the convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when the HTMT criterion was employed. This shows that the latent variables under study faced multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discriminant validity among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
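
    The two criteria contrasted above can be sketched numerically. Below is a minimal, hedged illustration (not the authors' code; all numbers are invented): the Fornell-Larcker check compares the square root of each construct's average variance extracted (AVE) against that construct's correlations with the others, while HTMT is the ratio of the mean heterotrait correlation to the geometric mean of the two mean monotrait correlations, commonly flagged above a 0.85 or 0.90 threshold.

```python
import numpy as np

def fornell_larcker_ok(ave, construct_corr):
    """Fornell-Larcker criterion: the square root of each construct's AVE
    must exceed that construct's correlation with every other construct."""
    root_ave = np.sqrt(np.asarray(ave, float))
    corr = np.asarray(construct_corr, float)
    n = len(root_ave)
    return all(corr[i, j] < root_ave[i]
               for i in range(n) for j in range(n) if i != j)

def htmt(item_corr, items_a, items_b):
    """HTMT ratio for two constructs: mean absolute heterotrait (between-
    construct) item correlation over the geometric mean of the two mean
    monotrait (within-construct) item correlations."""
    r = np.abs(np.asarray(item_corr, float))
    hetero = np.mean([r[i, j] for i in items_a for j in items_b])
    mono_a = np.mean([r[i, j] for i in items_a for j in items_a if i < j])
    mono_b = np.mean([r[i, j] for i in items_b for j in items_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

# Illustrative (made-up) inputs: two constructs with two items each.
ave = [0.60, 0.55]
construct_corr = [[1.0, 0.50], [0.50, 1.0]]
item_corr = [[1.0, 0.70, 0.30, 0.30],
             [0.70, 1.0, 0.30, 0.30],
             [0.30, 0.30, 1.0, 0.80],
             [0.30, 0.30, 0.80, 1.0]]
```

    With these invented numbers both checks pass; the study's finding, where Fornell-Larcker passed but HTMT flagged a problem, is possible because HTMT works directly on the item-level correlations.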

  5. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  6. Cooperative learning benefits scale: construction and validation studies

    Directory of Open Access Journals (Sweden)

    José Lopes

    2014-07-01

    The aim of this study was to develop and validate a Scale of Benefits of Cooperative Learning (SBCL), given the scarcity of instruments that evaluate these outcomes of the method. The study used a convenience sample of 162 students, male and female, aged between 11 and 18 years. The final instrument has 23 items in a two-dimensional factor structure: psychological and academic benefits, and social benefits. The results indicate that the SBCL presents good psychometric properties (construct and discriminant validity, and reliability). The results are discussed in light of the cooperative learning model.

  7. Validation of Plutonium Radioisotopes Analysis Using Alpha Spectrometry

    International Nuclear Information System (INIS)

    Noor Fadzilah Yusof; Jalal Sharib; Mohd Tarmizi Ishak; Zulkifli Daud; Abdul Kadir Ishak

    2016-01-01

    This paper presents the validation of an established method used to detect plutonium (Pu) radioisotopes in marine environment samples. The separation method consists of sample digestion, anion exchange, purification, electroplating and counting by alpha spectrometry. Applying the method to standard reference materials from the marine environment, the results were validated using seven parameters, namely specificity, linearity, bias or accuracy, detection limit, precision/repeatability, reproducibility/ruggedness and robustness, in accordance with International Organization for Standardization (ISO) guidelines. The results obtained were in good agreement with, and satisfactory compared to, the certified values of the reference materials. (author)

  8. Validation of the transportation computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND

    International Nuclear Information System (INIS)

    Maheras, S.J.; Pippen, H.K.

    1995-05-01

    The computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND were used to estimate radiation doses from the transportation of radioactive material in the Department of Energy Programmatic Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental Restoration and Waste Management Programs Environmental Impact Statement. HIGHWAY and INTERLINE were used to estimate transportation routes for truck and rail shipments, respectively. RADTRAN 4 was used to estimate collective doses from incident-free transportation and the risk (probability x consequence) from transportation accidents. RISKIND was used to estimate incident-free radiation doses for maximally exposed individuals and the consequences from reasonably foreseeable transportation accidents. The purpose of this analysis is to validate the estimates made by these computer codes; critiques of the conceptual models used in RADTRAN 4 are also discussed. Validation is defined as ''the test and evaluation of the completed software to ensure compliance with software requirements.'' In this analysis, validation means that the differences between the estimates generated by these codes and independent observations are small (i.e., within the acceptance criterion established for the validation analysis). In some cases, the independent observations used in the validation were measurements; in other cases, the independent observations used in the validation analysis were generated using hand calculations. The results of the validation analyses performed for HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND show that the differences between the estimates generated using the computer codes and independent observations were small. Based on the acceptance criterion established for the validation analyses, the codes yielded acceptable results; in all cases the estimates met the requirements for successful validation

  9. Statistical Analysis Methods for Physics Models Verification and Validation

    CERN Document Server

    De Luca, Silvia

    2017-01-01

    The validation and verification process is a fundamental step for any software like Geant4 and GeantV, which aim to perform data simulation using physics models and Monte Carlo techniques. As experimental physicists, we face the problem of comparing the results obtained from simulations with what the experiments actually observed. One way to address this is to perform a consistency test. Within the Geant group, we developed a compact C++ library which will be added to the automated validation process on the Geant Validation Portal.

  10. A new dataset validation system for the Planetary Science Archive

    Science.gov (United States)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive (PSA) is the official archive for the Mars Express mission. It received its first data by the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS). The PI teams are responsible for analyzing and calibrating the instrument data, for the production of reduced and calibrated data, and for the scientific validation of these data. ESA is responsible for long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow tracking of anomalies in, and control of the completeness of, datasets. It shall ensure that PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria representing the general characteristics of good-quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results, as well as those interfacing with the PSA database. The validation software tool is designed as a multi-mission tool.

  11. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus the models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation with regard to radionuclide transport. 5 refs

  12. Validation of one-dimensional module of MARS 2.1 computer code by comparison with the RELAP5/MOD3.3 developmental assessment results

    International Nuclear Information System (INIS)

    Lee, Y. J.; Bae, S. W.; Chung, B. D.

    2003-02-01

    This report records the results of code validation for the one-dimensional module of the MARS 2.1 thermal hydraulics analysis code, by means of comparison with results from the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 code developmental assessment problems, comprising 22 simulation problems in 3 categories, were selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS 2.1 code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes have basically the same set of field equations, constitutive equations and main thermal hydraulic models. The results suggest that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  13. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    Directory of Open Access Journals (Sweden)

    Hazel Ekin Akmaz

    2018-05-01

    Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment are done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological, cross-sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach's α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis to examine the factor structure for construct validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients' acceptance of pain.
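
    The two reliability statistics reported above (internal consistency of 0.94 and a split-half correlation of 0.89) are standard computations. The sketch below is illustrative only; the simulated response matrix is invented, not the study's data, and rows are assumed to be respondents and columns scale items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score). items: rows = respondents, columns = items."""
    x = np.asarray(items, float)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

def split_half_reliability(items):
    """Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown correction for full test length."""
    x = np.asarray(items, float)
    a, b = x[:, ::2].sum(axis=1), x[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

# Invented data: 200 simulated respondents answering 6 correlated items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = latent + 0.5 * rng.normal(size=(200, 6))
```

    For strongly correlated items like these, both statistics land well above the conventional 0.70 acceptability threshold.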

  14. Discomfort Intolerance Scale: A Study of Reliability and Validity

    Directory of Open Access Journals (Sweden)

    Kadir ÖZDEL

    2012-03-01

    Objective: The Discomfort Intolerance Scale was developed by Norman B. Schmidt et al. (2006) to assess individual differences in the capacity to withstand physical perturbations or uncomfortable bodily states. The aim of this study is to investigate the validity and reliability of the Discomfort Intolerance Scale-Turkish Version (RDÖ). Method: A total of 225 students (male = 167, female = 58) from two different universities participated in this study. In order to determine criterion validity, the Beck Anxiety Inventory (BAI) and the State-Trait Anxiety Inventory (STAI) were used. Construct validity was evaluated by factor analysis after the Kaiser-Meyer-Olkin (KMO) and Bartlett tests had been performed. To assess test-retest reliability, the scale was re-applied to 54 participants 6 weeks later. Results: To assess the construct validity of the DIS, factor analyses were performed using principal components analysis with varimax rotation. The factor analysis resulted in two factors, named "discomfort (in)tolerance" and "discomfort avoidance". The Cronbach's alpha coefficients for the entire scale, the discomfort (in)tolerance subscale, and the discomfort avoidance subscale were .592, .670, and .600, respectively. Correlations between the two factors of the DIS, discomfort intolerance and discomfort avoidance, and the Trait Anxiety Inventory of the STAI were statistically significant at the 0.05 level. Test-retest reliability was statistically significant at the 0.01 level. Conclusion: The analysis demonstrated that the DIS has a satisfactory level of reliability and validity in Turkish university students.

  15. Validation of the Intestinal Part of the Prostate Cancer Questionnaire 'QUFW94': Psychometric Properties, Responsiveness, and Content Validity

    International Nuclear Information System (INIS)

    Reidunsdatter, Randi J.; Lund, Jo-Asmund; Fransson, Per; Widmark, Anders

    2010-01-01

    Purpose: Several treatment options are available for patients with prostate cancer. Applicable and valid self-assessment instruments for assessing health-related quality of life (HRQOL) are of paramount importance. The aim of this study was to explore the validity and responsiveness of the intestinal part of the prostate cancer-specific questionnaire QUFW94. Methods and Materials: The content of the intestinal part of QUFW94 was examined by evaluation of experienced clinicians and reviewing the literature. The psychometric properties and responsiveness were assessed by analyzing HRQOL data from the randomized study Scandinavian Prostate Cancer Group 7 (SPCG)/Swedish Association for Urological Oncology 3 (SFUO). Subscales were constructed by means of exploratory factor analyses. Internal consistency was assessed by Cronbach's alpha. Responsiveness was investigated by comparing baseline scores with the 4-year posttreatment follow-up. Results: The content validity was found acceptable, but some amendments were proposed. The factor analyses revealed two symptom scales. The first scale comprised five items regarding general stool problems, frequency, incontinence, need to plan toilet visits, and daily activity. Cronbach's alpha at 0.83 indicated acceptable homogeneity. The second scale was less consistent with a Cronbach's alpha at 0.55. The overall responsiveness was found to be very satisfactory. Conclusion: Two scales were identified in the bowel dimension of the QUFW94; the first one had good internal consistency. The responsiveness was excellent, and some modifications are suggested to strengthen the content validity.

  16. Validation of the Work-Life Balance Culture Scale (WLBCS).

    Science.gov (United States)

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.

  17. Is intercessory prayer valid nursing intervention?

    Science.gov (United States)

    Stang, Cecily Wellelr

    2011-01-01

    Is the use of intercessory prayer (IP) in modern nursing a valid practice? As discussed in current healthcare literature, IP is controversial, with authors offering support for and against the efficacy of the practice. This article reviews IP literature and research, concluding IP is a valid intervention for Christian nurses.

  18. Reliability and Validity of the Korean Version of the Cancer Stigma Scale.

    Science.gov (United States)

    So, Hyang Sook; Chae, Myeong Jeong; Kim, Hye Young

    2017-02-01

    In this study the reliability and validity of the Korean version of the Cancer Stigma Scale (KCSS) were evaluated. The KCSS was formed through translation and modification of the Cataldo Lung Cancer Stigma Scale. The KCSS, the Psychological Symptom Inventory (PSI), and the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Core 30 (EORTC QLQ-C30) were administered to 247 men and women diagnosed with one of the five major cancers. Construct validity, item convergent and discriminant validity, concurrent validity, known-group validity, and internal consistency reliability of the KCSS were evaluated. Exploratory factor analysis supported the construct validity with a six-factor solution that explained 65.7% of the total variance. The six-factor model was validated by confirmatory factor analysis (Q (χ²/df) = 2.28, GFI = .84, AGFI = .81, NFI = .80, TLI = .86, RMR = .03, and RMSEA = .07). Concurrent validity was demonstrated with the QLQ-C30 (global: r = -.44; functional: r = -.19; symptom: r = .42). The KCSS had known-group validity. Cronbach's alpha coefficient for the 24 items was .89. The results of this study suggest that the 24-item KCSS has relatively acceptable reliability and validity and can be used in clinical research to assess cancer stigma and its impact on health-related quality of life in Korean cancer patients. © 2017 Korean Society of Nursing Science.

  19. Validation of the Drinking Motives Questionnaire

    DEFF Research Database (Denmark)

    Fernandes-Jesus, Maria; Beccaria, Franca; Demant, Jakob Johan

    2016-01-01

    • This paper assesses the validity of the DMQ-R (Cooper, 1994) among university students in six different European countries. • Results provide support for similar DMQ-R factor structures across countries. • Drinking motives have similar meanings among European university students.

  20. Certification Testing as an Illustration of Argument-Based Validation

    Science.gov (United States)

    Kane, Michael

    2004-01-01

    The theories of validity developed over the past 60 years are quite sophisticated, but the methodology of validity is not generally very effective. The validity evidence for major testing programs is typically much weaker than the evidence for more technical characteristics such as reliability. In addition, most validation efforts have a strong…

  1. Toward a Unified Validation Framework in Mixed Methods Research

    Science.gov (United States)

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  2. Operational validation - current status and opportunities for improvement

    International Nuclear Information System (INIS)

    Davey, E.

    2002-01-01

    The design of nuclear plant systems and operational practices is based on the application of multiple defenses to minimize the risk of safety and production challenges and upsets. With such an approach, the effectiveness of individual or combined design and operational features in preventing upset challenges should be known. A longstanding industry concern is the adverse impact that errors in human performance can have on plant safety and production. To minimize the risk of error occurrence, designers and operations staff routinely employ multiple design and operational defenses. However, the effectiveness of individual or combined defensive features in minimizing error occurrence is generally known only in a qualitative sense. More importantly, the margins to error or upset occurrence provided by combinations of design or operational features are generally not characterized during design or operational validation. This paper provides some observations and comments on current validation practice as it relates to operational human performance concerns. It also discusses opportunities for future improvement in validation practice, in terms of the resilience of validation results to operating changes and the characterization of margins to safety or production challenges. (author)

  3. Validation of Multidimensional Persian Version of the Work-Family Conflict Questionnaire among Nurses

    Directory of Open Access Journals (Sweden)

    M Mozafari

    2016-07-01

    Background: Several instruments have been developed in English to measure the level of work-family conflict, and further validation is required for non-English speakers. Objective: To test the factorial structure and construct validity of the Persian version of the work-family conflict scale among Iranian nurses. Methods: This study was conducted among 456 Iranian nurses working at public hospitals in 17 provinces from March 2015 to September 2015. We used a self-administered questionnaire to collect information. Exploratory factor analysis was run using SPSS 21. Then, construct validity was evaluated using confirmatory factor analysis (CFA), convergent validity, and discriminant validity in AMOS 21. Results: Exploratory factor analysis extracted four dimensions that explained 65.5% of the variance observed. The results of confirmatory factor analysis showed that our data fitted the hypothesized four-dimensional model of the work-family conflict construct. The average variance extracted was used to establish convergent and discriminant validity. Conclusion: The Persian version of the work-family conflict questionnaire is a valid and reliable instrument among Iranian nurses.

  4. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
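
    As a hedged sketch of the permutation idea above (not the authors' LASSO pipeline): the model's performance metric, here the area under the ROC curve, is recomputed after repeatedly shuffling the outcome labels, and the p-value is the share of shuffled runs that match or beat the observed score. All data below are synthetic.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes no tied scores (continuous model outputs)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def permutation_pvalue(scores, labels, n_perm=1000, seed=0):
    """Shuffle the outcome labels to build a null distribution for the AUC;
    the p-value is the share of permutations at or above the observed AUC."""
    rng = np.random.default_rng(seed)
    observed = auc(scores, labels)
    hits = sum(auc(scores, rng.permutation(labels)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)   # add-one correction

# Synthetic example: model scores that genuinely separate the two classes.
rng = np.random.default_rng(1)
labels = np.array([0] * 30 + [1] * 30)
scores = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.5, 1.0, 30)])
```

    A model with no real signal would give a p-value scattered uniformly; a genuinely predictive one, as here, gives a small one.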

  5. VALIDITY OF THE EMOTIONAL INTELLIGENCE SCALE FOR USE IN SPORT

    Directory of Open Access Journals (Sweden)

    Andrew M. Lane

    2009-06-01

    Full Text Available This study investigated the factorial validity of the 33-item self-rated Emotional Intelligence Scale (EIS: Schutte et al., 1998 for use with athletes. In stage 1, content validity of the EIS was assessed by a panel of experts (n = 9. Items were evaluated in terms of whether they assessed EI related to oneself and EI focused on others. Content validity further examined items in terms of awareness, regulation, and utilization of emotions. Content validity results indicated items describe 6-factors: appraisal of own emotions, regulation of own emotions, utilization of own emotions, optimism, social skills, and appraisal of others emotions. Results highlighted 13-items which make no direct reference to emotional experiences, and therefore, it is questionable whether such items should be retained. Stage 2 tested two competing models: a single factor model, which is the typical way researchers use the EIS and the 5-factor model (optimism was discarded as it become a single-item scale fiolliwng stage 1 identified in stage 1. Confirmatory factor analysis (CFA results on EIS data from 1,681 athletes demonstrated unacceptable fit indices for the 33-item single factor model and acceptable fit indices for the 6-factor model. Data were re-analyzed after removing the 13-items lacking emotional content, and CFA results indicate partial support for single factor model, and further support for a five-factor model (optimism was discarded as a factor during item removal. Despite encouraging results for a reduced item version of the EIS, we suggest further validation work is needed

  6. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. It will be a living document that tracks CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, and MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and challenge problems are at differing levels of maturity with respect to verification and validation. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy's (DOE's) CASL program in support of milestone CASL.P13.02.

  7. Development and validation of a theoretical test in basic laparoscopy

    DEFF Research Database (Denmark)

    Strandbygaard, Jeanett; Maagaard, Mathilde; Larsen, Christian Rifbjerg

    2013-01-01

    for first-year residents in obstetrics and gynecology. This study therefore aimed to develop and validate a framework for a theoretical knowledge test, a multiple-choice test, in basic theory related to laparoscopy. METHODS: The content of the multiple-choice test was determined by conducting informal conversational interviews with experts in laparoscopy. The subsequent relevance of the test questions was evaluated using the Delphi method, involving regional chief physicians. Construct validity was tested by comparing test results from three groups with expected differences in clinical competence and knowledge (….001). Internal consistency (Cronbach's alpha) was 0.82. There was no evidence of differential item functioning between the three groups tested. CONCLUSIONS: A newly developed knowledge test in basic laparoscopy proved to have content and construct validity. The formula for the development and validation…

  8. The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population

    DEFF Research Database (Denmark)

    Boie, Sidsel; Glavind, Julie; Uldbjerg, Niels

    Title: The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population. Introduction: Childbirth experience is arguably as important as birth outcomes such as mode of delivery or perinatal morbidity. A robust, validated Danish tool for evaluating childbirth experience is lacking. The CEQ was developed in Sweden in 2010 and validated in Swedish women, but never validated in a Danish setting and population. The purpose of our study was to validate the CEQ as a reliable tool for measuring the childbirth experience in a Danish population. … index of agreement between the two scores. Results: Face validity: all respondents stated that it was easy to understand and complete the questionnaire. Construct validity: statistically significant higher CEQ scores were…

  9. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    Science.gov (United States)

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
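
    The 0.75 cutoff described above is an item-level Content Validity Index: the proportion of specialists who rate an attribute as relevant. A minimal sketch follows; the panel ratings and the 4-point relevance scale are assumptions for illustration, not the study's data:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level Content Validity Index: the share of experts who rate
    the attribute as relevant (3 or 4 on an assumed 4-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def attribute_approved(ratings, threshold=0.75):
    """Consensus rule from the abstract: approve when CVI >= 0.75."""
    return item_cvi(ratings) >= threshold

# Hypothetical panel of 8 specialists scoring one indicator attribute.
panel = [4, 3, 4, 2, 3, 4, 3, 4]
```

    Here 7 of 8 specialists rate the attribute as relevant, so CVI = 0.875 and the attribute would be approved.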

  10. 77 FR 27135 - HACCP Systems Validation

    Science.gov (United States)

    2012-05-09

    ... validation, the journal article should identify E.coli O157:H7 and other pathogens as the hazard that the..., or otherwise processes ground beef may determine that E. coli O157:H7 is not a hazard reasonably... specifications that require that the establishment's suppliers apply validated interventions to address E. coli...

  11. Terminology, Emphasis, and Utility in Validation

    Science.gov (United States)

    Kane, Michael T.

    2008-01-01

    Lissitz and Samuelsen (2007) have proposed an operational definition of "validity" that shifts many of the questions traditionally considered under validity to a separate category associated with the utility of test use. Operational definitions support inferences about how well people perform some kind of task or how they respond to some kind of…

  12. Reliability and validity of the foot and ankle outcome score: a validation study from Iran.

    Science.gov (United States)

    Negahban, Hossein; Mazaheri, Masood; Salavati, Mahyar; Sohani, Soheil Mansour; Askari, Marjan; Fanian, Hossein; Parnianpour, Mohamad

    2010-05-01

    The aims of this study were to culturally adapt and validate the Persian version of Foot and Ankle Outcome Score (FAOS) and present data on its psychometric properties for patients with different foot and ankle problems. The Persian version of FAOS was developed after a standard forward-backward translation and cultural adaptation process. The sample included 93 patients with foot and ankle disorders who were asked to complete two questionnaires: FAOS and Short-Form 36 Health Survey (SF-36). To determine test-retest reliability, 60 randomly chosen patients completed the FAOS again 2 to 6 days after the first administration. Test-retest reliability and internal consistency were assessed using intraclass correlation coefficient (ICC) and Cronbach's alpha, respectively. To evaluate convergent and divergent validity of FAOS compared to similar and dissimilar concepts of SF-36, the Spearman's rank correlation was used. Dimensionality was determined by assessing item-subscale correlation corrected for overlap. The results of test-retest reliability show that all the FAOS subscales have a very high ICC, ranging from 0.92 to 0.96. The minimum Cronbach's alpha level of 0.70 was exceeded by most subscales. The Spearman's correlation coefficient for convergent construct validity fell within 0.32 to 0.58 for the main hypotheses presented a priori between FAOS and SF-36 subscales. For dimensionality, the minimum Spearman's correlation coefficient of 0.40 was exceeded by most items. In conclusion, the results of our study show that the Persian version of FAOS seems to be suitable for Iranian patients with various foot and ankle problems especially lateral ankle sprain. Future studies are needed to establish stronger psychometric properties for patients with different foot and ankle problems.
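    Cronbach's alpha, used above to assess internal consistency of the FAOS subscales, can be computed directly from per-item scores. This is a generic sketch with made-up data, not the study's data: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Cronbach's alpha for a multi-item subscale. Data are hypothetical;
# only the formula reflects the method named in the abstract.

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of per-item score lists (one inner list per item)."""
    k = len(items)
    total_scores = [sum(scores) for scores in zip(*items)]  # per respondent
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(total_scores))

# Four respondents answering a three-item subscale:
items = [[3, 4, 2, 5], [2, 4, 3, 5], [3, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))  # → 0.892
```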

  13. Italian Validation of Homophobia Scale (HS).

    Science.gov (United States)

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A

    2015-09-01

    The Homophobia Scale (HS) is a valid tool to assess homophobia. The test is self-reported, composed of 25 items, and yields a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. The aim of this study was to validate the HS in the Italian context. An Italian translation of the HS was carried out by two bilingual people, after which a native English speaker translated the test back into English. A psychologist and sexologist checked the translated items from a clinical point of view. We recruited 100 subjects aged 18-65 for the Italian validation of the HS. The Pearson coefficient and Cronbach's α coefficient were computed to test the test-retest reliability and internal consistency. A sociodemographic questionnaire covering the main information such as age, geographic distribution, partnership status, education, religious orientation, and sexual orientation was administered together with the translated version of the HS. The analysis of internal consistency showed an overall Cronbach's α coefficient of 0.92. Across the domains, the Cronbach's α coefficient was 0.90 in behavior/negative affect, 0.94 in affect/behavioral aggression, and 0.92 in negative cognition, whereas for the total score it was 0.86. The test-retest reliability showed the following results: the HS total score was r = 0.93 (P … cognition was r = 0.75 (P … validation of the HS revealed the use of this self-report test to have good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior.

  14. A validated battery of vocal emotional expressions

    Directory of Open Access Journals (Sweden)

    Pierre Maurage

    2007-11-01

    Full Text Available For a long time, the exploration of emotions focused on facial expression, and vocal expression of emotion has only recently received interest. However, no validated battery of emotional vocal expressions has been published and made available to the researchers' community. This paper aims at validating and proposing such material. Twenty actors (10 men) recorded sounds (words and interjections) expressing six basic emotions (anger, disgust, fear, happiness, neutral and sadness). These stimuli were then submitted to a double validation phase: (1) preselection by experts; (2) quantitative and qualitative validation by 70 participants. 195 stimuli were selected for the final battery, each one depicting a precise emotion. The ratings provide a complete measure of intensity and specificity for each stimulus. This paper provides, to our knowledge, the first validated, freely available and highly standardized battery of emotional vocal expressions (words and intonations). This battery could constitute an interesting tool for the exploration of prosody processing among normal and pathological populations, in neuropsychology as well as psychiatry. Further work is nevertheless needed to complement the present material.

  15. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
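    The two "very simple interpretations" above can be made concrete in a toy setting. This is an illustrative sketch under assumed data, not the authors' formalism: a one-parameter model y = a·x is calibrated (a fitted by least squares) against one data set, then validated by quantifying its discrepancy on an independent data set.

```python
# Calibration vs. validation on a toy one-parameter model y = a * x.
# All numbers are illustrative assumptions.

def calibrate(xs, ys):
    """Least-squares slope for y = a*x (closed form)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def validate(a, xs, ys):
    """Root-mean-square discrepancy between model and held-out data."""
    return (sum((a * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Calibration data set: adjust the parameter to maximize agreement.
a = calibrate([1, 2, 3], [2.1, 3.9, 6.0])
# Independent validation data set: quantify predictive capability.
rms = validate(a, [4, 5], [8.2, 9.8])
print(f"calibrated a = {a:.3f}, validation RMS error = {rms:.3f}")
```

The point of the split is exactly the one made in the abstract: the parameter is tuned only on the calibration set, so the validation discrepancy is an honest measure of predictive capability rather than of fitting skill.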

  16. Detecting Symptom Exaggeration in Combat Veterans Using the MMPI-2 Symptom Validity Scales: A Mixed Group Validation

    Science.gov (United States)

    Tolin, David F.; Steenkamp, Maria M.; Marx, Brian P.; Litz, Brett T.

    2010-01-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the…

  17. Content and Construct Validity, Reliability, and Responsiveness of the Rheumatoid Arthritis Flare Questionnaire

    DEFF Research Database (Denmark)

    Bartlett, Susan J; Barbic, Skye P; Bykerk, Vivian P

    2017-01-01

    …(RA-FQ), and the voting results at OMERACT 2016. METHODS: Classic and modern psychometric methods were used to assess reliability, validity, sensitivity, factor structure, scoring, and thresholds. Interviews with patients and clinicians also assessed content validity, utility, and meaningfulness of RA-FQ scores. RESULTS: People with RA in observational trials in Canada (n = 896) and France (n = 138), and an RCT in the Netherlands (n = 178) completed 5 items (11-point numerical rating scale) representing RA Flare core domains. There was moderate to high evidence of reliability, content and construct validity… to identify and measure RA flares. Its review through OMERACT Filter 2.0 shows evidence of reliability, content and construct validity, and responsiveness. These properties merit its further validation as an outcome for clinical trials.

  18. THE GLOBAL TANDEM-X DEM: PRODUCTION STATUS AND FIRST VALIDATION RESULTS

    Directory of Open Access Journals (Sweden)

    M. Huber

    2012-07-01

    The TanDEM-X mission will derive a global digital elevation model (DEM) with satellite SAR interferometry. Two radar satellites (TerraSAR-X and TanDEM-X) will map the Earth with an absolute height error of 10 m and a relative height error of 2 m for 90% of the data. In order to fulfill the height requirements, in general two global coverages are acquired and processed. Besides the final TanDEM-X DEM, an intermediate DEM with reduced accuracy is produced after the first coverage is completed. The last step in the workflow for generating the TanDEM-X DEM is the calibration of remaining systematic height errors and the merging of single acquisitions into 1°x1° DEM tiles. In this paper the current status of generating the intermediate DEM and first validation results based on GPS tracks, laser scanning DEMs, SRTM data and ICESat points are shown for different test sites.

  19. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences for prediction model performance. In this work, we perform a prospective validation of three pCR models, indicating whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC on validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
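    The cohort-differences idea can be sketched as follows (hypothetical scores, not the study's data): a classifier scores each patient on how likely they are to come from the validation cohort, and the AUC of those scores measures how separable the two cohorts are. An AUC near 0.5 means similar cohorts (a reproducibility check); a high AUC means distinct cohorts (a transferability check).

```python
# AUC via the Mann-Whitney U statistic (ties counted as 0.5), applied to
# hypothetical cohort-membership scores: label 1 = validation cohort.

def auc(labels, scores):
    """Probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]            # cohort membership
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]  # predicted P(validation cohort)
print(round(auc(labels, scores), 3))   # well above 0.5: separable cohorts
```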

  20. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the German Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across the three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
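    The disattenuated correlations quoted above come from the standard correction for measurement unreliability, r_true = r_observed / sqrt(rel_x * rel_y). The sketch below uses an observed r from the reported range but hypothetical reliability values, chosen only to illustrate the arithmetic.

```python
import math

# Correction for attenuation: divides the observed correlation by the
# geometric mean of the two measures' reliabilities. The reliabilities
# here are illustrative assumptions, not values from the paper.

def disattenuate(r_xy, rel_x, rel_y):
    """Estimate the true-score correlation from an observed one."""
    return r_xy / math.sqrt(rel_x * rel_y)

r_observed = 0.63  # zero-order r, single item vs. SWLS (reported range)
r_true = disattenuate(r_observed, rel_x=0.88, rel_y=0.74)
print(round(r_true, 2))  # → 0.78
```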

  1. Fundamentals of critical analysis: the concept of validity and analysis essentials

    Directory of Open Access Journals (Sweden)

    Miguel Araujo Alonso

    2012-01-01

    Critical analysis of the literature is an assessment process that allows the reader to gauge the potential for error in the results of a study, errors arising either from bias or from confounding. Critical analysis attempts to establish whether the study meets the expected criteria or methodological conditions. Many checklists are available that are commonly used to guide this analysis, but filling out a checklist is not tantamount to critical appraisal. Internal validity is defined as the extent to which a research finding actually represents the true relationship between exposure and outcome, considering the unique conditions in which the study was carried out. Attention must be given to the inclusion and exclusion criteria that were used, to the sampling methods, and to the baseline characteristics of the patients enrolled in the study. External validity refers to the possibility of generalizing conclusions beyond the study sample or the study population. External validity includes population validity and ecological validity. Lastly, the article covers potential threats to external validity that must be considered when analyzing a study.

  2. Validation of Magnetic Reconstruction Codes for Real-Time Applications

    International Nuclear Information System (INIS)

    Mazon, D.; Murari, A.; Boulbe, C.; Faugeras, B.; Blum, J.; Svensson, J.; Quilichini, T.; Gelfusa, M.

    2010-01-01

    The real-time reconstruction of the plasma magnetic equilibrium in a tokamak is a key point to access high-performance regimes. Indeed, the shape of the plasma current density profile is a direct output of the reconstruction and has a leading effect on reaching a steady-state high-performance regime of operation. The challenge is thus to develop real-time methods and algorithms that reconstruct the magnetic equilibrium from the perspective of using these outputs for feedback control purposes. In this paper the validation of the JET real-time equilibrium reconstruction codes using both a Bayesian approach and a full equilibrium solver named Equinox will be detailed, the comparison being performed with the off-line equilibrium code EFIT (equilibrium fitting) or the real-time boundary reconstruction code XLOC (X-point local expansion). In this way a significant database, a methodology, and a strategy for the validation are presented. The validation of the results has been performed using a validated database of 130 JET discharges with a large variety of magnetic configurations. Internal measurements like polarimetry and motional Stark effect have also been used for the Equinox validation, including some magnetohydrodynamic signatures for the assessment of the reconstructed safety factor profile and current density. (authors)

  3. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at the Westinghouse Savannah River Site (WSRC) have necessitated review of the validation of the JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (Keff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) able to identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed

  4. MODIS Hotspot Validation over Thailand

    Directory of Open Access Journals (Sweden)

    Veerachai Tanpipat

    2009-11-01

    To ensure the quality and precision of remote sensing MODIS hotspots (also known as active fire products) in forest fire control and management in Thailand, an increased level of confidence is needed. Accuracy assessment of MODIS hotspots utilizing field survey data validation is described. A quantitative evaluation of MODIS hotspot products has been carried out since the 2007 forest fire season. The carefully chosen hotspots were scattered throughout the country and within the protected areas of the National Parks and Wildlife Sanctuaries. Three areas were selected as test sites for validation guidelines. Both ground and aerial field surveys were conducted in this study by the Forest Fire Control Division, National Park, Wildlife and Plant Conservation Department, Ministry of Natural Resources and Environment, Thailand. High accuracies of 91.84%, 95.60% and 97.53% for the 2007, 2008 and 2009 fire seasons were observed, resulting in increased confidence in the use of MODIS hotspots for forest fire control and management in Thailand.

  5. Validation of new CFD release by Ground-Coupled Heat Transfer Test Cases

    Directory of Open Access Journals (Sweden)

    Sehnalek Stanislav

    2017-01-01

    In this article the validation of ANSYS Fluent against IEA BESTEST Task 34 is presented. The article starts with an overview of the topic; afterwards the steady-state cases used for validation are described, followed by their implementation in CFD. The article concludes with a presentation of the simulated results and a comparison with results from simulation software already validated by the IEA. The validation shows high correlation with an older version of the tested ANSYS release as well as with the other main software packages. The paper ends with a discussion and an outline of future research.

  6. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensuring stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code base and a significant number of developers, and it can function as a model for future projects.

  7. Self-perceived Coparenting of Nonresident Fathers: Scale Development and Validation.

    Science.gov (United States)

    Dyer, W Justin; Fagan, Jay; Kaufman, Rebecca; Pearson, Jessica; Cabrera, Natasha

    2017-11-16

    This study reports on the development and validation of the Fatherhood Research and Practice Network coparenting perceptions scale for nonresident fathers. Although other measures of coparenting have been developed, this is the first measure developed specifically for low-income, nonresident fathers. Focus groups were conducted to determine various aspects of coparenting. Based on this, a scale was created and administered to 542 nonresident fathers. Participants also responded to items used to examine convergent and predictive validity (i.e., parental responsibility, contact with the mother, father self-efficacy and satisfaction, child behavior problems, and contact and engagement with the child). Factor analyses and reliability tests revealed three distinct and reliable perceived coparenting factors: undermining, alliance, and gatekeeping. Validity tests suggest substantial overlap between the undermining and alliance factors, though undermining was uniquely related to child behavior problems. The alliance and gatekeeping factors showed strong convergent validity and evidence for predictive validity. Taken together, results suggest this relatively short measure (11 items) taps into three coparenting dimensions significantly predictive of aspects of individual and family life. © 2017 Family Process Institute.

  8. Airborne campaigns for CryoSat pre-launch calibration and validation

    DEFF Research Database (Denmark)

    Hvidegaard, Sine Munk; Forsberg, René; Skourup, Henriette

    2010-01-01

    From 2003 to 2008 DTU Space together with ESA and several international partners carried out airborne and ground field campaigns in preparation for CryoSat validation, called CryoVEx (CryoSat Validation Experiments), covering the main ice caps in Greenland, Canada and Svalbard and sea ice in the Arctic Ocean. The main goal of the airborne surveys was to acquire coincident scanning laser and CryoSat-type radar elevation measurements of the surface, either sea ice or land ice. Selected lines have been surveyed along with detailed mapping of validation sites coordinated with in-situ field work… and helicopter electromagnetic surveying. This paper summarises the pre-launch campaigns and presents some of the results from the coincident measurements from airborne and ground observations.

  9. Preliminary Validation of the Child Abuse Potential Inventory in Turkey

    Science.gov (United States)

    Kutsal, Ebru; Pasli, Figen; Isikli, Sedat; Sahin, Figen; Yilmaz, Gokce; Beyazova, Ufuk

    2011-01-01

    This study aims to provide preliminary findings on the validity of Child Abuse Potential Inventory (CAP Inventory) on Turkish sample of 23 abuser and 47 nonabuser parents. To investigate validity in two groups, Minnesota Multiphasic Personality Inventory (MMPI) Psychopathic Deviate (MMPI-PD) scale is also used along with CAP. The results show…

  10. Evaluating the Predictive Validity of Graduate Management Admission Test Scores

    Science.gov (United States)

    Sireci, Stephen G.; Talento-Miller, Eileen

    2006-01-01

    Admissions data and first-year grade point average (GPA) data from 11 graduate management schools were analyzed to evaluate the predictive validity of Graduate Management Admission Test[R] (GMAT[R]) scores and the extent to which predictive validity held across sex and race/ethnicity. The results indicated GMAT verbal and quantitative scores had…

  11. Redundant sensor validation by using fuzzy logic

    International Nuclear Information System (INIS)

    Holbert, K.E.; Heger, A.S.; Alang-Rashid, N.K.

    1994-01-01

    This research is motivated by the need to relax the strict boundary of numeric-based signal validation. To this end, the use of fuzzy logic for redundant sensor validation is introduced. Since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. With this transformation, linguistically expressed analysis principles can be coded into a classification rule-base for signal failure detection and identification
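    The relaxation of a strict numeric boundary can be sketched as follows. This is a minimal illustration of the general idea, with assumed membership-function parameters that are not from the paper: the deviation between two redundant sensor readings is mapped to a fuzzy "consistent" membership rather than compared against a hard threshold.

```python
# Fuzzy redundant-sensor consistency check. The trapezoidal shape
# parameters (full agreement below 0.5 units of deviation, no agreement
# above 2.0) are illustrative assumptions.

def consistent_membership(deviation, full=0.5, none=2.0):
    """Membership in the fuzzy set 'readings are consistent':
    1.0 below `full`, 0.0 above `none`, linear in between."""
    d = abs(deviation)
    if d <= full:
        return 1.0
    if d >= none:
        return 0.0
    return (none - d) / (none - full)

a, b = 100.2, 101.1               # two redundant sensor readings
mu = consistent_membership(a - b)
status = "valid" if mu >= 0.5 else "suspect"
print(f"membership = {mu:.2f} -> {status}")
```

In a rule-base, such memberships would be combined with linguistic rules ("if deviation is small and drift is slow, then signal is healthy") instead of a single pass/fail cut.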

  12. Validation of the prosthetic esthetic index

    DEFF Research Database (Denmark)

    Özhayat, Esben B; Dannemand, Katrine

    2014-01-01

    OBJECTIVES: In order to diagnose impaired esthetics and evaluate treatments for these, it is crucial to evaluate all aspects of oral and prosthetic esthetics. No professionally administered index currently exists that sufficiently encompasses comprehensive prosthetic esthetics. This study aimed...... to validate a new comprehensive index, the Prosthetic Esthetic Index (PEI), for professional evaluation of esthetics in prosthodontic patients. MATERIAL AND METHODS: The content, criterion, and construct validity; the test-retest, inter-rater, and internal consistency reliability; and the sensitivity...... furthermore distinguish between participants and controls, indicating sufficient sensitivity. CONCLUSION: The PEI is considered a valid and reliable instrument involving sufficient aspects for assessment of the professionally evaluated esthetics in prosthodontic patients. CLINICAL RELEVANCE...

  13. Development and validation of the Child Oral Health Impact Profile - Preschool version.

    Science.gov (United States)

    Ruff, R R; Sischo, L; Chinn, C H; Broder, H L

    2017-09-01

    The Child Oral Health Impact Profile (COHIP) is a validated instrument created to measure the oral health-related quality of life of school-aged children. The purpose of this study was to develop and validate a preschool version of the COHIP (COHIP-PS) for children aged 2-5. The COHIP-PS was developed and validated using a multi-stage process consisting of item selection, face validity testing, item impact testing, reliability and validity testing, and factor analysis. A cross-sectional convenience sample of caregivers having children 2-5 years old from four groups completed item clarity and impact forms. Groups were recruited from pediatric health clinics or preschools/daycare centers, speech clinics, dental clinics, or cleft/craniofacial centers. Participants had a variety of oral health-related conditions, including caries, congenital orofacial anomalies, and speech/language deficiencies such as articulation and language disorders. The main outcome measure was the COHIP-PS. The COHIP-PS was found to have acceptable internal consistency (α = 0.71) and high test-retest reliability (0.87), though internal consistency was below the accepted threshold for the community sample. While discriminant validity results indicated significant differences across study groups, the overall magnitude of differences was modest. Results from confirmatory factor analyses support the use of a four-factor model consisting of 11 items across oral health, functional well-being, social-emotional well-being, and self-image domains. Quality of life is an integral factor in understanding and assessing children's well-being. The COHIP-PS is a validated oral health-related quality of life measure for preschool children with cleft or other oral conditions. Copyright© 2017 Dennis Barber Ltd.

  14. The dialogic validation

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2005-01-01

    This paper is inspired by dialogism and the title is a paraphrase on Bakhtin's (1981) "The Dialogic Imagination". The paper investigates how dialogism can inform the process of validating inquiry-based qualitative research. The paper stems from a case study on the role of recognition...

  15. Validation of the Dyadic Coping Inventory with Chinese couples: Factorial structure, measurement invariance, and construct validity.

    Science.gov (United States)

    Xu, Feng; Hilpert, Peter; Randall, Ashley K; Li, Qiuping; Bodenmann, Guy

    2016-08-01

    The Dyadic Coping Inventory (DCI, Bodenmann, 2008) assesses how couples support each other when facing individual (e.g., workload) and common (e.g., parenting) stressors. Specifically, the DCI measures partners' perceptions of their own (Self) and their partners' behaviors (Partner) when facing individual stressors, and partners' common coping behaviors when facing common stressors (Common). To date, the DCI has been validated in 6 different languages from individualistic Western cultures; however, because culture can affect interpersonal interactions, it is unknown whether the DCI is a reliable measure of coping behaviors for couples living in collectivistic Eastern cultures. Based on data from 474 Chinese couples (N = 948 individuals), the current study examined the Chinese version of the DCI's factorial structure, measurement invariance (MI), and construct validity of test scores. Using 3 cultural groups (China, Switzerland, and the United States [U.S.]), confirmatory factor analysis revealed a 5-factor structure regarding Self and Partner and a 2-factor structure regarding Common dyadic coping (DC). Results from analyses of MI indicated that the DCI subscales met the criteria for configural, metric, and full/partial scalar invariance across cultures (Chinese-Swiss and Chinese-U.S.) and genders (Chinese men and women). Results further revealed good construct validity of the DCI test scores. In all, the Chinese version of the DCI can be used for measuring Chinese couples' coping behaviors, and is available for cross-cultural studies examining DC behaviors between Western and Eastern cultures. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. The validation of Huffaz Intelligence Test (HIT)

    Science.gov (United States)

    Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain

    2017-08-01

    In general, a hafiz, one who has memorized the Quran, shows many strengths, especially in academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a newly developed psychometric instrument, the Huffaz Intelligence Test (HIT). This paper presents the validation and reliability of the HIT among tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis used Partial Least Squares (PLS) to assess reliability and convergent and discriminant validity. The study validated nine intelligences. The findings also indicated that the composite reliabilities for the nine types of intelligences are greater than 0.8. Thus, the HIT is a valid and reliable instrument to measure multiple intelligences among huffaz.
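The composite reliability figures quoted above follow a standard formula over standardized factor loadings. A minimal sketch, with hypothetical loadings that are not taken from the paper:

```python
# Composite reliability (CR) from standardized factor loadings:
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 - loading^2.
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical loadings for one intelligence subscale (illustrative only)
loadings = [0.72, 0.68, 0.75, 0.70]
print(round(composite_reliability(loadings), 3))  # → 0.805
```

A subscale whose CR exceeds 0.8, as reported for all nine intelligences, is conventionally considered reliable.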

  17. Validation of Robotic Surgery Simulator (RoSS).

    Science.gov (United States)

    Kesavadas, Thenkurussi; Stegemann, Andrew; Sathyaseelan, Gughan; Chowriappa, Ashirwad; Srimathveeravalli, Govindarajan; Seixas-Mikelus, Stéfanie; Chandrasekhar, Rameella; Wilding, Gregory; Guru, Khurshid

    2011-01-01

    The recent growth of the daVinci Robotic Surgical System as a minimally invasive surgery tool has led to a call for better training of future surgeons. In this paper, a new virtual reality simulator, called RoSS, is presented. Initial results from two studies, on face and content validity, are very encouraging. 90% of the cohort of expert robotic surgeons felt that the simulator was excellent or somewhat close to the touch and feel of the daVinci console. Content validity of the simulator received 90% approval in some cases. These studies demonstrate that RoSS has the potential to become an important training tool for the daVinci surgical robot.

  18. Method Validation Procedure in Gamma Spectroscopy Laboratory

    International Nuclear Information System (INIS)

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, including the management and technical requirements. A set of documents, written procedures and records was prepared to meet the management requirements. For the technical requirements, internal method validation was carried out through the estimation of trueness, repeatability, minimum detectable activity and combined uncertainty, while participation in IAEA proficiency tests assures the external method validation, especially as the gamma spectroscopy laboratory is a member of the ALMERA network (Analytical Laboratories for the Measurements of Environmental Radioactivity). Some of these results are presented in this paper. (author)

  19. Internal and external validation of an ESTRO delineation guideline

    DEFF Research Database (Denmark)

    Eldesoky, Ahmed R.; Yates, Esben Svitzer; Nyeng, Tine B

    2016-01-01

    Background and purpose To internally and externally validate an atlas based automated segmentation (ABAS) in loco-regional radiation therapy of breast cancer. Materials and methods Structures of 60 patients delineated according to the ESTRO consensus guideline were included in four categorized...... and axillary nodal levels and poor agreement for interpectoral, internal mammary nodal regions and LADCA. Correcting ABAS significantly improved all the results. External validation of ABAS showed comparable results. Conclusions ABAS is a clinically useful tool for segmenting structures in breast cancer loco...

  20. Validation of OPERA3D PCMI Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Jeun, Ji Hoon; Choi, Jae Myung; Yoo, Jong Sung [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of); Cheng, G.; Sim, K. S.; Chassie, Girma [Candu Energy INC.,Ontario (Canada)

    2013-10-15

    This report describes the validation of the OPERA3D code, with validation results directly related to PCMI phenomena. OPERA3D was developed for PCMI analysis and validated using in-pile measurement data. Fuel centerline temperature and clad strain calculation results show close agreement with the measurement data. Moreover, the 3D FEM fuel model of OPERA3D shows slight hourglassing behavior of the fuel pellet in the contact case. Further optimization will be conducted for future applications of the OPERA3D code. A nuclear power plant consists of many complicated systems, and one important objective across all of them is maintaining nuclear fuel integrity. However, PCMI (Pellet Cladding Mechanical Interaction) phenomena are unavoidable both at currently operating reactors and at next-generation reactors designed for advanced safety and economics. To evaluate PCMI behavior, many studies are ongoing to develop 3-dimensional fuel performance evaluation codes. Moreover, these codes are essential for setting safety limits based on best-estimate PCMI phenomena for high-burnup fuel.

  1. DESIGN AND VALIDATION OF A CARDIORESPIRATORY ...

    African Journals Online (AJOL)

    UJA

    This study aimed to validate the 10x20m test for children aged 3 to 6 years in order ... obtained adequate parameters of reliability and validity in healthy children aged 3 ... and is a determinant of cardiovascular risk in preschool children (Bürgi et al., ... (Seca 222, Hamburg, Germany), and weight (kg) that was recorded with a ...

  2. Empirical Validation of Building Simulation Software : Modeling of Double Facades

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA), Task 34/43: Testing and validation of building energy simulation tools experts group.

  3. Results from the radiometric validation of Sentinel-3 optical sensors using natural targets

    Science.gov (United States)

    Fougnie, Bertrand; Desjardins, Camille; Besson, Bruno; Bruniquel, Véronique; Meskini, Naceur; Nieke, Jens; Bouvet, Marc

    2016-09-01

    The recently launched SENTINEL-3 mission measures sea surface topography, sea/land surface temperature, and ocean/land surface colour with high accuracy. The mission provides data continuity with the ENVISAT mission through acquisitions by multiple sensing instruments. Two of them, OLCI (Ocean and Land Colour Imager) and SLSTR (Sea and Land Surface Temperature Radiometer), are optical sensors designed to provide continuity with Envisat's MERIS and AATSR instruments. During commissioning, in-orbit calibration and validation activities are conducted. The instruments are in-flight calibrated and characterized primarily using on-board devices, which include diffusers and a black body. Afterwards, vicarious calibration methods are used to validate the OLCI and SLSTR radiometry for the reflective bands. The calibration can be checked over dedicated natural targets such as Rayleigh scattering, sunglint, desert sites, Antarctica, and tentatively deep convective clouds. Tools have been developed and/or adapted (S3ETRAC, MUSCLE) to extract and process Sentinel-3 data. Based on these matchups, it is possible to provide an accurate check of many radiometric aspects such as the absolute and interband calibrations, the trending correction, and the calibration consistency within the field-of-view; more generally, this will provide an evaluation of the radiometric consistency for various types of targets. Another important aspect will be the checking of cross-calibration with many other instruments such as MERIS and AATSR (bridge between ENVISAT and Sentinel-3), MODIS (bridge to the GSICS radiometric standard), and Sentinel-2 (bridge between Sentinel missions). The early results, based on the available OLCI and SLSTR data, will be presented and discussed.

  4. Validation of KENO V.a: Comparison with critical experiments

    International Nuclear Information System (INIS)

    Jordan, W.C.; Landers, N.F.; Petrie, L.M.

    1986-12-01

    Section 1 of this report documents the validation of KENO V.a against 258 critical experiments. The experiments considered were primarily high- or low-enriched uranium systems. The results indicate that the KENO V.a Monte Carlo Criticality Program accurately calculates a broad range of critical experiments. A substantial number of the calculations showed a positive or negative bias in excess of 1.5% in k-effective (k/sub eff/). Classes of criticals which show a bias include 3% enriched green blocks, highly enriched uranyl fluoride slab arrays, and highly enriched uranyl nitrate arrays. If these biases are properly taken into account, the KENO V.a code can be used with confidence for the design and criticality safety analysis of uranium-containing systems. Section 2 of this report documents the results of an investigation into the cause of the bias observed in Sect. 1. The results of this study indicate that the bias seen in Sect. 1 is caused by code bias, cross-section bias, reporting bias, and modeling bias. There is evidence that many of the experiments used in this validation and in previous validations are not adequately documented. The uncertainty in the experimental parameters overshadows bias caused by the code and cross sections and prohibits code validation to better than about 1% in k/sub eff/. 48 refs., 19 figs., 19 tabs
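Bias of the kind discussed here is typically summarized as the mean deviation of calculated k-effective from the expected critical value of 1.0, together with its spread. A minimal sketch with made-up k-eff values, not the 258 experiments of the report:

```python
import statistics

def keff_bias(keff_values, expected=1.0):
    """Mean bias and sample standard deviation of calculated k-eff
    for a class of critical experiments (each should be ~1.0)."""
    bias = statistics.mean(keff_values) - expected
    spread = statistics.stdev(keff_values)
    return bias, spread

# Hypothetical calculated k-eff values for one class of criticals
keff = [0.998, 1.003, 0.985, 1.012, 0.991]
bias, spread = keff_bias(keff)
print(f"bias = {bias:+.4f}, std = {spread:.4f}")
```

A class whose mean bias exceeded roughly ±0.015 (the 1.5% figure above) would be flagged for separate treatment in a criticality safety analysis.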

  5. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, they might represent gaps between bounding polygons of the solids, overlaps, or violations of the 2-manifoldness. Not least due to the floating point problem in digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. Effects of different tolerance values and their handling is discussed; recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences on the deployment fields of the validated data set.

  6. Validation of the probabilistic approach for the analysis of PWR transients

    International Nuclear Information System (INIS)

    Amesz, J.; Francocci, G.F.; Clarotti, C.

    1978-01-01

    This paper reviews the pilot study currently being carried out on the validation of probabilistic methodology with real data coming from the operational records of the PWR power station at Obrigheim (KWO, Germany), operating since 1969. The aim of this analysis is to validate the a priori predictions of reactor transients performed by a probabilistic methodology against the a posteriori analysis of transients that actually occurred at the power station. Two levels of validation have been distinguished: (a) validation of the rate of occurrence of initiating events; (b) validation of the transient-parameter amplitude (i.e., overpressure) caused by the above-mentioned initiating events. The paper describes the a priori calculations performed using a fault-tree analysis by means of a probabilistic code (SALP 3) and event trees coupled with a PWR system deterministic computer code (LOOP 7). Finally, the principal results of these analyses are presented and critically reviewed.

  7. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network......, as for example a transmission line. Both methods require that harmonic measurements performed at the two ends of the disconnected element are precisely synchronized....... are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch...

  8. Furthering our Understanding of Land Surface Interactions using SVAT modelling: Results from SimSphere's Validation

    Science.gov (United States)

    North, Matt; Petropoulos, George; Ireland, Gareth; Rendal, Daisy; Carlson, Toby

    2015-04-01

    With currently predicted climate change, there is an increased requirement to gain knowledge on the terrestrial biosphere for numerous agricultural, hydrological and meteorological applications. To this end, Soil Vegetation Atmosphere Transfer (SVAT) models are quickly becoming the preferred scientific tool to monitor, at fine temporal and spatial resolutions, detailed information on numerous parameters associated with Earth system interactions. Validation of any model is critical to assess its accuracy, generality and realism in distinctive ecosystems, and subsequently acts as an important step before its operational distribution. In this study, the SimSphere SVAT model has been validated against fifteen different sites of the FLUXNET network, where model performance was statistically evaluated by directly comparing the model predictions against in situ data, for cloud-free days with a high energy balance closure. Specific focus is given to the model's ability to simulate parameters associated with the energy balance, namely Shortwave Incoming Solar Radiation (Rg), Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H), Air Temperature at 1.3 m (Tair 1.3m) and Air Temperature at 50 m (Tair 50m). Comparisons were performed for a number of distinctive ecosystem types and for 150 days in total, using in-situ data from ground observational networks acquired from the year 2011 alone. The model's coherence to reality was evaluated on the basis of a series of statistical parameters including RMSD, R2, Scatter, Bias, MAE, NASH index, Slope and Intercept. Results showed good to very good agreement between predicted and observed datasets, particularly so for LE, H, Tair 1.3m and Tair 50m, where mean error distribution values indicated excellent model performance. Due to systematic underestimation, poorer simulation accuracies were exhibited for Rg and Rnet, yet all values reported are still analogous to other validation studies of this kind. Overall, the model
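The agreement statistics used in this kind of model-versus-observation comparison (bias, MAE, RMSD, and the Nash-Sutcliffe index) can be sketched as follows, with made-up predicted and observed series rather than SimSphere output:

```python
import math

def validation_stats(pred, obs):
    """Common model-validation metrics for paired predicted/observed series."""
    n = len(pred)
    mean_obs = sum(obs) / n
    bias = sum(p - o for p, o in zip(pred, obs)) / n          # mean error
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n      # mean absolute error
    rmsd = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    nash = 1 - ss_res / ss_tot  # Nash-Sutcliffe efficiency: 1 is a perfect fit
    return {"bias": bias, "MAE": mae, "RMSD": rmsd, "NASH": nash}

# Hypothetical latent-heat fluxes in W/m^2 (illustrative values only)
predicted = [120.0, 150.0, 90.0, 200.0, 170.0]
observed = [115.0, 160.0, 95.0, 190.0, 180.0]
print(validation_stats(predicted, observed))
```

A negative bias with a NASH value close to 1, as in this toy series, corresponds to the "good agreement with slight underestimation" pattern reported for Rg and Rnet.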

  9. Global Land Product Validation Protocols: An Initiative of the CEOS Working Group on Calibration and Validation to Evaluate Satellite-derived Essential Climate Variables

    Science.gov (United States)

    Guillevic, P. C.; Nickeson, J. E.; Roman, M. O.; camacho De Coca, F.; Wang, Z.; Schaepman-Strub, G.

    2016-12-01

    The Global Climate Observing System (GCOS) has specified the need to systematically produce and validate Essential Climate Variables (ECVs). The Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV), and in particular its subgroup on Land Product Validation (LPV), is playing a key coordination role, leveraging the international expertise required to address actions related to the validation of global land ECVs. The primary objective of the LPV subgroup is to set standards for validation methods and reporting in order to provide traceable and reliable uncertainty estimates for scientists and stakeholders. The subgroup comprises 9 focus areas that encompass 10 land surface variables. The activities of each focus area are coordinated by two international co-leads and currently include leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR), vegetation phenology, surface albedo, fire disturbance, snow cover, land cover and land use change, soil moisture, land surface temperature (LST) and emissivity. Recent additions to the focus areas include vegetation indices and biomass. The development of best practice validation protocols is a core activity of CEOS LPV, with the objective of standardizing the evaluation of land surface products. LPV has identified four validation levels corresponding to increasing spatial and temporal representativeness of the reference samples used to perform validation. Best practice validation protocols (1) provide the definition of variables, ancillary information and uncertainty metrics, (2) describe available data sources and methods to establish reference validation datasets with SI traceability, and (3) describe evaluation methods and reporting. An overview of validation best practice components will be presented based on the LAI and LST protocol efforts to date.

  10. Distress Tolerance Scale: A Study of Reliability and Validity

    Directory of Open Access Journals (Sweden)

    Ahmet Emre SARGIN

    2012-11-01

    Objective: The Distress Tolerance Scale (DTS) was developed by Simons and Gaher in order to measure individual differences in the capacity to tolerate distress. The aim of this study is to assess the reliability and validity of the Turkish version of the DTS. Method: One hundred and sixty-seven university students (male = 66, female = 101) participated in this study. The Beck Anxiety Inventory (BAI), State-Trait Anxiety Inventory (STAI) and Discomfort Intolerance Scale (DIS) were used to determine criterion validity. Construct validity was evaluated with factor analysis after the Kaiser-Meyer-Olkin (KMO) and Bartlett tests had been performed. To assess test-retest reliability, the scale was re-applied to 79 participants six weeks later. Results: To assess construct validity, factor analyses were performed using principal components analysis with varimax rotation. Whereas a different number of factors was reported in the original study, our factor analysis resulted in three factors. Cronbach's alpha coefficients for the entire scale and the tolerance, regulation and self-efficacy subscales were .89, .90, .80 and .64, respectively. There were correlations at the 0.01 level between the Trait Anxiety Inventory of the STAI and the BAI and all the subscales of the DTS, and also between the State Anxiety Inventory and the regulation subscale. Both subscales of the DIS were correlated with the entire scale and all the subscales except regulation at the 0.05 level. Test-retest reliability was statistically significant at the 0.01 level. Conclusion: The analysis demonstrated that the DTS had a satisfactory level of reliability and validity in Turkish university students.
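Cronbach's alpha, the internal-consistency coefficient reported for the DTS and its subscales, is computed from item variances and the total-score variance. A minimal sketch on toy item data, not the DTS dataset:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    one inner list per item, all of equal length (one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))

# Toy data: 3 items answered by 4 respondents (illustrative only)
items = [[4, 3, 5, 2], [4, 2, 5, 3], [3, 3, 4, 2]]
print(round(cronbach_alpha(items), 3))  # → 0.9
```

Values of .80 and above, as reported for the full scale and two of the subscales, are conventionally read as good internal consistency.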

  11. Validation of the computational model ALDERSON/EGSnrc for chest radiography

    International Nuclear Information System (INIS)

    Muniz, Bianca C.; Santos, André L. dos; Menezes, Claudio J.M.

    2017-01-01

    To perform dose studies in situations of exposure to radiation without exposing individuals, numerical dosimetry uses Computational Exposure Models (ECMs). Composed essentially of a radioactive source simulation algorithm, a voxel phantom representing the human anatomy and a Monte Carlo code, ECMs must be validated to determine the reliability of the representation of the physical arrangement. The objective of this work is to validate the ALDERSON/EGSnrc ECM through comparisons between experimental measurements obtained with an ionization chamber and virtual simulations using the Monte Carlo method to determine the ratio of the input and output radiation doses. Preliminary results of these comparisons showed that the ECM reproduced the results of the experimental measurements performed with the physical phantom with a relative error of less than 10%, validating the use of this model for simulations of chest radiographs and estimates of radiation doses in tissues in the irradiated structures.

  12. A Complete Reporting of MCNP6 Validation Results for Electron Energy Deposition in Single-Layer Extended Media for Source Energies <= 1-MeV

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hughes, Henry Grady [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-04

    In this paper, we expand on previous validation work by Dixon and Hughes. That is, we present a more complete suite of validation results with respect to the well-known Lockwood energy deposition experiment. Lockwood et al. measured energy deposition in materials including beryllium, carbon, aluminum, iron, copper, molybdenum, tantalum, and uranium, for both single- and multi-layer 1-D geometries. Source configurations included mono-energetic, mono-directional electron beams with energies of 0.05-MeV, 0.1-MeV, 0.3-MeV, 0.5-MeV, and 1-MeV, at both normal and off-normal angles of incidence. These experiments are particularly valuable for validating electron transport codes, because they are closely represented by simulating pencil beams incident on 1-D semi-infinite slabs with and without material interfaces. Herein, we include total energy deposition and energy deposition profiles for the single-layer experiments reported by Lockwood et al. (a more complete multi-layer validation will follow in another report).

  13. Overview of SCIAMACHY validation: 2002–2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2005-08-01

    SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to meet complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, the operational processors have been upgraded regularly, and some data products - level-1b spectra, level-2 O3, NO2, BrO and cloud data - have improved significantly. Validation results summarised in this paper conclude that for limited periods and geographical domains these products can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific use in other periods and domains. Untied to the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality

  14. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities: (1) removal of spurious vector data; (2) filtering, smoothing, and interpolation of PIV data; and (3) calculation of out-of-plane vorticity, ensemble statistics, and turbulence statistics. The software runs on an IBM PC/AT host computer under either the Microsoft Windows 3.1 or Windows 95 operating system.
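The first capability, removal of spurious vectors, is commonly implemented as a local median test: a vector that deviates too far from the median of its neighbours is flagged and replaced by interpolation. The following 1-D sketch illustrates the idea only; it is not the package's actual algorithm, and the threshold value is an assumption:

```python
def remove_spurious(vectors, threshold=2.0):
    """Local median test on a 1-D line of PIV vectors: flag a vector whose
    deviation from the median of its neighbours exceeds `threshold`, and
    replace it with that median (a simple form of interpolation)."""
    def median(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

    cleaned = list(vectors)
    for i in range(1, len(vectors) - 1):
        # up to two neighbours on each side, excluding the vector itself
        neighbours = vectors[max(0, i - 2):i] + vectors[i + 1:i + 3]
        med = median(neighbours)
        if abs(vectors[i] - med) > threshold:
            cleaned[i] = med
    return cleaned

# One row of a velocity field with a spurious vector at index 2
row = [1.0, 1.1, 9.0, 1.2, 1.3]
print(remove_spurious(row))  # the outlier is replaced by the local median
```

Real PIV validation works on 2-D neighbourhoods (typically 3x3) and often normalizes the deviation by a local noise estimate, but the flag-and-interpolate structure is the same.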

  15. Validating Teacher Commitment Scale Using a Malaysian Sample

    Directory of Open Access Journals (Sweden)

    Lei Mee Thien

    2014-05-01

    This study attempts to validate an integrative Teacher Commitment scale using rigorous scale validation procedures. An adapted questionnaire with 17 items was administered to 600 primary school teachers in Penang, Malaysia. Data were analyzed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) with SPSS 19.0 and AMOS 19.0, respectively. The results support Teacher Commitment as a multidimensional construct with four underlying dimensions: Commitment to Student, Commitment to Teaching, Commitment to School, and Commitment to Profession. The validated 13-item Teacher Commitment scale can be used as an evaluative tool to assess the level to which teachers are committed to their students' learning, teaching, school, and profession. The Teacher Commitment scale would also facilitate the identification of factors that influence teachers' quality of work life and school effectiveness. The practical implications, school cultural influence, and methodological limitations are discussed.

  16. Software for validating parameters retrieved from satellite

    Digital Repository Service at National Institute of Oceanography (India)

    Muraleedharan, P.M.; Sathe, P.V.; Pankajakshan, T.

    Parameters retrieved from the Multi-channel Scanning Microwave Radiometer (MSMR) onboard the Indian satellite Oceansat-1 during 1999-2001 were validated using this software as a case study. The program has several added advantages over the conventional method of validation that involves strenuous...

  17. DESCQA: Synthetic Sky Catalog Validation Framework

    Science.gov (United States)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  18. Ensuring validity in qualitative international business research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Skaates, Maria Anne

    2002-01-01

    The purpose of this paper is to provide an account of how the validity issue related to qualitative research strategies within the IB field may be grasped from an at least partially subjectivist point of view. In section two, we will first assess via the aforementioned literature review the extent...... to which the validity issue has been treated in qualitative research contributions published in six leading English-language journals which publish IB research. Thereafter, in section three, we will discuss our findings and relate them to (a) various levels of a research project and (b) the existing...... literature on potential validity problems from a more subjectivist point of view. As a part of this step, we will demonstrate that the assumptions of objectivist and subjectivist ontologies and their corresponding epistemologies merit different canons for assessing research validity. In the subsequent...

  19. Excellent cross-cultural validity, intra-test reliability and construct validity of the dutch rivermead mobility index in patients after stroke undergoing rehabilitation

    NARCIS (Netherlands)

    Roorda, Leo D.; Green, John; De Kluis, Kiki R. A.; Molenaar, Ivo W.; Bagley, Pam; Smith, Jane; Geurts, Alexander C. H.

    2008-01-01

    Objective: To investigate the cross-cultural validity of international Dutch-English comparisons when using the Dutch Rivermead Mobility Index (RMI), and the intra-test reliability and construct validity of the Dutch RMI. Methods: Cross-cultural validity was studied in a combined data-set of Dutch

  20. Validation of a Russian Language Oswestry Disability Index Questionnaire.

    Science.gov (United States)

    Yu, Elizabeth M; Nosova, Emily V; Falkenstein, Yuri; Prasad, Priya; Leasure, Jeremi M; Kondrashov, Dimitriy G

    2016-11-01

    Study Design  Retrospective reliability and validity study. Objective  To validate a recently translated Russian language version of the Oswestry Disability Index (R-ODI) using standardized methods detailed from previous validations in other languages. Methods  We included all subjects who were seen in our spine surgery clinic, over the age of 18, and fluent in the Russian language. R-ODI was translated by six bilingual people and combined into a consensus version. R-ODI and visual analog scale (VAS) questionnaires for leg and back pain were distributed to subjects during both their initial and follow-up visits. Test validity, stability, and internal consistency were measured using standardized psychometric methods. Results Ninety-seven subjects participated in the study. No change in the meaning of the questions on R-ODI was noted with translation from English to Russian. There was a significant positive correlation between R-ODI and VAS scores for both the leg and back during both the initial and follow-up visits ( p  Russian-speaking population in the United States.

  1. [Spanish validation of Game Addiction Scale for Adolescents (GASA)].

    Science.gov (United States)

    Lloret Irles, Daniel; Morell Gomis, Ramon; Marzo Campos, Juan Carlos; Tirado González, Sonia

    The aim of this study is to adapt and validate the Game Addiction Scale for Adolescents (GASA) for the Spanish youth population. Cultural adaptation and validation study. Secondary Education centres. Two independent studies were conducted: one on a group of 466 young people with a mean age of 15.27 years (13-18, SD: 1.83; 48.7% ♀), and another on a group of 566 with a mean age of 21.24 years (19-26; SD: 1.86; 44.1% ♀). Measures: addiction to video games (GASA); gaming behaviour (game habits and usage questionnaire); impulsiveness (Plutchik Impulsiveness Scale); and group pressure (ad hoc questionnaire). The Spanish version of GASA has shown good reliability and a factor structure true to the original scale. As regards criterion validity, GASA scores differ significantly according to four criteria related to problematic gaming: gaming intensity and frequency, impulsiveness, and peer pressure. The results show that the adapted version of GASA is an adequate and valid tool for assessing problematic gaming behaviour. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  2. Validity of the Eating Attitude Test among Exercisers.

    Science.gov (United States)

    Lane, Helen J; Lane, Andrew M; Matheson, Hilary

    2004-12-01

    Theory testing and construct measurement are inextricably linked. To date, no published research has examined the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: dieting behavior (13 items), oral control (7 items), and bulimia nervosa-food preoccupation (6 items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitude scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test the single-factor model, a three-factor model, and a four-factor model which distinguished bulimia from food preoccupation. CFA showed poor model fit for the single-factor model (RCFI = 0.66, RMSEA = 0.10) and the three-factor model (RCFI = 0.74; RMSEA = 0.09). There was marginal fit for the four-factor model (RCFI = 0.91, RMSEA = 0.06). Results indicated that five items showed poor factor loadings. After these five items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and the three-factor model (RCFI = 0.82, RMSEA = 0.08) showed poor fit. CFA results for the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment and competence. Correlation results indicated that depressed mood scores positively correlated with bulimia and dieting scores. Further, dieting was inversely related to self-determination toward exercising. Collectively, findings suggest that a 21-item four-factor model shows promising validity coefficients among

  3. Validation of the 4P's Plus screen for substance use in pregnancy validation of the 4P's Plus.

    Science.gov (United States)

    Chasnoff, I J; Wells, A M; McGourty, R F; Bailey, L K

    2007-12-01

    The purpose of this study was to validate the 4P's Plus screen for substance use in pregnancy. A total of 228 pregnant women enrolled in prenatal care underwent screening with the 4P's Plus and received a follow-up clinical assessment for substance use. Statistical analyses regarding reliability, sensitivity, specificity, and positive and negative predictive validity of the 4P's Plus were conducted. The overall reliability of the five-item measure was 0.62. Seventy-four (32.5%) of the women had a positive screen. Sensitivity and specificity were very good, at 87% and 76%, respectively. Positive predictive validity was low (36%), but negative predictive validity was quite high (97%). Of the 31 women who had a positive clinical assessment, 45% were using less than 1 day per week. The 4P's Plus reliably and effectively screens pregnant women for risk of substance use, including those women typically missed by other perinatal screening methodologies.
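The screening statistics quoted above follow directly from a 2x2 table of screen results against the clinical assessment. A minimal sketch; the cell counts below are an assumed reconstruction consistent with the reported totals (228 women, 74 positive screens, 31 positive assessments), not published data.

```python
def screen_metrics(tp, fp, fn, tn):
    # tp: screen+/assessment+, fp: screen+/assessment-,
    # fn: screen-/assessment+, tn: screen-/assessment-
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive validity
    npv = tn / (tn + fn)  # negative predictive validity
    return sensitivity, specificity, ppv, npv

# Counts chosen to be consistent with the abstract's totals (assumed)
sens, spec, ppv, npv = screen_metrics(tp=27, fp=47, fn=4, tn=150)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))
```

The low positive predictive validity despite good sensitivity is the usual consequence of screening where true positives are a minority of those screened.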

  4. Validation of the Information/Communications Technology Literacy Test

    Science.gov (United States)

    2016-10-01

    Technical Report 1360: Validation of the Information/Communications Technology Literacy Test. D. Matthew Trippe, Human Resources Research... (contract/grant number W91WAS-09-D-0013). The report describes an effort to validate a measure of cyber aptitude, the Information/Communications Technology Literacy Test (ICTL), in predicting trainee performance in Information

  5. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Rather than standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To address this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow demanding medical image processing to be performed on the smartphone itself. In this paper we have developed a smartphone application (app) which automatically detects the valid frames (those with clear organ visibility) in an ultrasound video, ignores the invalid frames (those with no organ visibility), and produces a video of compressed size. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid frames.

  6. Learning Style Scales: a valid and reliable questionnaire

    Directory of Open Access Journals (Sweden)

    Abdolghani Abdollahimohammad

    2014-08-01

    Purpose: Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. Methods: A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity were obtained from an expert panel. The Learning Style Scales (LSS) construct was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of the LSS was tested using Cronbach's α, corrected item-total correlations, and test-retest. Results: Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was > 0.70 for all subscales in both study populations. The corrected item-total correlations were > 0.30 for the items in each component. Conclusion: The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments.
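Cronbach's α, used above to check internal consistency, can be computed directly from raw item scores. A minimal sketch with invented toy data (three items, five respondents); a real analysis would use the full 156-student sample.

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of scores per item, each of equal length
    # (one entry per respondent). Population variance is used
    # consistently, so the ratio is unaffected by the choice of
    # population vs. sample variance.
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Toy 3-item, 5-respondent example (hypothetical data)
items = [[3, 4, 5, 2, 4], [2, 4, 5, 3, 4], [3, 5, 4, 2, 5]]
print(round(cronbach_alpha(items), 2))
```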

  7. Doubtful outcome of the validation of the Rome II questionnaire: validation of a symptom based diagnostic tool

    Directory of Open Access Journals (Sweden)

    Nylin Henry BO

    2009-12-01

    Background: Questionnaires are used in research and clinical practice. For gastrointestinal complaints the Rome II questionnaire is internationally known but not validated. The aim of this study was to validate a printed and a computerized version of Rome II, translated into Swedish. Results from various analyses are reported. Methods: Volunteers from a population-based colonoscopy study were included (n = 1011), together with patients seeking general practice (n = 45) and patients visiting a gastrointestinal specialists' clinic (n = 67). The questionnaire consists of 38 questions concerning gastrointestinal symptoms and complaints. Diagnoses are made according to a special code. Our validation included analyses of the translation, feasibility, predictability, reproducibility and reliability. Kappa values and overall agreement were measured. The factor structures were confirmed using a principal component analysis, and Cronbach's alpha was used to test the internal consistency. Results and Discussion: Translation and back-translation showed good agreement. The questionnaire was easy to understand and use. The reproducibility test showed kappa values of 0.60 for GERS, 0.52 for FD, and 0.47 for IBS. Kappa values and overall agreement for the predictability, when the diagnoses by the questionnaire were compared to the diagnoses by the clinician, were 0.26 and 90% for GERS, 0.18 and 85% for FD, and 0.49 and 86% for IBS. Corresponding figures for the agreement between the printed and the digital version were 0.50 and 92% for GERS, 0.64 and 95% for FD, and 0.76 and 95% for IBS. Cronbach's alpha coefficient for GERS was 0.75 with a span per item of 0.71 to 0.76. For FD the figures were 0.68 and 0.54 to 0.70, and for IBS 0.61 and 0.56 to 0.66. The Rome II questionnaire has never been thoroughly validated before, even if diagnoses made by the Rome criteria have been compared to diagnoses made in clinical practice. Conclusion: The accuracy of the Swedish version of
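Kappa values like those reported above measure chance-corrected agreement between two sets of diagnoses. A minimal sketch of Cohen's kappa for a 2x2 agreement table; the counts below are hypothetical, not data from the study.

```python
def cohens_kappa(a, b, c, d):
    # 2x2 agreement table: a = both yes, b = rater1 yes / rater2 no,
    # c = rater1 no / rater2 yes, d = both no.
    n = a + b + c + d
    po = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)   # chance agreement on "yes"
    p_no = ((c + d) / n) * ((b + d) / n)    # chance agreement on "no"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Hypothetical agreement counts between questionnaire and clinician
print(round(cohens_kappa(22, 5, 10, 63), 2))
```

Note that high overall agreement can coexist with low kappa when one outcome dominates, which is exactly the pattern seen in the predictability figures above.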

  8. Integrated Design Validation: Combining Simulation and Formal Verification for Digital Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Lun Li

    2006-04-01

    The correct design of complex hardware continues to challenge engineers. Bugs in a design that are not uncovered in early design stages can be extremely expensive. Simulation is the predominant tool used to validate a design in industry. Formal verification overcomes the weakness of exhaustive simulation by applying mathematical methodologies to validate a design. The work described here focuses upon a technique that integrates the best characteristics of both simulation and formal verification methods to provide an effective design validation tool, referred to as Integrated Design Validation (IDV). The novelty in this approach consists of three components: circuit complexity analysis, partitioning based on design hierarchy, and coverage analysis. The circuit complexity analyzer and partitioner decompose a large design into sub-components and feed the sub-components to different verification and/or simulation tools based upon the known strengths of modern verification and simulation tools. The coverage analysis unit computes the coverage of design validation and improves the coverage by further partitioning. Various simulation and verification tools comprising IDV are evaluated, and an example is used to illustrate the overall validation process. The overall process successfully validates the example to a high coverage rate within a short time. The experimental result shows that our approach is a very promising design validation method.

  9. A valid licence

    NARCIS (Netherlands)

    Spoolder, H.A.M.; Ingenbleek, P.T.M.

    2010-01-01

    A valid licence Tuesday, April 20, 2010 Dr Hans Spoolder and Dr Paul Ingenbleek, of Wageningen University and Research Centres, share their thoughts on improving farm animal welfare in Europe At the presentation of the European Strategy 2020 on 3rd March, President Barroso emphasised the need for

  10. Validity and Reliability of Farsi Version of Youth Sport Environment Questionnaire.

    Science.gov (United States)

    Eshghi, Mohammad Ali; Kordi, Ramin; Memari, Amir Hossein; Ghaziasgar, Ahmad; Mansournia, Mohammad-Ali; Zamani Sani, Seyed Hojjat

    2015-01-01

    The Youth Sport Environment Questionnaire (YSEQ) had been developed from Group Environment Questionnaire, a well-known measure of team cohesion. The aim of this study was to adapt and examine the reliability and validity of the Farsi version of the YSEQ. This version was completed by 455 athletes aged 13-17 years. Results of confirmatory factor analysis indicated that two-factor solution showed a good fit to the data. The results also revealed that the Farsi YSEQ showed high internal consistency, test-retest reliability, and good concurrent validity. This study indicated that the Farsi version of the YSEQ is a valid and reliable measure to assess team cohesion in sport setting.

  11. Development of a validation test for self-reported abstinence from smokeless tobacco products: preliminary results

    International Nuclear Information System (INIS)

    Robertson, J.B.; Bray, J.T.

    1988-01-01

    Using X-ray fluorescence spectrometry, 11 heavy elements at concentrations that are easily detectable have been identified in smokeless tobacco products. These concentrations were found to increase in cheek epithelium samples of the user after exposure to smokeless tobacco. This feasibility study suggests that the level of strontium in the cheek epithelium could be a valid measure of recent smokeless tobacco use. It also demonstrates that strontium levels become undetectable within several days of smokeless tobacco cessation. This absence of strontium could validate a self-report of abstinence from smokeless tobacco. Finally, the X-ray spectrum of heavy metal content of cheek epithelium from smokeless tobacco users could itself provide a visual stimulus to further motivate the user to terminate the use of smokeless tobacco products

  12. Student nurses' perceptions of mental health care: Validation of a questionnaire

    NARCIS (Netherlands)

    Corine Latour; Hanneke Hoekstra; Alex van der Heijden; prof Berno van Meijel; Jaap van der Bijl

    2011-01-01

    This article describes the results of a study into the psychometric properties of a questionnaire about student nurses' perceptions of mental health care. The questionnaire was constructed in 2008, but has not yet been tested in terms of construct validity and reliability. A validated questionnaire

  13. Psychosocial risk assessment: French validation of the Copenhagen Psychosocial Questionnaire (COPSOQ)

    DEFF Research Database (Denmark)

    Dupret, E; Bocerean, C; Teherani, M

    2012-01-01

    of the scales, exploratory factor analysis, concurrent validity analysis) gave satisfactory results and demonstrated the validity of the French COPSOQ. CONCLUSIONS: A new questionnaire is now available in French. A large body of data is currently being gathered in view of comparing occupations and types...

  14. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    Science.gov (United States)

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  15. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools has been launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  16. Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity

    Science.gov (United States)

    McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio

    2010-01-01

    We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807
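Retest reliability, which the record above argues should not be replaced by internal consistency, is typically estimated as the Pearson correlation between two administrations of the same scale. A minimal sketch with invented time-1/time-2 scores:

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson correlation between paired score lists, e.g. two
    # administrations of a facet scale (retest reliability).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical time-1 / time-2 scores for one scale
t1 = [1, 2, 3, 4, 5]
t2 = [2, 1, 4, 3, 5]
print(round(pearson_r(t1, t2), 2))
```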

  17. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits to combine the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM has been proposed as a profile for UML models of the Web Ontology Language (OWL. In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach we will show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions to properties of the transformation (invariants. The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER to Relational Model (RM transformation.

  19. Fuel Cell and Hydrogen Technology Validation

    Science.gov (United States)

    The NREL technology validation team works on validating hydrogen fuel cell electric vehicles; hydrogen fueling infrastructure; hydrogen system components; and fuel cell use in early market applications such as

  20. A Validation Study of the Impression Replica Technique.

    Science.gov (United States)

    Segerström, Sofia; Wiking-Lima de Faria, Johanna; Braian, Michael; Ameri, Arman; Ahlgren, Camilla

    2018-04-17

    To validate the well-known and often-used impression replica technique for measuring the fit between a preparation and a crown in vitro. The validation consisted of three steps. First, a measuring instrument was validated to elucidate its accuracy. Second, a specimen consisting of male and female counterparts was created and validated by the measuring instrument. Calculations were made for the exact values of three gaps between the male and female parts. Finally, impression replicas were produced of the specimen gaps and sectioned into four pieces. The replicas were then measured with the use of a light microscope. The values received from measuring the specimen were then compared with the values received from the impression replicas, and the technique was thereby validated. The impression replica technique overestimated all measured gaps. Depending on the location of the three measuring sites, the difference between the specimen and the impression replicas varied from 47 to 130 μm. The impression replica technique overestimates gaps within the range of 2% to 11%. The validation of the replica technique enables the method to be used as a reference when testing other methods for evaluating fit in dentistry. © 2018 by the American College of Prosthodontists.

  1. Guided exploration of physically valid shapes for furniture design

    KAUST Repository

    Umetani, Nobuyuki

    2012-07-01

    Geometric modeling and the physical validity of shapes are traditionally considered independently. This makes creating aesthetically pleasing yet physically valid models challenging. We propose an interactive design framework for efficient and intuitive exploration of geometrically and physically valid shapes. During any geometric editing operation, the proposed system continuously visualizes the valid range of the parameter being edited. When one or more constraints are violated after an operation, the system generates multiple suggestions involving both discrete and continuous changes to restore validity. Each suggestion also comes with an editing mode that simultaneously adjusts multiple parameters in a coordinated way to maintain validity. Thus, while the user focuses on the aesthetic aspects of the design, our computational design framework helps to achieve physical realizability by providing active guidance to the user. We demonstrate our framework on plank-based furniture design with nail-joint and frictional constraints. We use our system to design a range of examples, conduct a user study, and also fabricate a physical prototype to test the validity and usefulness of the system. © 2012 ACM 0730-0301/2012/08- ART86.

  2. Development and validation of the Alcohol Myopia Scale.

    Science.gov (United States)

    Lac, Andrew; Berger, Dale E

    2013-09-01

    Alcohol myopia theory conceptualizes the ability of alcohol to narrow attention and how this demand on mental resources produces the impairments of self-inflation, relief, and excess. The current research was designed to develop and validate a scale based on this framework. People who were alcohol users rated items representing myopic experiences arising from drinking episodes in the past month. In Study 1 (N = 260), the preliminary 3-factor structure was supported by exploratory factor analysis. In Study 2 (N = 289), the 3-factor structure was substantiated with confirmatory factor analysis, and it was superior in fit to an empirically indefensible 1-factor structure. The final 14-item scale was evaluated with internal consistency reliability, discriminant validity, convergent validity, criterion validity, and incremental validity. The alcohol myopia scale (AMS) illuminates conceptual underpinnings of this theory and yields insights for understanding the tunnel vision that arises from intoxication.

  3. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  4. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on Signed Directed Graphs (SDG) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
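The core of an SDG model is qualitative propagation of deviations along signed edges: an edge (u, v, s) with sign s in {+1, -1} means a deviation in u drives v in the same or opposite direction. The toy sketch below uses hypothetical node names and is an illustration of the general technique, not the authors' implementation.

```python
from collections import deque

def propagate(edges, start, trend):
    # Breadth-first propagation of a qualitative deviation (+1 / -1)
    # through a signed directed graph; each node keeps the first
    # trend that reaches it.
    adj = {}
    for u, v, s in edges:
        adj.setdefault(u, []).append((v, s))
    trends, queue = {start: trend}, deque([start])
    while queue:
        u = queue.popleft()
        for v, s in adj.get(u, []):
            if v not in trends:
                trends[v] = trends[u] * s
                queue.append(v)
    return trends

# Hypothetical process graph: feed raises level, level raises outflow;
# coolant flow lowers temperature, temperature raises pressure.
edges = [("feed", "level", +1), ("level", "outflow", +1),
         ("coolant", "temp", -1), ("temp", "pressure", +1)]
print(propagate(edges, "feed", +1))
```

Enumerating such propagations from each possible fault node is one way to generate the "complete testing scenarios" the abstract describes.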

  5. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.

  6. The Nursing Diagnosis of risk for pressure ulcer: content validation

    Directory of Open Access Journals (Sweden)

    Cássia Teixeira dos Santos

    2016-01-01

    Objective: to validate the content of the new nursing diagnosis termed risk for pressure ulcer. Method: content validation was performed with a sample of 24 nurses specialized in skin care, from six different hospitals in the South and Southeast of Brazil. Data collection took place electronically, through an instrument constructed using the SurveyMonkey program, containing a title, a definition, and 19 risk factors for the nursing diagnosis. The data were analyzed using Fehring's method and descriptive statistics. The project was approved by a Research Ethics Committee. Results: the title, the definition, and seven risk factors were validated as "very important": physical immobilization, pressure, surface friction, shearing forces, skin moisture, alteration in sensation, and malnutrition. Among the other risk factors, 11 were validated as "important": dehydration, obesity, anemia, decrease in serum albumin level, prematurity, aging, smoking, edema, impaired circulation, and decrease in oxygenation and in tissue perfusion. The risk factor of hyperthermia was discarded. Conclusion: the content validation of these components of the nursing diagnosis corroborated their importance, and can facilitate the nurse's clinical reasoning and guide clinical practice in preventive care for pressure ulcers.
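Fehring's method, used above, maps each expert's Likert rating to a weight, averages the weights per risk factor, and classifies the factor against cut-off scores. This is a hedged sketch: the weight mapping and the 0.80/0.50 cut-offs follow common adaptations of Fehring's model and are assumptions here, and the ratings are invented.

```python
# Assumed Fehring-style mapping from 1-5 Likert ratings to weights
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def fehring_score(ratings):
    # Mean weighted rating across the expert panel
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

def classify(ratings):
    # Assumed cut-offs: >= 0.80 "very important", >= 0.50 "important",
    # otherwise the risk factor is discarded.
    score = fehring_score(ratings)
    if score >= 0.80:
        return "very important"
    if score >= 0.50:
        return "important"
    return "discard"

# Invented ratings from five hypothetical experts
print(classify([5, 5, 4, 5, 4]))
```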

  7. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Gougar, Hans [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-02-01

    The Department of Energy (DOE) has made significant progress in developing simulation tools that predict the behavior of nuclear systems with greater accuracy, and in increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user).
To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to

  8. The Predictive Validity of Projective Measures.

    Science.gov (United States)

    Suinn, Richard M.; Oskamp, Stuart

    Written for use by clinical practitioners as well as psychological researchers, this book surveys recent literature (1950-1965) on projective test validity by reviewing and critically evaluating studies which shed light on what may reliably be predicted from projective test results. Two major instruments are covered: the Rorschach and the Thematic…

  9. Validation of the Netherlands pacemaker patient registry

    NARCIS (Netherlands)

    Dijk, WA; Kingma, T; Hooijschuur, CAM; Dassen, WRM; Hoorntje, JCA; van Gelder, LM

    1997-01-01

    This paper deals with the validation of the information stored in the Netherlands central pacemaker patient database. At this moment the registry database contains information on more than 70500 patients, 85000 pacemakers and 90000 leads. The validation procedures consisted of an internal

  10. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the Collision Module for the computer program DAMAGE.

  11. Development and Validation of Multi-Dimensional Personality ...

    African Journals Online (AJOL)

    This study was carried out to establish the scientific processes for the development and validation of Multi-dimensional Personality Inventory (MPI). The process of development and validation occurred in three phases with five components of Agreeableness, Conscientiousness, Emotional stability, Extroversion, and ...

  12. An information architecture for courseware validation

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    A lack of pedagogy in courseware can lead to learner rejection. It is therefore vital that pedagogy is a central concern of courseware construction. Courseware validation allows the course creator to specify pedagogical rules and principles which courseware must conform to. In this paper we investigate the information needed for courseware validation and propose an information architecture to be used as a basis for validation.

  13. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  14. Validation of a clinical critical thinking skills test in nursing

    Directory of Open Access Journals (Sweden)

    Sujin Shin

    2015-01-01

    Purpose: The purpose of this study was to develop a revised version of the clinical critical thinking skills test (CCTS) and to subsequently validate its performance. Methods: This study is a secondary analysis of the CCTS. Data were obtained from a convenience sample of 284 college students in June 2011. Thirty items were analyzed using item response theory, and test reliability was assessed. Test-retest reliability was measured using the results of 20 nursing college and graduate school students in July 2013. The content validity of the revised items was analyzed by calculating the degree of agreement between the instrument developer's intention in item development and the judgments of six experts. To analyze response-process validity, qualitative data related to the response processes of nine nursing college students, obtained through cognitive interviews, were analyzed. Results: Of the initial 30 items, 11 were excluded after analysis of the difficulty and discrimination parameters. When the 19 items of the revised version of the CCTS were analyzed, levels of item difficulty were found to be relatively low and levels of discrimination were found to be appropriate or high. The degree of agreement between the item developer's intention and expert judgments equaled or exceeded 50%. Conclusion: These results demonstrate response-process validity, indicating that subjects responded as the test developer intended. The revised 19-item CCTS was found to have sufficient reliability and validity and therefore represents a more convenient measurement of critical thinking ability.

  15. Validation of MCNP and WIMS-AECL/DRAGON/RFSP for ACR-1000 applications

    International Nuclear Information System (INIS)

    Bromley, Blair P.; Adams, Fred P.; Zeller, Michael B.; Watts, David G.; Shukhman, Boris V.; Pencer, Jeremy

    2008-01-01

    This paper gives a summary of the validation of the reactor physics codes WIMS-AECL, DRAGON, RFSP and MCNP5, which are being used in the design, operation, and safety analysis of the ACR-1000®. The standards and guidelines being followed for code validation of the suite are established in CSA Standard N286.7-99 and ANS Standard ANS-19.3-2005. These codes are being validated for the calculation of key output parameters associated with various reactor physics phenomena of importance during normal operations and postulated accident conditions in an ACR-1000 reactor. Experimental data from a variety of sources are being used for validation. The bulk of the validation data is from critical experiments in the ZED-2 research reactor with ACR-type lattices. To supplement and complement ZED-2 data, qualified and applicable data are being taken from other power and research reactors, such as existing CANDU® units, FUGEN, the NRU and SPERT research reactors, and the DCA critical facility. MCNP simulations of the ACR-1000 are also being used for validating WIMS-AECL/DRAGON/RFSP, which involves extending the validation results for MCNP through the assistance of TSUNAMI analyses. Code validation against commissioning data in the first-build ACR-1000 will be confirmatory. The code validation is establishing the biases and uncertainties in the calculations of the WIMS-AECL/DRAGON/RFSP suite for the evaluation of various key parameters of importance in the reactor physics analysis of the ACR-1000. (authors)

  16. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend code capability to whole-core subchannel analysis, preconditioned Krylov matrix solvers such as BiCGSTAB and GMRES have been implemented in the MATRA code, along with parallel computing algorithms using MPI and OpenMP. The code is written in Fortran 90 and has user-friendly features such as a graphical user interface. MATRA was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for best estimation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically for the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and the experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach yielded similar uncertainties but did not account for the nonlinear effects on the
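    The Monte-Carlo approach described above can be sketched in a few lines: input uncertainties are propagated through the simulation and combined with the measurement scatter to yield a standard uncertainty for the comparison error. The model, input values and uncertainty magnitudes below are purely illustrative assumptions, not MATRA's.

```python
import random
import statistics

def validation_uncertainty_mc(simulate, nominal_inputs, input_sds, exp_sd,
                              n=20000, seed=1):
    """Monte-Carlo estimate of the standard uncertainty of the comparison
    error E = S - D (simulation minus data): input uncertainties are
    propagated through the model and combined with measurement scatter."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n):
        perturbed = [rng.gauss(mu, sd) for mu, sd in zip(nominal_inputs, input_sds)]
        s = simulate(perturbed)       # one simulated prediction
        d = rng.gauss(0.0, exp_sd)    # measurement scatter about the datum
        errors.append(s - d)
    return statistics.stdev(errors)

# Hypothetical linear response of void fraction to two uncertain inputs.
model = lambda x: 0.3 * x[0] + 0.1 * x[1]
u_val = validation_uncertainty_mc(model, [1.0, 2.0], [0.05, 0.10], exp_sd=0.02)
```

    For this linear toy model the combined standard uncertainty is analytically sqrt((0.3·0.05)² + (0.1·0.10)² + 0.02²) ≈ 0.027, which the sampled estimate approaches as n grows.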

  17. Validation of Medical Tourism Service Quality Questionnaire (MTSQQ) for Iranian Hospitals.

    Science.gov (United States)

    Qolipour, Mohammad; Torabipour, Amin; Khiavi, Farzad Faraji; Malehi, Amal Saki

    2017-03-01

    Assessing service quality is one of the basic requirements for developing the medical tourism industry, yet there is no valid and reliable tool to measure the service quality of medical tourism. This study aimed to determine the reliability and validity of a Persian version of a medical tourism service quality questionnaire for Iranian hospitals. To validate the medical tourism service quality questionnaire (MTSQQ), a cross-sectional study was conducted on 250 Iraqi patients referred to hospitals in Ahvaz (Iran) in 2015. To design the questionnaire and determine its content validity, the Delphi technique (3 rounds) was used with the participation of 20 medical tourism experts. Construct validity of the questionnaire was assessed through exploratory and confirmatory factor analysis. Reliability was assessed using Cronbach's alpha coefficient. Data were analyzed with Excel 2007, SPSS version 18, and LISREL 8.0 software. The content validity of the questionnaire was confirmed with CVI=0.775. According to the exploratory factor analysis, the MTSQQ included 31 items and 8 dimensions (tangibility, reliability, responsiveness, assurance, empathy, exchange and travel facilities, technical and infrastructure facilities, and safety and security). Construct validity of the questionnaire was confirmed, based on the goodness-of-fit quantities of the model (RMSEA=0.032, CFI=0.98, GFI=0.88). Cronbach's alpha coefficient was 0.837 for the expectation questionnaire and 0.919 for the perception questionnaire. The results of the study showed that the medical tourism SERVQUAL questionnaire with 31 items and 8 dimensions is a valid and reliable tool to measure the service quality of medical tourism in Iranian hospitals.
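    Cronbach's alpha, used above to assess reliability, compares the sum of the item variances with the variance of the total score. A minimal sketch with illustrative scores (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns: each element of `items`
    is one item's scores across all respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three perfectly consistent items give alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

    Real item sets are never perfectly consistent, so observed values such as the 0.837 and 0.919 reported above fall below 1.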

  18. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    Science.gov (United States)

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although the content validity of a program is mostly assessed with qualitative methods, this study used both qualitative and quantitative methods to validate the content of a post-marriage training program provided for newly married couples. Content validation is a preliminary step in obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether 24 panelists were involved in the three qualitative and quantitative panels: 6 in the first, item-development panel; 12 in the item-reduction panel, 4 of whom were shared with the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties (CVR, CVI and face validity) of 57 educational objectives. Results: The raw content of the post-marriage program had been written by professional experts of the Ministry of Health; using the qualitative expert panel, the content was further developed by generating 3 topics and refining one topic and its respective content. In the second panel, six further objectives were deleted: three for falling below the agreement cut-off point and three by experts' consensus. The CVR of all remaining items was above 0.8, and their content validity indices (0.8–1) were fully appropriate in the quantitative assessment. Conclusion: This study provided good evidence for the validation and accreditation of the national post-marriage program planned for newly married couples in the country's health centers in the near future. PMID:26056672
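    The CVR and CVI quantities evaluated by the third panel have simple closed forms: Lawshe's CVR measures how far the number of experts rating an item "essential" exceeds half the panel, and the item-level CVI is the share of experts rating the item relevant. A sketch with hypothetical expert ratings:

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: how far the count of experts rating an item
    'essential' exceeds half the panel, scaled to [-1, 1]."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(ratings, relevant_levels=(3, 4)):
    """Item-level CVI: share of experts rating the item relevant
    (3 or 4 on a 4-point relevance scale)."""
    return sum(1 for r in ratings if r in relevant_levels) / len(ratings)

cvr_val = content_validity_ratio(9, 10)   # 9 of 10 experts: CVR = 0.8
cvi_val = item_cvi([4, 4, 3, 4, 2])       # 4 of 5 relevant: CVI = 0.8
```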

  19. The validation of evacuation simulation models through the analysis of behavioural uncertainty

    International Nuclear Information System (INIS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Borri, Dino

    2014-01-01

    Both experimental and simulation data on fire evacuation are influenced by a component of uncertainty caused by the impact of the unexplained variance in human behaviour, namely behavioural uncertainty (BU). Evacuation model validation studies should include the study of this type of uncertainty during the comparison of experiments and simulation results. An evacuation model validation procedure is introduced in this paper to study the impact of BU. This methodology is presented through a case study for the comparison between repeated experimental data and simulation results produced by FDS+Evac, an evacuation model for the simulation of human behaviour in fire, which makes use of distribution laws. - Highlights: • Validation of evacuation models is investigated. • Quantitative evaluation of behavioural uncertainty is performed. • A validation procedure is presented through an evacuation case study

  20. Assessing the validity of parenting measures in a sample of chinese adolescents.

    Science.gov (United States)

    Supple, Andrew J; Peterson, Gary W; Bush, Kevin R

    2004-09-01

    The purpose of this study was to assess the construct validity of adolescent-report parenting behavior measures (primarily derived from the Parental Behavior Measure) in a sample of 480 adolescents from Beijing, China. Results suggest that maternal support, monitoring, and autonomy granting were valid measures when assessing maternal socialization strategies and Chinese adolescent development. Measures of punitiveness and love withdrawal demonstrated limited validity, whereas maternal positive induction demonstrated little validity. The major implications of these results are that measures of "negative" parenting that included physical or psychological manipulations may not have salience for the development of Chinese adolescents. Moreover, researchers and clinicians should question the applicability of instruments and measures designed to assess family process when working with individuals in families from diverse cultural backgrounds. Copyright 2004 American Psychological Association

  1. Role of calibration, validation, and relevance in multi-level uncertainty integration

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Calibration of model parameters is an essential step in predicting the response of a complicated system, but the lack of data at the system level makes it impossible to conduct this quantification directly. In such a situation, system model parameters are estimated using tests at lower levels of complexity which share the same model parameters with the system. For such a multi-level problem, this paper proposes a methodology to quantify the uncertainty in the system level prediction by integrating calibration, validation and sensitivity analysis at different levels. The proposed approach considers the validity of the models used for parameter estimation at lower levels, as well as the relevance at the lower level to the prediction at the system level. The model validity is evaluated using a model reliability metric, and models with multivariate output are considered. The relevance is quantified by comparing Sobol indices at the lower level and system level, thus measuring the extent to which a lower level test represents the characteristics of the system so that the calibration results can be reliably used in the system level. Finally the results of calibration, validation and relevance analysis are integrated in a roll-up method to predict the system output. - Highlights: • Relevance analysis to quantify the closeness of two models. • Stochastic model reliability metric to integrate multiple validation experiments. • Extend the model reliability metric to deal with multivariate output. • Roll-up formula to integrate calibration, validation, and relevance.
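    For an additive linear model with independent inputs, first-order Sobol indices have a closed form, which makes the lower-level versus system-level comparison easy to illustrate. The coefficients and the similarity measure below are illustrative assumptions, not the paper's roll-up method:

```python
def sobol_linear(coeffs, variances):
    """First-order Sobol indices for Y = sum(a_i * X_i) with independent
    inputs: S_i = a_i^2 * var(X_i) / var(Y)."""
    contrib = [a * a * v for a, v in zip(coeffs, variances)]
    total = sum(contrib)
    return [c / total for c in contrib]

# Sensitivity profiles of a hypothetical lower-level test and of the
# system-level prediction (coefficients are made up for illustration).
lower = sobol_linear([2.0, 1.0], [1.0, 1.0])
system = sobol_linear([2.0, 1.1], [1.0, 1.0])

# One simple similarity measure (an illustrative choice, not the paper's):
# 1 minus half the L1 distance between the two index vectors.
relevance = 1 - 0.5 * sum(abs(a - b) for a, b in zip(lower, system))
```

    A relevance near 1 indicates that the lower-level test exercises the model parameters in nearly the same proportions as the system-level prediction, so its calibration results transfer reliably.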

  2. Validation of new prognostic and predictive scores by sequential testing approach

    International Nuclear Information System (INIS)

    Nieder, Carsten; Haukland, Ellinor; Pawinski, Adam; Dalhaug, Astrid

    2010-01-01

    Background and Purpose: For practitioners, the question arises how their own patient population differs from that used in large-scale analyses resulting in new scores and nomograms and whether such tools actually are valid at a local level and thus can be implemented. A recent article proposed an easy-to-use method for the in-clinic validation of new prediction tools with a limited number of patients, a so-called sequential testing approach. The present study evaluates this approach in scores related to radiation oncology. Material and Methods: Three different scores were used, each predicting short overall survival after palliative radiotherapy (bone metastases, brain metastases, metastatic spinal cord compression). For each scenario, a limited number of consecutive patients entered the sequential testing approach. The positive predictive value (PPV) was used for validation of the respective score and it was required that the PPV exceeded 80%. Results: For two scores, validity in the own local patient population could be confirmed after entering 13 and 17 patients, respectively. For the third score, no decision could be reached even after increasing the sample size to 30. Conclusion: In-clinic validation of new predictive tools with sequential testing approach should be preferred over uncritical adoption of tools which provide no significant benefit to local patient populations. Often the necessary number of patients can be reached within reasonable time frames even in small oncology practices. In addition, validation is performed continuously as the data are collected. (orig.)
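    The article's exact stopping rule is not reproduced here, but a standard way to implement such in-clinic sequential validation is Wald's sequential probability ratio test on the stream of per-patient prediction outcomes; the hypothesised PPV levels and error rates below are illustrative:

```python
import math

def sprt_ppv(outcomes, p0=0.6, p1=0.8, alpha=0.05, beta=0.2):
    """Wald's sequential probability ratio test on a stream of per-patient
    prediction outcomes (True = the score's prediction was confirmed).
    Returns ('accept', n) once the data favour PPV >= p1, ('reject', n)
    once they favour PPV <= p0, else ('undecided', n)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for i, hit in enumerate(outcomes, start=1):
        llr += math.log(p1 / p0) if hit else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return ("accept", i)
        if llr <= lower:
            return ("reject", i)
    return ("undecided", len(outcomes))
```

    With these illustrative thresholds a run of consecutive confirmed predictions terminates with acceptance after the tenth patient, consistent with the small in-clinic sample sizes reported above.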

  4. Validation of the UCLA Child Post traumatic stress disorder-reaction index in Zambia

    Directory of Open Access Journals (Sweden)

    Cohen Judith A

    2011-09-01

    Background: Sexual violence against children is a major global health and human rights problem. To address this issue there needs to be a better understanding of its scope and consequences. One major challenge in accomplishing this goal has been the lack of validated child mental health assessments in low-resource countries where the prevalence of sexual violence is high. This paper presents results from a validation study of a trauma-focused mental health assessment tool - the UCLA Post-traumatic Stress Disorder Reaction Index (PTSD-RI) - in Zambia. Methods: The PTSD-RI was adapted through the addition of locally relevant items and validated using local responses to three cross-cultural criterion validity questions. Reliability of the symptom scales was assessed using Cronbach alpha analyses. Discriminant validity was assessed by comparing mean scale scores of cases and non-cases. Concurrent validity was assessed by comparing mean scale scores to a traumatic experience index. Sensitivity and specificity analyses were run using receiver operating characteristic (ROC) curves. Results: Analysis of data from 352 youth attending a clinic specializing in sexual abuse showed that this adapted PTSD-RI demonstrated good reliability, with Cronbach alpha scores greater than .90 on all the evaluated scales. The symptom scales discriminated, with statistical significance, between locally identified cases and non-cases, and higher symptom scale scores were associated with increased numbers of trauma exposures, an indication of concurrent validity. Sensitivity and specificity analyses resulted in an adequate area under the curve, indicating that this tool was appropriate for case definition. Conclusions: This study has shown that validating mental health assessment tools in a low-resource country is feasible, and that by taking the time to adapt a measure to the local context, a useful and valid Zambian version of the PTSD-RI was developed to detect
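    The sensitivity/specificity analysis above rests on the area under the ROC curve, which equals the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen non-case. A self-contained sketch with made-up symptom-scale totals:

```python
def roc_auc(case_scores, noncase_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random case outscores a random non-case
    (ties count one half)."""
    wins = 0.0
    for c in case_scores:
        for n in noncase_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(noncase_scores))

# Made-up symptom-scale totals for cases and non-cases.
auc = roc_auc([30, 41, 28, 35], [12, 18, 28, 9])
```

    An AUC of 0.5 would mean the scale cannot separate cases from non-cases; values approaching 1 indicate a scale usable for case definition.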

  5. Validation of gamma irradiator controls for quality and regulatory compliance

    International Nuclear Information System (INIS)

    Harding, R.B.; Pinteric, F.J.A.

    1995-01-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic - validation of irradiator controls - is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community. (author)

  6. European Validation of the Integral Code ASTEC (EVITA)

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Neu, K.; Dorsselaere, J.P. Van

    2005-01-01

    The main objective of the European Validation of the Integral Code ASTEC (EVITA) project is to distribute the severe accident integral code ASTEC to European partners in order to apply the validation strategy issued from the VASA project (4th EC FWP). Partners evaluate the code capability through validation on reference experiments and plant applications accounting for severe accident management measures, and compare results with reference codes. The basis version V0 of ASTEC (Accident Source Term Evaluation Code) - commonly developed and basically validated by GRS and IRSN - was made available in late 2000 for the EVITA partners on their individual platforms. Users' training was performed by IRSN and GRS. The code's portability across different computers was verified. A continuously available 'hot line' was set up to assist EVITA code users. The current version V1 was released to the EVITA partners at the end of June 2002. It allows the front-end phase to be simulated by two new modules: one for reactor coolant system two-phase simplified thermal hydraulics (5-equation approach) during both the front-end and core degradation phases, and one for core degradation, based on the structure and main models of ICARE2 (IRSN), the reference mechanistic code for core degradation, and on other simplified models. The next priorities are clearly identified: code consolidation in order to increase robustness, extension of all plant applications beyond vessel lower head failure and coupling with fission product modules, and continuous improvement of users' tools. As EVITA has very successfully taken the first step toward providing end-users (such as utilities, vendors and licensing authorities) with a well-validated European integral code for the simulation of severe accidents in NPPs, the EVITA partners strongly recommend continuing the validation, benchmarking and application of ASTEC. This work will continue in the Severe Accident Research Network (SARNET) in the 6th Framework Programme

  7. Face and Convergent Validity of Persian Version of Rapid Office Strain Assessment (ROSA Checklist

    Directory of Open Access Journals (Sweden)

    Afrouz Armal

    2016-01-01

    Objective: The aim of this work was the translation, cultural adaptation and validation of the Persian version of the Rapid Office Strain Assessment (ROSA) checklist. Material & Methods: This methodological study was conducted according to the IQOLA method. 100 office workers were selected in order to carry out a psychometric evaluation of the ROSA checklist by performing validity (face and convergent) analyses. Convergent validity was evaluated using the RULA checklist. Results: After the changes made to the ROSA checklist during the translation/cultural adaptation process, face validity of the Persian version was obtained. The Spearman correlation coefficient between the total scores of the ROSA and RULA checklists was significant (r=0.76, p<0.0001). Conclusion: The results indicated that the translated version of the ROSA checklist is acceptable in terms of face and convergent validity in the target society, and hence provides a useful instrument for assessing Iranian office workers

  8. Current Status of the Validation of the Atmospheric Chemistry Instruments on Envisat

    Science.gov (United States)

    Lecomte, P.; Koopman, R.; Zehner, C.; Laur, H.; Attema, E.; Wursteisen, P.; Snoeij, P.

    2003-04-01

    As a first step, the intention is to arrive at an initial quality assessment of the data products for near-real-time distribution. This core validation was performed during the commissioning and validation phase of Envisat, and the results were presented at the Envisat Validation Workshop. It was anticipated early in the programme that more work would be needed after this workshop on all Envisat data products, both for near-real-time and for off-line distribution. The algorithms designed to derive estimates of the atmospheric constituents need to be verified. This requires a large number of correlative observations under a wide range of conditions, to arrive at a representative and statistically significant data-quality assessment and to provide insight into sources of error both in the Envisat data and in the correlative data sets. To achieve this within the tight time schedule, the best use must be made of the available resources. For the atmospheric chemistry instruments on Envisat it has therefore been decided to plan a joint geophysical validation programme that is not instrument-specific but serves all three instruments. To coordinate the activities, the Atmospheric Chemistry Validation Team (ACVT) was formed. The ACVT methods can roughly be categorised into different approaches, and consistent with these the group is divided into subgroups on balloon and aircraft campaigns; ground-based measurements; and model assimilation and satellite intercomparison. The data coming from the various validation campaigns are stored in a central data storage facility established at the Norwegian Institute for Air Research (NILU) in Norway. NILU provides access to correlative measurements from sensors on board satellites, aircraft, balloons and ships, as well as from ground-based instruments and numerical models, such as that of the ECMWF. Particular emphasis has been put on the quality control of such data. Users are

  9. The development, validation and initial results of an integrated model for determining the environmental sustainability of biogas production pathways

    NARCIS (Netherlands)

    Pierie, Frank; van Someren, Christian; Benders, René M.J.; Bekkering, Jan; van Gemert, Wim; Moll, Henri C.

    2016-01-01

    Biogas produced through Anaerobic Digestion can be seen as a flexible and storable energy carrier. However, the environmental sustainability and efficiency of biogas production is not fully understood. Within this article the use, operation, structure, validation, and results of a model for the

  10. Validation of Structures in the Protein Data Bank.

    Science.gov (United States)

    Gore, Swanand; Sanz García, Eduardo; Hendrickx, Pieter M S; Gutmanas, Aleksandras; Westbrook, John D; Yang, Huanwang; Feng, Zukang; Baskaran, Kumaran; Berrisford, John M; Hudson, Brian P; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L; Mading, Steve; Mak, Lora; Mukhopadhyay, Abhik; Oldfield, Thomas J; Patwardhan, Ardan; Peisach, Ezra; Sahni, Gaurav; Sekharan, Monica R; Sen, Sanchayita; Shao, Chenghua; Smart, Oliver S; Ulrich, Eldon L; Yamashita, Reiko; Quesada, Martha; Young, Jasmine Y; Nakamura, Haruki; Markley, John L; Berman, Helen M; Burley, Stephen K; Velankar, Sameer; Kleywegt, Gerard J

    2017-12-05

    The Worldwide PDB recently launched a deposition, biocuration, and validation tool: OneDep. At various stages of OneDep data processing, validation reports for three-dimensional structures of biological macromolecules are produced. These reports are based on recommendations of expert task forces representing crystallography, nuclear magnetic resonance, and cryoelectron microscopy communities. The reports provide useful metrics with which depositors can evaluate the quality of the experimental data, the structural model, and the fit between them. The validation module is also available as a stand-alone web server and as a programmatically accessible web service. A growing number of journals require the official wwPDB validation reports (produced at biocuration) to accompany manuscripts describing macromolecular structures. Upon public release of the structure, the validation report becomes part of the public PDB archive. Geometric quality scores for proteins in the PDB archive have improved over the past decade. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Validity and Fairness

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    This paper presents the author's critique of Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…

  12. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the testing of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  13. Social anxiety questionnaire (SAQ): Development and preliminary validation.

    Science.gov (United States)

    Łakuta, Patryk

    2018-05-30

    The Social Anxiety Questionnaire (SAQ) was designed to assess five dimensions of social anxiety as posited by the Clark and Wells (1995; Clark, 2001) cognitive model. The development of the SAQ involved generation of an item pool, followed by verification of content validity and the theorized factor structure (Study 1). The final version of the SAQ was then assessed for reliability, temporal stability (test-retest reliability), and construct, criterion-related, and contrasted-group validity (Studies 2, 3, and 4). Following a systematic process, the results support the SAQ as a reliable and both theoretically and empirically valid measure. The five-factor structure of the SAQ, verified and replicated through confirmatory factor analyses, reflects five dimensions of social anxiety: negative self-processing; self-focused attention and self-monitoring; safety behaviours; somatic and cognitive symptoms; and anticipatory and post-event rumination. Results suggest that the SAQ possesses good psychometric properties, although additional validation remains a necessary direction for future research; it will be important to replicate these findings in diverse populations, including a large clinical sample. The SAQ is a promising measure that supports social anxiety as a multidimensional construct and the foundational role of self-focused cognitive processes in the generation and maintenance of social anxiety symptoms. The findings make a significant contribution to the literature; moreover, the SAQ is the first instrument that assesses all of the specific cognitive-affective, physiological, attitudinal, and attentional processes related to social anxiety proposed by the Clark-Wells model. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    Science.gov (United States)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for Quality of e-Government Services (QeGS). We built upon our previous work where a conceptualized model was identified and put focus on the confirmatory phase of the model development process, in order to come up with a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results with similar models found in the literature, can be used for measuring the QeGS in a reliable and valid manner. This will form the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  15. Translation, adaptation and validation of "Community Integration Questionnaire"

    Directory of Open Access Journals (Sweden)

    Helena Maria Silveira Fraga-Maia

    2015-05-01

    Full Text Available Objective: To translate, adapt, and validate the Community Integration Questionnaire (CIQ), a tool that evaluates community integration after traumatic brain injury (TBI). Methods: A study of 61 TBI survivors was carried out. The appraisal of measurement equivalence was based on a reliability assessment estimating inter-rater agreement, item-scale correlation and internal consistency of the CIQ scales, concurrent validity, and construct validity. Results: Inter-rater agreement ranged from substantial to almost perfect. The item-scale correlations were generally higher between the items and their respective domains, whereas the intra-class correlation coefficients were high for both the overall scale and the CIQ domains. The correlations between the CIQ and the Disability Rating Scale (DRS), the Extended Glasgow Outcome Scale (GOSE), and the Rancho Los Amigos Level of Cognitive Functioning Scale (RLA) reached values considered satisfactory. However, the factor analysis generated four factors (dimensions) that did not correspond with the dimensional structure of the original tool. Conclusion: The resulting tool may be useful for globally assessing community integration after TBI in the Brazilian context, at least until new CIQ psychometric assessment studies are developed with larger samples.

  16. Validation of self-reported cellular phone use

    DEFF Research Database (Denmark)

    Samkange-Zeeb, Florence; Berg, Gabriele; Blettner, Maria

    2004-01-01

    BACKGROUND: In recent years, concern has been raised over possible adverse health effects of cellular telephone use. In epidemiological studies of cancer risk associated with the use of cellular telephones, the validity of self-reported cellular phone use has been problematic. Up to now there is very little information published on this subject. METHODS: We conducted a study to validate the questionnaire used in an ongoing international case-control study on cellular phone use, the "Interphone study". Self-reported cellular phone use from 68 of 104 participants who took part in our study was compared with information derived from the network providers over a period of 3 months (taken as the gold standard). RESULTS: Using Spearman's rank correlation, the correlation between self-reported phone use and information from the network providers for cellular phone use in terms of the number of calls...
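The record above compares self-reported call counts against provider records using Spearman's rank correlation. As an illustrative sketch (the call counts below are hypothetical, not Interphone data), Spearman's rho is simply the Pearson correlation computed on the ranks of the observations:

```python
# Spearman's rank correlation: Pearson correlation of the ranks.
# Data are hypothetical; ties receive average ranks.

def ranks(values):
    """Assign 1-based average ranks to values (handles ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

self_reported = [10, 25, 5, 40, 15]   # calls per month (hypothetical)
provider_logs = [12, 30, 4, 38, 20]   # network-provider counts (hypothetical)
rho = spearman(self_reported, provider_logs)
```

Because rho operates on ranks rather than raw counts, it tolerates the skewed distributions typical of call-count data, which is presumably why the study used it.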

  17. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information, such as a standard by which to judge an individual's functional recovery or to predict falls. There is a need for a balance-testing tool that is inexpensive and widely available, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but little software exists for using it in balance tests, and there are few studies of its reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. They completed tests of inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii Balance Board and with a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from both assessment systems. Inter-rater reliability, intra-rater reliability, and concurrent validity were analyzed using intraclass correlation coefficient (ICC) values and the standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii Balance Board was found to be a reliable assessment device. In clinical settings, the device is remarkably inexpensive, portable, and convenient for balance assessment.
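The ICC and SEM figures quoted above are linked: a standard way to derive the standard error of measurement from a reliability coefficient is SEM = SD × √(1 − ICC). A minimal sketch of that computation, using hypothetical COP path lengths and an assumed ICC (not the study's raw data):

```python
import math
import statistics

def standard_error_of_measurement(scores, icc):
    """SEM = SD * sqrt(1 - ICC): the expected measurement error,
    expressed in the same units as the score itself."""
    sd = statistics.stdev(scores)
    return sd * math.sqrt(1 - icc)

# Hypothetical COP path lengths (cm) from repeated balance trials
path_lengths = [52.1, 48.3, 55.7, 50.2, 47.9, 53.4]
sem = standard_error_of_measurement(path_lengths, icc=0.89)
```

A perfectly reliable measure (ICC = 1) yields SEM = 0; lower reliability inflates the SEM toward the full between-trial standard deviation.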

  18. An introduction to use of the USACE HTRW program's data validation guidelines engineering manual

    International Nuclear Information System (INIS)

    Becker, L.D.; Coats, K.H.

    1994-01-01

    Data validation has been defined by regulatory agencies as a systematic process (consisting of data editing, screening, checking, auditing, verification, certification, and review) for comparing data to established criteria in order to provide assurance that data are adequate for their intended use. A problem for the USACE HTRW Program was that clearly defined data validation guidelines were available only for analytical data quality level IV. These functional data validation guidelines were designed for validation of data produced using protocols from the US EPA's Contract Laboratory Program (CLP). Unfortunately, USACE experience demonstrated that these level IV functional data validation guidelines were being used to validate data not produced under the CLP. The resulting data validation product was less than satisfactory for USACE HTRW needs. Therefore, the HTRW-MCX initiated an Engineering Manual (EM) for validation of analytical data at quality levels other than IV. This EM is entitled "USACE HTRW Data Validation Guidelines." Use of the EM is required for validation of analytical data relating to projects under the jurisdiction of the Department of the Army, Corps of Engineers, Hazardous, Toxic, and Radioactive Waste Program. These data validation guidelines include procedures and checklists for technical review of analytical data at quality levels I, II, III, and V

  19. Validation plays the role of a "bridge" in connecting remote sensing research and applications

    Science.gov (United States)

    Wang, Zhiqiang; Deng, Ying; Fan, Yida

    2018-07-01

    Remote sensing products contribute to improving earth observations over space and time. Uncertainties exist in products of different levels; thus, validation of these products before and during their applications is critical. This study discusses the meaning of validation in depth and proposes a new definition of reliability for use with such products. In this context, validation should include three aspects: a description of the relevant uncertainties, quantitative measurement results and a qualitative judgment that considers the needs of users. A literature overview is then presented evidencing improvements in the concepts associated with validation. It shows that the root mean squared error (RMSE) is widely used to express accuracy; increasing numbers of remote sensing products have been validated; research institutes contribute most validation efforts; and sufficient validation studies encourage the application of remote sensing products. Validation plays a connecting role in the distribution and application of remote sensing products. Validation connects simple remote sensing subjects with other disciplines, and it connects primary research with practical applications. Based on the above findings, it is suggested that validation efforts that include wider cooperation among research institutes and full consideration of the needs of users should be promoted.

  20. Validation of a survey tool for use in cross-cultural studies

    Directory of Open Access Journals (Sweden)

    Costa FA

    2008-09-01

    Full Text Available There is a need for tools to measure the information patients desire so that healthcare professionals in general, and pharmacists in particular, can communicate effectively and play an active part in the way patients manage their medicines. Previous research has developed and validated constructs to measure patients' desires for information and their perceptions of how useful their medicines are. It is important to develop these tools for use in different settings and countries so that best practice is shared and based on the best available evidence. Objectives: This project sought to validate a survey tool measuring the "Extent of Information Desired" (EID), the "Perceived Utility of Medicines" (PUM), and the "Anxiety about Illness" (AI) that had previously been translated for use with Portuguese patients. Methods: The scales were validated in a patient sample of 596: construct validity was explored through factor analysis (PCA) and internal consistency analysed using Cronbach's alpha. Criterion validity was explored by correlating scores with the AI scale and with patients' perceived health status. Discriminatory power was assessed using ANOVA. Temporal stability was explored in a sub-sample of patients who responded at two time points, using a t-test to compare their mean scores. Results: Construct validity results indicated the need to remove one item from the Perceived Harm of Medicines (PHM) and Perceived Benefit of Medicines (PBM) scales for use in a Portuguese sample, and to abandon the tolerance scale. Internal consistency was high for the EID, PBM and AI scales (alpha>0.600) and acceptable for the PHM scale (alpha=0.536). All scales except the EID were consistent over time (p>0.05; p<0.01). All the scales tested showed good discriminatory power. The comparison of the AI scale with the SF-36 indicated good criterion validity (p<0.05). Conclusion: The translated tool was valid and reliable in Portuguese patients, excluding the Tolerance

  1. Computer-aided test selection and result validation-opportunities and pitfalls

    DEFF Research Database (Denmark)

    McNair, P; Brender, J; Talmon, J

    1998-01-01

    Dynamic test scheduling is concerned with pre-analytical preprocessing of the individual samples within a clinical laboratory production by means of decision algorithms. The purpose of such scheduling is to provide maximal information with minimal data production (to avoid data pollution and/or to increase cost-efficiency). Our experience shows that there is a practical limit to the extent of exploitation of the principle of dynamic test scheduling, unless it is automated in one way or the other. This paper analyses some issues of concern related to the profession of clinical biochemistry, when implementing such dynamic test scheduling within a Laboratory Information System (and/or an advanced analytical workstation). The challenge is related to 1) generation of appropriately validated decision models, and 2) mastering consequences of analytical imprecision and bias.

  2. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  3. Validity and Reliability of Farsi Version of Youth Sport Environment Questionnaire

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Eshghi

    2015-01-01

    Full Text Available The Youth Sport Environment Questionnaire (YSEQ) was developed from the Group Environment Questionnaire, a well-known measure of team cohesion. The aim of this study was to adapt the YSEQ and examine the reliability and validity of its Farsi version. This version was completed by 455 athletes aged 13-17 years. Results of confirmatory factor analysis indicated that a two-factor solution showed a good fit to the data. The results also revealed that the Farsi YSEQ showed high internal consistency, test-retest reliability, and good concurrent validity. This study indicated that the Farsi version of the YSEQ is a valid and reliable measure for assessing team cohesion in sport settings.

  4. Validation of a clinical critical thinking skills test in nursing.

    Science.gov (United States)

    Shin, Sujin; Jung, Dukyoo; Kim, Sungeun

    2015-01-27

    The purpose of this study was to develop a revised version of the clinical critical thinking skills test (CCTS) and to subsequently validate its performance. This study is a secondary analysis of the CCTS. Data were obtained from a convenience sample of 284 college students in June 2011. Thirty items were analyzed using item response theory and test reliability was assessed. Test-retest reliability was measured using the results of 20 nursing college and graduate school students in July 2013. The content validity of the revised items was analyzed by calculating the degree of agreement between the instrument developer's intention in item development and the judgments of six experts. To analyze response-process validity, qualitative data related to the response processes of nine nursing college students, obtained through cognitive interviews, were analyzed. Of the initial 30 items, 11 were excluded after analysis of the difficulty and discrimination parameters. When the 19 items of the revised version of the CCTS were analyzed, levels of item difficulty were found to be relatively low and levels of discrimination were found to be appropriate or high. The degree of agreement between item developer intention and expert judgments equaled or exceeded 50%. From the above results, evidence of response-process validity was demonstrated, indicating that subjects responded as intended by the test developer. The revised 19-item CCTS was found to have sufficient reliability and validity and therefore represents a more convenient measurement of critical thinking ability.

  5. Validation of the Gratitude Questionnaire in Filipino Secondary School Students.

    Science.gov (United States)

    Valdez, Jana Patricia M; Yang, Weipeng; Datu, Jesus Alfonso D

    2017-10-11

    Most studies have assessed the psychometric properties of the Gratitude Questionnaire - Six-Item Form (GQ-6) in Western contexts, while very little research has explored the applicability of this scale in non-Western settings. To address this gap, the aim of the study was to examine the factorial validity and gender invariance of the Gratitude Questionnaire in the Philippines through a construct validation approach. A total of 383 Filipino high school students participated in the research. In terms of within-network construct validity, results of confirmatory factor analyses revealed that the five-item version of the questionnaire (GQ-5) had better fit compared to the original six-item version of the gratitude questionnaire. The scores from the GQ-5 also exhibited invariance across gender. Between-network construct validation showed that gratitude was associated with higher levels of academic achievement (β = .46), whereas gratitude was linked to a lower degree of amotivation (β = -.51, p < .001). Theoretical and practical implications are discussed.

  6. Construction and Validation of the Perceived Opportunity to Craft Scale.

    Science.gov (United States)

    van Wingerden, Jessica; Niks, Irene M W

    2017-01-01

    We developed and validated a scale to measure employees' perceived opportunity to craft (POC) in two separate studies conducted in the Netherlands (total N = 2329). POC is defined as employees' perception of their opportunity to craft their job. In Study 1, the perceived opportunity to craft scale (POCS) was developed and tested for its factor structure and reliability in an explorative way. Study 2 consisted of confirmatory analyses of the factor structure and reliability of the scale as well as examination of the discriminant and criterion-related validity of the POCS. The results indicated that the scale consists of one dimension and could be reliably measured with five items. Evidence was found for the discriminant validity of the POCS. The scale also showed criterion-related validity when correlated with job crafting (+), job resources (autonomy +; opportunities for professional development +), work engagement (+), and the inactive construct cynicism (-). We discuss the implications of these findings for theory and practice.

  7. Structural Validation of the Holistic Wellness Assessment

    Science.gov (United States)

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  8. Factorial and construct validity of Portuguese version (Brazil Bergen Facebook Addiction Scale

    Directory of Open Access Journals (Sweden)

    Hugo Rafael de Souza e Silva

    Full Text Available ABSTRACT Objective To evaluate the factorial and construct validity of the Brazilian Portuguese version of the Bergen Facebook Addiction Scale (BFAS-BR). Methods A sociodemographic questionnaire and the Brazilian Portuguese versions of the Online Cognition Scale (OCS-BR) and of the BFAS-BR were applied to a sample of health undergraduates (n = 356). Evidence of construct validity was verified through confirmatory factor analysis. Discriminant validity was examined by correlational analysis between the BFAS-BR and the OCS-BR. Results The proposed factorial model of the BFAS did not show a good fit. The model was therefore restructured on the basis of behavioral-addiction theory, and the new model presented satisfactory fit and evidence of construct validity. The correlation between the two tested scales was strong (ρ = 0.707); therefore, they measure the same construct. Conclusion The BFAS-BR shows adequate factorial and construct validity.

  9. An integrated approach to validation of safeguards and security program performance

    International Nuclear Information System (INIS)

    Altman, W.D.; Hunt, J.S.; Hockert, J.W.

    1988-01-01

    Department of Energy (DOE) requirements for safeguards and security programs are becoming increasingly performance oriented. Master Safeguards and Security Agreements specify performance levels for systems protecting DOE security interests. In order to measure and validate security system performance, Lawrence Livermore National Laboratory (LLNL) has developed cost-effective validation tools and a comprehensive validation approach that synthesizes information gained from different activities, such as force-on-force exercises, limited-scope performance tests, equipment testing, vulnerability analyses, and computer modeling, into an overall assessment of the performance of the protection system. The analytic approach employs logic diagrams adapted from the fault and event trees used in probabilistic risk assessment. The synthesis of the results from the various validation activities is accomplished using a method developed by LLNL based upon Bayes' theorem
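The abstract describes combining heterogeneous validation results with a method based on Bayes' theorem. A minimal sketch of how such a synthesis might look, sequentially updating belief that the protection system is effective; the likelihood values below are illustrative assumptions, not LLNL's actual method or data:

```python
def bayes_update(prior, p_evidence_if_effective, p_evidence_if_not):
    """One application of Bayes' theorem:
    P(H | e) = P(e | H) P(H) / [P(e | H) P(H) + P(e | ~H) P(~H)],
    where H is 'the protection system is effective'."""
    numerator = p_evidence_if_effective * prior
    denominator = numerator + p_evidence_if_not * (1 - prior)
    return numerator / denominator

# Hypothetical validation activities, each as a pair:
# (P(observed outcome | system effective), P(observed outcome | not effective))
evidence = [
    (0.90, 0.30),  # force-on-force exercise passed
    (0.85, 0.40),  # limited-scope performance test passed
    (0.95, 0.50),  # computer-model prediction consistent with tests
]

belief = 0.5  # neutral prior before any validation evidence
for p_eff, p_not in evidence:
    belief = bayes_update(belief, p_eff, p_not)
```

Each passed activity raises the posterior; an activity whose outcome is nearly as likely under an ineffective system (likelihoods close together) contributes correspondingly little.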

  10. French validation of the Foot Function Index (FFI).

    Science.gov (United States)

    Pourtier-Piotte, C; Pereira, B; Soubrier, M; Thomas, E; Gerbaud, L; Coudeyre, E

    2015-10-01

    French validation of the Foot Function Index (FFI), a self-administered questionnaire designed to evaluate the rheumatoid foot across 3 domains: pain, disability and activity restriction. The first step consisted of translation/back-translation and cultural adaptation according to a validated methodology. The second stage was a prospective validation in 53 patients with rheumatoid arthritis who filled out the FFI. The following data were collected: pain (Visual Analog Scale), disability (Health Assessment Questionnaire) and activity restrictions (McMaster Toronto Arthritis questionnaire). A test/retest procedure was performed 15 days later. The statistical analyses focused on acceptability, internal consistency (Cronbach's alpha and principal component analysis), test-retest reproducibility (concordance coefficients), external validity (correlation coefficients) and responsiveness to change. The FFI-F is a culturally acceptable version for French patients with rheumatoid arthritis. Cronbach's alpha ranged from 0.85 to 0.97. Reproducibility was correct (correlation coefficients>0.56). External validity and responsiveness to change were good. The use of a rigorous methodology allowed the validation of the FFI in the French language (FFI-F). This tool can be used in routine practice and clinical research for evaluating the rheumatoid foot. The FFI-F could also be used in other pathologies with foot-related functional impairments. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  11. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed at the Technical Research Centre of Finland (VTT) for VVER-type reactors. This report describes the validation work on HEXTRAN, which was carried out with financing from the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for the calculation of accidents that involve radially asymmetric phenomena and in which both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes, and the models of these codes have been shown to function correctly within HEXTRAN as well. The main new model of HEXTRAN, the spatial neutron kinetics model, has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Coupled with SMABRE, HEXTRAN can reliably be used to calculate transients that include effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  12. ICP-MS Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  13. Content validity and reliability of test of gross motor development in Chilean children

    Directory of Open Access Journals (Sweden)

    Marcelo Cano-Cappellacci

    2015-01-01

    Full Text Available ABSTRACT OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, transversal, non-experimental validity and reliability study. Four translators, three experts and 92 Chilean children, aged five to 10 years, students at a primary school in Santiago, Chile, participated. The committee of experts carried out translation, back-translation and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index in 2013. In addition, a pilot implementation was conducted to determine the reliability of the test in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when the bat was replaced with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity for application in the Chilean population. The reliability of this test is within the appropriate parameters and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries.
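The record above relies on the intraclass correlation coefficient and the Bland-Altman method for reliability. The Bland-Altman part reduces to the mean difference between paired measurements (the bias) and 95% limits of agreement at bias ± 1.96 SD of the differences. A sketch with hypothetical test-retest scores (not the study's data):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bland-Altman analysis: returns (bias, lower limit, upper limit),
    where the limits of agreement are bias +/- 1.96 * SD of the
    pairwise differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical test-retest scores on a gross-motor subtest
test1 = [34, 28, 41, 37, 30, 25, 39]
test2 = [33, 30, 40, 36, 31, 26, 38]
bias, lower, upper = bland_altman_limits(test1, test2)
```

Unlike a correlation coefficient, the limits of agreement are in the units of the score itself, so a reviewer can judge directly whether the disagreement between sessions is clinically acceptable.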

  14. 20 CFR 404.727 - Evidence of a deemed valid marriage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 false Evidence of a deemed valid marriage. 404.727... DISABILITY INSURANCE (1950- ) Evidence: Evidence of Age, Marriage, and Death § 404.727 Evidence of a deemed valid marriage. (a) General. A deemed valid marriage is a ceremonial marriage we consider valid even...

  15. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  16. The validation of the turnover intention scale

    Directory of Open Access Journals (Sweden)

    Chris F.C. Bothma

    2013-04-01

    Full Text Available Orientation: Turnover intention as a construct has attracted increased research attention in the recent past, but there are seemingly not many valid and reliable scales available to measure turnover intention. Research purpose: This study focused on the validation of a shortened, six-item version of the turnover intention scale (TIS-6). Motivation for the study: The research question of whether the TIS-6 is a reliable and valid scale for measuring turnover intention and for predicting actual turnover was addressed in this study. Research design, approach and method: The study was based on a census-based sample (n = 2429) of employees in an information, communication and technology (ICT) sector company (N = 23 134) where the TIS-6 was used as one of the criterion variables. The leavers (those who left the company) in this sample were compared with the stayers (those who remained in the employ of the company) in respect of different variables used in the study. Main findings: It was established that the TIS-6 could measure turnover intentions reliably (α = 0.80). The TIS-6 could significantly distinguish between leavers and stayers (actual turnover), thereby confirming its criterion-predictive validity. The scale also established statistically significant differences between leavers and stayers in respect of a number of the remaining theoretical variables used in the study, thereby also confirming its differential validity. These comparisons were conducted for both the 4-month and the 4-year period after the survey was conducted. Practical/managerial implications: Turnover intention is related to a number of variables in the study, which necessitates a reappraisal and reconceptualisation of existing turnover intention models. Contribution/value-add: The TIS-6 can be used as a reliable and valid scale to assess turnover intentions and can therefore be used in research to validly and reliably assess turnover intentions or to
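The reliability figure quoted above (α = 0.80) is Cronbach's alpha, computed as α = k/(k − 1) × (1 − Σ item variances / variance of total scores) for a k-item scale. A sketch of the computation with hypothetical six-item response data (not the TIS-6 dataset):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    item_scores: one list per item, each holding all respondents' scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    n = len(item_scores[0])
    item_vars = [statistics.variance(item) for item in item_scores]
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses to a six-item scale
# (rows = items, columns = six respondents, 5-point Likert scores)
items = [
    [4, 2, 5, 3, 4, 1],
    [4, 3, 5, 2, 4, 2],
    [3, 2, 4, 3, 5, 1],
    [5, 2, 4, 3, 4, 2],
    [4, 1, 5, 2, 3, 1],
    [4, 2, 5, 3, 4, 2],
]
alpha = cronbach_alpha(items)
```

When items move together across respondents, the variance of the total scores dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0.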

  17. Quality of life and hormone use: new validation results of MRS scale

    Directory of Open Access Journals (Sweden)

    Heinemann Lothar AJ

    2006-05-01

Full Text Available Abstract Background The Menopause Rating Scale (MRS) is a health-related quality-of-life scale developed in the early 1990s and validated step by step since then. Recently the MRS scale was validated as an outcomes measure for hormone therapy. The suspicion, however, was expressed that the data were too optimistic due to methodological problems of the study. A new study became available to check how founded this suspicion was. Method An open post-marketing study of 3282 women with pre- and post-treatment data of the self-administered version of the MRS scale was analyzed to evaluate the capacity of the scale to detect hormone-treatment-related effects. The main results were then compared with the old study, where the interview-based version of the MRS scale was used. Results The hormone-therapy-related improvement of complaints relative to the baseline score was about or less than 30% in total or domain scores, whereas it exceeded 30% improvement in the old study. Similarly, the relative improvement after therapy, stratified by the degree of severity at baseline, was lower in the new than in the old study, but had the same slope. Although we cannot exclude different treatment effects with the study method used, this supports our hypothesis that the individual MRS interviews performed by the physician biased the results towards over-estimation of the treatment effects. This hypothesis is underlined by the degree of concordance between the physician's assessment and the patient's perception of treatment success (MRS results): sensitivity of the MRS (correct prediction of a positive assessment by the treating physician) and specificity (correct prediction of a negative assessment by the physician) were lower than the results obtained with the interview-based MRS scale in the previous publication.
Conclusion The study confirmed evidence for the capacity of the MRS scale to measure treatment effects on quality of life across the full range of severity of…

  18. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

Full Text Available Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is nonlinear mixed-effects (NONMEM) modeling. The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is the logical final phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure consistency of the method and accuracy of results, and to confirm the selection of the analytical method for a given sample.

  19. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  20. Pooled results from five validation studies of dietary self-report instruments using recovery biomarkers for potassium and sodium intake

    Science.gov (United States)

    We have pooled data from five large validation studies of dietary self-report instruments that used recovery biomarkers as referents to assess food frequency questionnaires (FFQs) and 24-hour recalls. We reported on total potassium and sodium intakes, their densities, and their ratio. Results were...

  1. Validation of the Vanderbilt Holistic Face Processing Test

    OpenAIRE

    Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  3. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution either individually or as mixtures: ¹⁰³Rh, ¹³³Cs, natural Nd, ¹⁴⁹Sm, ¹⁵²Sm, and ¹⁵⁵Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the experimental data potential to improve the ability to perform validation of full burnup credit calculation. The paper describes the three phases of the experimental program; the results of preliminary evaluation, the calculation, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route in the CRISTAL criticality package for burnup credit applications. (authors)

  4. Psychometric validation of the SF-36® Health Survey in ulcerative colitis: results from a systematic literature review.

    Science.gov (United States)

    Yarlas, Aaron; Bayliss, Martha; Cappelleri, Joseph C; Maher, Stephen; Bushmakin, Andrew G; Chen, Lea Ann; Manuchehri, Alireza; Healey, Paul

    2018-02-01

    To conduct a systematic literature review of the reliability, construct validity, and responsiveness of the SF-36 ® Health Survey (SF-36) in patients with ulcerative colitis (UC). We performed a systematic search of electronic medical databases to identify published peer-reviewed studies which reported scores from the eight scales and/or two summary measures of the SF-36 collected from adult patients with UC. Study findings relevant to reliability, construct validity, and responsiveness were reviewed. Data were extracted and summarized from 43 articles meeting inclusion criteria. Convergent validity was supported by findings that 83% (197/236) of correlations between SF-36 scales and measures of disease symptoms, disease activity, and functioning exceeded the prespecified threshold (r ≥ |0.40|). Known-groups validity was supported by findings of clinically meaningful differences in SF-36 scores between subgroups of patients when classified by disease activity (i.e., active versus inactive), symptom status, and comorbidity status. Responsiveness was supported by findings of clinically meaningful changes in SF-36 scores following treatment in non-comparative trials, and by meaningfully larger improvements in SF-36 scores in treatment arms relative to controls in randomized controlled trials. The sole study of SF-36 reliability found evidence supporting internal consistency (Cronbach's α ≥ 0.70) for all SF-36 scales and test-retest reliability (intraclass correlation coefficient ≥0.70) for six of eight scales. Evidence from this systematic literature review indicates that the SF-36 is reliable, valid, and responsive when used with UC patients, supporting the inclusion of the SF-36 as an endpoint in clinical trials for this patient population.

  5. [Reliability and validity of the Chinese version on Alcohol Use Disorders Identification Test].

    Science.gov (United States)

    Zhang, C; Yang, G P; Li, Z; Li, X N; Li, Y; Hu, J; Zhang, F Y; Zhang, X J

    2017-08-10

Objective: To assess the reliability and validity of the Chinese version of the Alcohol Use Disorders Identification Test (AUDIT) among medical students in China and to provide a correct way of applying the recommended scales. Methods: An E-questionnaire was developed and sent to medical students in five different colleges. All students participated voluntarily. Cronbach's α and split-half reliability were calculated to evaluate the reliability of the AUDIT, while content, construct, discriminant and convergent validity were assessed to measure the validity of the scales. Results: The overall Cronbach's α of the AUDIT was 0.782 and the split-half reliability was 0.711. Data showed that the domain Cronbach's α and split-half reliability were 0.796 and 0.794 for hazardous alcohol use, 0.561 and 0.623 for dependence symptoms, and 0.647 and 0.640 for harmful alcohol use. Results also showed that the item-level content validity indices (I-CVI) ranged from 0.83 to 1.00, the scale-level content validity index (S-CVI/UA) was 0.90, the average scale-level content validity index (S-CVI/Ave) was 0.99, and the content validity ratios (CVR) ranged from 0.80 to 1.00. The simplified version of the AUDIT supported a presupposed three-factor structure that explained 61.175% of the total variance in exploratory factor analysis. The AUDIT seemed to have good convergent and discriminant validity, with a 100% success rate in the calibration experiment. Conclusion: The AUDIT showed good reliability and validity among medical students in China, and its use is thus worth promoting.
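The split-half and Cronbach's α statistics reported in several of these records can be computed directly from an item-response matrix. A minimal sketch in Python (the scores below are invented for illustration, not data from the AUDIT study):

```python
# Internal-consistency statistics for an items-by-respondents score matrix.
# The example scores are invented; they are not data from any study above.

def cronbach_alpha(scores):
    """scores: one list of item scores per respondent."""
    k = len(scores[0])                         # number of items
    def var(xs):                               # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([resp[i] for resp in scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def split_half(scores):
    """Correlate odd-item and even-item halves, then step up with Spearman-Brown."""
    odd = [sum(resp[0::2]) for resp in scores]
    even = [sum(resp[1::2]) for resp in scores]
    n = len(scores)
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even))
    sd_o = sum((o - mo) ** 2 for o in odd) ** 0.5
    sd_e = sum((e - me) ** 2 for e in even) ** 0.5
    r_half = cov / (sd_o * sd_e)
    return 2 * r_half / (1 + r_half)           # Spearman-Brown correction

items = [[1, 2, 2, 1], [2, 3, 3, 2], [4, 4, 5, 4], [3, 3, 4, 3]]
alpha = cronbach_alpha(items)                  # high: items are strongly correlated
half = split_half(items)
```

Both statistics summarise how consistently the items rank respondents; the Spearman-Brown step corrects the half-length correlation up to full test length.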

  6. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    Science.gov (United States)

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and remarks are drawn on the analytical results. The fuzzy-logic based rules were shown to improve interpretation of results and to facilitate overall evaluation of the multiplex method.
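The aggregation idea can be sketched in a few lines: each validation statistic is mapped onto a [0, 1] degree of "favourable performance" and the memberships are combined into one indicator. A simplified illustration in Python; the membership shape, thresholds and weights are invented here and are not the kit's actual rule base:

```python
# Simplified fuzzy aggregation of validation statistics into one indicator.
# Membership bounds and weights are invented for illustration only.

def favourable(x, lo, hi):
    """Degree (0..1) to which statistic x is 'favourable':
    0 below lo, 1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def overall_indicator(memberships, weights):
    """Weighted mean of memberships -> single performance score in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * m for k, m in memberships.items()) / total

# Three independent validation statistics mapped to memberships:
memberships = {
    "sensitivity":   favourable(0.95, lo=0.70, hi=0.90),   # fully favourable
    "specificity":   favourable(0.80, lo=0.70, hi=0.90),   # halfway
    "repeatability": favourable(0.60, lo=0.70, hi=0.90),   # unfavourable
}
weights = {"sensitivity": 2, "specificity": 1, "repeatability": 1}
score = overall_indicator(memberships, weights)   # (2*1 + 0.5 + 0) / 4
```

The single score lets patterns of performance across many GMOs and concentrations be compared on one axis, which is the paper's motivation.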

  7. Development and validation of a stock addiction inventory (SAI).

    Science.gov (United States)

    Youn, HyunChul; Choi, Jung-Seok; Kim, Dai-Jin; Choi, Sam-Wook

    2016-01-01

Investing in financial markets is promoted and protected by the government as an essential economic activity, but can turn into a gambling addiction problem. Until now, few scales have been widely used to identify gambling addicts in financial markets. This study aimed to develop a self-rating scale to distinguish them. In addition, the reliability and validity of the stock addiction inventory (SAI) were demonstrated. A set of questionnaires, including the SAI, the South Oaks Gambling Screen (SOGS), and the DSM-5 diagnostic criteria for gambling disorder, was completed by 1005 participants. Factor analysis, internal consistency testing, t tests, analysis of variance, and partial correlation analysis were conducted to verify the reliability and validity of the SAI. The factor analysis results showed a final SAI consisting of two factors and nine items. The internal consistency and concurrent validity of the SAI were verified. Cronbach's α for the total scale was 0.892, and the SAI and its factors were significantly correlated with the SOGS. This study developed a specific scale for financial market investment and trading that proved to be reliable and valid. Our scale expands the understanding of gambling addiction in financial markets and provides a diagnostic reference.

  8. Nursing diagnosis of grieving: content validity in perinatal loss situations.

    Science.gov (United States)

    Paloma-Castro, Olga; Romero-Sánchez, José Manuel; Paramio-Cuevas, Juan Carlos; Pastor-Montero, Sonia María; Castro-Yuste, Cristina; Frandsen, Anna J; Albar-Marín, María Jesús; Bas-Sarmiento, Pilar; Moreno-Corral, Luis Javier

    2014-06-01

To validate the content of the NANDA-I nursing diagnosis of grieving in situations of perinatal loss. Using Fehring's model, 208 Spanish experts were asked to assess the adequacy of the defining characteristics and other manifestations identified in the literature for cases of perinatal loss. The content validity index was 0.867. Twelve of the 18 defining characteristics were validated, seven as major and five as minor. Of the manifestations proposed, "empty inside" was considered major. According to the experts, the content of the nursing diagnosis of grieving fits cases of perinatal loss. The results provide evidence to support the use of the diagnosis in care plans for this clinical situation. © 2013 NANDA International.
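Fehring's diagnostic content validity model referenced above has a simple computational core: expert ratings on a 1-5 scale are rescaled to 0-1 weights, averaged per defining characteristic, and classified against cutoffs (commonly ≥ 0.80 major, 0.50-0.79 minor, < 0.50 discarded). A sketch with invented ratings, not the study's data:

```python
# Fehring-style diagnostic content validity (DCV) sketch.
# Ratings (1-5) are rescaled to 0-1 weights, averaged per characteristic,
# and classified with commonly used cutoffs. All ratings are invented.

WEIGHT = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def dcv_score(ratings):
    """Mean rescaled expert rating for one defining characteristic."""
    return sum(WEIGHT[r] for r in ratings) / len(ratings)

def classify(score):
    if score >= 0.80:
        return "major"
    if score >= 0.50:
        return "minor"
    return "discarded"

ratings_by_characteristic = {
    "despair":      [5, 5, 4, 5],   # hypothetical panel of four experts
    "empty inside": [5, 4, 4, 4],
    "blame":        [2, 3, 2, 3],
}
scores = {c: dcv_score(r) for c, r in ratings_by_characteristic.items()}
labels = {c: classify(s) for c, s in scores.items()}
# despair and "empty inside" classify as major; blame is discarded
```

The overall content validity index reported in the record is then an average over the retained characteristics' scores.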

  9. Validity of a Semi-Quantitative Food Frequency Questionnaire for Collegiate Athletes

    Directory of Open Access Journals (Sweden)

    Ayaka Sunam

    2016-06-01

Full Text Available Background: Food frequency questionnaires (FFQs) have been developed and validated for various populations. To our knowledge, however, no FFQ has been validated for young athletes. Here, we investigated whether an FFQ that was developed and validated to estimate dietary intake in middle-aged persons was also valid for estimating that in young athletes. Methods: We applied an FFQ that had been developed for the Japan Public Health Center-based Prospective Cohort Study with modification to the duration of recollection. A total of 156 participants (92 males) completed the FFQ and a 3-day non-consecutive 24-hour dietary recall (24hDR). Validity of the mean estimates was evaluated by calculating the percentage differences between the 24hDR and FFQ. Ranking estimation was validated using Spearman's correlation coefficient (CC), and the degree of miscategorization was determined by joint classification. Results: The FFQ underestimated energy intake by approximately 10% for both males and females. For 35 nutrients, the median (range) deattenuated CC was 0.30 (0.10 to 0.57) for males and 0.32 (−0.08 to 0.62) for females. For 19 food groups, the median (range) deattenuated CC was 0.32 (0.17 to 0.72) for males and 0.34 (−0.11 to 0.58) for females. For both nutrient and food group intakes, cross-classification analysis indicated extreme miscategorization rates of 3% to 5%. Conclusions: An FFQ developed and validated for middle-aged persons had comparable validity among young athletes. This FFQ might be useful for assessing habitual dietary intake in collegiate athletes, especially for calcium, vitamin C, vegetables, fruits, and milk and dairy products.
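The deattenuated correlations reported above correct the observed FFQ-vs-recall correlation for random within-person variation in the short-term reference method. Under the usual measurement-error model the correction is r_true ≈ r_obs · sqrt(1 + λ/d), with λ the within- to between-person variance ratio of the reference and d the number of recall days; a sketch with illustrative numbers (not this study's data):

```python
# Deattenuation of an observed FFQ-vs-reference correlation for random
# within-person variation in the reference method (here, 24-hour recalls).
# Standard correction under the classical measurement-error model:
#   r_true ≈ r_obs * sqrt(1 + lambda / d)
# where lambda = within/between person variance ratio of the reference and
# d = number of recall days. All numbers below are illustrative.
import math

def deattenuate(r_obs, var_within, var_between, n_days):
    lam = var_within / var_between
    return r_obs * math.sqrt(1 + lam / n_days)

# Observed r of 0.25 with a 3-day recall and a 2:1 variance ratio:
r = deattenuate(r_obs=0.25, var_within=2.0, var_between=1.0, n_days=3)
# corrected upward by the factor sqrt(1 + 2/3)
```

More recall days shrink the correction factor toward 1, which is why repeated non-consecutive recalls are used as the reference.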

  10. On the Need for Quality Control in Validation Research.

    Science.gov (United States)

    Maier, Milton H.

    1988-01-01

    Validated aptitude tests used to help make personnel decisions about military recruits against hands-on tests of job performance in radio repairers and automotive mechanics. Data were filled with errors, reducing accuracy of validity coefficients. Discusses how validity coefficients can be made more accurate by exercising quality control during…

  11. Validation of Visual Caries Activity Assessment

    DEFF Research Database (Denmark)

    Guedes, R S; Piovesan, C; Ardenghi, T M

    2014-01-01

We evaluated the predictive and construct validity of a caries activity assessment system associated with the International Caries Detection and Assessment System (ICDAS) in primary teeth. A total of 469 children were reexamined: participants of a caries survey performed 2 yr before (follow-up rate of 73.4%). At baseline, children (12-59 mo old) were examined with the ICDAS and a caries activity assessment system. The predictive validity was assessed by evaluating the risk of active caries lesion progression to more severe conditions in the follow-up, compared with inactive lesions. We also assessed if children with a higher number of active caries lesions were more likely to develop new lesions (construct validity). Noncavitated active caries lesions at occlusal surfaces presented higher risk of progression than inactive ones. Children with a higher number of active lesions and with higher…

  12. Validity and Reliability of Agoraphobic Cognitions Questionnaire-Turkish Version

    Directory of Open Access Journals (Sweden)

    Ayşegül KART

    2013-11-01

Full Text Available Objective: The aim of this study is to investigate the validity and reliability of the Agoraphobic Cognitions Questionnaire-Turkish Version (ACQ). Method: The ACQ was administered to 92 patients with agoraphobia or panic disorder with agoraphobia. The BSQ Turkish version was completed by translation, back-translation and pilot assessment. Reliability of the ACQ was analyzed by test-retest correlation, the split-half technique and Cronbach's alpha coefficient. Construct validity was evaluated by factor analysis after the Kaiser-Meyer-Olkin (KMO) and Bartlett tests had been performed. Principal component analysis and varimax rotation were used for factor analysis. Results: 64% of the patients evaluated in the study were female and 36% were male. Ages ranged between 18 and 58; mean age was 31.5±10.4. The Cronbach's alpha coefficient was 0.91. Analysis of test-retest evaluations revealed statistically significant correlations ranging between 24% and 84% for questionnaire components. In the analysis performed by the split-half method, reliability coefficients of the half questionnaires were found to be 0.77 and 0.91, and the Spearman-Brown coefficient was 0.87. To assess construct validity of the ACQ, factor analysis was performed and two basic factors were found. These two factors explained 57.6% of the total variance (Factor 1: 34.6%; Factor 2: 23%). Conclusion: Our findings support that the ACQ-Turkish version has a satisfactory level of reliability and validity.

13. CVThresh: R Package for Level-Dependent Cross-Validation Thresholding

    Directory of Open Access Journals (Sweden)

    Donghoh Kim

    2006-04-01

Full Text Available The core of the wavelet approach to nonparametric regression is thresholding of wavelet coefficients. This paper reviews a cross-validation method for the selection of the thresholding value in wavelet shrinkage of Oh, Kim, and Lee (2006), and introduces the R package CVThresh implementing details of the calculations for the procedures. The procedure is implemented by coupling conventional cross-validation with a fast imputation method, so that it overcomes the restriction that the data length be a power of 2. It can be easily applied to classical leave-one-out cross-validation and K-fold cross-validation. Since the procedure is computationally fast, a level-dependent cross-validation can be developed for wavelet shrinkage of data with various sparseness according to levels.
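The idea of coupling cross-validation with imputation, so that held-out points can still be predicted, is easy to sketch outside the wavelet setting. The toy below is not the CVThresh algorithm itself: it holds out alternate points, imputes them from their neighbours, soft-thresholds the result, and scores candidate thresholds by held-out squared error.

```python
# Toy two-fold cross-validation for choosing a shrinkage threshold.
# Not the CVThresh wavelet procedure: a plain soft-threshold on imputed
# values stands in for wavelet shrinkage. Data are synthetic.
import math
import random

def soft_threshold(x, t):
    """Shrink x toward 0 by t; values within t of 0 become (signed) 0."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def cv_error(y, t):
    """Odd/even two-fold cross-validation error for threshold t."""
    err = 0.0
    for hold in (0, 1):                      # hold out odd or even positions
        imputed = list(y)
        for i in range(hold, len(y), 2):     # impute each held-out point
            lo, hi = max(i - 1, 0), min(i + 1, len(y) - 1)  # edges reuse endpoint
            imputed[i] = 0.5 * (y[lo] + y[hi])
        fitted = [soft_threshold(v, t) for v in imputed]
        err += sum((fitted[i] - y[i]) ** 2 for i in range(hold, len(y), 2))
    return err

random.seed(0)
signal = [5.0 if 20 <= i < 40 else 0.0 for i in range(64)]
noisy = [s + random.gauss(0, 1) for s in signal]
candidates = [0.0, 0.5, 1.0, 2.0, 4.0]
best = min((cv_error(noisy, t), t) for t in candidates)[1]
```

The held-out points never see their own observed values during fitting, which is the property the paper's fast imputation preserves for wavelet coefficients.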

  15. Preliminary Validity of the Eyberg Child Behavior Inventory With Filipino Immigrant Parents.

    Science.gov (United States)

    Coffey, Dean M; Javier, Joyce R; Schrager, Sheree M

Filipinos are an understudied minority affected by significant behavioral health disparities. We evaluate evidence for the reliability, construct validity, and convergent validity of the Eyberg Child Behavior Inventory (ECBI) in 6- to 12-year-old Filipino children (N = 23). ECBI scores demonstrated high internal consistency, supporting a single-factor model (pre-intervention α = .91; post-intervention α = .95). Results document convergent validity with the Child Behavior Checklist Externalizing scale at pretest (r = .54, p < …) in Filipino children.

  16. Construct Validity of the Nepalese School Leaving English Reading Test

    Science.gov (United States)

    Dawadi, Saraswati; Shrestha, Prithvi N.

    2018-01-01

    There has been a steady interest in investigating the validity of language tests in the last decades. Despite numerous studies on construct validity in language testing, there are not many studies examining the construct validity of a reading test. This paper reports on a study that explored the construct validity of the English reading test in…

  17. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-based learning.
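The simplicity-accuracy trade-off the paper formalises can be seen in a small leave-one-out experiment: for strongly linear data, the more complex model (a least-squares line) earns a lower cross-validation error than the simpler one (a constant). Data here are invented for illustration:

```python
# Leave-one-out cross-validation comparing two model classes on the same data.
# fit_mean returns a constant predictor; fit_line a least-squares line.
# The data are invented and roughly follow y = 2x.

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def loocv_error(xs, ys, fit):
    """Mean squared error over leave-one-out folds."""
    err = 0.0
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (model(xs[i]) - ys[i]) ** 2
    return err / len(xs)

xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]
line_err = loocv_error(xs, ys, fit_line)
mean_err = loocv_error(xs, ys, fit_mean)   # far larger for this linear data
```

On noisier or flatter data the ordering can reverse, which is exactly the balancing act the theory characterises.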

  18. Spatial and Semantic Validation of Secondary Food Source Data

    DEFF Research Database (Denmark)

    Lyseen, Anders Knørr; Hansen, Henning Sten

    2014-01-01

Governmental and commercial lists of food retailers are often used to measure food environments and foodscapes for health and nutritional research. Information about the validity of such secondary food source data is relevant to understanding the potential and limitations of its application. This study assesses the validity of two government lists of food retailer locations and types by comparing them to direct field observations, including an assessment of whether pre-classification of the directories can reduce the need for field observation. Lists of food retailers were obtained from … The validity of the pre-classification was measured through the calculation of PPV, sensitivity and negative predictive value (NPV). The application of either CVR or Smiley as a measure of the food environment would result in a misrepresentation. The pre-classification based on the food retailer names was found to be a valid method for identifying…

  19. Validity of dementia diagnoses in the danish hospital registers

    DEFF Research Database (Denmark)

    Phung, T.K.T.; Andersen, B.B.; Phung, T.K.T.

    2007-01-01

Background: The validity of dementia diagnoses in the Danish nationwide hospital registers was evaluated to determine the value of these registers in epidemiological research about dementia. Methods: Two hundred patients were randomly selected from 4,682 patients registered for the first time with a dementia diagnosis in the last 6 months of 2003. The patients' medical journals were reviewed to evaluate if they fulfilled ICD-10 and/or DSM-IV criteria for dementia and specific dementia subtypes. The patients who were still alive in 2006 were invited to an interview. Results: One hundred and ninety… (0.24-0.48). Conclusion: The validity of the dementia syndrome in the Danish hospital registers was high and allows for epidemiological studies about dementia. Alzheimer's disease, although underregistered, also had good validity once the diagnosis was registered. In general, other ICD-10 dementia subtypes in the registers…

  20. Validation of the Spanish Addiction Severity Index Multimedia Version (S-ASI-MV).

    Science.gov (United States)

    Butler, Stephen F; Redondo, José Pedro; Fernandez, Kathrine C; Villapiano, Albert

    2009-01-01

    This study aimed to develop and test the reliability and validity of a Spanish adaptation of the ASI-MV, a computer administered version of the Addiction Severity Index, called the S-ASI-MV. Participants were 185 native Spanish-speaking adult clients from substance abuse treatment facilities serving Spanish-speaking clients in Florida, New Mexico, California, and Puerto Rico. Participants were administered the S-ASI-MV as well as Spanish versions of the general health subscale of the SF-36, the work and family unit subscales of the Social Adjustment Scale Self-Report, the Michigan Alcohol Screening Test, the alcohol and drug subscales of the Personality Assessment Inventory, and the Hopkins Symptom Checklist-90. Three-to-five-day test-retest reliability was examined along with criterion validity, convergent/discriminant validity, and factorial validity. Measurement invariance between the English and Spanish versions of the ASI-MV was also examined. The S-ASI-MV demonstrated good test-retest reliability (ICCs for composite scores between .59 and .93), criterion validity (rs for composite scores between .66 and .87), and convergent/discriminant validity. Factorial validity and measurement invariance were demonstrated. These results compared favorably with those reported for the original interviewer version of the ASI and the English version of the ASI-MV.

  1. Validity and Reliability of Turkish Male Breast Self-Examination Instrument.

    Science.gov (United States)

    Erkin, Özüm; Göl, İlknur

    2018-04-01

This study aims to measure the validity and reliability of a Turkish male breast self-examination (MBSE) instrument. The methodological study was performed in 2016 at Ege University, Faculty of Nursing, İzmir, Turkey. The MBSE includes ten steps. For the validity studies, face validity, content validity, and construct validity (exploratory factor analysis) were assessed. For the reliability study, the Kuder-Richardson coefficient was calculated. The content validity index was found to be 0.94. Kendall's W coefficient was 0.80 (p = 0.551). The total variance explained by the two factors was 63.24%. Kuder-Richardson 21 was calculated for the reliability study and found to be 0.97 for the instrument. The final instrument included 10 steps and two stages. The Turkish version of the MBSE is a valid and reliable instrument for early diagnosis. The MBSE can be used in Turkish-speaking countries and cultures with its two stages and 10 steps.
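Kuder-Richardson formula 21, used above for the dichotomously scored 10-step instrument, needs only the number of items and the mean and variance of the total scores. A sketch with invented checklist totals, not the study's data:

```python
# Kuder-Richardson formula 21 for a dichotomously scored k-item checklist:
#   KR21 = (k / (k - 1)) * (1 - M * (k - M) / (k * s2))
# with M the mean total score and s2 the total-score variance.
# The totals below are invented for illustration.

def kr21(totals, k):
    n = len(totals)
    m = sum(totals) / n
    s2 = sum((t - m) ** 2 for t in totals) / n   # population variance
    return (k / (k - 1)) * (1 - m * (k - m) / (k * s2))

totals = [10, 9, 10, 3, 2, 10, 9, 2, 10, 1]      # 10-step checklist totals
r = kr21(totals, k=10)
```

KR-21 assumes items of roughly equal difficulty; when that fails, KR-20 (which uses per-item proportions) is the stricter alternative.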

  2. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

The problem of validating estimates of absorbed dose due to photon energy deposition is examined. The computational approaches used for estimating photon energy deposition are reviewed. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained.

  3. Cross-cultural adaptation and validation of the teamwork climate scale

    Directory of Open Access Journals (Sweden)

    Mariana Charantola Silva

    2016-01-01

Full Text Available ABSTRACT OBJECTIVE To adapt and validate the Team Climate Inventory scale, a measure of teamwork climate, for the Portuguese language in the context of primary health care in Brazil. METHODS Methodological study with a quantitative approach of cross-cultural adaptation (translation, back-translation, synthesis, expert committee, and pretest) and validation with 497 employees from 72 teams of the Family Health Strategy in the city of Campinas, SP, Southeastern Brazil. We verified reliability by Cronbach's alpha, construct validity by confirmatory factor analysis with the SmartPLS software, and correlation with the job satisfaction scale. RESULTS We problematized the overlap of items 9, 11, and 12 of the "participation in the team" factor and the definition of the "team goals" factor. The validation showed no overlapping of items, and the reliability ranged from 0.92 to 0.93. The confirmatory factor analysis indicated suitability of the proposed model with distribution of the 38 items in the four factors. The correlation between teamwork climate and job satisfaction was significant. CONCLUSIONS The version of the scale in Brazilian Portuguese was validated and can be used in the context of primary health care in the country, constituting an adequate tool for the assessment and diagnosis of teamwork.

  4. Child abuse: validation of a questionnaire translated into Brazilian Portuguese

    Directory of Open Access Journals (Sweden)

    Glaucia Marengo

    2013-04-01

This study sought to validate the Portuguese translation of a questionnaire on maltreatment of children and adolescents, developed by Russell et al., and to test its psychometric properties for use in Brazil. The original questionnaire was translated into Portuguese using a standardized forward-backward linguistic translation method. Both face and content validity were tested in a small pilot study (n = 8). In the main study, a convenience sample of 80 graduate dentistry students with different specialties, from Curitiba, PR, Brazil, was invited to complete the final Brazilian version of the questionnaire. Discriminant validity was assessed by comparing the results obtained from the questionnaire across specialties (pediatric dentistry, for example). The respondents completed the questionnaire again after 4 weeks to evaluate test-retest reliability. The comparison of test versus retest answers showed good agreement (kappa > 0.53, intraclass correlation > 0.84) for most questions. In regard to discriminant validity, a statistically significant difference was observed only in the experience and interest domains, in which pediatric dentists showed more experience with and interest in child abuse compared with dentists of other specialties (Mann-Whitney test, p < 0.05). The Brazilian version of the questionnaire was valid and reliable for assessing knowledge regarding child abuse among Portuguese-speaking dentists.
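The test-retest agreement reported here (kappa > 0.53) is chance-corrected agreement between two administrations of the same questionnaire. A minimal sketch of unweighted Cohen's kappa; the ratings below are invented for illustration:

```python
from collections import Counter

def cohen_kappa(rating1, rating2) -> float:
    """Unweighted Cohen's kappa between two ratings of the same items.
    Assumes expected chance agreement < 1 (otherwise kappa is undefined)."""
    assert len(rating1) == len(rating2)
    n = len(rating1)
    # Observed agreement: fraction of items rated identically both times
    observed = sum(a == b for a, b in zip(rating1, rating2)) / n
    # Expected chance agreement from the marginal category frequencies
    c1, c2 = Counter(rating1), Counter(rating2)
    expected = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical test and retest answers to one yes/no question
kappa = cohen_kappa([0, 0, 1, 1], [0, 0, 1, 0])
```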

  5. Promoting Rigorous Validation Practice: An Applied Perspective

    Science.gov (United States)

    Mattern, Krista D.; Kobrin, Jennifer L.; Camara, Wayne J.

    2012-01-01

    As researchers at a testing organization concerned with the appropriate uses and validity evidence for our assessments, we provide an applied perspective related to the issues raised in the focus article. Newton's proposal for elaborating the consensus definition of validity is offered with the intention to reduce the risks of inadequate…

  6. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    Science.gov (United States)

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  7. GPM GROUND VALIDATION CAMPAIGN REPORTS IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Campaign Reports IFloodS dataset consists of various reports filed by the scientists during the GPM Ground Validation Iowa Flood Studies...

  8. Brazilian Portuguese Validated Version of the Cardiac Anxiety Questionnaire

    Science.gov (United States)

    Sardinha, Aline; Nardi, Antonio Egidio; de Araújo, Claudio Gil Soares; Ferreira, Maria Cristina; Eifert, Georg H.

    2013-01-01

Background: Cardiac Anxiety (CA) is the fear of cardiac sensations, characterized by recurrent anxiety symptoms, in patients with or without cardiovascular disease. The Cardiac Anxiety Questionnaire (CAQ) is a tool to assess CA, already adapted but not yet validated for Portuguese. Objective: This paper presents the three phases of the validation studies of the Brazilian CAQ. Methods: To extract the factor structure and assess the reliability of the CAQ (phase 1), 98 patients with coronary artery disease were recruited. The aim of phase 2 was to explore the convergent and divergent validity. Fifty-six patients completed the CAQ, along with the Body Sensations Questionnaire (BSQ) and the Social Phobia Inventory (SPIN). To determine the discriminative validity (phase 3), we compared the CAQ scores of two subgroups formed with patients from phase 1 (n = 98), according to the diagnoses of panic disorder and agoraphobia, obtained with the MINI - Mini International Neuropsychiatric Interview. Results: A 2-factor solution was the most interpretable (46.4% of the variance). Subscales were named "Fear and Hypervigilance" (n = 9; alpha = 0.88) and "Avoidance" (n = 5; alpha = 0.82). Significant correlation was found between factor 1 and the BSQ total score (p < 0.01), but not with factor 2. SPIN factors showed significant correlations with CAQ subscales (p < 0.01). In phase 3, "Cardiac with panic" patients scored significantly higher in CAQ factor 1 (t = -3.42; p < 0.01, CI = -1.02 to -0.27), and higher, but not significantly different, in factor 2 (t = -1.98; p = 0.51, CI = -0.87 to 0.00). Conclusions: These results provide a definitive validated Brazilian version of the CAQ, suitable for clinical and research settings. PMID:24145391

  9. Reliability and validity of the Wolfram Unified Rating Scale (WURS)

    Directory of Open Access Journals (Sweden)

    Nguyen Chau

    2012-11-01

Background: Wolfram syndrome (WFS) is a rare neurodegenerative disease that typically presents with childhood-onset insulin-dependent diabetes mellitus, followed by optic atrophy, diabetes insipidus, deafness, and neurological and psychiatric dysfunction. There is no cure for the disease, but recent advances in research have improved understanding of the disease course. Measuring disease severity and progression with reliable and validated tools is a prerequisite for clinical trials of any new intervention for neurodegenerative conditions. To this end, we developed the Wolfram Unified Rating Scale (WURS) to measure the severity and individual variability of WFS symptoms. The aim of this study is to develop and test the reliability and validity of the WURS. Methods: A rating scale of disease severity in WFS was developed by modifying a standardized assessment for another neurodegenerative condition (Batten disease). WFS experts scored the representativeness of WURS items for the disease. The WURS was administered to 13 individuals with WFS (6-25 years of age). Motor function, balance, mood and quality of life were also evaluated with standard instruments. Inter-rater reliability, internal consistency reliability, and concurrent, predictive and content validity of the WURS were calculated. Results: The WURS had high inter-rater reliability (ICCs > .93) and moderate to high internal consistency reliability (Cronbach's α = 0.78-0.91), and demonstrated good concurrent and predictive validity, with significant correlations between the WURS Physical Assessment and motor and balance tests (rs > .67) and with mood measures (rs > .76; r = -.86, p = .001). The WURS demonstrated acceptable content validity (Scale-Content Validity Index = 0.83). Conclusions: These preliminary findings demonstrate that the WURS has acceptable reliability and validity and captures individual differences in disease severity in children and young adults with WFS.

  10. Simulation codes and the impact of validation/uncertainty requirements

    International Nuclear Information System (INIS)

    Sills, H.E.

    1995-01-01

Several of the OECD/CSNI members have adopted a proposed methodology for code validation and uncertainty assessment. Although the validation process adopted by members has a high degree of commonality, the uncertainty assessment processes selected are more variable, ranging from subjective to formal. This paper describes the validation and uncertainty assessment process, the sources of uncertainty, methods of reducing uncertainty, and methods of assessing uncertainty. Examples are presented from the Ontario Hydro application of the validation methodology and uncertainty assessment to the system thermal hydraulics discipline and the TUF (1) system thermal hydraulics code. (author)

  11. Validation of CATHARE for gas-cooled reactors

    International Nuclear Information System (INIS)

    Fabrice Bentivoglio; Ola Widlund; Manuel Saez

    2005-01-01

    heat exchanger to dissipate the power transferred to the fluid. The PBMM loop is subject to an international benchmark coordinated by IAEA. A data pack, containing detailed geometrical information and turbo-machine characteristics, is available to the participants in the benchmark. One of the challenges for the modellers is that very little design data and practically no experimental results have been published. The paper describes the modelling of the Oberhausen II plant and the PBMM loop, and presents the results of the corresponding CATHARE calculations. The current status of the code validation program is also discussed. (authors)

  12. A validation methodology for fault-tolerant clock synchronization

    Science.gov (United States)

    Johnson, S. C.; Butler, R. W.

    1984-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating an experimental implementation of the Software Implemented Fault Tolerance (SIFT) clock synchronization algorithm. The design proof of the algorithm defines the maximum skew between any two nonfaulty clocks in the system in terms of theoretical upper bounds on certain system parameters. The quantile to which each parameter must be estimated is determined by a combinatorial analysis of the system reliability. The parameters are measured by direct and indirect means, and upper bounds are estimated. A nonparametric method based on an asymptotic property of the tail of a distribution is used to estimate the upper bound of a critical system parameter. Although the proof process is very costly, it is extremely valuable when validating the crucial synchronization subsystem.

  13. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band-based, multivariate, sum-of-squares-of-residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made.
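The Monte Carlo approach to propagating input uncertainty described in this tutorial can be sketched in a few lines: draw samples from the input distributions, push each sample through the model, and summarize the output distribution. The spring-mass model and the input distributions below are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_frequency(k, m):
    # Undamped natural frequency of a spring-mass system: omega = sqrt(k/m)
    return np.sqrt(k / m)

# Uncertain inputs modeled as normal distributions (hypothetical values):
# stiffness k ~ N(100, 5^2) N/m, mass m ~ N(2, 0.1^2) kg
k_samples = rng.normal(loc=100.0, scale=5.0, size=100_000)
m_samples = rng.normal(loc=2.0, scale=0.1, size=100_000)

# Propagate every input sample through the model, then summarize the output
omega = natural_frequency(k_samples, m_samples)
print(f"omega: mean = {omega.mean():.3f} rad/s, std = {omega.std(ddof=1):.3f} rad/s")
```

The sensitivity (delta) method mentioned in the same tutorial would instead linearize omega about the mean inputs; for this model the two approaches agree closely because the relative input uncertainties are small.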

  14. How Developments in Psychology and Technology Challenge Validity Argumentation

    Science.gov (United States)

    Mislevy, Robert J.

    2016-01-01

    Validity is the sine qua non of properties of educational assessment. While a theory of validity and a practical framework for validation has emerged over the past decades, most of the discussion has addressed familiar forms of assessment and psychological framings. Advances in digital technologies and in cognitive and social psychology have…

  15. Clinical validation of an epigenetic assay to predict negative histopathological results in repeat prostate biopsies.

    Science.gov (United States)

    Partin, Alan W; Van Neste, Leander; Klein, Eric A; Marks, Leonard S; Gee, Jason R; Troyer, Dean A; Rieger-Christ, Kimberly; Jones, J Stephen; Magi-Galluzzi, Cristina; Mangold, Leslie A; Trock, Bruce J; Lance, Raymond S; Bigley, Joseph W; Van Criekinge, Wim; Epstein, Jonathan I

    2014-10-01

    The DOCUMENT multicenter trial in the United States validated the performance of an epigenetic test as an independent predictor of prostate cancer risk to guide decision making for repeat biopsy. Confirming an increased negative predictive value could help avoid unnecessary repeat biopsies. We evaluated the archived, cancer negative prostate biopsy core tissue samples of 350 subjects from a total of 5 urological centers in the United States. All subjects underwent repeat biopsy within 24 months with a negative (controls) or positive (cases) histopathological result. Centralized blinded pathology evaluation of the 2 biopsy series was performed in all available subjects from each site. Biopsies were epigenetically profiled for GSTP1, APC and RASSF1 relative to the ACTB reference gene using quantitative methylation specific polymerase chain reaction. Predetermined analytical marker cutoffs were used to determine assay performance. Multivariate logistic regression was used to evaluate all risk factors. The epigenetic assay resulted in a negative predictive value of 88% (95% CI 85-91). In multivariate models correcting for age, prostate specific antigen, digital rectal examination, first biopsy histopathological characteristics and race the test proved to be the most significant independent predictor of patient outcome (OR 2.69, 95% CI 1.60-4.51). The DOCUMENT study validated that the epigenetic assay was a significant, independent predictor of prostate cancer detection in a repeat biopsy collected an average of 13 months after an initial negative result. Due to its 88% negative predictive value adding this epigenetic assay to other known risk factors may help decrease unnecessary repeat prostate biopsies. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
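The headline statistic in this record, an 88% negative predictive value, is simply the share of negative assay results that are true negatives on repeat biopsy. A minimal sketch; the counts are hypothetical, chosen only to reproduce an 88% NPV:

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): the probability that a negative assay result
    corresponds to a truly cancer-negative repeat biopsy."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: 220 assay-negative/biopsy-negative, 30 assay-negative/biopsy-positive
npv = negative_predictive_value(220, 30)  # 220 / 250 = 0.88
```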

  16. Site characterization and validation - stress field in the SCV block and around the validation drift. Stage 3

    International Nuclear Information System (INIS)

    McKinnon, S.; Carr, P.

    1990-04-01

    The results of previous stress measurement and stress modelling programmes carried out in the vicinity of the SCV block have been reviewed. Collectively, the results show that the stress field is influenced by the presence of the old mine excavations, and the measurements can be divided into near-field and far-field locations. The near-field measurements denote the extent and magnitude of the mining induced stresses while the far-field measurements reflect virgin conditions. Because of large scatter in the previous data, additional stress measurements were carried out using the CSIRO hollow inclusion cell. Combining all measurements, an estimate of the virgin stress tensor was made. Three-dimensional stress modelling was carried out using the program BEFE to determine the state of stress in the SCV block, and around the validation drift. This modelling showed that most of the SCV block is in a virgin stress field. Stresses acting on the fracture zones in the SCV block will be due only to the virgin stress field and induced stresses from the validation drift. (orig.)

  17. Validation of a two-fluid model used for the simulation of dense fluidized beds

    Energy Technology Data Exchange (ETDEWEB)

    Boelle, A.

    1997-02-17

A two-fluid model applied to the simulation of gas-solid dense fluidized beds is validated on the micro scale and on the macro scale. Phase coupling is carried out in the momentum and energy transport equations of both phases. The modeling is built on the kinetic theory of granular media, in which the action of the gas has been taken into account in order to obtain correct expressions for the transport coefficients. A description of hydrodynamic interactions between particles in high-Stokes-number flow is also incorporated in the model. The micro-scale validation uses Lagrangian numerical simulations viewed as numerical experiments. The first validation case refers to a gas-particle simple shear flow. It allows validation of the competition between two dissipation mechanisms: drag and particle collisions. The second validation case concerns sedimenting particles in high-Stokes-number flow. It allows validation of our approach to hydrodynamic interactions. This last case led us to develop an original Lagrangian simulation with two-way coupling between the fluid and the particles. The macro-scale validation uses the results of Eulerian simulations of a dense fluidized bed. Bed height, particle circulation and the characteristics of spontaneously created bubbles are studied and compared to experimental measurements, with respect to both physical and numerical parameters. (author) 159 refs.

  18. DBS-LC-MS/MS assay for caffeine: validation and neonatal application.

    Science.gov (United States)

    Bruschettini, Matteo; Barco, Sebastiano; Romantsik, Olga; Risso, Francesco; Gennai, Iulian; Chinea, Benito; Ramenghi, Luca A; Tripodi, Gino; Cangemi, Giuliana

    2016-09-01

DBS might be an appropriate microsampling technique for therapeutic drug monitoring of caffeine in infants. Nevertheless, its application presents several issues that still limit its use. This paper describes a validated DBS-LC-MS/MS method for caffeine. The results of the method validation showed a hematocrit dependence. In the analysis of 96 paired plasma and DBS clinical samples, caffeine levels measured in DBS were statistically significantly lower than in plasma, but the observed differences were independent of hematocrit. These results clearly showed the need for extensive validation with real-life samples for DBS-based methods. DBS-LC-MS/MS can be considered a good alternative to traditional methods for therapeutic drug monitoring or PK studies in preterm infants.

  19. Validation of a questionnaire of knowledge about asthma

    International Nuclear Information System (INIS)

    Rodriguez Martinez, Carlos; Sossa, Monica Patricia

    2004-01-01

An educational intervention aimed at increasing knowledge about asthma allows children and/or their parents to acquire skills to prevent and/or manage asthma attacks, decreasing the morbidity produced by the disease; nevertheless, no validated instrument was available to quantify the level of asthma knowledge. The objective was to develop and validate a questionnaire of knowledge about asthma to be filled out by the parents and/or caregivers of asthmatic pediatric patients. The 17 items that make up the questionnaire were obtained after a literature review, focus groups, the professional experience of the investigators, and pilot studies. The face, content and concurrent validity of the instrument were evaluated; we also determined the factor structure, test-retest reproducibility, and sensitivity to change of the questionnaire. We included 120 patients with an average age of 4.5 ± 3.7 years. The factor analysis suggested a structure of three factors that together explain 85% of the total variance of the results. The face and content validity was based on the judgment of a multidisciplinary group of experts in the field. The concurrent validity was demonstrated by the ability of the questionnaire to distinguish low-knowledge from high-knowledge parents. Test-retest reproducibility and sensitivity to change were demonstrated by comparing scores of the questionnaire filled out on two different occasions. The asthma knowledge questionnaire developed in this study is a useful and reliable tool to quantify the baseline level of asthma knowledge in parents of asthmatic children and to determine the effectiveness of an educational intervention aimed at increasing knowledge and understanding of the disease.

  20. A Systematic Review on the Validity of Teledentistry.

    Science.gov (United States)

    Alabdullah, Jafar H; Daniel, Susan J

    2018-01-05

    The aim of this systematic review was to evaluate the validity of using teledentistry in oral care examination and diagnosis. In June 2016, a systematic search of the literature was conducted without time restrictions in three electronic databases (Ebscohost, Pubmed, and Scopus). Two reviewers screened the retrieved articles first by title and then by abstract to determine relevant articles for full text review. Studies included were as follows: (1) related to teledentistry, (2) available in full text and English, (3) compared teledentistry application to a gold standard, and (4) provided clear statistical tests for validity. The methodological quality of studies was determined using the "Quality Assessment of Studies of Diagnostic Accuracy (QUADAS)." Seventy-nine studies met the initial search criteria. Following removal of duplicate articles, only 58 were remaining and reviewed by title and abstract, yielding 14 full-text articles. Nine of the full-text articles met the inclusion criteria. Results of the QUADAS assessment varied from 9 to 13 out of 14 items; therefore, studies demonstrated high quality (>60%). Validity of teledentistry varied and is reported by range for the following statistics: sensitivity (n = 8, 25-100%), specificity (n = 7, 68-100%), positive predictive value (n = 5, 57-100%), and negative predictive value (n = 5, 50-100%). Kappa statistics were also reported for evaluation of reliability between gold standard and teledentistry examination (n = 6, 46-93%). Teledentistry could be comparable to face-to-face for oral screening, especially in school-based programs, rural areas and areas with limited access to care, and long-term care facilities. Identification of oral diseases, referrals, and teleconsultations are possible and valid. The need for methodologically designed studies with appropriate statistical tests to determine the validity of teledentistry exists.
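The validity statistics surveyed in this review (sensitivity 25-100%, specificity 68-100%) come from 2x2 comparisons of the teledentistry examination against a gold-standard face-to-face examination. A minimal sketch of the two core metrics; the counts below are invented for illustration:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), with the
    gold-standard (face-to-face) examination defining true disease status."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical teledentistry-vs-gold-standard screening counts:
# 45 true positives, 5 false negatives, 40 true negatives, 10 false positives
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=40, fp=10)
```

The wide ranges reported across the nine included studies reflect exactly these ratios varying with the imaging modality, the condition screened for, and the examiner.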