WorldWideScience

Sample records for validation set results

  1. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    45 CFR 162.1011 (Title 45, Public Welfare; Administrative Requirements, Code Sets): Valid code sets. Each code set is valid within the dates specified by the organization responsible for maintaining that code set.

  2. Reliability and Validity of 10 Different Standard Setting Procedures.

    Science.gov (United States)

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  3. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success-and thus usefulness-of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.

  4. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Full Text Available Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions and assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.

  5. Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.

    Science.gov (United States)

    Daepp, Madeleine Ig; Black, Jennifer

    2017-10-01

    The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Subjects: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspections lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
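
    The validity statistics above can be reproduced from simple counts. The sketch below, with hypothetical outlet flags and density counts, shows how sensitivity, positive predictive value and Kendall's τ would typically be computed (using scipy); it is illustrative only and not the authors' code.

```python
# Illustrative sketch (not the authors' code): sensitivity, positive predictive
# value and Kendall's tau agreement for a secondary food-outlet list validated
# against ground-truthed data. All values are hypothetical.
from scipy.stats import kendalltau

# Outlets within one school buffer: flags for presence in each data source.
ground_truth = [1, 1, 1, 0, 1, 0, 1, 1]   # 1 = outlet confirmed on the ground
secondary    = [1, 0, 1, 1, 1, 0, 1, 0]   # 1 = outlet listed in the secondary data set

tp = sum(g and s for g, s in zip(ground_truth, secondary))
fn = sum(g and not s for g, s in zip(ground_truth, secondary))
fp = sum(s and not g for g, s in zip(ground_truth, secondary))

sensitivity = tp / (tp + fn)   # share of real outlets the data set captures
ppv = tp / (tp + fp)           # share of listed outlets that really exist

# Density measures per school (counts within the 800 m buffer) from both sources.
density_truth     = [12, 5, 9, 20, 3]
density_secondary = [10, 6, 9, 18, 4]
tau, p_value = kendalltau(density_truth, density_secondary)
print(f"sensitivity={sensitivity:.2f} ppv={ppv:.2f} tau={tau:.2f}")
```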

  6. Validity of measures of pain and symptoms in HIV/AIDS-infected households in resource-poor settings: results from the Dominican Republic and Cambodia

    Directory of Open Access Journals (Sweden)

    Morineau Guy

    2006-03-01

    Full Text Available Abstract Background: HIV/AIDS treatment programs that include palliative care services are currently being mounted in many developing nations. While measures of palliative care have been developed and validated for resource-rich settings, very little work exists to support an understanding of measurement for Africa, Latin America or Asia. Methods: This study investigates the construct validity of measures of reported pain, pain control, symptoms and symptom control in areas with high HIV prevalence in the Dominican Republic and Cambodia. Measures were adapted from the POS (Palliative Outcome Scale). Households were selected through purposive sampling from networks of people living with HIV/AIDS. Consistency of patterns in the data was tested using chi-square and Mantel-Haenszel tests. Results: Sample persons who reported chronic illness were much more likely to report pain and symptoms than those who were not chronically ill. When controlling for the degree of pain, pain control did not differ between the chronically ill and non-chronically ill (Mantel-Haenszel test) in both countries. Similar results were found for reported symptoms and symptom control in the Dominican Republic. These findings broadly support the construct validity of an adapted version of the POS in these two less developed countries. Conclusion: The results of the study suggest that the selected measures can usefully be incorporated into population-based surveys and evaluation tools needed to monitor palliative care in settings with high HIV/AIDS prevalence.
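
    A minimal sketch of the stratified analysis described above, assuming fabricated 2x2 tables (chronically ill vs. not, by pain controlled vs. not, within pain-severity strata); it uses statsmodels' StratifiedTable for the Mantel-Haenszel test and is not the authors' analysis code.

```python
# Hedged sketch with hypothetical data: Mantel-Haenszel test of whether pain
# control differs between chronically ill and non-chronically ill respondents,
# stratified by degree of pain.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per pain stratum (mild / moderate / severe):
# rows = chronically ill vs not, columns = pain controlled vs not.
tables = [
    np.array([[30, 10], [25, 12]]),   # mild pain
    np.array([[22, 18], [20, 15]]),   # moderate pain
    np.array([[12, 25], [10, 22]]),   # severe pain
]

st = StratifiedTable(tables)
result = st.test_null_odds()          # Mantel-Haenszel chi-square test
print("pooled OR:", st.oddsratio_pooled)
print("statistic:", result.statistic, "p-value:", result.pvalue)
```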

  7. All the mathematics in the world: logical validity and classical set theory

    Directory of Open Access Journals (Sweden)

    David Charles McCarty

    2017-12-01

    Full Text Available A recognizable topological model construction shows that any consistent principles of classical set theory, including the validity of the law of the excluded third, together with a standard class theory, do not suffice to demonstrate the general validity of the law of the excluded third. This result calls into question the classical mathematician's ability to offer solid justifications for the logical principles he or she favors.

  8. An Ethical Issue Scale for Community Pharmacy Setting (EISP): Development and Validation.

    Science.gov (United States)

    Crnjanski, Tatjana; Krajnovic, Dusanka; Tadic, Ivana; Stojkov, Svetlana; Savic, Mirko

    2016-04-01

    Many problems that arise when providing pharmacy services may contain some ethical components, and the aims of this study were to develop and validate a scale that could assess the difficulty of ethical issues, as well as the frequency of their occurrence, in the everyday practice of community pharmacists. Development and validation of the scale was conducted in three phases: (1) generating items for the initial survey instrument after qualitative analysis; (2) defining the design and format of the instrument; (3) validation of the instrument. The constructed Ethical Issue Scale for the community pharmacy setting has two parts containing the same 16 items, assessing difficulty and frequency respectively. The results of the 171 completely filled out scales were analyzed (response rate 74.89%). The Cronbach's α value was 0.83 for the part of the instrument that examines the difficulty of ethical situations and 0.84 for the part that examines their frequency. Test-retest reliability for both parts of the instrument was satisfactory, with all intraclass correlation coefficient (ICC) values above 0.6 (ICC = 0.809 for the part that examines severity, ICC = 0.929 for the part that examines frequency). The 16-item scale, as a self-assessment tool, demonstrated a high degree of content, criterion, and construct validity and test-retest reliability. The results support its use as a research tool to assess the difficulty and frequency of ethical issues in the community pharmacy setting. The validated scale needs to be further employed on a larger sample of pharmacists.
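
    As an illustration of the reliability statistics reported above, the following sketch computes Cronbach's α from its standard formula for a hypothetical 171 x 16 response matrix; real scale data (unlike the random scores used here) would show correlated items and therefore a higher α.

```python
# Illustrative sketch (hypothetical data): Cronbach's alpha for a 16-item scale,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(171, 16))   # 171 respondents, 16 Likert items (1-5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```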

  9. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Full Text Available Abstract Background: When developing a prediction model for survival data, it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods: An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results: Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. Conclusions: We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination, since it has an appealing interpretation. The calibration slope is useful in both internal and external validation settings, and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
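
    A short, hedged example of estimating one of the discrimination measures discussed above (Harrell's concordance index) on external validation data, using the lifelines package and fabricated survival data; Uno's C, which the authors recommend under moderate censoring, is available in other packages (e.g. scikit-survival) and is not shown.

```python
# Minimal sketch (hypothetical validation cohort): Harrell's concordance index
# for a survival prediction model applied to external data.
from lifelines.utils import concordance_index

# Observed follow-up times, event indicators (1 = event, 0 = censored) and the
# model's prognostic index (higher = worse prognosis).
times  = [5.1, 8.4, 2.3, 10.0, 7.7, 1.2]
events = [1,   0,   1,   0,    1,   1]
risk   = [2.1, 0.4, 3.3, 0.1,  1.0, 2.8]

# concordance_index expects scores where larger values mean longer survival,
# so the risk score is negated.
c = concordance_index(times, [-r for r in risk], events)
print(f"Harrell's C = {c:.2f}")
```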

  10. Good validity of the international spinal cord injury quality of life basic data set

    DEFF Research Database (Denmark)

    Post, M W M; Adriaansen, J J E; Charlifue, S

    2016-01-01

    STUDY DESIGN: Cross-sectional validation study. OBJECTIVES: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. SETTING: Dutch community. PARTICIPANTS: People 28-65 years of age, who obtained their SCI between 18 and 35 years of age, were at least 10 years post SCI and were wheelchair users in daily life. Measure(s): The International SCI QoL Basic Data Set consists of three single items on satisfaction with life as a whole, physical health and psychological health (0=complete dissatisfaction; 10=complete satisfaction). ... and psychological health (0.70). CONCLUSIONS: This first validity study of the International SCI QoL Basic Data Set shows that it appears valid for persons with SCI.

  11. S.E.T., CSNI Separate Effects Test Facility Validation Matrix

    International Nuclear Information System (INIS)

    1997-01-01

    1 - Description of test facility: The SET matrix of experiments is suitable for the developmental assessment of thermal-hydraulic transient system computer codes by selecting individual tests from selected facilities, relevant to each phenomenon. Test facilities differ from one another in geometrical dimensions, geometrical configuration and operating capabilities or conditions. Correlations between SET facilities and phenomena were rated on the basis of suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant and is sufficiently instrumented); limited suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant but has problems associated with imperfect scaling, different test fluids or insufficient instrumentation); and unsuitability for model validation. 2 - Description of test: Whereas integral experiments are usually designed to follow the behaviour of a reactor system in various off-normal or accident transients, separate effects tests focus on the behaviour of a single component, or on the characteristics of one thermal-hydraulic phenomenon. The construction of a separate effects test matrix is an attempt to collect together the best sets of openly available test data for code validation, assessment and improvement, from the wide range of experiments that have been carried out world-wide in the field of thermal hydraulics. In all, 2094 tests are included in the SET matrix.

  12. The Mistra experiment for field containment code validation: first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large-scale experiment designed for the validation of multi-dimensional (multi-D) thermal-hydraulics codes. A short description of the facility, the instrumentation set-up and the test program are presented. Then the first experimental results, studying helium injection into the containment, and their calculations are detailed. (author)

  13. Construct Validity and Reliability of Structured Assessment of endoVascular Expertise in a Simulated Setting

    DEFF Research Database (Denmark)

    Bech, B; Lönn, L; Falkenberg, M

    2011-01-01

    Objectives: To study the construct validity and reliability of a novel endovascular global rating scale, Structured Assessment of endoVascular Expertise (SAVE). Design: A clinical, experimental study. Materials: Twenty physicians with endovascular experience ranging from complete novices to highly experienced. ... Validity was analysed by correlating experience with performance results. Reliability was analysed according to generalisability theory. Results: The mean score on the 29 items of the SAVE scale correlated well with clinical experience (R = 0.84, P ...). ... correlated with clinical experience (R = -0.53, P ...). Validity and reliability of assessment with the SAVE scale were high when applied to performances in a simulation setting with advanced realism. No ceiling effect...

  14. ValidatorDB: database of up-to-date validation results for ligands and non-standard residues from the Protein Data Bank.

    Science.gov (United States)

    Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav

    2015-01-01

    Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Good validity of the international spinal cord injury quality of life basic data set

    NARCIS (Netherlands)

    Post, M. W. M.; Adriaansen, J. J. E.; Charlifue, S.; Biering-Sorensen, F.; van Asbeck, F. W. A.

    Study design: Cross-sectional validation study. Objectives: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. Setting: Dutch community. Participants: People 28-65 years of age, who obtained their SCI between 18 and 35

  16. Validation Test Results for Orthogonal Probe Eddy Current Thruster Inspection System

    Science.gov (United States)

    Wincheski, Russell A.

    2007-01-01

    Recent nondestructive evaluation efforts within NASA have focused on an inspection system for the detection of intergranular cracking originating in the relief radius of Primary Reaction Control System (PRCS) thrusters. Of particular concern is deep cracking in this area, which could lead to combustion leakage in the event of through-wall cracking from the relief radius into an acoustic cavity of the combustion chamber. In order to reliably detect such defects while ensuring minimal false positives during inspection, the Orthogonal Probe Eddy Current (OPEC) system has been developed and an extensive validation study performed. This report describes the validation procedure, sample set, and inspection results, and compares validation flaws with the response from naturally occurring damage.

  17. Validity of proposed DSM-5 diagnostic criteria for nicotine use disorder: results from 734 Israeli lifetime smokers

    Science.gov (United States)

    Shmulewitz, D.; Wall, M.M.; Aharonovich, E.; Spivak, B.; Weizman, A.; Frisch, A.; Grant, B. F.; Hasin, D.

    2013-01-01

    Background The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) proposes aligning nicotine use disorder (NUD) criteria with those for other substances, by including the current DSM fourth edition (DSM-IV) nicotine dependence (ND) criteria, three abuse criteria (neglect roles, hazardous use, interpersonal problems) and craving. Although NUD criteria indicate one latent trait, evidence is lacking on: (1) validity of each criterion; (2) validity of the criteria as a set; (3) comparative validity between DSM-5 NUD and DSM-IV ND criterion sets; and (4) NUD prevalence. Method Nicotine criteria (DSM-IV ND, abuse and craving) and external validators (e.g. smoking soon after awakening, number of cigarettes per day) were assessed with a structured interview in 734 lifetime smokers from an Israeli household sample. Regression analysis evaluated the association between validators and each criterion. Receiver operating characteristic analysis assessed the association of the validators with the DSM-5 NUD set (number of criteria endorsed) and tested whether DSM-5 or DSM-IV provided the most discriminating criterion set. Changes in prevalence were examined. Results Each DSM-5 NUD criterion was significantly associated with the validators, with strength of associations similar across the criteria. As a set, DSM-5 criteria were significantly associated with the validators, were significantly more discriminating than DSM-IV ND criteria, and led to increased prevalence of binary NUD (two or more criteria) over ND. Conclusions All findings address previous concerns about the DSM-IV nicotine diagnosis and its criteria and support the proposed changes for DSM-5 NUD, which should result in improved diagnosis of nicotine disorders. PMID:23312475
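
    The receiver operating characteristic analysis used above to compare criterion sets can be sketched as follows; the validator and criterion counts are simulated for illustration and do not reproduce the study data.

```python
# Illustrative sketch (simulated data): ROC analysis of a criterion count
# against an external validator, comparing the discriminating power of the
# DSM-5 (11 criteria) and DSM-IV (7 criteria) sets.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 734
heavy_smoker = rng.integers(0, 2, n)                               # external validator (hypothetical)
dsm5_count = np.clip(heavy_smoker * 3 + rng.poisson(2, n), 0, 11)   # DSM-5 criteria endorsed
dsm4_count = np.clip(heavy_smoker * 2 + rng.poisson(2, n), 0, 7)    # DSM-IV criteria endorsed

print("AUC, DSM-5 count:", round(roc_auc_score(heavy_smoker, dsm5_count), 2))
print("AUC, DSM-IV count:", round(roc_auc_score(heavy_smoker, dsm4_count), 2))
```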

  18. Affordances in the home environment for motor development: Validity and reliability for the use in daycare setting.

    Science.gov (United States)

    Müller, Alessandra Bombarda; Valentini, Nadia Cristina; Bandeira, Paulo Felipe Ribeiro

    2017-05-01

    The range of stimuli provided by physical space, toys and care practices contributes to the motor, cognitive and social development of children. However, assessing the quality of child education environments is a challenge, and can be considered a health promotion initiative. This study investigated the criterion, content and construct validity and the reliability of the Affordances in the Home Environment for Motor Development - Infant Scale (AHEMD-IS), version 3-18 months, for use in daycare settings. Content validation was conducted with the participation of seven motor development and health care experts, and face validity was assessed by 20 specialists in health and education. The results indicate the suitability of the adapted AHEMD-IS, evidencing its validity for the daycare setting as a potential tool to assess the opportunities that the collective context offers for child development. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  20. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, in which the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), to map homogeneous specific land cover.
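
    A hedged sketch of the parameter search described above. scikit-learn ships a one-class SVM rather than SVDD, but with an RBF kernel the two are equivalent, so it is used as a stand-in; the mapping of the paper's tradeoff coefficient C and kernel width s onto nu and gamma, and all data, are assumptions for illustration.

```python
# Hedged sketch: one-class SVM (RBF kernel) as an SVDD stand-in, with a small
# grid search scored on a validation set of target and neighbouring outlier
# pixels. All spectra are synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_target = rng.normal(0.5, 0.05, size=(200, 4))   # target-class spectra (e.g. wheat)
val_target   = rng.normal(0.5, 0.05, size=(50, 4))     # validation targets from the window
val_outlier  = rng.normal(0.65, 0.05, size=(50, 4))    # neighbouring outlier pixels

best = None
for nu in (0.01, 0.05, 0.1):        # analogous to the tradeoff coefficient C
    for gamma in (0.5, 1.0, 2.0):   # analogous to the kernel width s
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train_target)
        acc = np.mean(np.r_[model.predict(val_target) == 1,
                            model.predict(val_outlier) == -1])
        if best is None or acc > best[0]:
            best = (acc, nu, gamma)
print("best overall accuracy %.2f with nu=%.2f gamma=%.1f" % best)
```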

  1. Validation of the TRUST tool in a Greek perioperative setting.

    Science.gov (United States)

    Chatzea, Vasiliki-Eirini; Sifaki-Pistolla, Dimitra; Dey, Nilanjan; Melidoniotis, Evangelos

    2017-06-01

    The aim of this study was to translate, culturally adapt and validate the TRUST questionnaire in a Greek perioperative setting. The TRUST questionnaire assesses the relationship between trust and performance. The study assessed the levels of trust and performance in the surgery and anaesthesiology department during a very stressful period for Greece (the economic crisis) and offered a user-friendly and robust assessment tool. The study concludes that the Greek version of the TRUST questionnaire is a reliable and valid instrument for measuring team performance among Greek perioperative teams. Copyright the Association for Perioperative Practice.

  2. Validation results of satellite mock-up capturing experiment using nets

    Science.gov (United States)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions, in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine accurately the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly
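
    The iterative closest point (ICP) step mentioned above for knot tracking can be sketched as below with synthetic 3D points; this is a generic nearest-neighbour ICP with a Kabsch (SVD) rigid alignment, not the PATENDER post-processing pipeline.

```python
# Hedged sketch: generic point-to-point ICP. Each iteration matches every
# observed knot to its nearest reference point (KD-tree) and solves for the
# rigid transform (Kabsch/SVD) that best aligns the two sets.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    src = source.copy()
    for _ in range(iterations):
        _, idx = cKDTree(target).query(src)     # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        rotation = (u @ vt).T
        if np.linalg.det(rotation) < 0:         # avoid reflections
            vt[-1] *= -1
            rotation = (u @ vt).T
        src = (src - mu_s) @ rotation.T + mu_t
    return src

rng = np.random.default_rng(5)
net_knots = rng.random((100, 3))                # reference knot positions (synthetic)
small_rot = np.array([[0.985, -0.174, 0.0], [0.174, 0.985, 0.0], [0.0, 0.0, 1.0]])
observed = net_knots @ small_rot + 0.05         # rotated and translated observation
aligned = icp(observed, net_knots)
print("mean residual:", np.linalg.norm(aligned - net_knots, axis=1).mean())
```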

  3. The Outcome and Assessment Information Set (OASIS): A Review of Validity and Reliability

    Science.gov (United States)

    O’CONNOR, MELISSA; DAVITT, JOAN K.

    2015-01-01

    The Outcome and Assessment Information Set (OASIS) is the patient-specific, standardized assessment used in Medicare home health care to plan care, determine reimbursement, and measure quality. Since its inception in 1999, there has been debate over the reliability and validity of the OASIS as a research tool and outcome measure. A systematic literature review of English-language articles identified 12 studies published in the last 10 years examining the validity and reliability of the OASIS. Empirical findings indicate the validity and reliability of the OASIS range from low to moderate but vary depending on the item studied. Limitations in the existing research include: nonrepresentative samples; inconsistencies in methods used, items tested, measurement, and statistical procedures; and the changes to the OASIS itself over time. The inconsistencies suggest that these results are tentative at best; additional research is needed to confirm the value of the OASIS for measuring patient outcomes, research, and quality improvement. PMID:23216513

  4. The impact of crowd noise on officiating in Muay Thai: achieving external validity in an experimental setting

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2012-09-01

    Full Text Available Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the ‘crowd noise’ intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring ‘home’ and ‘away’ boxers. In each bout, judges were randomised into a ‘noise’ (live sound) or ‘no crowd noise’ (noise-cancelling headphones and white noise) condition, resulting in 59 judgements in the ‘no crowd noise’ and 61 in the ‘crowd noise’ condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five-round bouts with the ‘ten point must’ scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  5. The impact of crowd noise on officiating in muay thai: achieving external validity in an experimental setting.

    Science.gov (United States)

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the "crowd noise" intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring "home" and "away" boxers. In each bout, judges were randomized into a "noise" (live sound) or "no crowd noise" (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the "no crowd noise" and 61 in the "crowd noise" condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the "10-point must" scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.
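
    A hedged sketch of a cross-classified model of the kind described above, with judges and bouts entered as crossed random effects via statsmodels variance components; the data are simulated and the model specification is illustrative rather than the authors' exact analysis.

```python
# Hedged sketch (simulated data): judges' point margins with crossed random
# effects for judge and bout, fitted as variance components in a single group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
judges, bouts = 17, 30
rows = []
for b in range(bouts):
    for j in rng.choice(judges, size=4, replace=False):   # a few judges score each bout
        noise = int(rng.integers(0, 2))                    # 1 = live crowd noise condition
        margin = 0.5 * noise + rng.normal(0, 1)            # home-minus-away points (fabricated)
        rows.append({"judge": j, "bout": b, "noise": noise, "margin": margin})
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "margin ~ noise", df,
    groups=np.ones(len(df)),                               # single group: effects are crossed
    vc_formula={"judge": "0 + C(judge)", "bout": "0 + C(bout)"},
)
print(model.fit().summary())
```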

  6. Moving faces, looking places: validation of the Amsterdam Dynamic Facial Expression Set (ADFES)

    NARCIS (Netherlands)

    van der Schalk, J.; Hawk, S.T.; Fischer, A.H.; Doosje, B.

    2011-01-01

    We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away

  7. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals.

    Science.gov (United States)

    Michałowski, Jarosław M; Droździel, Dawid; Matuszewski, Jacek; Koziejowski, Wojtek; Jednoróg, Katarzyna; Marchewka, Artur

    2017-08-01

    Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database: the Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants: small-animal (N = 34), blood/injection (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider, blood/injection, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.

  8. Older adult mistreatment risk screening: contribution to the validation of a screening tool in a domestic setting.

    Science.gov (United States)

    Lindenbach, Jeannette M; Larocque, Sylvie; Lavoie, Anne-Marise; Garceau, Marie-Luce

    2012-06-01

    The hidden nature of older adult mistreatment renders its detection in the domestic setting particularly challenging. A validated screening instrument that can provide a systematic assessment of risk factors can facilitate this detection. One such instrument, the "expanded Indicators of Abuse" tool, has been previously validated in the Hebrew language in a hospital setting. The present study has contributed to the validation of the "e-IOA" in an English-speaking community setting in Ontario, Canada. It consisted of two phases: (a) a content validity review and adaptation of the instrument by experts throughout Ontario, and (b) an inter-rater reliability assessment by home visiting nurses. The adaptation, the "Mistreatment of Older Adult Risk Factors" tool, offers a comprehensive tool for screening in the home setting. This instrument is significant to professional practice as practitioners working with older adults will be better equipped to assess for risk of mistreatment.

  9. Validation of the PHEEM instrument in a Danish hospital setting

    DEFF Research Database (Denmark)

    Aspegren, Knut; Bastholt, Lars; Bested, K.M.

    2007-01-01

    The Postgraduate Hospital Educational Environment Measure (PHEEM) has been translated into Danish and then validated with good internal consistency by 342 Danish junior and senior hospital doctors. Four of the 40 items are culturally dependent in the Danish hospital setting. Factor analysis demonstrated that seven items are interconnected. This information can be used to shorten the instrument by perhaps another three items...

  10. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and to validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction, only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
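
    To illustrate the ECOSAR-style approach and the predictivity figure quoted above, the sketch below fits a linear regression of log toxicity on log Kow for one hypothetical structural class and scores external chemicals; all numbers and the toxicity cut-off are fabricated for illustration.

```python
# Illustrative sketch (fabricated numbers, not ECOSAR's actual regressions):
# a class-specific QSAR as a linear regression of log toxicity on log Kow,
# with "predictivity" as the fraction of external chemicals classified correctly.
import numpy as np
from sklearn.linear_model import LinearRegression

log_kow_train = np.array([[1.2], [2.0], [2.8], [3.5], [4.1], [5.0]])
log_lc50_train = np.array([0.9, 0.4, -0.1, -0.7, -1.2, -1.9])   # training set for one class
qsar = LinearRegression().fit(log_kow_train, log_lc50_train)

log_kow_test = np.array([[1.5], [3.0], [4.5]])   # external validation chemicals
measured = np.array([0.7, -0.4, -1.6])
predicted = qsar.predict(log_kow_test)

threshold = 0.0                                  # hypothetical cut-off for "toxic"
predictivity = np.mean((predicted < threshold) == (measured < threshold))
print(f"predictivity = {predictivity:.0%}")
```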

  11. The development and validation of an interprofessional scale to assess teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Misawa, Takeshi; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2014-09-01

    Currently, no evaluative scale exists to assess the quality of interprofessional teamwork in mental health settings across the globe. As a result, little is known about the detailed process of team development within this setting. The purpose of this study is to develop and validate a global interprofessional scale that assesses teamwork in mental health settings using an international comparative study based in Japan and the United States. This report provides a description of this study and reports progress made to date. Specifically, it outlines work on literature reviews to identify evaluative teamwork tools as well as identify relevant teamwork models and theories. It also outlines plans for empirical work that will be undertaken in both Japan and the United States.

  12. Antibody Selection for Cancer Target Validation of FSH-Receptor in Immunohistochemical Settings

    Directory of Open Access Journals (Sweden)

    Nina Moeker

    2017-10-01

    Full Text Available Background: The follicle-stimulating hormone (FSH) receptor (FSHR) has been reported to be an attractive target for antibody therapy in human cancer. However, divergent immunohistochemical (IHC) findings have been reported for FSHR expression in tumor tissues, which could be due to the specificity of the antibodies used. Methods: Three frequently used antibodies (sc-7798, sc-13935, and FSHR323) were validated for their suitability in an immunohistochemical study of FSHR expression in different tissues. As quality control, two potential therapeutic anti-hFSHR Ylanthia® antibodies (Y010913, Y010916) were used. The specificity criteria for selection of antibodies were binding to native hFSHR of different sources and no binding to non-related proteins. The ability of the antibodies to stain paraffin-embedded Flp-In Chinese hamster ovary (CHO)/FSHR cells was tested after application of different epitope retrieval methods. Results: Of the five tested anti-hFSHR antibodies, only Y010913, Y010916, and FSHR323 showed specific binding to native, cell-presented hFSHR. Since Ylanthia® antibodies were selected to specifically recognize native FSHR, as required for a potential therapeutic antibody candidate, FSHR323 was the only antibody to detect the receptor in IHC/histochemical settings on transfected cells, and at markedly lower, physiological concentrations (e.g., in Sertoli cells of human testes). The pattern of FSHR323 staining observed for ovarian, prostatic, and renal adenocarcinomas indicated that FSHR was expressed mainly in the peripheral tumor blood vessels. Conclusion: Of all published IHC antibodies tested, only antibody FSHR323 proved suitable for target validation of hFSHR in an IHC setting for cancer. Our studies could not confirm the previously reported FSHR overexpression in ovarian and prostate cancer cells. Instead, specific overexpression in peripheral tumor blood vessels could be confirmed after thorough validation of the antibodies used.

  13. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
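
    The unit-test style of validation described above can be illustrated as follows; the photometry function, its signature and the numbers are hypothetical stand-ins, not the actual EXOSIMS API.

```python
# Hedged sketch: a deterministic physics calculation checked against a
# hand-computed value, in the spirit of the validation plan described above.
import unittest

def photon_count_rate(flux_ratio, star_photon_rate, throughput):
    """Planet photon count rate at the detector (photons/s); illustrative only."""
    return flux_ratio * star_photon_rate * throughput

class TestPhotometry(unittest.TestCase):
    def test_known_planet_at_quadrature(self):
        # An RV planet with a known flux ratio, placed at quadrature so the
        # expected answer can be computed by hand.
        rate = photon_count_rate(flux_ratio=1e-9, star_photon_rate=1e10, throughput=0.3)
        self.assertAlmostEqual(rate, 3.0, places=6)

if __name__ == "__main__":
    unittest.main()
```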

  14. Evaluation of the separate effects tests (SET) validation matrix

    International Nuclear Information System (INIS)

    1996-11-01

    This work is the result of a one-year extended mandate given by the CSNI at the request of PWG 2 and the Task Group on Thermal Hydraulic System Behaviour (TG THSB) in late 1994. The aim was to evaluate the SET validation matrix in order to define the real needs for further experimental work. The statistical evaluation tables of the SET matrix provide an overview of the data base, including the parameter ranges covered for each phenomenon and selected parameters, and the questions posed to obtain answers concerning the need for additional experimental data with regard to the objective of nuclear power plant safety. A global view of the data base is first presented, focussing on areas lacking in data and on hot topics. A new systematic evaluation has been carried out based on the authors' technical judgments, resulting in evaluation tables. In these tables, global and indicative information is included. Four main parameters have been chosen as the most important and relevant: a state parameter given by the operating pressure of the tests; a flow parameter expressed as mass flux, mass flow rate or volumetric flow rate; a geometrical parameter provided through a typical dimension expressed by a diameter, an equivalent diameter (hydraulic or heated) or a cross-sectional area of the test sections; and an energy or heat transfer parameter given as the fluid temperature, the heat flux or the heat transfer surface temperature of the tests.

  15. Setting Priorities Personal Values, Organizational Results

    CERN Document Server

    (CCL), Center for Creative Leadership

    2011-01-01

    To be a successful leader, you need to get results. To get results, you need to set priorities. This book can help you do a better job of setting priorities, recognizing the personal values that motivate your decision making, the probable trade-offs and consequences of your decisions, and the importance of aligning your priorities with your organization's expectations. In this way you can successfully meet organizational objectives and consistently produce results.

  16. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV: A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Directory of Open Access Journals (Sweden)

    Tanja S H Wingenbach

    Full Text Available Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the

  17. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Science.gov (United States)

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
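
    The unbiased hit rate (Hu) reported above corrects raw accuracy for response bias. A minimal sketch with a hypothetical confusion matrix:

```python
# Minimal sketch (hypothetical confusion matrix) of Wagner's unbiased hit rate
# (Hu): for each emotion, the squared count of correct responses divided by the
# product of the stimulus total (row sum) and the response total (column sum).
import numpy as np

emotions = ["anger", "happiness", "fear"]
# rows = presented emotion, columns = participant's response
confusion = np.array([
    [18,  1,  5],
    [ 2, 22,  0],
    [ 6,  1, 17],
])

correct = np.diag(confusion).astype(float)
hu = correct**2 / (confusion.sum(axis=1) * confusion.sum(axis=0))
for emo, h in zip(emotions, hu):
    print(f"{emo}: Hu = {h:.2f}")
```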

  18. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  19. Development of a tool to measure person-centered maternity care in developing settings: validation in a rural and urban Kenyan population.

    Science.gov (United States)

    Afulani, Patience A; Diamond-Smith, Nadia; Golub, Ginger; Sudhinaraset, May

    2017-09-22

    Person-centered reproductive health care is recognized as critical to improving reproductive health outcomes. Yet, little research exists on how to operationalize it. We extend the literature in this area by developing and validating a tool to measure person-centered maternity care. We describe the process of developing the tool and present the results of psychometric analyses to assess its validity and reliability in a rural and an urban setting in Kenya. We followed standard procedures for scale development. First, we reviewed the literature to define our construct and identify domains, and developed items to measure each domain. Next, we conducted expert reviews to assess content validity, and cognitive interviews with potential respondents to assess clarity, appropriateness, and relevance of the questions. The questions were then refined and administered in surveys, and the survey results were used to assess construct and criterion validity and reliability. The exploratory factor analysis yielded one dominant factor in both the rural and urban settings. Three factors with eigenvalues greater than one were identified for the rural sample and four factors for the urban sample. Thirty of the 38 items administered in the survey were retained based on the factor loadings and correlations between the items. Twenty-five items load very well onto a single factor in both the rural and urban samples, with five items loading well in either the rural or the urban sample, but not in both. These 30 items also load on three sub-scales that we created to measure dignified and respectful care, communication and autonomy, and supportive care. The Cronbach's alpha for the main scale is greater than 0.8 in both samples, and those for the sub-scales are between 0.6 and 0.8. The main scale and sub-scales are correlated with global measures of satisfaction with maternity services, suggesting criterion validity. We present a 30-item scale with three sub-scales to measure person
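
    A short sketch of the exploratory-factor-analysis step described above (the eigenvalue-greater-than-one rule plus factor loadings), using simulated single-factor item responses rather than the study data.

```python
# Illustrative sketch (simulated responses): Kaiser's eigenvalue > 1 rule and
# factor loadings for a 30-item scale with one dominant latent factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_items, n_resp = 30, 250
latent = rng.normal(size=(n_resp, 1))                        # one dominant latent factor (assumed)
items = 0.7 * latent + 0.5 * rng.normal(size=(n_resp, n_items))

eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("factors with eigenvalue > 1:", int(np.sum(eigenvalues > 1)))

fa = FactorAnalysis(n_components=1).fit(items)
print("items loading above 0.4 on factor 1:", int(np.sum(np.abs(fa.components_[0]) > 0.4)))
```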

  20. The Child Affective Facial Expression (CAFE Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  1. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools has been launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  2. Community Priority Index: utility, applicability and validation for priority setting in community-based participatory research

    Directory of Open Access Journals (Sweden)

    Hamisu M. Salihu

    2015-07-01

    Full Text Available Background. Providing practitioners with an intuitive measure for priority setting that can be combined with diverse data collection methods is a necessary step to foster accountability of the decision-making process in community settings. Yet, there is a lack of easy-to-use but methodologically robust measures that can be feasibly implemented for reliable decision-making in community settings. To address this important gap in community-based participatory research (CBPR), the purpose of this study was to demonstrate the utility, applicability, and validation of a community priority index in a community-based participatory research setting. Design and Methods. Mixed-method study that combined focus group findings, nominal group technique with six key informants, and the generation of a Community Priority Index (CPI) that integrated community importance, changeability, and target populations. Bootstrapping and simulation were performed for validation. Results. For pregnant mothers, the top three highly important and highly changeable priorities were: stress (CPI=0.85; 95%CI: 0.70, 1.00), lack of affection (CPI=0.87; 95%CI: 0.69, 1.00), and nutritional issues (CPI=0.78; 95%CI: 0.48, 1.00). For non-pregnant women, top priorities were: low health literacy (CPI=0.87; 95%CI: 0.69, 1.00), low educational attainment (CPI=0.78; 95%CI: 0.48, 1.00), and lack of self-esteem (CPI=0.72; 95%CI: 0.44, 1.00). For children and adolescents, the top three priorities were: obesity (CPI=0.88; 95%CI: 0.69, 1.00), low self-esteem (CPI=0.81; 95%CI: 0.69, 0.94), and negative attitudes toward education (CPI=0.75; 95%CI: 0.50, 0.94). Conclusions. This study demonstrates the applicability of the CPI as a simple and intuitive measure for priority setting in CBPR.
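
    The abstract reports bootstrapped 95% confidence intervals around each CPI value. A minimal sketch of a percentile bootstrap is shown below; the per-informant ratings and the way they are combined into an index are hypothetical placeholders, not the study's actual CPI formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-informant ratings on a 0-1 scale (importance, changeability);
# the real CPI formula is not reproduced here - this only illustrates a
# percentile bootstrap around an index averaged over informants.
importance    = np.array([0.9, 0.8, 1.0, 0.7, 0.9, 0.8])
changeability = np.array([0.8, 0.9, 0.7, 0.8, 1.0, 0.9])
index_per_rater = importance * changeability

def bootstrap_ci(x, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of x."""
    stats = [rng.choice(x, size=len(x), replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return x.mean(), lo, hi

est, lo, hi = bootstrap_ci(index_per_rater)
print(f"index={est:.2f}  95% CI: ({lo:.2f}, {hi:.2f})")
```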

  3. BESST (Bochum Emotional Stimulus Set)--a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views.

    Science.gov (United States)

    Thoma, Patrizia; Soria Bauser, Denise; Suchan, Boris

    2013-08-30

    This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions, as corresponds to common experimental settings. The average naturalness ratings for the stimuli range between medium and high. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions

    Science.gov (United States)

    Wingenbach, Tanja S. H.

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author

  5. Validation Results for LEWICE 3.0

    Science.gov (United States)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, and interfaces to computational fluid dynamics (CFD) flow solvers, and has an empirical model for the supercooled large droplet (SLD) regime. An extensive comparison of the results in a quantifiable manner against the database of ice shapes and collection efficiency that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper will show the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report will first describe the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first-order physical effects of large droplet phenomena into icing software. Comparisons are then made to every single-element two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing with 21-bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predictions are within the accuracy limits of the experimental data for the majority of cases.

  6. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    Science.gov (United States)

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
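
    The effect sizes above are Cohen's d values. A minimal sketch of how such an effect size is computed from two groups of validity-scale scores follows; the scores are invented for illustration.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical validity-scale scores for an over-reporting group vs. a comparison group
over_report = [88, 95, 102, 110, 91, 99]
comparison  = [61, 70, 66, 72, 58, 69]
print(round(cohens_d(over_report, comparison), 2))
```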

  7. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    Science.gov (United States)

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

    Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category-level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
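
    The SRT procedure above adapts the SNR trial by trial according to response correctness. The sketch below illustrates the general idea with a simple 1-up/1-down track converging on the 50%-correct point; the simulated listener, step size, and stopping rule are assumptions, not the CMS test's actual adaptive rule.

```python
import random
random.seed(1)

def respond(snr_db, true_srt=-8.0, slope=1.0):
    """Hypothetical listener: probability of a correct whole-sentence response
    rises with SNR around a true SRT (logistic psychometric function)."""
    p = 1.0 / (1.0 + 10 ** (-slope * (snr_db - true_srt)))
    return random.random() < p

def adaptive_srt(start_snr=0.0, step=2.0, n_trials=20):
    """1-up/1-down track: lower the SNR after a correct response, raise it after
    an error; this converges on the 50%-correct point. The SRT is estimated
    here as the mean SNR over the final trials."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = respond(snr)
        track.append(snr)
        snr += -step if correct else step
    return sum(track[-10:]) / 10

print(f"estimated SRT ~ {adaptive_srt():.1f} dB SNR")
```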

  8. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  9. Validation of the Essentials of Magnetism II in Chinese critical care settings.

    Science.gov (United States)

    Bai, Jinbing; Hsu, Lily; Zhang, Qing

    2015-05-01

    To translate and evaluate the psychometric properties of the Essentials of Magnetism II tool (EOM II) for Chinese nurses in critical care settings. The EOM II is a reliable and valid scale for measuring the healthy work environment (HWE) for nurses in Western countries, however, it has not been validated among Chinese nurses. The translation of the EOM II followed internationally recognized guidelines. The Chinese version of the Essentials of Magnetism II tool (C-EOM II) was reviewed by an expert panel for cultural and semantic equivalence and content validity. Then, 706 nurses from 28 intensive care units (ICUs) affiliated with 14 tertiary hospitals participated in this study. The reliability of the C-EOM II was assessed using the Cronbach's alpha coefficient; the content validity of this scale was assessed using the content validity index (CVI); and the construct validity was assessed using the confirmatory factor analysis (CFA). The C-EOM II showed excellent content validity with a CVI of 0·92. All the subscales of the C-EOM II were significantly correlated with overall nurse job satisfaction and nurse-assessed quality of care. The CFA showed that the C-EOM II was composed of 45 items with nine factors, accounting for 46·51% of the total variance. Cronbach's alpha coefficients for these factors ranged from 0·56 to 0·89. The C-EOM II is a promising scale to assess the HWE for Chinese ICU nurses. Nursing administrators and health care policy-makers can use the C-EOM II to evaluate clinical work environment so that a healthier work environment can be created and sustained for staff nurses. © 2013 British Association of Critical Care Nurses.

  10. Basic Laparoscopic Skills Assessment Study: Validation and Standard Setting among Canadian Urology Trainees.

    Science.gov (United States)

    Lee, Jason Y; Andonian, Sero; Pace, Kenneth T; Grober, Ethan

    2017-06-01

    As urology training programs move to a competency-based medical education model, iterative assessments with objective standards will be required. To develop a valid set of technical skills standards we initiated a national skills assessment study focusing initially on laparoscopic skills. Between February 2014 and March 2016 the basic laparoscopic skill of Canadian urology trainees and attending urologists was assessed using 4 standardized tasks from the AUA (American Urological Association) BLUS (Basic Laparoscopic Urological Surgery) curriculum, including peg transfer, pattern cutting, suturing and knot tying, and vascular clip applying. All performances were video recorded and assessed using 3 methods, including time- and error-based scoring, expert global rating scores and C-SATS (Crowd-Sourced Assessments of Technical Skill Global Rating Scale), a novel, crowd-sourced assessment platform. Different methods of standard setting were used to develop pass-fail cut points. Six attending urologists and 99 trainees completed testing. Reported laparoscopic experience and training level correlated with performance. Several standard setting methods were used to define pass-fail cut points for all 4 AUA BLUS tasks. The 4 AUA BLUS tasks demonstrated good construct validity evidence for use in assessing basic laparoscopic skill. Performance scores using the novel C-SATS platform correlated well with traditional time-consuming methods of assessment. Various standard setting methods were used to develop pass-fail cut points for educators to use when making formative and summative assessments of basic laparoscopic skill. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  11. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those for an intended application, with each experiment testing only part of the physics relevant for the application. The purpose of the present work is to develop a methodology to roll up validation results to an application, and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.
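
    The roll-up idea above can be illustrated with a toy surrogate: simulation runs provide joint predictions of the experiment responses and the application quantity of interest, a meta-model is fitted between them, and observed experiment results are pushed through it. The functions and numbers below are placeholders, not the report's actual meta-model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the simulation models: each parameter sample (row of theta)
# yields predictions for two validation experiments and for the application QoI.
def simulate(theta):
    y_exp1 = 2.0 * theta[:, 0] + 0.5 * theta[:, 1]
    y_exp2 = 1.0 * theta[:, 0] + 1.5 * theta[:, 1]
    y_app  = 3.0 * theta[:, 0] + 1.0 * theta[:, 1]
    return np.column_stack([y_exp1, y_exp2]), y_app

theta = rng.normal(size=(50, 2))             # sampled model parameters
Y_exp, y_app = simulate(theta)

# Linear meta-model: application QoI as a function of the experiment responses.
A = np.column_stack([np.ones(len(Y_exp)), Y_exp])
coef, *_ = np.linalg.lstsq(A, y_app, rcond=None)

# Roll-up: push hypothetical measured experiment results through the meta-model
# to obtain a prediction for the application response quantity.
observed_exp = np.array([1.0, 0.4])
rolled_up = coef[0] + coef[1:] @ observed_exp
print(f"rolled-up application prediction: {rolled_up:.2f}")
```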

  12. Set-based Tasks within the Singularity-robust Multiple Task-priority Inverse Kinematics Framework: General Formulation, Stability Analysis and Experimental Results

    Directory of Open Access Journals (Sweden)

    Signe eMoe

    2016-04-01

    Full Text Available Inverse kinematics algorithms are commonly used in robotic systems to transform tasks to joint references, and several methods exist to ensure the achievement of several tasks simultaneously. The multiple task-priority inverse kinematics framework allows tasks to be considered in a prioritized order by projecting task velocities through the nullspaces of higher-priority tasks. This paper extends this framework to handle set-based tasks, i.e. tasks with a range of valid values, in addition to equality tasks, which have a specific desired value. Examples of set-based tasks are joint limit and obstacle avoidance. The proposed method is proven to ensure asymptotic convergence of the equality task errors and the satisfaction of all high-priority set-based tasks. The practical implementation of the proposed algorithm is discussed, and experimental results are presented where a number of both set-based and equality tasks have been implemented on a 6 degree of freedom UR5 which is an industrial robotic arm from Universal Robots. The experiments validate the theoretical results and confirm the effectiveness of the proposed approach.
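
    A minimal numpy sketch of the prioritized (nullspace-projection) inverse kinematics described above, using a damped least-squares pseudoinverse for singularity robustness; the Jacobians and task velocities are arbitrary placeholders, and the paper's set-based task activation logic is not reproduced.

```python
import numpy as np

def dls_pinv(J, damping=0.01):
    """Damped least-squares (singularity-robust) pseudoinverse of a task Jacobian."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + (damping ** 2) * np.eye(JJt.shape[0]))

def prioritized_velocities(J1, v1, J2, v2, damping=0.01):
    """Joint velocities that achieve task 1 as well as possible and task 2 only
    in the nullspace of task 1 (standard two-level task priority)."""
    J1_pinv = dls_pinv(J1, damping)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # nullspace projector of task 1
    return J1_pinv @ v1 + N1 @ dls_pinv(J2 @ N1, damping) @ (v2 - J2 @ J1_pinv @ v1)

# Hypothetical two-task example for a 4-DOF arm (Jacobians chosen arbitrarily)
rng = np.random.default_rng(0)
J1, J2 = rng.normal(size=(2, 4)), rng.normal(size=(1, 4))
v1, v2 = np.array([0.1, -0.05]), np.array([0.02])
print(prioritized_velocities(J1, v1, J2, v2))
```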

  13. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    Science.gov (United States)

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu ), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models between the CarerQol's seven dimensions (CarerQol-7D) and caregiver's level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooling CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  14. 42 CFR 476.84 - Changes as a result of DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Changes as a result of DRG validation. 476.84... § 476.84 Changes as a result of DRG validation. A provider or practitioner may obtain a review by a QIO... in DRG assignment as a result of QIO validation activities. ...

  15. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    Science.gov (United States)

    Weigl, Martin; Wild, Heike

    2017-09-15

    osteoarthritis. The differences in results between this European validation study and a previous Singaporean validation study underscore the need to validate the International Classification of Functioning, Disability and Health Core Sets in different regions of the world.

  16. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    Science.gov (United States)

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is hardly integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal, the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable in monitoring food intake in a clinical setting, which enables a quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resultant reduction in costs.

  17. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    International Nuclear Information System (INIS)

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, requirements for measurement processes impose complete control and mastery of the operating process and of the associated quality management system. The aim of this (collective) book is to guide the analyst throughout the measurement validation procedure and to help guarantee mastery of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measuring, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability, and data covariance (with the practical determination of the accuracy profile). The most recent terminology is used in the book, and numerous examples and illustrations are given to aid understanding and to help with the elaboration of method validation documents.

  18. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery, and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.

  19. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Directory of Open Access Journals (Sweden)

    Misganaw Awoke

    2012-08-01

    Full Text Available Abstract Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of verbal autopsy final diagnosis with hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Over the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 occurred in one of the Addis Ababa hospitals. Then, verbal autopsy and hospital data sets were merged using the variables: full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence intervals. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive values of verbal autopsy diagnosis were 79%, 78% and 68% respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well-structured verbal autopsy method, followed by qualified physician reviews, could be capable of
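
    The sensitivity, specificity, and positive predictive values above come from a 2x2 comparison of verbal-autopsy diagnoses against hospital diagnoses. A minimal sketch of those calculations with normal-approximation 95% confidence intervals follows; the cell counts are hypothetical, chosen only to roughly echo the communicable-disease figures reported above.

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical 2x2 counts: verbal-autopsy diagnosis vs. hospital "gold standard"
tp, fp, fn, tn = 79, 37, 21, 131

for name, num, den in [("sensitivity", tp, tp + fn),
                       ("specificity", tn, tn + fp),
                       ("PPV",         tp, tp + fp)]:
    p, lo, hi = proportion_ci(num, den)
    print(f"{name}: {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```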

  20. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Science.gov (United States)

    2012-01-01

    Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of verbal autopsy final diagnosis with hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Over the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 occurred in one of the Addis Ababa hospitals. Then, verbal autopsy and hospital data sets were merged using the variables: full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence intervals. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive values of verbal autopsy diagnosis were 79%, 78% and 68% respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well-structured verbal autopsy method, followed by qualified physician reviews, could be capable of providing reasonable cause

  1. Urine specimen validity test for drug abuse testing in workplace and court settings.

    Science.gov (United States)

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exists concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 submitted urine drug test samples for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings were 1.09% and 3.81%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  2. The use of questionnaires in colour research in real-life settings : In search of validity and methodological pitfalls

    NARCIS (Netherlands)

    Bakker, I.C.; van der Voordt, Theo; Vink, P.; de Boon, J

    2014-01-01

    This research discusses the validity of applying questionnaires in colour research in real life settings.
    In the literature the conclusions concerning the influences of colours on human performance and well-being are often conflicting. This can be caused by the artificial setting of the test

  3. Validation and results of a questionnaire for functional bowel disease in out-patients

    Directory of Open Access Journals (Sweden)

    Skordilis Panagiotis

    2002-05-01

    Full Text Available Abstract Background The aim was to evaluate and validate a bowel disease questionnaire in patients attending an out-patient gastroenterology clinic in Greece. Methods This was a prospective study. Diagnosis was based on detailed clinical and laboratory evaluation. The questionnaire was tested on a pilot group of patients. An interviewer-administration technique was used. One hundred and forty consecutive patients attending the out-patient clinic for the first time and fifty healthy controls selected randomly participated in the study. Reliability (kappa statistics) and validity of the questionnaire were tested. We used logistic regression models and binary recursive partitioning for assessing distinguishing ability among irritable bowel syndrome (IBS), functional dyspepsia and organic disease patients. Results Mean time for questionnaire completion was 18 min. In the test-retest procedure a good agreement was obtained (kappa statistic 0.82). There were 55 patients diagnosed as having IBS, 18 with functional dyspepsia (Rome I criteria), and 38 with organic disease. Location of pain was a significant distinguishing factor, patients with functional dyspepsia having no lower abdominal pain. Conclusions This questionnaire for functional bowel disease is a valid and reliable instrument that can distinguish satisfactorily between organic and functional disease in an out-patient setting.
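
    Test-retest agreement in the abstract is summarized with a kappa statistic. The following sketch computes Cohen's kappa for one categorical item administered twice; the answers are invented for illustration.

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Cohen's kappa: agreement between two administrations of a categorical item,
    corrected for the agreement expected by chance."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical test-retest answers ("yes"/"no") for one questionnaire item
test   = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
retest = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no",  "yes"]
print(round(cohen_kappa(test, retest), 2))
```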

  4. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate evaporation (E) and transpiration (T). The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single soil-profile column with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit scheme is -0.03 mm/day. The paired T-test of mean difference (p-Value = 0.042 and t-Value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.

  5. Groebner basis, resultants and the generalized Mandelbrot set

    Energy Technology Data Exchange (ETDEWEB)

    Geum, Young Hee [Centre of Research for Computational Sciences and Informatics in Biology, Bioindustry, Environment, Agriculture and Healthcare, University of Malaya, 50603 Kuala Lumpur (Malaysia)], E-mail: conpana@empal.com; Hare, Kevin G. [Department of Pure Mathematics, University of Waterloo, Waterloo, Ont., N2L 3G1 (Canada)], E-mail: kghare@math.uwaterloo.ca

    2009-10-30

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.

  6. Groebner basis, resultants and the generalized Mandelbrot set

    International Nuclear Information System (INIS)

    Geum, Young Hee; Hare, Kevin G.

    2009-01-01

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.
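
    The two records above describe using resultants to locate bifurcation points of the quadratic map z -> z**2 + c. The small sympy sketch below illustrates only the general elimination step (requiring a period-n cycle whose multiplier equals 1 and eliminating z with a resultant); it does not reproduce the papers' derivations for the generalized Mandelbrot set.

```python
from sympy import symbols, resultant, solve

z, c = symbols("z c")

def iterate(expr, n):
    """n-fold composition of the quadratic map f_c(z) = z**2 + c."""
    for _ in range(n):
        expr = expr**2 + c
    return expr

for n in (1, 2):
    fn = iterate(z, n)
    cycle_eq = fn - z              # period-n points (including divisor periods)
    multiplier = fn.diff(z) - 1    # multiplier of the cycle equal to 1
    poly_in_c = resultant(cycle_eq, multiplier, z)   # eliminate z
    print(f"period {n}: candidate bifurcation points c =", solve(poly_in_c, c))
```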

  7. Results from the First Validation Phase of CAP code

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The second stage of Safety Analysis Code Development for Nuclear Power Plants was launched in April 2010 and is scheduled to run through 2012; its scope of work covers code validation through licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation work is organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests by small, medium, and large facilities and International Standard Problems; and 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting the experimental data related to containment phenomena and then constructing the database is one of the major tasks during the second stage of this project. The validation process for fundamental phenomena is expected to reveal both the current capability and the future improvements of the CAP code. For this purpose, simple but significant problems, which have exact analytical solutions, were selected and calculated for validation of fundamental phenomena. In this paper, some results of validation problems for the selected fundamental phenomena are summarized and discussed briefly.

  8. Validation of the Comprehensive ICF Core Set for obstructive pulmonary diseases from the perspective of physiotherapists.

    Science.gov (United States)

    Rauch, Alexandra; Kirchberger, Inge; Stucki, Gerold; Cieza, Alarcos

    2009-12-01

    The 'Comprehensive ICF Core Set for obstructive pulmonary diseases' (OPD) is an application of the International Classification of Functioning, Disability and Health (ICF) and represents the typical spectrum of problems in functioning of patients with OPD. To optimize a multidisciplinary and patient-oriented approach in pulmonary rehabilitation, in which physiotherapy plays an important role, the ICF offers a standardized language and understanding of functioning. For it to be a useful tool for physiotherapists in rehabilitation of patients with OPD, the objective of this study was to validate this Comprehensive ICF Core Set for OPD from the perspective of physiotherapists. A three-round survey based on the Delphi technique of physiotherapists who are experienced in the treatment of OPD asked about the problems, resources and aspects of environment of patients with OPD that physiotherapists treat in clinical practice (physiotherapy intervention categories). Responses were linked to the ICF and compared with the existing Comprehensive ICF Core Set for OPD. Fifty-one physiotherapists from 18 countries named 904 single terms that were linked to 124 ICF categories, 9 personal factors and 16 'not classified' concepts. The identified ICF categories were mainly third-level categories compared with mainly second-level categories of the Comprehensive ICF Core Set for OPD. Seventy of the ICF categories, all personal factors and 15 'not classified' concepts gained more than 75% agreement among the physiotherapists. Of these ICF categories, 55 (78.5%) were covered by the Comprehensive ICF Core Set for OPD. The validity of the Comprehensive ICF Core Set for OPD was largely supported by the physiotherapists. Nevertheless, ICF categories that were not covered, personal factors and not classified terms offer opportunities towards the final ICF Core Set for OPD and further research to strengthen physiotherapists' perspective in pulmonary rehabilitation.

  9. Validation of Fall Risk Assessment Specific to the Inpatient Rehabilitation Facility Setting.

    Science.gov (United States)

    Thomas, Dan; Pavic, Andrea; Bisaccia, Erin; Grotts, Jonathan

    2016-09-01

    To evaluate and compare the Morse Fall Scale (MFS) and the Casa Colina Fall Risk Assessment Scale (CCFRAS) for identification of patients at risk for falling in an acute inpatient rehabilitation facility. The primary objective of this study was to perform a retrospective validation study of the CCFRAS, specifically for use in the inpatient rehabilitation facility (IRF) setting. Retrospective validation study. The study was approved under expedited review by the local Institutional Review Board. Data were collected on all patients admitted to Cottage Rehabilitation Hospital (CRH), a 38-bed acute inpatient rehabilitation hospital, from March 2012 to August 2013. Patients were excluded from the study if they had a length of stay of less than 3 days or age less than 18. The area under the receiver operating characteristic curve (AUC) and the diagnostic odds ratio were used to examine the differences between the MFS and CCFRAS. AUC between fall scales was compared using the DeLong Test. There were 931 patients included in the study with 62 (6.7%) patient falls. The average age of the population was 68.8 years, with 503 males (51.2%). The AUC was 0.595 and 0.713 for the MFS and CCFRAS, respectively (0.006). The diagnostic odds ratio of the MFS was 2.0 and 3.6 for the CCFRAS using the recommended cutoffs of 45 for the MFS and 80 for the CCFRAS. The CCFRAS appears to be a better tool in detecting fallers vs. nonfallers specific to the IRF setting. The assessment and identification of patients at high risk for falling is important to implement specific precautions and care for these patients to reduce their risk of falling. The CCFRAS is more clinically relevant in identifying patients at high risk for falling in the IRF setting compared to other fall risk assessments. Implementation of this scale may lead to a reduction in fall rate and injuries from falls as it more appropriately identifies patients at high risk for falling. © 2015 Association of Rehabilitation Nurses.
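
    The record above compares fall-risk scales using the area under the ROC curve and the diagnostic odds ratio at a recommended cutoff. A minimal sketch of both calculations follows; the scores, outcomes, and cutoff are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical fall-risk scores and outcomes (1 = fell during the stay)
scores = np.array([30, 55, 80, 95, 45, 85, 20, 90, 60, 100])
fell   = np.array([ 0,  0,  1,  1,  0,  1,  0,  0,  1,   1])

print("AUC:", round(roc_auc_score(fell, scores), 2))

def diagnostic_odds_ratio(score, outcome, cutoff):
    """DOR = (TP/FN) / (FP/TN) at a given high-risk cutoff."""
    flagged = score >= cutoff
    tp = int(np.sum(flagged & (outcome == 1)))
    fn = int(np.sum(~flagged & (outcome == 1)))
    fp = int(np.sum(flagged & (outcome == 0)))
    tn = int(np.sum(~flagged & (outcome == 0)))
    return (tp / fn) / (fp / tn)

print("DOR at cutoff 80:", round(diagnostic_odds_ratio(scores, fell, 80), 1))
```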

  10. Handle with Care! an Exploration of the Potential Risks Associated with the Publication and Summative Usage of Student Evaluation of Teaching (SET) Results

    Science.gov (United States)

    Jones, Joanna; Gaffney-Rhys, Ruth; Jones, Edward

    2014-01-01

    This article presents a synthesis of previous ideas relating to student evaluation of teaching (SET) results in higher education institutions (HEIs), with particular focus upon possible validity issues and matters that HEI decision-makers should consider prior to interpreting survey results and using them summatively. Furthermore, the research…

  11. The validity of visual acuity assessment using mobile technology devices in the primary care setting.

    Science.gov (United States)

    O'Neill, Samuel; McAndrew, Darryl J

    2016-04-01

    The assessment of visual acuity is indicated in a number of clinical circumstances. It is commonly conducted through the use of a Snellen wall chart. Mobile technology developments and adoption rates by clinicians may potentially provide more convenient methods of assessing visual acuity. Limited data exist on the validity of these devices and applications. The objective of this study was to evaluate the assessment of distance visual acuity using mobile technology devices against the commonly used 3-metre Snellen chart in a primary care setting. A prospective quantitative comparative study was conducted at a regional medical practice. The visual acuity of 60 participants was assessed on a Snellen wall chart and two mobile technology devices (iPhone, iPad). Visual acuity intervals were converted to logarithm of minimum angle of resolution (logMAR) scores and subjected to intraclass correlation coefficient (ICC) assessment. The results show a high level of general agreement between testing modalities (ICC 0.917 with a 95% confidence interval of 0.887-0.940). The high level of agreement of visual acuity results between the Snellen wall chart and both mobile technology devices suggests that clinicians can use this technology with confidence in the primary care setting.

  12. Validation of a global scale to assess the quality of interprofessional teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2017-12-01

    Few scales currently exist to assess the quality of interprofessional teamwork through team members' perceptions of working together in mental health settings. The purpose of this study was to revise and validate an interprofessional scale to assess the quality of teamwork in inpatient psychiatric units and to use it multi-nationally. A literature review was undertaken to identify evaluative teamwork tools and develop an additional 12 items to ensure a broad global focus. Focus group discussions considered adaptation to different care systems using subjective judgements from 11 participants in a pre-test of items. Data quality, construct validity, reproducibility, and internal consistency were investigated in the survey using an international comparative design. Exploratory factor analysis yielded five factors with 21 items: 'patient/community centred care', 'collaborative communication', 'interprofessional conflict', 'role clarification', and 'environment'. High overall internal consistency, reproducibility, adequate face validity, and reasonable construct validity were shown in the USA and Japan. The revised Collaborative Practice Assessment Tool (CPAT) is a valid measure to assess the quality of interprofessional teamwork in psychiatry and identifies the best strategies to improve team performance. Furthermore, the revised scale will generate more rigorous evidence for collaborative practice in psychiatry internationally.

  13. Validity and predictive ability of the juvenile arthritis disease activity score based on CRP versus ESR in a Nordic population-based setting

    DEFF Research Database (Denmark)

    Nordal, E B; Zak, M; Aalto, K

    2012-01-01

    To compare the juvenile arthritis disease activity score (JADAS) based on C reactive protein (CRP) (JADAS-CRP) with JADAS based on erythrocyte sedimentation rate (ESR) (JADAS-ESR) and to validate JADAS in a population-based setting.

  14. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there is also considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than they resemble the chosen reference data. In aggregate, the simulations of land-surface latent and

  15. Validation of KENO, ANISN and Hansen-Roach cross-section set on plutonium oxide and metal fuel system

    International Nuclear Information System (INIS)

    Matsumoto, Tadakuni; Yumoto, Ryozo; Nakano, Koh.

    1980-01-01

    In the previous report, the authors discussed the validity of KENO, ANISN and the Hansen-Roach 16-group cross-section set on the critical plutonium nitrate solution systems with various geometries, absorbers and neutron interactions. The purpose of the present report is to examine the validity of the same calculation systems on the homogeneous plutonium oxide and plutonium-uranium mixed oxide fuels with various density values. Eleven experiments adopted for validation are summarized. The first six experiments were performed at Pacific Northwest Laboratory of Battelle Memorial Institute, and the remaining five at Los Alamos Scientific Laboratory. The characteristics of the core fuel are given, and the isotopic composition of plutonium, the relation between the H/(Pu + U) atomic ratio and fuel density as compared with the atomic ratios of PuO2 and mixed oxides in powder storage and pellet fabrication processes, and critical core dimensions and reflector conditions are shown. The effective multiplication factors were calculated with the KENO code. In the case of the metal fuels with simple sphere geometry, additional calculations with the ANISN code were performed. The criticality calculation system composed of KENO, ANISN and the Hansen-Roach cross-section set was found to be valid for calculating the criticality on plutonium oxide, plutonium-uranium mixed oxide, plutonium metal and uranium metal fuel systems as well as on plutonium solution systems with various geometries, absorbers and neutron interactions. Some problems seem to remain in the method for evaluating experimental corrections. Some discussions follow. (Wakatsuki, Y.)

  16. Norming the odd: creation, norming, and validation of a stimulus set for the study of incongruities across music and language.

    Science.gov (United States)

    Featherstone, Cara R; Waterman, Mitch G; Morrison, Catriona M

    2012-03-01

    Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.

  17. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    Science.gov (United States)

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
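
    Agreement in the study above is assessed with a weighted kappa, which credits partial agreement between nearby ordinal waste categories. A minimal sketch using scikit-learn follows; the tray ratings are invented.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical quarter-waste categories (0 = none wasted ... 4 = all wasted)
# for the same trays rated by two observers.
rater_a = [0, 1, 2, 4, 3, 0, 2, 1, 4, 3, 2, 0]
rater_b = [0, 1, 2, 3, 3, 0, 1, 1, 4, 3, 2, 1]

# Linear weights penalise disagreements in proportion to how far apart the
# ordinal categories are, as in a weighted kappa analysis.
print(round(cohen_kappa_score(rater_a, rater_b, weights="linear"), 2))
```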

  18. Predicting death from kala-azar: construction, development, and validation of a score set and accompanying software.

    Science.gov (United States)

    Costa, Dorcas Lamounier; Rocha, Regina Lunardi; Chaves, Eldo de Brito Ferreira; Batista, Vivianny Gonçalves de Vasconcelos; Costa, Henrique Lamounier; Costa, Carlos Henrique Nery

    2016-01-01

    Early identification of patients at higher risk of progressing to severe disease and death is crucial for implementing therapeutic and preventive measures; this could reduce the morbidity and mortality from kala-azar. We describe a score set composed of four scales in addition to software for quick assessment of the probability of death from kala-azar at the point of care. Data from 883 patients diagnosed between September 2005 and August 2008 were used to derive the score set, and data from 1,031 patients diagnosed between September 2008 and November 2013 were used to validate the models. Stepwise logistic regression analyses were used to derive the optimal multivariate prediction models. Model performance was assessed by its discriminatory accuracy. A computational specialist system (Kala-Cal®) was developed to speed up the calculation of the probability of death based on clinical scores. The clinical prediction score showed high discrimination (area under the curve [AUC] 0.90) for distinguishing death from survival for children ≤2 years old. Performance improved after adding laboratory variables (AUC 0.93). The clinical score showed equivalent discrimination (AUC 0.89) for older children and adults, which also improved after including laboratory data (AUC 0.92). The score set also showed a high, although lower, discrimination when applied to the validation cohort. This score set and Kala-Cal® software may help identify individuals with the greatest probability of death. The associated software may speed up the calculation of the probability of death based on clinical scores and assist physicians in decision-making.
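
    The record above describes score derivation by stepwise logistic regression and evaluation by AUC. The sketch below illustrates that general workflow with simulated predictors; it is not the published Kala-Cal® model, and the variable names and effects are placeholders.

```python
# Illustrative only: derive a risk model on one cohort and check its
# discrimination (AUC) on a separate validation cohort, mirroring the
# derivation/validation split described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                        # hypothetical clinical predictors
logit = X @ np.array([1.2, 0.8, 0.5, 0.3])           # assumed true effects (simulation only)
y = (logit + rng.normal(size=300) > 0).astype(int)   # 1 = death, 0 = survival

X_dev, X_val = X[:200], X[200:]                      # derivation vs. validation cohorts
y_dev, y_val = y[:200], y[200:]

model = LogisticRegression().fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```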

  19. A strategy for developing representative germplasm sets for systematic QTL validation, demonstrated for apple, peach, and sweet cherry

    NARCIS (Netherlands)

    Peace, C.P.; Luby, J.; Weg, van de W.E.; Bink, M.C.A.M.; Iezzoni, A.F.

    2014-01-01

    Horticultural crop improvement would benefit from a standardized, systematic, and statistically robust procedure for validating quantitative trait loci (QTLs) in germplasm relevant to breeding programs. Here, we describe and demonstrate a strategy for developing reference germplasm sets of

  20. 42 CFR 478.15 - QIO review of changes resulting from DRG validation.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false QIO review of changes resulting from DRG validation... review of changes resulting from DRG validation. (a) General rules. (1) A provider or practitioner dissatisfied with a change to the diagnostic or procedural coding information made by a QIO as a result of DRG...

  1. Atmospheric correction at AERONET locations: A new science and validation data set

    Science.gov (United States)

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of the consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to the surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo

  2. Results from Investigations of Torsional Vibration in Turbine Set Shaft Systems

    Science.gov (United States)

    Taradai, D. V.; Deomidova, Yu. A.; Zile, A. Z.; Tomashevskii, S. B.

    2018-01-01

    The article generalizes the results obtained from investigations of torsional vibration in the shaft system of the T-175/210-12.8 turbine set installed at the Omsk CHPP-5 combined heat and power plant. Three different experimental methods were used to determine the lowest natural frequencies of torsional vibration excited in the shaft system when the barring gear is switched into operation, when the generator is synchronized with the grid, and in response to unsteady disturbances caused by the grid and by the turbine control and steam admission system. It is pointed out that the experimental values of the lowest natural frequencies (to the fourth one inclusively) determined using three different methods were found to be almost completely identical with one another, even though the shaft system was stopped in the experiments carried out according to one method and the shaft system rotated at the nominal speed in those carried out according to two other methods. The need to further develop the experimental methods for determining the highest natural frequencies is substantiated. The values of decrements for the first, third, and fourth natural torsional vibration modes are obtained. A conclusion is drawn from a comparison between the calculated and experimental data on the shaft system's static twisting about the need to improve the mathematical models for calculating torsional vibration. The measurement procedure is described, and the specific features pertinent to the way in which torsional vibration manifests itself as a function of time and turbine set operating mode under the conditions of its long-term operation are considered. The fundamental measurement errors are analyzed, and their influence on the validity of measured parameters is evaluated. With an insignificant level of free and forced torsional vibrations set up under the normal conditions of turbine set and grid operation, it becomes possible to exclude this phenomenon from the list of main factors

  3. Identification and Validation of a New Set of Five Genes for Prediction of Risk in Early Breast Cancer

    Directory of Open Access Journals (Sweden)

    Giorgio Mustacchi

    2013-05-01

    Full Text Available Molecular tests predicting the outcome of breast cancer patients based on gene expression levels can be used to assist in making treatment decisions after consideration of conventional markers. In this study we identified a subset of 20 mRNAs differentially regulated in breast cancer by analyzing several publicly available array gene expression data sets using the R/Bioconductor package. Using RT-qPCR we evaluated 261 consecutive invasive breast cancer cases, not selected for age, adjuvant treatment, nodal and estrogen receptor status, from paraffin-embedded sections. The biological samples dataset was split into a training set (137 cases) and a validation set (124 cases). The gene signature was developed on the training set, and a multivariate stepwise Cox analysis selected five genes independently associated with DFS: FGF18 (HR = 1.13, p = 0.05), BCL2 (HR = 0.57, p = 0.001), PRC1 (HR = 1.51, p = 0.001), MMP9 (HR = 1.11, p = 0.08), SERF1a (HR = 0.83, p = 0.007). These five genes were combined into a linear score (signature) weighted according to the coefficients of the Cox model, as: 0.125·FGF18 − 0.560·BCL2 + 0.409·PRC1 + 0.104·MMP9 − 0.188·SERF1A (HR = 2.7, 95% CI = 1.9–4.0, p < 0.001). The signature was then evaluated on the validation set, assessing the discrimination ability by a Kaplan-Meier analysis using the same cut-offs classifying patients at low, intermediate or high risk of disease relapse as defined on the training set (p < 0.001). Our signature, after further clinical validation, could be proposed as a prognostic signature for disease-free survival in breast cancer patients where the indication for adjuvant chemotherapy added to endocrine treatment is uncertain.
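
    Since the abstract reports the coefficients of the five-gene linear score explicitly, the following sketch shows how such a score would be computed for one case; the expression values are hypothetical placeholders, and the normalization and risk-group cut-offs used in the study are not reproduced here.

```python
# Five-gene signature from the abstract: weighted sum of expression values.
# Expression inputs below are hypothetical, not patient data.
coefficients = {
    "FGF18": 0.125, "BCL2": -0.560, "PRC1": 0.409,
    "MMP9": 0.104, "SERF1A": -0.188,
}

def signature_score(expression):
    """Linear combination of (suitably normalized) expression values."""
    return sum(coefficients[gene] * expression[gene] for gene in coefficients)

example_case = {"FGF18": 1.2, "BCL2": 2.5, "PRC1": 0.9, "MMP9": 1.1, "SERF1A": 1.6}
print(f"signature score = {signature_score(example_case):.3f}")
```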

  4. The Child Behaviour Assessment Instrument: development and validation of a measure to screen for externalising child behavioural problems in community setting

    Directory of Open Access Journals (Sweden)

    Perera Hemamali

    2010-06-01

    Full Text Available Abstract Background In Sri Lanka, behavioural problems have grown to epidemic proportions, accounting for the second highest category of mental health problems among children. Early identification of behavioural problems in children is an important pre-requisite of the implementation of interventions to prevent long-term psychiatric outcomes. The objectives of the study were to develop and validate a screening instrument for use in the community setting to identify behavioural problems in children aged 4-6 years. Methods An initial 54-item questionnaire was developed following an extensive review of the literature. A three-round Delphi process involving a panel of experts from six relevant fields was then undertaken to refine the nature and number of items, creating the 15-item community screening instrument, the Child Behaviour Assessment Instrument (CBAI). The validation study was conducted in the Medical Officer of Health area Kaduwela, Sri Lanka, and a community sample of 332 children aged 4-6 years was recruited by a two-stage randomization process. The behaviour status of the participants was assessed by an interviewer using the CBAI and, concurrently, by a clinical psychologist following clinical assessment. Criterion validity was appraised by assessing the sensitivity, specificity and predictive values at the optimum screen cut-off value. Construct validity of the instrument was quantified by testing whether the data of the validation study fit a hypothetical model. Face and content validity of the CBAI were qualitatively assessed by a panel of experts. The reliability of the instrument was assessed by internal consistency analysis and test-retest methods in a 15% subset of the community sample. Results Using Receiver Operating Characteristic analysis, a CBAI score of >16 was identified as the cut-off point that optimally differentiated children having behavioural problems, with a sensitivity of 0.88 (95% CI = 0.80-0.96) and specificity of 0.81 (95% CI = 0
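
    The ROC-based choice of a screening cut-off reported above can be sketched as follows; the simulated scores and the use of Youden's index to pick the cut-off are assumptions made for illustration, since the record does not state the selection criterion.

```python
# Hypothetical sketch: pick a screening cut-off from an ROC curve and report
# sensitivity and specificity at that point.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
has_problem = rng.integers(0, 2, size=300)               # 1 = behavioural problem (clinical assessment)
scores = has_problem * 6 + rng.normal(14, 4, size=300)   # simulated screening scores

fpr, tpr, thresholds = roc_curve(has_problem, scores)
best = np.argmax(tpr - fpr)                              # Youden's J (assumed criterion)
print(f"cut-off > {thresholds[best]:.1f}: "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```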

  5. Evaluation of convergent and discriminant validity of the Russian version of MMPI-2: First results

    Directory of Open Access Journals (Sweden)

    Emma I. Mescheriakova

    2015-06-01

    Full Text Available The paper presents the results of construct validity testing for a new version of the MMPI-2 (Minnesota Multiphasic Personality Inventory), whose restandardization started in 1982 (J.N. Butcher, W.G. Dahlstrom, J.R. Graham, A. Tellegen, B. Kaemmer) and is still going on. The professional community's interest in this new version of the Inventory is determined by its advantage over the previous one in restructuring the inventory and adding new items which offer additional opportunities for psychodiagnostics and personality assessment. The construct validity testing was carried out using three up-to-date techniques, namely the Quality of Life and Satisfaction with Life questionnaire (a short version of Ritsner's instrument adapted by E.I. Rasskazova), Janoff-Bulman's World Assumptions Scale (adapted by O. Kravtsova), and the Character Strengths Assessment questionnaire developed by E. Osin based on Peterson and Seligman's Values in Action Inventory of Strengths. These psychodiagnostic techniques were selected in line with the current trends in psychology, such as its orientation to positive phenomena as well as its interpretation of subjectivity potential as the need for self-determined, self-organized, self-realized and self-controlled behavior and the ability to accomplish it. The procedure of construct validity testing involved the «norm» group respondents, with the total sample including 205 people (62% were females, 32% were males). It was focused on the MMPI-2 additional and expanded scales (FI, BF, FP, S and K) and six of its ten basic ones (D, Pd, Pa, Pt, Sc, Si). The results obtained confirmed construct validity of the scales concerned, and this allows the MMPI-2 to be applied to examining one's personal potential instead of a set of questionnaires, facilitating, in turn, the personality researchers' objectives. The paper discusses the first stage of this construct validity testing, the further stage highlighting the factor

  6. Development and construct validation of the Client-Centredness of Goal Setting (C-COGS) scale.

    Science.gov (United States)

    Doig, Emmah; Prescott, Sarah; Fleming, Jennifer; Cornwell, Petrea; Kuipers, Pim

    2015-07-01

    Client-centred philosophy is integral to occupational therapy practice and client-centred goal planning is considered fundamental to rehabilitation. Evaluation of whether goal-planning practices are client-centred requires an understanding of the client's perspective about goal-planning processes and practices. The Client-Centredness of Goal Setting (C-COGS) was developed for use by practitioners who seek to be more client-centred and who require a scale to guide and evaluate individually orientated practice, especially with adults with cognitive impairment related to acquired brain injury. To describe development of the C-COGS scale and examine its construct validity. The C-COGS was administered to 42 participants with acquired brain injury after multidisciplinary goal planning. C-COGS scores were correlated with the Canadian Occupational Performance Measure (COPM) importance scores, and measures of therapeutic alliance, motivation, and global functioning to establish construct validity. The C-COGS scale has three subscales evaluating goal alignment, goal planning participation, and client-centredness of goals. The C-COGS subscale items demonstrated moderately significant correlations with scales measuring similar constructs. Findings provide preliminary evidence to support the construct validity of the C-COGS scale, which is intended to be used to evaluate and reflect on client-centred goal planning in clinical practice, and to highlight factors contributing to best practice rehabilitation.

  7. Validation of one-dimensional module of MARS 2.1 computer code by comparison with the RELAP5/MOD3.3 developmental assessment results

    International Nuclear Information System (INIS)

    Lee, Y. J.; Bae, S. W.; Chung, B. D.

    2003-02-01

    This report records the results of the code validation for the one-dimensional module of the MARS 2.1 thermal hydraulics analysis code by means of result-comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 code development assessment problem, which consists of 22 simulation problems in 3 categories, have been selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS 2.1 code and the RELAP5/MOD3.3 code are essentially the same code. This is expected as the two codes have basically the same set of field equations, constitutive equations and main thermal hydraulic models. The results suggest that the high level of code validity of the RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  8. Dark Energy Survey Year 1 Results: The Photometric Data Set for Cosmology

    Science.gov (United States)

    Drlica-Wagner, A.; Sevilla-Noarbe, I.; Rykoff, E. S.; Gruendl, R. A.; Yanny, B.; Tucker, D. L.; Hoyle, B.; Carnero Rosell, A.; Bernstein, G. M.; Bechtol, K.; Becker, M. R.; Benoit-Lévy, A.; Bertin, E.; Carrasco Kind, M.; Davis, C.; de Vicente, J.; Diehl, H. T.; Gruen, D.; Hartley, W. G.; Leistedt, B.; Li, T. S.; Marshall, J. L.; Neilsen, E.; Rau, M. M.; Sheldon, E.; Smith, J.; Troxel, M. A.; Wyatt, S.; Zhang, Y.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Banerji, M.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Dietrich, J. P.; Doel, P.; Evrard, A. E.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jeltema, T.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lima, M.; Lin, H.; Maia, M. A. G.; Martini, P.; McMahon, R. G.; Melchior, P.; Menanteau, F.; Miquel, R.; Nichol, R. C.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Vikram, V.; Walker, A. R.; Wechsler, R. H.; Zuntz, J.; DES Collaboration

    2018-04-01

    We describe the creation, content, and validation of the Dark Energy Survey (DES) internal year-one cosmology data set, Y1A1 GOLD, in support of upcoming cosmological analyses. The Y1A1 GOLD data set is assembled from multiple epochs of DES imaging and consists of calibrated photometric zero-points, object catalogs, and ancillary data products—e.g., maps of survey depth and observing conditions, star–galaxy classification, and photometric redshift estimates—that are necessary for accurate cosmological analyses. The Y1A1 GOLD wide-area object catalog consists of ∼ 137 million objects detected in co-added images covering ∼ 1800 deg² in the DES grizY filters. The 10σ limiting magnitude for galaxies is g=23.4, r=23.2, i=22.5, z=21.8, and Y=20.1. Photometric calibration of Y1A1 GOLD was performed by combining nightly zero-point solutions with stellar locus regression, and the absolute calibration accuracy is better than 2% over the survey area. DES Y1A1 GOLD is the largest photometric data set at the achieved depth to date, enabling precise measurements of cosmic acceleration at z ≲ 1.

  9. Setting Priorities: Personal Values, Organizational Results. Ideas into Action Guidebooks

    Science.gov (United States)

    Cartwright, Talula

    2007-01-01

    Successful leaders get results. To get results, you need to set priorities. This book can help you do a better job of setting priorities, recognizing the personal values that motivate your decision making, the probable trade-offs and consequences of your decisions, and the importance of aligning your priorities with your organization's…

  10. The Svalbard study 1988-89: a unique setting for validation of self-reported alcohol consumption.

    Science.gov (United States)

    Høyer, G; Nilssen, O; Brenn, T; Schirmer, H

    1995-04-01

    The Norwegian island of Spitzbergen, Svalbard offers a unique setting for validation studies on self-reported alcohol consumption. No counterfeit production or illegal import exists, thus making complete registration of all sources of alcohol possible. In this study we recorded sales from all agencies selling alcohol on Svalbard over a 2-month period in 1988. During the same period all adults living permanently on Svalbard were invited to take part in a health screening. As part of the screening a self-administered questionnaire on alcohol consumption was introduced to the participants. We found that the self-reported volume accounted for approximately 40 percent of the sales volume. Because of the unique situation applying to Svalbard, the estimate made in this study is believed to be more reliable compared to other studies using sales volume to validate self-reports.

  11. Development and validation of a set of German stimulus- and target words for an attachment related semantic priming paradigm.

    Directory of Open Access Journals (Sweden)

    Anke Maatz

    Full Text Available Experimental research in adult attachment theory is faced with the challenge of adequately activating the adult attachment system. In view of the multitude of methods employed for this purpose so far, this paper suggests making further use of the methodological advantages of semantic priming. In order to enable the use of such a paradigm in a German-speaking context, a set of German words belonging to the semantic categories 'interpersonal closeness', 'interpersonal distance' and 'neutral' were identified and their semantics were validated by combining production and rating methods. 164 university students answered the corresponding online questionnaires. Ratings were analysed using analysis of variance (ANOVA) and cluster analysis, from which three clearly distinct groups emerged. Beyond providing validated stimulus and target words which can be used to activate the adult attachment system in a semantic priming paradigm, the results of this study point to important links between attachment and stress which call for further investigation in the future.

  12. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    Science.gov (United States)

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most

  13. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
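
    CSMF accuracy, the population-level metric used above, is conventionally defined as 1 minus the summed absolute CSMF error divided by its maximum possible value. A minimal sketch, with hypothetical cause fractions:

```python
# CSMF accuracy as commonly defined in the verbal autopsy validation literature.
def csmf_accuracy(true_csmf, predicted_csmf):
    """1 - sum|true - predicted| / (2 * (1 - min(true)))."""
    abs_error = sum(abs(true_csmf[c] - predicted_csmf.get(c, 0.0)) for c in true_csmf)
    return 1.0 - abs_error / (2.0 * (1.0 - min(true_csmf.values())))

# Hypothetical reference and VA-assigned cause-specific mortality fractions.
reference = {"pneumonia": 0.30, "diarrhoea": 0.25, "malaria": 0.20, "other": 0.25}
va_result = {"pneumonia": 0.35, "diarrhoea": 0.20, "malaria": 0.15, "other": 0.30}
print(f"CSMF accuracy = {csmf_accuracy(reference, va_result):.3f}")
```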

  14. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  15. The ToMenovela – A photograph-based stimulus set for the study of social cognition with high ecological validity

    Directory of Open Access Journals (Sweden)

    Maike C. Herbort

    2016-12-01

    Full Text Available We present the ToMenovela, a stimulus set that has been developed to provide a set of normatively rated socio-emotional stimuli showing varying numbers of characters in emotionally laden interactions, for experimental investigations of (i) cognitive and (ii) affective ToM, (iii) emotional reactivity, and (iv) complex emotion judgment with respect to Ekman's basic emotions (happiness, sadness, anger, fear, surprise and disgust; Ekman & Friesen, 1975). Stimuli were generated with a focus on ecological validity and consist of 190 scenes depicting daily-life situations. Two or more of eight main characters with distinct biographies and personalities are depicted in each scene picture. To obtain an initial evaluation of the stimulus set and to pave the way for future studies in clinical populations, normative data on each stimulus of the set were obtained from a sample of 61 neurologically and psychiatrically healthy participants (31 female, 30 male; mean age 26.74 ± 5.84), including a visual analog scale rating of Ekman's basic emotions (happiness, sadness, anger, fear, surprise and disgust) and free-text descriptions of the content. The ToMenovela is being developed to provide standardized material of social scenes that is available to researchers in the study of social cognition. It should facilitate experimental control while keeping ecological validity high.

  16. A generic validation methodology and its application to a set of multi-axial creep damage constitutive equations

    International Nuclear Information System (INIS)

    Xu Qiang

    2005-01-01

    A generic validation methodology for a set of multi-axial creep damage constitutive equations is proposed and its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel, which exhibits brittle, intergranular rupture. The objective of this research is to develop a methodology for systematically assessing the quality of a set of multi-axial creep damage constitutive equations in order to ensure its general applicability. This work adopted a total quality assurance approach, expanded into a four-stage procedure (Theories and Fundamentals, Parameter Identification, Proportional Load, and Non-proportional Load). Its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel; this material is chosen due to its industrial importance, the popular use of KRH-type constitutive equations, and the available qualitative experimental data, including damage distributions from notched-bar tests. The validation exercise clearly revealed the deficiencies that exist in the KRH formulation (in terms of the mathematics and physics of damage mechanics) and its inability to predict creep deformation accurately. Consequently, caution is warranted in its use, which is particularly important given its wide use in the literature. This work contributes to understanding the rationale for the formulation and the quality assurance of a set of constitutive equations in creep damage mechanics as well as in general damage mechanics. (authors)

  17. Spanish translation and cross-language validation of a sleep habits questionnaire for use in clinical and research settings.

    Science.gov (United States)

    Baldwin, Carol M; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F

    2012-04-15

    To translate, back-translate and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish-speakers in clinical and research settings. Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis, using PASW (Version 18) software. Grade-level readability of the sleep measure was evaluated. All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach α, Spearman-Brown coefficients and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on 4 factors that accounted for 68% and 67% of the variance on the English and Spanish versions, respectively. The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth grade level. Further testing is needed with larger samples.
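
    For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of Cronbach's alpha follows; the item responses are invented for illustration.

```python
# Cronbach's alpha for a set of scale items (rows = respondents, cols = items).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

responses = np.array([[3, 4, 3, 4],
                      [2, 2, 3, 2],
                      [4, 4, 5, 4],
                      [1, 2, 1, 2],
                      [3, 3, 4, 3]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```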

  18. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    Directory of Open Access Journals (Sweden)

    Kristensen S

    2015-02-01

    Full Text Available Solvejg Kristensen,1–3 Svend Sabroe,4 Paul Bartels,1,5 Jan Mainz,3,5 Karl Bang Christensen6 1The Danish Clinical Registries, Aarhus, Denmark; 2Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; 3Aalborg University Hospital, Psychiatry, Aalborg, Denmark; 4Department of Public Health, Aarhus University, Aarhus, Denmark; 5Department of Clinical Medicine, Aalborg University, Aalborg, Denmark; 6Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark Purpose: Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations being successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee's perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. Materials and methods: The SAQ was translated and adapted for the Danish setting (SAQ-DK). The SAQ-DK was distributed to 1,263 staff members from 31 in- and outpatient units (clinical areas) across five somatic and one psychiatric hospitals through meeting administration, hand delivery, and mailing. Construct validity and reliability were tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach's alpha (α), and item and subscale scores. Results: Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from the confirmatory factor analysis showed: χ²=1496.76, P<0.001, CFI 0.901, RMSEA (90% CI) 0.053 (0.050-0.056), Probability RMSEA (p close) = 0.057. Inter-scale correlations between the factors showed moderate-to-high correlations. The scale stress recognition had significant

  19. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  20. Establishing the Reliability and Validity of a Computerized Assessment of Children's Working Memory for Use in Group Settings

    Science.gov (United States)

    St Clair-Thompson, Helen

    2014-01-01

    The aim of the present study was to investigate the reliability and validity of a brief standardized assessment of children's working memory; "Lucid Recall." Although there are many established assessments of working memory, "Lucid Recall" is fully automated and can therefore be administered in a group setting. It is therefore…

  1. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review

    Science.gov (United States)

    2018-01-11

    Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design wa...

  2. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    Science.gov (United States)

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
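
    The leave-one-out idea underlying the proposed validation statistic can be sketched as follows; this is only an illustration of the cross-validation principle with inverse-variance pooling, not the authors' Vn statistic or its distribution, and the study effects are invented.

```python
# Leave-one-out check of a meta-analysis: pool all studies except one
# (fixed-effect, inverse-variance weights) and compare with the omitted study.
import numpy as np

effects   = np.array([0.20, 0.35, 0.10, 0.28, 0.15, 0.40])  # hypothetical study effects
variances = np.array([0.02, 0.03, 0.01, 0.02, 0.04, 0.03])  # within-study variances

for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    weights = 1.0 / variances[keep]
    pooled = np.sum(weights * effects[keep]) / np.sum(weights)
    print(f"study {i}: observed = {effects[i]:+.2f}, pooled estimate of the others = {pooled:+.2f}")
```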

  3. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  4. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    International Nuclear Information System (INIS)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun; Lee, Jae Hong; Roh, Jee Hoon

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease
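
    A hedged sketch of the kind of analysis described above: combining regional cortical thickness and a connectivity measure as features for a support vector machine and estimating classification accuracy. All values are simulated placeholders; the study's actual feature extraction and validation-set protocol are not reproduced.

```python
# Simulated example: SVM classification from combined structural and
# functional features, with cross-validated accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 60
cthk = rng.normal(2.5, 0.3, size=(n, 2))             # thickness of two cortical regions (mm)
fc = rng.normal(0.4, 0.1, size=(n, 1))               # default mode network connectivity
labels = (cthk[:, 0] + fc[:, 0] < 2.9).astype(int)   # 1 = patient, 0 = control (simulated)
features = np.hstack([cthk, fc])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, features, labels, cv=5).mean()
print(f"cross-validated accuracy = {accuracy:.2f}")
```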

  5. Validation of One-Dimensional Module of MARS-KS1.2 Computer Code By Comparison with the RELAP5/MOD3.3/patch3 Developmental Assessment Results

    International Nuclear Information System (INIS)

    Bae, S. W.; Chung, B. D.

    2010-07-01

    This report records the results of the code validation for the one-dimensional module of the MARS-KS thermal hydraulics analysis code by means of result-comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 Code Developmental Assessment Problem, which consists of 22 simulation problems in 3 categories, have been selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS code and the RELAP5/MOD3.3 code are essentially the same code. This is expected as the two codes have basically the same set of field equations, constitutive equations and main thermal hydraulic models. The result suggests that the high level of code validity of the RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module

  6. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    Science.gov (United States)

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals (N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors: stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics had little effect. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall, the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors.

  7. Brief report: Assessing youth well-being in global emergency settings: Early results from the Emergency Developmental Assets Profile.

    Science.gov (United States)

    Scales, Peter C; Roehlkepartain, Eugene C; Wallace, Teresa; Inselman, Ashley; Stephenson, Paul; Rodriguez, Michael

    2015-12-01

    The 13-item Emergency Developmental Assets Profile measures the well-being of children and youth in emergency settings such as refugee camps and armed conflict zones, assessing whether young people are experiencing adequate positive relationships and opportunities, and developing positive values, skills, and self-perceptions, despite being in crisis circumstances. The instrument was found to have acceptable and nearly identical internal consistency reliability in 22 administrations in non-emergency samples in 15 countries (.75), and in 4 samples of youth ages 10-18 (n = 1550) in the emergency settings (war refugees and typhoon victims, .74) that are the measure's focus, and evidence of convergent validity. Confirmatory Factor Analysis showed acceptable model fit among those youth in emergency settings. Measures of model fit showed that the Em-DAP has configural and metric invariance across all emergency contexts and scalar invariance across some. The Em-DAP is a promising brief cross-cultural tool for assessing the developmental quality of life as reported by samples of youth in a current humanitarian crisis situation. The results can help to inform international relief program decisions about services and activities to be provided for children, youth, and families in emergency settings. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  8. Cross-cultural validation of Lupus Impact Tracker in five European clinical practice settings.

    Science.gov (United States)

    Schneider, Matthias; Mosca, Marta; Pego-Reigosa, José-Maria; Gunnarsson, Iva; Maurel, Frédérique; Garofano, Anna; Perna, Alessandra; Porcasi, Rolando; Devilliers, Hervé

    2017-05-01

    The aim was to evaluate the cross-cultural validity of the Lupus Impact Tracker (LIT) in five European countries and to assess its acceptability and feasibility from the patient and physician perspectives. A prospective, observational, cross-sectional and multicentre validation study was conducted in clinical settings. Before the visit, patients completed LIT, Short Form 36 (SF-36) and care satisfaction questionnaires. During the visit, physicians assessed disease activity [Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI], organ damage [SLICC/ACR damage index (SDI)] and flare occurrence. Cross-cultural validity was assessed using the Differential Item Functioning method. Five hundred and sixty-nine SLE patients were included by 25 specialists; 91.7% were outpatients and 89.9% female, with mean age 43.5 (13.0) years. Disease profile was as follows: 18.3% experienced flares; mean SELENA-SLEDAI score 3.4 (4.5); mean SDI score 0.8 (1.4); and SF-36 mean physical and mental component summary scores: physical component summary 42.8 (10.8) and mental component summary 43.0 (12.3). Mean LIT score was 34.2 (22.3) (median: 32.5), indicating that lupus moderately impacted patients' daily life. A cultural Differential Item Functioning of negligible magnitude was detected across countries (pseudo- R 2 difference of 0.01-0.04). Differences were observed between LIT scores and Physician Global Assessment, SELENA-SLEDAI, SDI scores = 0 (P cultural invariability across countries. They suggest that LIT can be used in routine clinical practice to evaluate and follow patient-reported outcomes in order to improve patient-physician interaction. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  9. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol.

    Science.gov (United States)

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O'Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-03-17

    Provision of palliative care is inequitable, with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and public. ISRCTN90752212. © Article author
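
    The classification-and-regression-tree step of the protocol can be illustrated as follows; the complexity variables, their coding, and the cost model are all hypothetical stand-ins, shown only to indicate how tree leaves become candidate casemix classes.

```python
# Hypothetical sketch: fit a regression tree predicting patient-level cost
# from complexity criteria and read the leaves as candidate casemix classes.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(7)
n = 500
phase = rng.integers(0, 3, size=n)        # e.g. phase of illness (assumed coding)
function = rng.integers(0, 5, size=n)     # functional status
symptoms = rng.integers(0, 4, size=n)     # symptom severity
X = np.column_stack([phase, function, symptoms])
cost = 200 + 150 * phase + 60 * symptoms + rng.normal(0, 50, size=n)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, cost)
print(export_text(tree, feature_names=["phase", "function", "symptoms"]))
```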

  10. De-MetaST-BLAST: a tool for the validation of degenerate primer sets and data mining of publicly available metagenomes.

    Directory of Open Access Journals (Sweden)

    Christopher A Gulvik

    Full Text Available Development and use of primer sets to amplify nucleic acid sequences of interest is fundamental to studies spanning many life science disciplines. As such, the validation of primer sets is essential. Several computer programs have been created to aid in the initial selection of primer sequences that may or may not require multiple nucleotide combinations (i.e., degeneracies). Conversely, validation of primer specificity has remained largely unchanged for several decades, and there are currently few available programs that allow for an evaluation of primers containing degenerate nucleotide bases. To alleviate this gap, we developed the program De-MetaST that performs an in silico amplification using user-defined nucleotide sequence dataset(s) and primer sequences that may contain degenerate bases. The program returns an output file that contains the in silico amplicons. When De-MetaST is paired with NCBI's BLAST (De-MetaST-BLAST), the program also returns the top 10 nr NCBI database hits for each recovered in silico amplicon. While the original motivation for development of this search tool was degenerate primer validation using the wealth of nucleotide sequences available in environmental metagenome and metatranscriptome databases, this search tool has potential utility in many data mining applications.
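
    The core idea of in silico amplification with degenerate primers can be sketched by expanding IUPAC degeneracy codes into a regular expression and searching a target sequence; this illustrates the concept only, not the De-MetaST implementation, and the primer and sequence below are invented.

```python
# Expand IUPAC degeneracy codes into a regex and locate a primer binding site.
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def primer_to_regex(primer):
    """Translate a (possibly degenerate) primer into a regex pattern."""
    return "".join(IUPAC[base] for base in primer.upper())

target = "TTGACGGGGGCCCGCACAAGCGGTGGAGCATGTGGTTTAATTCGATG"   # invented sequence
forward_primer = "GGGRCCCGCACAAGC"                            # invented degenerate primer

match = re.search(primer_to_regex(forward_primer), target)
print(match.span() if match else "no binding site found")
```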

  11. Initial validation of the prekindergarten Classroom Observation Tool and goal setting system for data-based coaching.

    Science.gov (United States)

    Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H

    2013-12-01

    Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Institute of Scientific and Technical Information of China (English)

    Jin Lu; Yue-Qin Huang; Zhao-Rui Liu; Xiao-Lan Cao

    2015-01-01

    Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 healthy people from communities. These participants were administered with CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's κ. Results: Substantial to excellent CIDI to SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders is moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing the substance use disorder, anxiety disorder and mood disorder among the Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.

  13. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Directory of Open Access Journals (Sweden)

    Jin Lu

    2015-01-01

    Full Text Available Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 were healthy people from communities. These participants were administered the CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's K. Results: Substantial to excellent CIDI to SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders is moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing the substance use disorder, anxiety disorder and mood disorder among the Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.

  14. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    Science.gov (United States)

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS in this setting. Consequently, we conclude that the PHCS is a useful addition to measures of global self-efficacy and measures of self-efficacy regarding specific behaviours in the toolkit of health psychologists. This range of self-efficacy assessment tools will ensure that psychologists can match the level of specificity of the measure of expectancy beliefs to the level of specificity of the outcome of interest.

  15. Reliability and validity of a novel tool to comprehensively assess food and beverage marketing in recreational sport settings.

    Science.gov (United States)

    Prowse, Rachel J L; Naylor, Patti-Jean; Olstad, Dana Lee; Carson, Valerie; Mâsse, Louise C; Storey, Kate; Kirk, Sara F L; Raine, Kim D

    2018-05-31

    Current methods for evaluating food marketing to children often study a single marketing channel or approach. As the World Health Organization urges the removal of unhealthy food marketing in children's settings, methods that comprehensively explore the exposure and power of food marketing within a setting from multiple marketing channels and approaches are needed. The purpose of this study was to test the inter-rater reliability and the validity of a novel settings-based food marketing audit tool. The Food and beverage Marketing Assessment Tool for Settings (FoodMATS) was developed and its psychometric properties evaluated in five public recreation and sport facilities (sites) and subsequently used in 51 sites across Canada for a cross-sectional analysis of food marketing. Raters recorded the count of food marketing occasions, presence of child-targeted and sports-related marketing techniques, and the physical size of marketing occasions. Marketing occasions were classified by healthfulness. Inter-rater reliability was tested using Cohen's kappa (κ) and intra-class correlations (ICC). FoodMATS scores for each site were calculated using an algorithm that represented the theoretical impact of the marketing environment on food preferences, purchases, and consumption. Higher FoodMATS scores represented sites with higher exposure to, and more powerful (unhealthy, child-targeted, sports-related, large) food marketing. Validity of the scoring algorithm was tested through (1) Pearson's correlations between FoodMATS scores and facility sponsorship dollars, and (2) sequential multiple regression for predicting "Least Healthy" food sales from FoodMATS scores. Inter-rater reliability was very good to excellent (κ = 0.88-1.00, p marketing in recreation facilities, the FoodMATS provides a novel means to comprehensively track changes in food marketing environments that can assist in developing and monitoring the impact of policies and interventions.

  16. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of the 13 which documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
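
    The item-screening rule described above is easy to apply programmatically. The sketch below uses invented expert ratings (not the study's data) to compute an item-level content validity index as the proportion of experts rating an item relevant, retaining items with a CVI above 0.75.

```python
# Hypothetical expert relevance ratings on a 4-point scale (1 = not relevant ... 4 = highly relevant).
# Rows are items, columns are experts; all values are invented for illustration.
ratings = [
    [4, 3, 4],   # item 1
    [2, 1, 3],   # item 2
    [4, 4, 3],   # item 3
]

def item_cvi(item_ratings, relevant_min=3):
    """Proportion of experts rating the item 3 or 4 (i.e., relevant)."""
    relevant = sum(1 for r in item_ratings if r >= relevant_min)
    return relevant / len(item_ratings)

cvis = [item_cvi(r) for r in ratings]
retained = [i + 1 for i, cvi in enumerate(cvis) if cvi > 0.75]
print("Item CVIs:", [round(c, 2) for c in cvis])
print("Items retained (CVI > 0.75):", retained)
```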

  17. Content validation of the international classification of functioning, disability and health core set for stroke from gender perspective using a qualitative approach.

    Science.gov (United States)

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

    The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) with the purpose to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke taking into account gender perspective. A qualitative study design was conducted by using individual interviews with women and men after stroke in an in- and outpatient rehabilitation setting. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows communalities and differences for women and men. The validity of the extended ICF Core Set for stroke could be mostly confirmed, since it does not only include those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  18. The Virtual Care Climate Questionnaire: Development and Validation of a Questionnaire Measuring Perceived Support for Autonomy in a Virtual Care Setting.

    Science.gov (United States)

    Smit, Eline Suzanne; Dima, Alexandra Lelia; Immerzeel, Stephanie Annette Maria; van den Putte, Bas; Williams, Geoffrey Colin

    2017-05-08

    Web-based health behavior change interventions may be more effective if they offer autonomy-supportive communication facilitating the internalization of motivation for health behavior change. Yet, at this moment no validated tools exist to assess user-perceived autonomy-support of such interventions. The aim of this study was to develop and validate the virtual climate care questionnaire (VCCQ), a measure of perceived autonomy-support in a virtual care setting. Items were developed based on existing questionnaires and expert consultation and were pretested among experts and target populations. The virtual climate care questionnaire was administered in relation to Web-based interventions aimed at reducing consumption of alcohol (Study 1; N=230) or cannabis (Study 2; N=228). Item properties, structural validity, and reliability were examined with item-response and classical test theory methods, and convergent and divergent validity via correlations with relevant concepts. In Study 1, 20 of 23 items formed a one-dimensional scale (alpha=.97; omega=.97; H=.66; mean 4.9 [SD 1.0]; range 1-7) that met the assumptions of monotonicity and invariant item ordering. In Study 2, 16 items fitted these criteria (alpha=.92; H=.45; omega=.93; mean 4.2 [SD 1.1]; range 1-7). Only 15 items remained in the questionnaire in both studies, thus we proceeded to the analyses of the questionnaire's reliability and construct validity with a 15-item version of the virtual climate care questionnaire. Convergent validity of the resulting 15-item virtual climate care questionnaire was confirmed by positive associations with autonomous motivation (Study 1: r=.66, Pperceived competence for reducing alcohol intake (Study 1: r=.52, Pperceived competence for learning (Study 2: r=.05, P=.48). The virtual climate care questionnaire accurately assessed participants' perceived autonomy-support offered by two Web-based health behavior change interventions. Overall, the scale showed the expected properties

  19. Signal-to-noise assessment for diffusion tensor imaging with single data set and validation using a difference image method with data from a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhiyue J., E-mail: jerry.wang@childrens.com [Department of Radiology, Children's Medical Center, Dallas, Texas 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Chia, Jonathan M. [Clinical Science, Philips Healthcare, Cleveland, Ohio 44143 (United States); Ahmed, Shaheen; Rollins, Nancy K. [Department of Radiology, Children's Medical Center, Dallas, TX 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX 75390 (United States)

    2014-09-15

    Purpose: To describe a quantitative method for determination of SNR that extracts the local noise level using a single diffusion data set. Methods: Brain data sets came from a multicenter study (eight sites; three MR vendors). The data acquisition protocol required b = 0, 700 s/mm², FOV = 256 × 256 mm², acquisition matrix size 128 × 128, reconstruction matrix size 256 × 256, 30 gradient encoding directions and voxel size 2 × 2 × 2 mm³. Regions of interest (ROI) were placed manually on the b = 0 image volume on transverse slices, and signal was recorded as the mean value of the ROI. The noise level from the ROI was evaluated using Fourier-transform-based Butterworth high-pass filtering. Patients were divided into two groups, one for filter parameter optimization (N = 17) and one for validation (N = 10). Six white matter areas (the genu and splenium of the corpus callosum, right and left centrum semiovale, right and left anterior corona radiata) were analyzed. The Bland–Altman method was used to compare the resulting SNR with that from the difference image method. The filter parameters were optimized for each brain area, and a set of "global" parameters was also obtained, which represents an average of all regions. Results: The Bland–Altman analysis on the validation group using "global" filter parameters revealed that the 95% limits of agreement of percent bias between the SNR obtained with the new and the reference methods were −15.5% (median of the lower limits, range [−24.1%, −8.9%]) and 14.5% (median of the higher limits, range [12.7%, 18.0%]) for the 6 brain areas. Conclusions: An FT-based high-pass filtering method can be used for local-area SNR assessment using only one DTI data set. This method could be used to evaluate SNR for patient studies in a multicenter setting.
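
    The core idea, separating a smooth signal background from high-spatial-frequency noise within an ROI, can be illustrated with a generic sketch. The code below is not the authors' implementation; it assumes a 2-D ROI patch and a simple radial Butterworth high-pass filter whose cutoff and order are free parameters that would need optimization against a reference method, as in the study.

```python
import numpy as np

def butterworth_highpass_noise(roi, cutoff=0.25, order=4):
    """Estimate the noise level in a 2-D ROI patch via FT-based Butterworth high-pass filtering.

    cutoff is a normalized spatial frequency (0..0.5); order controls the filter roll-off.
    """
    ny, nx = roi.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    # Butterworth high-pass transfer function (removes the smooth, low-frequency background).
    hp = 1.0 - 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    noise_image = np.fft.ifft2(np.fft.fft2(roi) * hp).real
    return noise_image.std()

# Synthetic example: a smooth signal plus Gaussian noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
roi = 100 + 5 * np.sin(xx / 10.0) + rng.normal(0, 3, (32, 32))
noise = butterworth_highpass_noise(roi)
snr = roi.mean() / noise
print(f"estimated noise ~ {noise:.2f}, SNR ~ {snr:.1f}")
```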

  20. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)

  1. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
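
    As a rough illustration of strain-to-force estimation (not the authors' augmented Kalman filter, their identified structural model, or the model predictive controller), the sketch below assumes a single force observed through a known strain sensitivity at three gauges and treats the force as a random-walk state estimated from noisy strain readings.

```python
import numpy as np

# Hypothetical calibration: strain produced per unit force at 3 gauges (an identified model would replace this).
H = np.array([[2.0e-6], [1.5e-6], [0.8e-6]])   # strain per newton, 3 gauges x 1 force
R = np.eye(3) * (1.0e-7) ** 2                  # strain measurement noise covariance
Q = np.array([[50.0**2]])                      # random-walk process noise on the force (N^2)

def kalman_force(strain_history):
    """Estimate a slowly varying force from noisy strain measurements (random-walk state model)."""
    x = np.zeros((1, 1))          # force estimate
    P = np.array([[1.0e6]])       # initial estimate uncertainty
    estimates = []
    for z in strain_history:
        P = P + Q                                      # predict: force modeled as a random walk
        S = H @ P @ H.T + R                            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z.reshape(-1, 1) - H @ x)         # update with the strain residual
        P = (np.eye(1) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Synthetic test: a 1 kN step force observed through noisy gauges.
rng = np.random.default_rng(1)
true_force = np.r_[np.zeros(20), np.full(80, 1000.0)]
strains = (H @ true_force[None, :]).T + rng.normal(0, 1.0e-7, (100, 3))
print(kalman_force(strains)[-1])  # should approach ~1000 N
```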

  2. Discriminating real victims from feigners of psychological injury in gender violence: Validating a protocol for forensic setting

    Directory of Open Access Journals (Sweden)

    Ramon Arce

    2009-07-01

    Full Text Available Standard clinical assessment of psychological injury does not provide valid evidence in forensic settings, and screening of genuine from feigned complaints must be undertaken prior to the diagnosis of mental state (American Psychological Association, 2002). Whereas the psychological injury is Post-traumatic Stress Disorder (PTSD), a clinical diagnosis may encompass other nosologies (e.g., depression and anxiety). The assessment of psychological injury in forensic contexts requires a multimethod approach consisting of a psychometric measure and an interview. To assess the efficacy of the multimethod approach in discriminating real from false victims, 25 real victims of gender violence and 24 feigners were assessed using (a) the Symptom Checklist-90-Revised (SCL-90-R), a recognition task, and (b) a forensic clinical interview, a knowledge task. The results revealed that feigners reported more clinical symptoms on the SCL-90-R than real victims. Moreover, the feigning indicators on the SCL-90-R (GSI, PST, and PSDI) were higher in feigners, but not sufficiently so to provide a screening test for invalidating feigned protocols. In contrast, real victims reported more clinical symptoms related to PTSD in the forensic clinical interview than feigners. Notwithstanding, in the forensic clinical interview feigners were able to feign PTSD that was not detected by the analysis of feigning strategies. The combination of both measures and their corresponding validity controls enabled the discrimination of real victims from feigners. Hence, a protocol for discriminating the psychological sequelae of real victims from feigners of gender violence is described.

  3. Validity of the Elite HRV Smartphone Application for Examining Heart Rate Variability in a Field-Based Setting.

    Science.gov (United States)

    Perrotta, Andrew S; Jeklin, Andrew T; Hives, Ben A; Meanwell, Leah E; Warburton, Darren E R

    2017-08-01

    Perrotta, AS, Jeklin, AT, Hives, BA, Meanwell, LE, and Warburton, DER. Validity of the elite HRV smartphone application for examining heart rate variability in a field-based setting. J Strength Cond Res 31(8): 2296-2302, 2017-The introduction of smartphone applications has allowed athletes and practitioners to record and store R-R intervals on smartphones for immediate heart rate variability (HRV) analysis. This user-friendly option should be validated in the effort to provide practitioners confidence when monitoring their athletes before implementing such equipment. The objective of this investigation was to examine the relationship and validity between a vagal-related HRV index, rMSSD, when derived from a smartphone application accessible with most operating systems against a frequently used computer software program, Kubios HRV 2.2. R-R intervals were recorded immediately upon awakening over 14 consecutive days using the Elite HRV smartphone application. R-R recordings were then exported into Kubios HRV 2.2 for analysis. The relationship and levels of agreement between rMSSDln derived from Elite HRV and Kubios HRV 2.2 was examined using a Pearson product-moment correlation and a Bland-Altman Plot. An extremely large relationship was identified (r = 0.92; p smartphone HRV application may offer a reliable platform when assessing parasympathetic modulation.
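
    The vagal-related index compared in this study, rMSSD, is straightforward to compute from a series of R-R intervals. The sketch below uses invented R-R values (not data from the study) and also prints the log-transformed value (rMSSDln) used in the comparison.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between adjacent R-R intervals (in ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative morning R-R recording (ms); values are invented.
rr = [812, 798, 845, 830, 860, 842, 815, 829, 851, 838]
value = rmssd(rr)
print(f"rMSSD = {value:.1f} ms, rMSSDln = {math.log(value):.2f}")
```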

  4. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    Science.gov (United States)

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
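
    Of the rational division methods listed above, the Kennard-Stone algorithm is the simplest to sketch: it repeatedly moves into the training set the candidate that is farthest, in descriptor space, from the points already selected. The implementation below is a generic illustration with synthetic descriptors, not the code used in the study.

```python
import numpy as np

def kennard_stone_split(X, n_train):
    """Return (train_idx, test_idx) using the Kennard-Stone max-min distance criterion."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    # Start with the two most distant points.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # For each remaining point, find the distance to its nearest selected point...
        d_to_selected = dist[np.ix_(remaining, selected)].min(axis=1)
        # ...and pick the point that maximizes that minimum distance.
        next_idx = remaining[int(np.argmax(d_to_selected))]
        selected.append(next_idx)
        remaining.remove(next_idx)
    return selected, remaining

# Tiny synthetic descriptor matrix (10 compounds x 3 descriptors).
rng = np.random.default_rng(42)
X = rng.normal(size=(10, 3))
train, test = kennard_stone_split(X, n_train=8)
print("train:", train, "test:", test)
```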

  5. Prospective Validation of the Decalogue, a Set of Doctor-Patient Communication Recommendations to Improve Patient Illness Experience and Mood States within a Hospital Cardiologic Ambulatory Setting

    Directory of Open Access Journals (Sweden)

    Piercarlo Ballo

    2017-01-01

    Full Text Available Strategies to improve doctor-patient communication may have a beneficial impact on patient's illness experience and mood, with potential favorable clinical effects. We prospectively tested the psychometric and clinical validity of the Decalogue, a tool utilizing 10 communication recommendations for patients and physicians. The Decalogue was administered to 100 consecutive patients referred for a cardiologic consultation, whereas 49 patients served as controls. The POMS-2 questionnaire was used to measure the total mood disturbance at the end of the consultation. Structural equation modeling showed high internal consistency (Cronbach alpha 0.93), good test-retest reproducibility, and high validity of the psychometric construct (all > 0.80), suggesting a positive effect on patients' illness experience. The total mood disturbance was lower in the patients exposed to the Decalogue as compared to the controls (1.4±12.1 versus 14.8±27.6, p=0.0010). In an additional questionnaire, patients in the Decalogue group showed a trend towards a better understanding of their state of health (p=0.07). In a cardiologic ambulatory setting, the Decalogue shows good validity and reliability as a tool to improve patients' illness experience and could have a favorable impact on mood states. These effects might potentially improve patient engagement in care and adherence to therapy, as well as clinical outcome.
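
    Internal-consistency figures such as the Cronbach alpha quoted above can be computed directly from an item-score matrix; the responses below are invented solely to show the calculation.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 6 patients x 4 questionnaire items.
scores = [[4, 5, 4, 5],
          [2, 3, 2, 2],
          [5, 5, 4, 5],
          [3, 3, 3, 4],
          [1, 2, 2, 1],
          [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```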

  6. The accuracy of SST retrievals from AATSR: An initial assessment through geophysical validation against in situ radiometers, buoys and other SST data sets

    Science.gov (United States)

    Corlett, G. K.; Barton, I. J.; Donlon, C. J.; Edwards, M. C.; Good, S. A.; Horrocks, L. A.; Llewellyn-Jones, D. T.; Merchant, C. J.; Minnett, P. J.; Nightingale, T. J.; Noyes, E. J.; O'Carroll, A. G.; Remedios, J. J.; Robinson, I. S.; Saunders, R. W.; Watts, J. G.

    The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) that, combined with the large data set collected from its predecessors, ATSR and ATSR-2, will provide a long term record of SST data that is greater than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR derived SST values using in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here will demonstrate that the AATSR instrument is currently close to meeting its scientific objectives of determining global SST to an accuracy of 0.3 K (one sigma). For night time data, the analysis gives a warm bias of between +0.04 K (0.28 K) for buoys to +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for day time data, showing warm biases of between +0.02 (0.39 K) for buoys to +0.11 K (0.33 K) for radiometers. They show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, which is a key climate parameter.

  7. Psychometric evaluation of 3-set 4P questionnaire.

    Science.gov (United States)

    Akerman, Eva; Fridlund, Bengt; Samuelson, Karin; Baigi, Amir; Ersson, Anders

    2013-02-01

    This is a further development of a specific questionnaire, the 3-set 4P, to be used for measuring former ICU patients' physical and psychosocial problems after intensive care and the need for follow-up. The aim was to psychometrically test and evaluate the 3-set 4P questionnaire in a larger population. The questionnaire consists of three sets: "physical", "psychosocial" and "follow-up". The questionnaires were sent by mail to all patients with more than 24-hour length of stay on four ICUs in Sweden. Construct validity was measured with exploratory factor analysis with Varimax rotation. This resulted in three factors for the "physical set", five factors for the "psychosocial set" and four factors for the "follow-up set" with strong factor loadings and a total explained variance of 62-77.5%. Thirteen questions in the SF-36 were used for concurrent validity showing Spearman's r(s) 0.3-0.6 in eight questions and less than 0.2 in five. Test-retest was used for stability reliability. In set follow-up the correlation was strong to moderate and in physical and psychosocial sets the correlations were moderate to fair. This may have been because the physical and psychosocial status changed rapidly during the test period. All three sets had good homogeneity. In conclusion, the 3-set 4P showed overall acceptable results, but it has to be further modified in different cultures before being considered a fully operational instrument for use in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
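
    A generic sketch of exploratory factor analysis with varimax rotation, the construct-validity step used above, is shown below. The simulated responses and the choice of three factors are placeholders, not the 3-set 4P data or its factor structure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Invented questionnaire responses: 200 respondents x 9 items generated from 3 latent factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
true_loadings = rng.normal(size=(3, 9))
X = latent @ true_loadings + rng.normal(scale=0.5, size=(200, 9))

# Exploratory factor analysis with varimax rotation.
fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(X)
rotated_loadings = fa.components_   # shape: (3 factors, 9 items)
print(np.round(rotated_loadings, 2))
```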

  8. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, Jae Hong; Roh, Jee Hoon [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-11-15

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
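
    The classification step described here, combining regional cortical thickness with default mode network connectivity in a support vector machine, can be sketched generically. The features, labels and cross-validation below are synthetic placeholders rather than the study's data or pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic subjects: two cortical-thickness features plus one DMN connectivity feature each.
rng = np.random.default_rng(0)
n = 60
labels = np.r_[np.zeros(30), np.ones(30)]                           # 0 = control, 1 = patient
thickness = rng.normal(2.5, 0.2, (n, 2)) - 0.15 * labels[:, None]   # thinner cortex in patients
connectivity = rng.normal(0.5, 0.1, (n, 1)) - 0.08 * labels[:, None]
X = np.hstack([thickness, connectivity])

# Linear SVM on standardized features, scored by 5-fold cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cross-validated accuracy ~ {acc:.2f}")
```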

  9. Several Results on Set-Valued Possibilistic Distributions

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan; Daniel, Milan

    2015-01-01

    Roč. 51, č. 3 (2015), s. 391-407 ISSN 0023-5954 R&D Projects: GA ČR GAP202/10/1826 Institutional support: RVO:67985807 Keywords : probability measures * possibility measures * non-numerical uncertainty degrees * set-valued uncertainty degrees * possibilistic uncertainty functions * set-valued entropy functions Subject RIV: BA - General Mathematics Impact factor: 0.628, year: 2015 http://dml.cz/handle/10338.dmlcz/144376

  10. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e. influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns the model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
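
    A minimal sketch of the quantities involved is given below for an ordinary least-squares model: R² from the fitted residuals, Q² from leave-one-out cross-validated predictions, and the (1 - Q²)/(1 - R²) ratio, which equals PRESS/RSS. The data are synthetic, with one deliberately influential point; the F-test step itself is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=30)
y[0] += 6.0  # one deliberately influential point

model = LinearRegression().fit(X, y)
ss_tot = ((y - y.mean()) ** 2).sum()
rss = ((y - model.predict(X)) ** 2).sum()                                        # residual sum of squares
press = ((y - cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())) ** 2).sum()  # predictive RSS

r2 = 1 - rss / ss_tot
q2 = 1 - press / ss_tot
ratio = (1 - q2) / (1 - r2)   # equals PRESS / RSS; values well above 1 hint at influential points
print(f"R2={r2:.3f}  Q2={q2:.3f}  (1-Q2)/(1-R2)={ratio:.2f}")
```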

  11. 42 CFR 493.571 - Disclosure of accreditation, State and CMS validation inspection results.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Disclosure of accreditation, State and CMS... Program § 493.571 Disclosure of accreditation, State and CMS validation inspection results. (a) Accreditation organization inspection results. CMS may disclose accreditation organization inspection results to...

  12. Goal setting as an outcome measure: A systematic review.

    Science.gov (United States)

    Hurn, Jane; Kneebone, Ian; Cropley, Mark

    2006-09-01

    Goal achievement has been considered to be an important measure of outcome by clinicians working with patients in physical and neurological rehabilitation settings. This systematic review was undertaken to examine the reliability, validity and sensitivity of goal setting and goal attainment scaling approaches when used with working age and older people. To review the reliability, validity and sensitivity of both goal setting and goal attainment scaling when employed as an outcome measure within a physical and neurological working age and older person rehabilitation environment, by examining the research literature covering the 36 years since goal-setting theory was proposed. Data sources included a computer-aided literature search of published studies examining the reliability, validity and sensitivity of goal setting/goal attainment scaling, with further references sourced from articles obtained through this process. There is strong evidence for the reliability, validity and sensitivity of goal attainment scaling. Empirical support was found for the validity of goal setting but research demonstrating its reliability and sensitivity is limited. Goal attainment scaling appears to be a sound measure for use in physical rehabilitation settings with working age and older people. Further work needs to be carried out with goal setting to establish its reliability and sensitivity as a measurement tool.

  13. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
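
    The cross-validation design described here, five consecutive 6-year blocks spanning 1979-2008, can be written down compactly. The sketch below only generates the train/test year splits and is an illustration, not VALUE project code.

```python
# Five consecutive 6-year folds covering 1979-2008 (perfect-predictor experiment design).
years = list(range(1979, 2009))
folds = [years[i:i + 6] for i in range(0, len(years), 6)]

for k, test_years in enumerate(folds, start=1):
    train_years = [y for y in years if y not in test_years]
    print(f"fold {k}: test {test_years[0]}-{test_years[-1]}, "
          f"train on the remaining {len(train_years)} years")
```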

  14. Validation of a pediatric early warning system for hospitalized pediatric oncology patients in a resource-limited setting.

    Science.gov (United States)

    Agulnik, Asya; Méndez Aceituno, Alejandra; Mora Robles, Lupe Nataly; Forbes, Peter W; Soberanis Vasquez, Dora Judith; Mack, Ricardo; Antillon-Klussmann, Federico; Kleinman, Monica; Rodriguez-Galindo, Carlos

    2017-12-15

    Pediatric oncology patients are at high risk of clinical deterioration, particularly in hospitals with resource limitations. The performance of pediatric early warning systems (PEWS) to identify deterioration has not been assessed in these settings. This study evaluates the validity of PEWS to predict the need for unplanned transfer to the pediatric intensive care unit (PICU) among pediatric oncology patients in a resource-limited hospital. A retrospective case-control study comparing the highest documented and corrected PEWS score before unplanned PICU transfer in pediatric oncology patients (129 cases) with matched controls (those not requiring PICU care) was performed. Documented and corrected PEWS scores were found to be highly correlated with the need for PICU transfer (area under the receiver operating characteristic, 0.940 and 0.930, respectively). PEWS scores increased 24 hours prior to unplanned transfer (P = .0006). In cases, organ dysfunction at the time of PICU admission correlated with maximum PEWS score (correlation coefficient, 0.26; P = .003), patients with PEWS results ≥4 had a higher Pediatric Index of Mortality 2 (PIM2) (P = .028), and PEWS results were higher in patients with septic shock (P = .01). The PICU mortality rate was 17.1%; nonsurvivors had higher mean PEWS scores before PICU transfer (P = .0009). A single-point increase in the PEWS score increased the odds of mechanical ventilation or vasopressors within the first 24 hours and during PICU admission (odds ratio 1.3-1.4). PEWS accurately predicted the need for unplanned PICU transfer in pediatric oncology patients in this resource-limited setting, with abnormal results beginning 24 hours before PICU admission and higher scores predicting the severity of illness at the time of PICU admission, need for PICU interventions, and mortality. These results demonstrate that PEWS aid in the identification of clinical deterioration in this high-risk population, regardless of a hospital
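
    The headline statistic in this record, the area under the ROC curve for discriminating transferred cases from matched controls using the maximum PEWS score, is simple to compute with standard tools; the scores below are invented for illustration.

```python
from sklearn.metrics import roc_auc_score

# Invented maximum PEWS scores: 1 = unplanned PICU transfer (case), 0 = matched control.
outcome  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
max_pews = [7, 5, 6, 4, 8, 2, 1, 3, 2, 4]

print(f"AUC = {roc_auc_score(outcome, max_pews):.2f}")
```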

  15. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    International Nuclear Information System (INIS)

    Soares, T. A.; Daura, X.; Oostenbrink, C.; Smith, L. J.; Gunsteren, W. F. van

    2004-01-01

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J(HNα) and ³J(αβ) coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J(HNα) coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J(αβ) coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger set of experimental data agrees as well with each of the simulations. In other words, the experimental data converged towards the theoretical result.

  16. Through the eyes of a child: preschoolers' identification of emotional expressions from the child affective facial expression (CAFE) set.

    Science.gov (United States)

    LoBue, Vanessa; Baker, Lewis; Thrasher, Cat

    2017-08-10

    Researchers have been interested in the perception of human emotional expressions for decades. Importantly, most empirical work in this domain has relied on controlled stimulus sets of adults posing for various emotional expressions. Recently, the Child Affective Facial Expression (CAFE) set was introduced to the scientific community, featuring a large validated set of photographs of preschool aged children posing for seven different emotional expressions. Although the CAFE set was extensively validated using adult participants, the set was designed for use with children. It is therefore necessary to verify that adult validation applies to child performance. In the current study, we examined 3- to 4-year-olds' identification of a subset of children's faces in the CAFE set, and compared it to adult ratings cited in previous research. Our results demonstrate an exceptionally strong relationship between adult ratings of the CAFE photos and children's ratings, suggesting that the adult validation of the set can be applied to preschool-aged participants. The results are discussed in terms of methodological implications for the use of the CAFE set with children, and theoretical implications for using the set to study the development of emotion perception in early childhood.

  17. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    Science.gov (United States)

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  18. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    Science.gov (United States)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances have been discussed and suggestions on how to remediate the deficiencies have been made. Empirical evidence for the validity and reliability of the instrument has been presented from both the classical and the modern validity criteria points of view. Limitations of the study have been identified. Finally, implications of the study and directions for further research have been discussed.

  19. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    Science.gov (United States)

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98)=4.88). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.

  20. [The Danish debate on priority setting in medicine - characteristics and results].

    Science.gov (United States)

    Pornak, S; Meyer, T; Raspe, H

    2011-10-01

    Priority setting in medicine helps to achieve a fair and transparent distribution of health-care resources. The German discussion about priority setting is still in its infancy and may benefit from other countries' experiences. This paper aims to analyse the Danish priority setting debate in order to stimulate the German discussion. The methods used are a literature analysis and a document analysis as well as expert interviews. The Danish debate about priority setting in medicine began in the 1970s, when a government committee was constituted to evaluate health-care priorities at the national level. In the 1980s a broader debate arose in politics, ethics, medicine and health economics. The discussions reached a climax in the 1990s, when many local activities - always involving the public - were initiated. Some Danish counties tried to implement priority setting in the daily routine of health care. The Council of Ethics was a major player in the debate of the 1990s and published a detailed statement on priority setting in 1996. With the new century the debate about priority setting seemed to have come to an end, but in 2006 the Technology Council and the Danish Regions resumed the discussion. In 2009 the Medical Association called for a broad debate in order to achieve equity among all patients. The long-lasting Danish debate on priority setting has had only very few practical consequences for health care. The main problems seem to have been the lack of effort to bundle the various local initiatives on a national level and the lack of powerful players to put results of the discussion into practice. Nevertheless, today the attitude towards priority setting is predominantly positive and even politicians talk freely about it. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Concordance and predictive value of two adverse drug event data sets.

    Science.gov (United States)

    Cami, Aurel; Reis, Ben Y

    2014-08-22

    Accurate prediction of adverse drug events (ADEs) is an important means of controlling and reducing drug-related morbidity and mortality. Since no single "gold standard" ADE data set exists, a range of different drug safety data sets are currently used for developing ADE prediction models. There is a critical need to assess the degree of concordance between these various ADE data sets and to validate ADE prediction models against multiple reference standards. We systematically evaluated the concordance of two widely used ADE data sets - Lexi-comp from 2010 and SIDER from 2012. The strength of the association between ADE (drug) counts in Lexi-comp and SIDER was assessed using Spearman rank correlation, while the differences between the two data sets were characterized in terms of drug categories, ADE categories and ADE frequencies. We also performed a comparative validation of the Predictive Pharmacosafety Networks (PPN) model using both ADE data sets. The predictive power of PPN using each of the two validation sets was assessed using the area under Receiver Operating Characteristic curve (AUROC). The correlations between the counts of ADEs and drugs in the two data sets were 0.84 (95% CI: 0.82-0.86) and 0.92 (95% CI: 0.91-0.93), respectively. Relative to an earlier snapshot of Lexi-comp from 2005, Lexi-comp 2010 and SIDER 2012 introduced a mean of 1,973 and 4,810 new drug-ADE associations per year, respectively. The difference between these two data sets was most pronounced for Nervous System and Anti-infective drugs, Gastrointestinal and Nervous System ADEs, and postmarketing ADEs. A minor difference of 1.1% was found in the AUROC of PPN when SIDER 2012 was used for validation instead of Lexi-comp 2010. In conclusion, the ADE and drug counts in Lexi-comp and SIDER data sets were highly correlated and the choice of validation set did not greatly affect the overall prediction performance of PPN. Our results also suggest that it is important to be aware of the
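
    The concordance analysis described here, a rank correlation between per-drug ADE counts in two data sets, can be reproduced with standard tools; the counts below are invented placeholders rather than Lexi-comp or SIDER data.

```python
from scipy.stats import spearmanr

# Invented per-drug ADE counts from two hypothetical drug safety data sets.
dataset_a_counts = [12, 45, 3, 28, 60, 17, 9, 33]
dataset_b_counts = [10, 50, 5, 25, 72, 15, 11, 30]

rho, p_value = spearmanr(dataset_a_counts, dataset_b_counts)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```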

  2. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions....... The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total...... of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included...

  3. Results of LLNL investigation of NYCT data sets

    International Nuclear Information System (INIS)

    Sale, K; Harrison, M; Guo, M; Groza, M

    2007-01-01

    Upon examination we have concluded that none of the alarms indicate the presence of a real threat. A brief history and results from our examination of the NYCT ASP occupancy data sets dated from 2007-05-14 19:11:07 to 2007-06-20 15:46:15 are presented in this letter report. When the ASP data collection campaign at NYCT was completed, rather than being shut down, the Canberra ASP annunciator box was unplugged leaving the data acquisition system running. By the time it was discovered that the ASP was still acquiring data about 15,000 occupancies had been recorded. Among these were about 500 alarms (classified by the ASP analysis system as either Threat Alarms or Suspect Alarms). At your request, these alarms have been investigated. Our conclusion is that none of the alarm data sets indicate the presence of a real threat (within statistics). The data sets (ICD1 and ICD2 files with concurrent JPEG pictures) were delivered to LLNL on a removable hard drive labeled FOUO. The contents of the data disk amounted to 53.39 GB of data requiring over two days for the standard LLNL virus checking software to scan before work could really get started. Our first step was to walk through the directory structure of the disk and create a database of occupancies. For each occupancy, the database was populated with the occupancy date and time, occupancy number, file path to the ICD1 data and the alarm ('No Alarm', 'Suspect Alarm' or 'Threat Alarm') from the ICD2 file along with some other incidental data. In an attempt to get a global understanding of what was going on, we investigated the occupancy information. The occupancy date/time and alarm type were binned into one-hour counts. These data are shown in Figures 1 and 2

  4. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review.

    Science.gov (United States)

    Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen

    2018-02-15

    This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review study design was used, searching six databases, including one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and the data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In this systematic review, limited evidence of the reliability and validity of 14 different survey instruments to assess the fatigue and/or sleepiness status of EMS personnel and related shift worker groups was identified.

  5. A set of pathological tests to validate new finite elements

    Indian Academy of Sciences (India)

    M. Senthilkumar (Newgen Imaging) 1461 1996 Oct 15 13:05:22

    The finite element method entails several approximations. Hence it ... researchers have designed several pathological tests to validate any new finite element. The ... Three-dimensional thick shell elements using a hybrid/mixed formulation.

  6. The urban boundary-layer field campaign in Marseille (UBL/CLU-ESCOMPTE): set-up and first results

    Science.gov (United States)

    Mestayer, P.G.; Durand, P.; Augustin, P.; Bastin, S.; Bonnefond, J.-M.; Benech, B.; Campistron, B.; Coppalle, A.; Delbarre, H.; Dousset, B.; Drobinski, P.; Druilhet, A.; Frejafon, E.; Grimmond, C.S.B.; Groleau, D.; Irvine, M.; Kergomard, C.; Kermadi, S.; Lagouarde, J.-P.; Lemonsu, A.; Lohou, F.; Long, N.; Masson, V.; Moppert, C.; Noilhan, J.; Offerle, B.; Oke, T.R.; Pigeon, G.; Puygrenier, V.; Roberts, S.; Rosant, J.-M.; Sanid, F.; Salmond, J.; Talbaut, M.; Voogt, J.

    The UBL/CLU (urban boundary layer/couche limite urbaine) observation and modelling campaign is a side-project of the regional photochemistry campaign ESCOMPTE. UBL/CLU focuses on the dynamics and thermodynamics of the urban boundary layer of Marseille, on the Mediterranean coast of France. The objective of UBL/CLU is to document the four-dimensional structure of the urban boundary layer and its relation to the heat and moisture exchanges between the urban canopy and the atmosphere during periods of low wind conditions, from June 4 to July 16, 2001. The project took advantage of the comprehensive observational set-up of the ESCOMPTE campaign over the Berre-Marseille area, especially the ground-based remote sensing, airborne measurements, and the intensive documentation of the regional meteorology. Additional instrumentation was installed as part of UBL/CLU. Analysis objectives focus on (i) validation of several energy balance computational schemes such as LUMPS, TEB and SM2-U, (ii) ground truth and urban canopy signatures suitable for the estimation of urban albedos and aerodynamic surface temperatures from satellite data, (iii) high resolution mapping of urban land cover, land-use and aerodynamic parameters used in UBL models, and (iv) testing the ability of high resolution atmospheric models to simulate the structure of the UBL during land and sea breezes, and the related transport and diffusion of pollutants over different districts of the city. This paper presents initial results from such analyses and details of the overall experimental set-up.

  7. ACE-FTS version 3.0 data set: validation and data processing update

    Directory of Open Access Journals (Sweden)

    Claire Waymark

    2014-01-01

    Full Text Available On 12 August 2003, the Canadian-led Atmospheric Chemistry Experiment (ACE) was launched into a 74° inclination orbit at 650 km with the mission objective to measure atmospheric composition using infrared and UV-visible spectroscopy (Bernath et al., 2005). The ACE mission consists of two main instruments, ACE-FTS and MAESTRO (McElroy et al., 2007), which are being used to investigate the chemistry and dynamics of the Earth’s atmosphere. Here, we focus on the high resolution (0.02 cm-1) infrared Fourier Transform Spectrometer, ACE-FTS, that measures in the 750-4400 cm-1 (2.2 to 13.3 µm) spectral region. This instrument has been making regular solar occultation observations for more than nine years. The current ACE-FTS data version (version 3.0) provides profiles of temperature and volume mixing ratios (VMRs) of more than 30 atmospheric trace gas species, as well as 20 subsidiary isotopologues of the most abundant trace atmospheric constituents over a latitude range of ~85°N to ~85°S. This letter describes the current data version and recent validation comparisons and provides a description of our planned updates for the ACE-FTS data set. [...]

  8. Developing an assessment of fire-setting to guide treatment in secure settings: the St Andrew's Fire and Arson Risk Instrument (SAFARI).

    Science.gov (United States)

    Long, Clive G; Banyard, Ellen; Fulton, Barbara; Hollin, Clive R

    2014-09-01

    Arson and fire-setting are highly prevalent among patients in secure psychiatric settings, but there is an absence of valid and reliable assessment instruments and no evidence of a significant approach to intervention. The aim was to develop a semi-structured interview assessment specifically for fire-setting to augment structured assessments of risk and need. The extant literature was used to frame interview questions relating to the antecedents, behaviour and consequences necessary to formulate a functional analysis. Questions also covered readiness to change, fire-setting self-efficacy, the probability of future fire-setting, barriers to change, and understanding of fire-setting behaviour. The assessment concludes with indications for assessment and a treatment action plan. The inventory was piloted with a sample of women in secure care and was assessed for comprehensibility, reliability and validity. Staff rated the St Andrew's Fire and Arson Risk Instrument (SAFARI) as acceptable to patients and easy to administer. SAFARI was found to be comprehensible by over 95% of the general population, to have good acceptance, high internal reliability, substantial test-retest reliability and validity. SAFARI helps to provide a clear explanation of fire-setting in terms of the complex interplay of antecedents and consequences and facilitates the design of an individually tailored treatment programme in sympathy with a cognitive-behavioural approach. Further studies are needed to verify the reliability and validity of SAFARI with male populations and across settings.

  9. Gene set analysis of purine and pyrimidine antimetabolites cancer therapies.

    Science.gov (United States)

    Fridley, Brooke L; Batzler, Anthony; Li, Liang; Li, Fang; Matimba, Alice; Jenkins, Gregory D; Ji, Yuan; Wang, Liewei; Weinshilboum, Richard M

    2011-11-01

    Responses to therapies, either with regard to toxicities or efficacy, are expected to involve complex relationships of gene products within the same molecular pathway or functional gene set. Therefore, pathways or gene sets, as opposed to single genes, may better reflect the true underlying biology and may be more appropriate units for analysis of pharmacogenomic studies. Application of such methods to pharmacogenomic studies may enable the detection of more subtle effects of multiple genes in the same pathway that may be missed by assessing each gene individually. A gene set analysis of 3821 gene sets is presented assessing the association between basal messenger RNA expression and drug cytotoxicity using ethnically defined human lymphoblastoid cell lines for two classes of drugs: pyrimidines [gemcitabine (dFdC) and arabinoside] and purines [6-thioguanine and 6-mercaptopurine]. The gene set nucleoside-diphosphatase activity was found to be significantly associated with both dFdC and arabinoside, whereas gene set γ-aminobutyric acid catabolic process was associated with dFdC and 6-thioguanine. These gene sets were significantly associated with the phenotype even after adjusting for multiple testing. In addition, five associated gene sets were found in common between the pyrimidines and two gene sets for the purines (3',5'-cyclic-AMP phosphodiesterase activity and γ-aminobutyric acid catabolic process) with a P value of less than 0.0001. Functional validation was attempted with four genes each in gene sets for thiopurine and pyrimidine antimetabolites. All four genes selected from the pyrimidine gene sets (PSME3, CANT1, ENTPD6, ADRM1) were validated, but only one (PDE4D) was validated for the thiopurine gene sets. In summary, results from the gene set analysis of pyrimidine and purine therapies, used often in the treatment of various cancers, provide novel insight into the relationship between genomic variation and drug response.

  10. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction-, classification-, time series forecasting-, modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allow insight......) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one data set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  11. 42 CFR 476.85 - Conclusive effect of QIO initial denial determinations and changes as a result of DRG validations.

    Science.gov (United States)

    2010-10-01

    ... determinations and changes as a result of DRG validations. 476.85 Section 476.85 Public Health CENTERS FOR... denial determinations and changes as a result of DRG validations. A QIO initial denial determination or change as a result of DRG validation is final and binding unless, in accordance with the procedures in...

  12. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    Science.gov (United States)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models to assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, in practice, due to computational constraints it is usually not feasible to consider the entire ensemble of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e., the representation of the historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989-2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data sets to determine the related uncertainties and
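
    For readers who want to reproduce this kind of comparison, the short Python sketch below computes two of the indicators mentioned above, the maximum dry-spell length and the 95th-percentile wet-day precipitation, from a single daily precipitation series. The series, the 1 mm wet-day threshold and the 20-year record length are invented for illustration only; a real evaluation would compute such indicators per grid cell for every RCM and every reference data set.

      # Illustrative sketch only: indicators from one synthetic daily precipitation series.
      import numpy as np

      rng = np.random.default_rng(7)
      precip = rng.gamma(shape=0.4, scale=6.0, size=7305)  # ~20 years of daily totals in mm (invented)
      wet = precip >= 1.0                                   # assumed wet-day threshold of 1 mm

      def longest_spell(mask):
          # length of the longest consecutive run of True values
          best = run = 0
          for m in mask:
              run = run + 1 if m else 0
              best = max(best, run)
          return best

      max_dry_spell = longest_spell(~wet)                   # dry-spell indicator
      p95_wet = np.percentile(precip[wet], 95)              # extreme-precipitation indicator
      print(f"max dry spell: {max_dry_spell} days, 95th pct wet-day precip: {p95_wet:.1f} mm")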

  13. Validation of Nurse Practitioner Primary Care Organizational Climate Questionnaire: A New Tool to Study Nurse Practitioner Practice Settings.

    Science.gov (United States)

    Poghosyan, Lusine; Chaplin, William F; Shaffer, Jonathan A

    2017-04-01

    Favorable organizational climate in primary care settings is necessary to expand the nurse practitioner (NP) workforce and promote their practice. Only one NP-specific tool, the Nurse Practitioner Primary Care Organizational Climate Questionnaire (NP-PCOCQ), measures NP organizational climate. We confirmed NP-PCOCQ's factor structure and established its predictive validity. A cross-sectional survey design was used to collect data from 314 NPs in Massachusetts in 2012. Confirmatory factor analysis and regression models were used. The 4-factor model characterized NP-PCOCQ. The NP-PCOCQ score predicted job satisfaction (beta = .36; p organizational climate in their clinics. Further testing of NP-PCOCQ is needed.

  14. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    Energy Technology Data Exchange (ETDEWEB)

    Soares, T. A. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Daura, X. [Universitat Autonoma de Barcelona, InstitucioCatalana de Recerca i Estudis Avancats and Institut de Biotecnologia i Biomedicina (Spain); Oostenbrink, C. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Smith, L. J. [University of Oxford, Oxford Centre for Molecular Sciences, Central Chemistry Laboratory (United Kingdom); Gunsteren, W. F. van [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland)], E-mail: wfvgn@igc.phys.chem.ethz.ch

    2004-12-15

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J(HNα) and ³J(αβ) coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J(HNα) coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J(αβ) coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger, set of experimental data agrees as well with each of the

  15. New set of convective heat transfer coefficients established for pools and validated against CLARA experiments for application to corium pools

    Energy Technology Data Exchange (ETDEWEB)

    Michel, B., E-mail: benedicte.michel@irsn.fr

    2015-05-15

    Highlights: • A new set of 2D convective heat transfer correlations is proposed. • It takes into account different horizontal and lateral superficial velocities. • It is based on previously established correlations. • It is validated against recent CLARA experiments. • It has to be implemented in a 0D MCCI (molten core concrete interaction) code. - Abstract: During a hypothetical Pressurized Water Reactor (PWR) or Boiling Water Reactor (BWR) severe accident with core meltdown and vessel failure, corium would fall directly on the concrete reactor pit basemat if no water is present. The high temperature of the corium pool, maintained by the residual power, would lead to the erosion of the concrete walls and basemat of this reactor pit. The thermal decomposition of concrete will lead to the release of a significant amount of gases that will modify the corium pool thermal hydraulics. In particular, it will affect heat transfers between the corium pool and the concrete, which determine the reactor pit ablation kinetics. A new set of convective heat transfer coefficients in a pool with different lateral and horizontal superficial gas velocities is modeled and validated against the recent CLARA experimental program. 155 tests of this program, covering two size configurations and a wide range of investigated viscosities, have been used to validate the model. Then, a method to define different lateral and horizontal superficial gas velocities in a 0D code is proposed, together with a discussion of the possible viscosity in the reactor case when the pool is semi-solid. This model is going to be implemented in the 0D ASTEC/MEDICIS code in order to determine the impact of convective heat transfer on concrete ablation by corium.

  16. Determination of polychlorinated dibenzodioxins and polychlorinated dibenzofurans (PCDDs/PCDFs) in food and feed using a bioassay. Result of a validation study

    Energy Technology Data Exchange (ETDEWEB)

    Gizzi, G.; Holst, C. von; Anklam, E. [Commission of the European Communities, Geel (Belgium). Joint Research Centre, Inst. for Reference Materials and Measurement, Food Safety and Quality Unit; Hoogenboom, R. [RIKILT-Intitute of Food Safety, Wageningen (Netherlands); Rose, M. [Defra Central Science Laboratory, Sand Hutton, York (United Kingdom)

    2004-09-15

    It is estimated that more than 90% of dioxins consumed by humans come from foods derived from animals. The European Commission, through a Council Regulation (No 2375/2001) and a Directive (2001/102/EC), both revised by the Commission Recommendation (2002/201/EC), has set maximum levels for dioxins in food and feedstuffs. To implement the regulation, dioxin-monitoring programs of food and feedstuffs will be undertaken by the Member States, requiring the analysis of large amounts of samples. Food and feed companies will have to control their products before putting them on the market. The monitoring for the presence of dioxins in food and feeds needs fast and cheap screening methods in order to select samples with potentially high levels of dioxins to be then analysed by a confirmatory method like HRGC/HRMS. Bioassays like the DR CALUX® assay have been claimed to provide a suitable alternative for the screening of large numbers of samples, reducing costs and the required time of analysis. These methods have to comply with the specific characteristics considered in two Commission Directives (2002/69/EC; 2002/70/EC), establishing the requirements for the determination of dioxin and dioxin-like PCBs for the official control of food and feedstuffs. The European Commission's Joint Research Centre is pursuing validation of alternative techniques in food and feed materials. In order to evaluate the applicability of the DR CALUX® technique as a screening method in compliance with the Commission Directives, a validation study was organised in collaboration with CSL and RIKILT. The aim of validating an analytical method is first to determine its performance characteristics (e.g. variability, bias, rate of false positive and false negative results), and secondly to evaluate if the method is fit for the purpose. Two approaches are commonly used: an in-house validation is preferentially performed first in order to establish whether the method is

  17. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and maintaining mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
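
    The following Python fragment is a schematic 1D illustration of the general idea, not the authors' analytically derived source term: after each advection step the level set function is shifted by a globally determined correction so that the enclosed volume returns to its initial value. The grid, the constant velocity and the bisection-based correction are assumptions made only for this sketch.

      # Schematic stand-in for a mass-correcting source term in level set advection.
      import numpy as np

      nx, L = 400, 1.0
      dx = L / nx
      x = np.linspace(0.0, L, nx, endpoint=False)
      u = 1.0                          # constant advection velocity (assumed)
      dt = 0.5 * dx / abs(u)           # CFL-limited time step

      phi = np.abs(x - 0.3) - 0.1      # signed distance to the segment [0.2, 0.4]
      target = np.sum(phi < 0.0) * dx  # enclosed "volume" to be conserved

      def advect_upwind(phi, u, dt, dx):
          # first-order upwind advection with periodic boundaries
          if u >= 0.0:
              dphi = (phi - np.roll(phi, 1)) / dx
          else:
              dphi = (np.roll(phi, -1) - phi) / dx
          return phi - dt * u * dphi

      def correct_volume(phi, target, dx):
          # bisect for a constant shift c so that the volume where phi < c equals
          # the target; subtracting c plays the role of the source/sink term here
          lo, hi = -0.5, 0.5
          for _ in range(60):
              c = 0.5 * (lo + hi)
              if np.sum(phi < c) * dx < target:
                  lo = c
              else:
                  hi = c
          return phi - 0.5 * (lo + hi)

      for step in range(200):
          phi = advect_upwind(phi, u, dt, dx)    # numerical diffusion slowly distorts the volume
          phi = correct_volume(phi, target, dx)  # compensate after every step

      print("volume error:", np.sum(phi < 0.0) * dx - target)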

  18. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia : development and validation

    NARCIS (Netherlands)

    Spoorenberg, Sophie L. W.; Reijneveld, Sijmen A.; Middel, Berrie; Uittenbroek, Ronald J.; Kremer, Hubertus P. H.; Wynia, Klaske

    2015-01-01

    Purpose: The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. Methods: A Delphi study was performed in order to reach consensus (70% agreement) on second-level categories from the

  19. HANDBOOK: GUIDANCE ON SETTING PERMIT CONDITIONS AND REPORTING TRIAL BURN RESULTS

    Science.gov (United States)

    This Handbook provides guidance for establishing operational conditions for incinerators. The document provides a means for state and local agencies to achieve a level of consistency in setting permit conditions that will result in establishment of more uniform permit conditions n...

  20. 42 CFR 476.94 - Notice of QIO initial denial determination and changes as a result of a DRG validation.

    Science.gov (United States)

    2010-10-01

    ... changes as a result of a DRG validation. 476.94 Section 476.94 Public Health CENTERS FOR MEDICARE... changes as a result of a DRG validation. (a) Notice of initial denial determination—(1) Parties to be... retrospective review, (excluding DRG validation and post procedure review), within 3 working days of the initial...

  1. DTU PMU Laboratory Development - Testing and Validation

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into the IEEE...... standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested in the authors' previous efforts, where the response can be expected to follow known patterns and provide confirmation about the test system to confirm the design and settings....... In a nutshell, having two PMUs that observe the same signals provides validation of the operation and flags questionable results with more certainty. Moreover, the performance and accuracy of the DTU-PMU is tested, acquiring good and precise results when compared with a commercial phasor measurement device, PMU-1....

  2. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    Science.gov (United States)

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.
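
    For orientation, the internal-consistency figure quoted above (Cronbach's alpha) can be computed from an item-response matrix as in the minimal sketch below; the responses and the 12-item scale length are simulated and are not the actual FBS data.

      # Cronbach's alpha from a simulated respondents-by-items matrix (illustration only).
      import numpy as np

      rng = np.random.default_rng(11)
      true_score = rng.normal(size=(303, 1))
      items = (true_score + rng.normal(scale=1.5, size=(303, 12)) > 0).astype(float)

      def cronbach_alpha(x):
          # x: respondents x items matrix of item scores
          k = x.shape[1]
          item_vars = x.var(axis=0, ddof=1).sum()
          total_var = x.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      print(f"alpha = {cronbach_alpha(items):.2f}")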

  3. Analytic webs support the synthesis of ecological data sets.

    Science.gov (United States)

    Ellison, Aaron M; Osterweil, Leon J; Clarke, Lori; Hadley, Julian L; Wise, Alexander; Boose, Emery; Foster, David R; Hanson, Allen; Jensen, David; Kuzeja, Paul; Riseman, Edward; Schultz, Howard

    2006-06-01

    A wide variety of data sets produced by individual investigators are now synthesized to address ecological questions that span a range of spatial and temporal scales. It is important to facilitate such syntheses so that "consumers" of data sets can be confident that both input data sets and synthetic products are reliable. Necessary documentation to ensure the reliability and validation of data sets includes both familiar descriptive metadata and formal documentation of the scientific processes used (i.e., process metadata) to produce usable data sets from collections of raw data. Such documentation is complex and difficult to construct, so it is important to help "producers" create reliable data sets and to facilitate their creation of required metadata. We describe a formal representation, an "analytic web," that aids both producers and consumers of data sets by providing complete and precise definitions of scientific processes used to process raw and derived data sets. The formalisms used to define analytic webs are adaptations of those used in software engineering, and they provide a novel and effective support system for both the synthesis and the validation of ecological data sets. We illustrate the utility of an analytic web as an aid to producing synthetic data sets through a worked example: the synthesis of long-term measurements of whole-ecosystem carbon exchange. Analytic webs are also useful validation aids for consumers because they support the concurrent construction of a complete, Internet-accessible audit trail of the analytic processes used in the synthesis of the data sets. Finally we describe our early efforts to evaluate these ideas through the use of a prototype software tool, SciWalker. We indicate how this tool has been used to create analytic webs tailored to specific data-set synthesis and validation activities, and suggest extensions to it that will support additional forms of validation. The process metadata created by SciWalker is

  4. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    Science.gov (United States)

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient, 12.9% for inpatient, and 39.4% for disability exams. Failure rates were the highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and frequency and number of SPVT use. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. Frequency of the use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.

  5. Validity of Two WPPSI Short Forms in Outpatient Clinic Settings.

    Science.gov (United States)

    Haynes, Jack P.; Atkinson, David

    1983-01-01

    Investigated the validity of subtest short forms for the Wechsler Preschool and Primary Scale of Intelligence in an outpatient population of 116 children. Data showed that the short forms underestimated actual level of intelligence and supported use of a short form only as a brief screening device. (LLL)

  6. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility characterized by the presence of a low voltage network with a 15 kW-190 kWh Vanadium Redox Flow battery system and a 11 kW wind turbine. The generation set is connected to the local...... network and is fully controllable by the SCADA system. The control strategies, implemented on a local pc interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present at the point of common coupling...... the desired power or energy profiles....

  7. Concurrent Validation of Experimental Army Enlisted Personnel Selection and Classification Measures

    National Research Council Canada - National Science Library

    Knapp, Deirdre J; Tremble, Trueman R

    2007-01-01

    .... This report documents the method and results of the criterion-related validation. The predictor set includes measures of cognitive ability, temperament, psychomotor skills, values, expectations...

  8. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  9. Student-Directed Video Validation of Psychomotor Skills Performance: A Strategy to Facilitate Deliberate Practice, Peer Review, and Team Skill Sets.

    Science.gov (United States)

    DeBourgh, Gregory A; Prion, Susan K

    2017-03-22

    Background: Essential nursing skills for safe practice are not limited to technical skills, but include abilities for determining salience among clinical data within dynamic practice environments, demonstrating clinical judgment and reasoning, problem-solving abilities, and teamwork competence. Effective instructional methods are needed to prepare new nurses for entry-to-practice in contemporary healthcare settings. Method: This mixed-methods descriptive study explored self-reported perceptions of a process to self-record videos for psychomotor skill performance evaluation in a convenience sample of 102 pre-licensure students. Results: Students reported gains in confidence and skill acquisition using team skills to record individual videos of skill performance, and described the importance of teamwork, peer support, and deliberate practice. Conclusion: Although time-consuming, the production of student-directed video validations of psychomotor skill performance is an authentic task with meaningful accountabilities that is well-received by students as an effective, satisfying learner experience to increase confidence and competence in performing psychomotor skills.

  10. Regulatory perspectives on human factors validation

    International Nuclear Information System (INIS)

    Harrison, F.; Staples, L.

    2001-01-01

    Validation is an important avenue for controlling the genesis of human error, and thus managing loss, in a human-machine system. Since there are many ways in which error may intrude upon system operation, it is necessary to consider the performance-shaping factors that could introduce error and compromise system effectiveness. Validation works to this end by examining, through objective testing and measurement, the newly developed system, procedure or staffing level, in order to identify and eliminate those factors which may negatively influence human performance. It is essential that validation be done in a high-fidelity setting, in an objective and systematic manner, using appropriate measures, if meaningful results are to be obtained. In addition, inclusion of validation work in any design process can be seen as contributing to a good safety culture, since such activity allows licensees to eliminate elements which may negatively impact on human behaviour. (author)

  11. Solution Validation for a Double Façade Prototype

    Directory of Open Access Journals (Sweden)

    Pau Fonseca i Casas

    2017-12-01

    Full Text Available A Solution Validation involves comparing the data obtained from the system implemented following the model recommendations with the model results. This paper presents a Solution Validation performed with the aim of certifying that a set of computer-optimized designs for a double façade is consistent with reality. To validate the results obtained through simulation models, based on dynamic thermal calculation and using Computational Fluid Dynamics techniques, a comparison with the data obtained by monitoring a real implemented prototype has been carried out. The new validated model can be used to describe the system's thermal behavior in different climatic zones without having to build a new prototype. The good performance of the proposed double façade solution is confirmed, since the validation shows a considerable energy saving while preserving and even improving interior comfort. This work describes all the processes in the Solution Validation, depicting some of the problems we faced, and represents an example of this kind of validation, which is often not considered in a simulation project.

  12. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
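
    A minimal sketch of the trend test outlined above, with made-up yearly variance estimates: a weighted least-squares slope of the within-year estimates against birth year, and an empirical 95% confidence interval for that slope obtained by bootstrapping. The weights, the bootstrap and the data are illustrative stand-ins for the authors' full procedure.

      # Illustrative trend test on synthetic yearly genetic-variance estimates.
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1995, 2015)
      var_est = 1.0 + rng.normal(0.0, 0.05, years.size)   # within-year variance estimates (invented)
      weights = np.full(years.size, 1.0)                   # e.g. inverse prediction-error variances

      def weighted_slope(x, y, w):
          # slope of a weighted least-squares line y = a + b*x
          xm = np.average(x, weights=w)
          ym = np.average(y, weights=w)
          return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

      b = weighted_slope(years, var_est, weights)

      boot = []                                            # empirical 95% CI via bootstrap over years
      for _ in range(2000):
          idx = rng.integers(0, years.size, years.size)
          boot.append(weighted_slope(years[idx], var_est[idx], weights[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])

      print(f"slope = {b:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
      print("trend differs from zero" if (lo > 0 or hi < 0) else "no significant trend")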

  13. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  14. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  15. Developing mathematics learning set for special-needs junior high school student oriented to learning interest and achievement

    Directory of Open Access Journals (Sweden)

    Ai Sadidah

    2016-11-01

    Full Text Available This study aims to produce a mathematics learning set for special-needs students (with mathematical learning disability or mathematically gifted) of Junior High School Grade VIII, Second Semester, oriented to learning interest and achievement, which is valid, practical, and effective. This study was a research and development study using the Four-D development model consisting of four stages: (1) define, (2) design, (3) develop, and (4) disseminate. The quality of the learning set was judged on the following three criteria: (1) validity, (2) practicality, and (3) effectiveness. The data analysis technique used in this study is descriptive quantitative analysis. The research produced a learning set consisting of lesson plans and student worksheets. The results show that: (1) the learning set fulfills the validity criterion based on experts' appraisal; (2) the learning set fulfills the practicality criterion based on the teacher's and students' questionnaires and observation of learning implementation; (3) the learning set fulfills the effectiveness criterion based on learning interest and achievement.

  16. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the dataset used for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create an RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
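
    The validation step described above reduces to a paired comparison of chronological age (CA) and dental age (DA); a hedged sketch with invented ages is given below, using SciPy's paired t-test.

      # Paired t-test of chronological vs. dental age (ages invented for demonstration).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      ca = rng.uniform(3.0, 21.0, size=484)           # chronological ages in years
      da = ca + rng.normal(0.05, 0.8, size=ca.size)   # dental age estimates

      diff = ca - da
      t, p = stats.ttest_rel(ca, da)
      print(f"mean CA-DA = {diff.mean():.3f} years ({diff.mean() * 52:.1f} weeks)")
      print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")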

  17. EXAMINATION OF A PROPOSED VALIDATION DATA SET USING CFD CALCULATIONS

    International Nuclear Information System (INIS)

    Johnson, Richard W.

    2009-01-01

    The United States Department of Energy is promoting the resurgence of nuclear power in the U.S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the next generation nuclear plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 °C to perhaps 1000 °C. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U.S., it is being considered for future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present article presents new results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.

  18. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at-hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
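
    The kind of significance test reported above can be illustrated with a two-proportion z-test on the overall accuracies obtained from two differently sampled validation sets; the correct/total counts below are hypothetical and the pooled-proportion test is a generic stand-in for the exact test used in the paper.

      # Two-proportion z-test on overall accuracies (hypothetical counts).
      import math

      def accuracy_z_test(correct1, n1, correct2, n2):
          # z statistic for the difference between two classification accuracies
          p1, p2 = correct1 / n1, correct2 / n2
          p = (correct1 + correct2) / (n1 + n2)       # pooled proportion
          se = math.sqrt(p * (1.0 - p) * (1.0 / n1 + 1.0 / n2))
          return (p1 - p2) / se

      # e.g. 920/1000 correct with a randomly sampled validation set
      # vs. 880/1000 with a stratified (class-balanced) one
      z = accuracy_z_test(920, 1000, 880, 1000)
      print(f"z = {z:.2f} ({'significant' if abs(z) > 1.96 else 'not significant'} at the 5% level)")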

  19. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
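
    A toy Python illustration of the two "very simple interpretations" contrasted above: calibration adjusts model parameters against one experimental data set, and validation then quantifies agreement on a separate data set. The exponential model, the synthetic data and the RMSE metric are assumptions made only for this sketch.

      # Toy calibration/validation split with a made-up exponential model.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(x, a, b):
          return a * np.exp(-b * x)              # stand-in for a computational model

      rng = np.random.default_rng(3)
      x_cal = np.linspace(0.0, 5.0, 30)          # calibration experiments
      y_cal = model(x_cal, 2.0, 0.7) + rng.normal(0.0, 0.05, x_cal.size)
      x_val = np.linspace(0.1, 5.1, 15)          # independent validation experiments
      y_val = model(x_val, 2.0, 0.7) + rng.normal(0.0, 0.05, x_val.size)

      # calibration: choose (a, b) minimizing the misfit to the calibration data
      (a_hat, b_hat), _ = curve_fit(model, x_cal, y_cal, p0=(1.0, 1.0))

      # validation: quantify predictive capability against data NOT used in calibration
      rmse = np.sqrt(np.mean((model(x_val, a_hat, b_hat) - y_val) ** 2))
      print(f"calibrated a = {a_hat:.3f}, b = {b_hat:.3f}; validation RMSE = {rmse:.3f}")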

  20. Hospital blood bank information systems accurately reflect patient transfusion: results of a validation study.

    Science.gov (United States)

    McQuilten, Zoe K; Schembri, Nikita; Polizzotto, Mark N; Akers, Christine; Wills, Melissa; Cole-Sinclair, Merrole F; Whitehead, Susan; Wood, Erica M; Phillips, Louise E

    2011-05-01

    Hospital transfusion laboratories collect information regarding blood transfusion and some registries gather clinical outcomes data without transfusion information, providing an opportunity to integrate these two sources to explore effects of transfusion on clinical outcomes. However, the use of laboratory information system (LIS) data for this purpose has not been validated previously. Validation of LIS data against individual patient records was undertaken at two major centers. Data regarding all transfusion episodes were analyzed over seven 24-hour periods. Data regarding 596 units were captured including 399 red blood cell (RBC), 95 platelet (PLT), 72 plasma, and 30 cryoprecipitate units. They were issued to: inpatient 221 (37.1%), intensive care 109 (18.3%), outpatient 95 (15.9%), operating theater 45 (7.6%), emergency department 27 (4.5%), and unrecorded 99 (16.6%). All products recorded by LIS as issued were documented as transfused to intended patients. Median time from issue to transfusion initiation could be calculated for 535 (89.8%) components: RBCs 16 minutes (95% confidence interval [CI], 15-18 min; interquartile range [IQR], 7-30 min), PLTs 20 minutes (95% CI, 15-22 min; IQR, 10-37 min), fresh-frozen plasma 33 minutes (95% CI, 14-83 min; IQR, 11-134 min), and cryoprecipitate 3 minutes (95% CI, -10 to 42 min; IQR, -15 to 116 min). Across a range of blood component types and destinations comparison of LIS data with clinical records demonstrated concordance. The difference between LIS timing data and patient clinical records reflects expected time to transport, check, and prepare transfusion but does not affect the validity of linkage for most research purposes. Linkage of clinical registries with LIS data can therefore provide robust information regarding individual patient transfusion. This enables analysis of joint data sets to determine the impact of transfusion on clinical outcomes. © 2010 American Association of Blood Banks.

  1. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  2. Quantitative co-localization and pattern analysis of endo-lysosomal cargo in subcellular image cytometry and validation on synthetic image sets

    DEFF Research Database (Denmark)

    Lund, Frederik W.; Wüstner, Daniel

    2017-01-01

    /LYSs. Analysis of endocytic trafficking relies heavily on quantitative fluorescence microscopy, but evaluation of the huge image data sets is challenging and demands computer-assisted statistical tools. Here, we describe how to use SpatTrack (www.sdu.dk/bmb/spattrack), an imaging toolbox, which we developed...... such synthetic vesicle patterns as “ground truth” for validation of two-channel analysis tools in SpatTrack, revealing their high reliability. An improved version of SpatTrack for microscopy-based quantification of cargo transport through the endo-lysosomal system accompanies this protocol....

  3. Neutronics validation during conversion to LEU

    International Nuclear Information System (INIS)

    Hendriks, J. A.; Sciolla, C. M.; Van Der Marck, S. C.; Valko, J.

    2006-01-01

    From October 2005 to May 2006 the High Flux Reactor at Petten, the Netherlands, was progressively converted to low-enriched uranium. The core calculations were performed with two code systems, one being Rebus/MCNP, the other being Oscar-3. These systems were chosen because Rebus (for fuel burn-up) and MCNP (for flux, power, and activation reaction rates) have a long and good track record, whereas Oscar-3 is a newer code, with more user-friendly interfaces that facilitate day to day and cycle to cycle variable input generation. The following measurements have been used for validation of the neutronics calculations: control rod settings at begin and end of cycle, reactivity of control rods, Cu-wire activation during low power runs of the reactor, activation monitor sets present during part of the full power cycle, and isotope production measurements. We report on a comparison of measurements and calculational results for the control rod settings, Cu-wire activation and monitor set data. The Cu-wire activation results are mostly within 10% of experimental values, the monitor set activation results are easily within 5%, based on absolute predictions from the calculations. (authors)

  4. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict the user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system, thus validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show...

  5. A comparison of simulation results from two terrestrial carbon cycle models using three climate data sets

    International Nuclear Information System (INIS)

    Ito, Akihiko; Sasai, Takahiro

    2006-01-01

    This study addressed how different climate data sets influence simulations of the global terrestrial carbon cycle. For the period 1982-2001, we compared the results of simulations based on three climate data sets (NCEP/NCAR, NCEP/DOE AMIP-II and ERA40) employed in meteorological, ecological and biogeochemical studies and two different models (BEAMS and Sim-CYCLE). The models differed in their parameterizations of photosynthetic and phenological processes but used the same surface climate (e.g. shortwave radiation, temperature and precipitation), vegetation, soil and topography data. The three data sets give different climatic conditions, especially for shortwave radiation, in terms of long-term means, linear trends and interannual variability. Consequently, the simulation results for global net primary productivity varied by 16%-43% only from differences in the climate data sets, especially in these regions where the shortwave radiation data differed markedly: differences in the climate data set can strongly influence simulation results. The differences among the climate data set and between the two models resulted in slightly different spatial distribution and interannual variability in the net ecosystem carbon budget. To minimize uncertainty, we should pay attention to the specific climate data used. We recommend developing an accurate standard climate data set for simulation studies

  6. Wind and solar resource data sets: Wind and solar resource data sets

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, Andrew [National Renewable Energy Laboratory, Golden CO USA; Hodge, Bri-Mathias [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA; Draxl, Caroline [National Renewable Energy Laboratory, Golden CO USA; National Wind Technology Center, National Renewable Energy Laboratory, Golden CO USA; Badger, Jake [Department of Wind Energy, Danish Technical University, Copenhagen Denmark; Habte, Aron [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA

    2017-12-05

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to the wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research.

  7. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    Science.gov (United States)

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
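
    The error-injection idea described above can be illustrated with a small sketch (an assumption-laden toy example using scikit-learn and synthetic data, not the authors' pipeline): the activities of a growing fraction of the modeling set are shuffled and the fivefold cross-validation performance is tracked.

```python
# Toy version of the error-injection experiment (assumed synthetic data and a
# generic random-forest model, not the authors' pipeline): shuffle the
# "activities" of a growing fraction of the modeling set and watch the
# fivefold cross-validation performance deteriorate.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=50, noise=5.0, random_state=0)

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_noisy = y.copy()
    bad = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
    y_noisy[bad] = rng.permutation(y_noisy[bad])   # simulated experimental errors
    r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                         X, y_noisy, cv=5, scoring="r2").mean()
    print(f"simulated error ratio {error_ratio:.0%}: mean 5-fold R^2 = {r2:.2f}")
```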

  8. Implicit structural inversion of gravity data using linear programming, a validation study

    NARCIS (Netherlands)

    Zon, A.T. van; Roy Chowdhury, K.

    2010-01-01

    In this study, a regional scale gravity data set has been inverted to infer the structure (topography) of the top of the basement underlying sub-horizontal strata. We apply our method to this real data set for further proof of concept, validation and benchmarking against results from an earlier

  9. Validation of dose-response calibration curve for X-Ray field of CRCN-NE/CNEN: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Laís Melo; Mendonça, Julyanne Conceição de Goes; Andrade, Aida Mayra Guedes de; Hwang, Suy F.; Mendes, Mariana Esposito; Lima, Fabiana F., E-mail: falima@cnen.gov.br, E-mail: mendes_sb@hotmail.com [Centro Regional de Ciências Nucleares, (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Melo, Ana Maria M.A., E-mail: july_cgm@yahoo.com.br [Universidade Federal de Pernambuco (UFPE), Vitória de Santo Antão, PE (Brazil). Centro Acadêmico de Vitória

    2017-07-01

    Accurate estimation of absorbed dose is very important in accident investigations, as it contributes to medical decisions and to the overall assessment of long-term health consequences. Analysis of chromosome aberrations is the most developed method for biological monitoring, and frequencies of dicentric chromosomes are related to the absorbed dose in human peripheral blood lymphocytes using calibration curves. The International Atomic Energy Agency (IAEA) recommends that each biodosimetry laboratory set its own calibration curves, given that there are intrinsic differences in protocols and dose interpretations when using calibration curves produced in other laboratories, which could add further uncertainties to dose estimations. The Laboratory for Biological Dosimetry of CRCN-NE recently completed dose-response calibration curves for the X-ray field. Curves for dicentric chromosomes and for dicentrics plus rings were fitted using the Dose Estimate software. This study aimed to validate these dose-response calibration curves for X-rays with three irradiated samples. Blood was obtained by venipuncture from a healthy volunteer, and three samples were irradiated with 250 kVp X-rays at different absorbed doses (0.5 Gy, 1 Gy and 2 Gy). The irradiation was performed at the CRCN-NE/CNEN Metrology Service with PANTAK X-ray equipment, model HF 320. The frequencies of dicentric chromosomes and centric rings were determined in 500 metaphases per sample after cultivation of lymphocytes and staining with Giemsa 5%. Results showed that the estimated absorbed doses fell within the 95% confidence interval of the real absorbed doses. These dose-response calibration curves (dicentrics and dicentrics plus rings) therefore seem valid; further tests will be done with different volunteers. (author)

  10. Validation of dose-response calibration curve for X-Ray field of CRCN-NE/CNEN: preliminary results

    International Nuclear Information System (INIS)

    Silva, Laís Melo; Mendonça, Julyanne Conceição de Goes; Andrade, Aida Mayra Guedes de; Hwang, Suy F.; Mendes, Mariana Esposito; Lima, Fabiana F.; Melo, Ana Maria M.A.

    2017-01-01

    Accurate estimation of absorbed dose is very important in accident investigations, as it contributes to medical decisions and to the overall assessment of long-term health consequences. Analysis of chromosome aberrations is the most developed method for biological monitoring, and frequencies of dicentric chromosomes are related to the absorbed dose in human peripheral blood lymphocytes using calibration curves. The International Atomic Energy Agency (IAEA) recommends that each biodosimetry laboratory set its own calibration curves, given that there are intrinsic differences in protocols and dose interpretations when using calibration curves produced in other laboratories, which could add further uncertainties to dose estimations. The Laboratory for Biological Dosimetry of CRCN-NE recently completed dose-response calibration curves for the X-ray field. Curves for dicentric chromosomes and for dicentrics plus rings were fitted using the Dose Estimate software. This study aimed to validate these dose-response calibration curves for X-rays with three irradiated samples. Blood was obtained by venipuncture from a healthy volunteer, and three samples were irradiated with 250 kVp X-rays at different absorbed doses (0.5 Gy, 1 Gy and 2 Gy). The irradiation was performed at the CRCN-NE/CNEN Metrology Service with PANTAK X-ray equipment, model HF 320. The frequencies of dicentric chromosomes and centric rings were determined in 500 metaphases per sample after cultivation of lymphocytes and staining with Giemsa 5%. Results showed that the estimated absorbed doses fell within the 95% confidence interval of the real absorbed doses. These dose-response calibration curves (dicentrics and dicentrics plus rings) therefore seem valid; further tests will be done with different volunteers. (author)

  11. Field assessment of balance in 10 to 14 year old children, reproducibility and validity of the Nintendo Wii board

    DEFF Research Database (Denmark)

    Larsen, Lisbeth Runge; Jørgensen, Martin Grønbech; Junge, Tina

    2014-01-01

    and adults. When assessing static balance, it is essential to use objective, sensitive tools, and these types of measurement have previously been performed in laboratory settings. However, the emergence of technologies like the Nintendo Wii Board (NWB) might allow balance assessment in field settings....... As the NWB has only been validated and tested for reproducibility in adults, the purpose of this study was to examine reproducibility and validity of the NWB in a field setting, in a population of children. METHODS: Fifty-four 10-14 year-olds from the CHAMPS-Study DK performed four different balance tests...... of the reproducibility study. CONCLUSION: Both NWB and AMTI have satisfactory reproducibility for testing static balance in a population of children. Concurrent validity of NWB compared with AMTI was satisfactory. Furthermore, the results from the concurrent validity study were comparable to the reproducibility results...

  12. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background: Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods: We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results: Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions: When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
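
    The correction strategy described above can be sketched on simulated data (the propensity model, effect sizes and variable names are assumptions for demonstration, not the authors' code): treated individuals are excluded and the remaining untreated ones are re-weighted by the inverse probability of being untreated before discrimination and calibration are computed.

```python
# Hedged sketch (simulated data; variable names and the simple logistic
# propensity model are assumptions, not the authors' code): validate a model
# that predicts risk WITHOUT treatment in a partly treated validation set by
# excluding treated individuals and re-weighting the untreated with IPW.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                                   # single risk factor
risk = 1 / (1 + np.exp(-(-1.0 + 1.2 * x)))               # "true" untreated risk
treated = rng.random(n) < 1 / (1 + np.exp(-(x - 0.5)))   # higher risk -> more often treated
outcome = rng.random(n) < np.where(treated, 0.5 * risk, risk)  # treatment halves risk

predicted = risk                                          # the model under validation
propensity = LogisticRegression().fit(x[:, None], treated).predict_proba(x[:, None])[:, 1]
untreated = ~treated
w = 1.0 / (1.0 - propensity[untreated])                   # IPW weights for the untreated

c_index = roc_auc_score(outcome[untreated], predicted[untreated], sample_weight=w)
oe_ratio = np.average(outcome[untreated], weights=w) / np.average(predicted[untreated], weights=w)
print(f"weighted c-index ~ {c_index:.2f}, observed:expected ratio ~ {oe_ratio:.2f}")
```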

  13. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    Science.gov (United States)

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Validating the standard for the National Board Dental Examination Part II.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Neumann, Laura M; Littlefield, John H

    2012-05-01

    As part of the overall exam validation process, the Joint Commission on National Dental Examinations periodically reviews and validates the pass/fail standard for the National Board Dental Examination (NBDE), Parts I and II. The most recent standard-setting activities for NBDE Part II used the Objective Standard Setting method. This report describes the process used to set the pass/fail standard for the 2009 exam. The failure rate on the NBDE Part II increased from 5.3 percent in 2008 to 13.7 percent in 2009 and then decreased to 10 percent in 2010. This article describes the Objective Standard Setting method and presents the estimated probabilities of classification errors based on the beta binomial mathematical model. The results show that the probability of correct classifications of candidate performance is very high (0.97) and that probabilities of false negative and false positive errors are very small (0.03 and <0.001, respectively). The low probability of classification errors supports the conclusion that the pass/fail score on the NBDE Part II is a valid guide for making decisions about candidates for dental licensure. Copyright © 2017 Elsevier Inc. All rights reserved.
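
    A hedged illustration of how classification-error probabilities can be obtained from a beta-binomial model follows; the exam length, cut score, performance standard and ability distribution below are invented for demonstration and are not the values used by the Joint Commission.

```python
# Illustration only (exam length, cut score, standard and ability distribution
# are invented, and this is not the Joint Commission's actual model): with a
# beta-binomial true-score model, the false-negative probability is the chance
# that a candidate at or above the standard still scores below the cut score.
from scipy.stats import beta, binom
from scipy.integrate import quad

n_items, cut_score, standard = 200, 0.60, 0.65   # hypothetical exam parameters
a, b = 30, 12                                    # hypothetical Beta(a, b) ability distribution

def p_fail_given(theta):
    """Probability of scoring below the cut score given true proficiency theta."""
    return binom.cdf(int(cut_score * n_items) - 1, n_items, theta)

false_negative, _ = quad(lambda t: p_fail_given(t) * beta.pdf(t, a, b), standard, 1)
false_positive, _ = quad(lambda t: (1 - p_fail_given(t)) * beta.pdf(t, a, b), 0, standard)
print(f"P(false negative) ~ {false_negative:.3f}, P(false positive) ~ {false_positive:.3f}")
```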

  15. Development and validation of factor analysis for dynamic in-vivo imaging data sets

    Science.gov (United States)

    Goldschmied, Lukas; Knoll, Peter; Mirzaei, Siroos; Kalchenko, Vyacheslav

    2018-02-01

    In-vivo optical imaging provides information about the anatomical structures and function of tissues, ranging from single cells to entire organisms. Dynamic Fluorescent Imaging (DFI) is used to examine dynamic events related to normal physiology or disease progression in real time. In this work we improve this method by using factor analysis (FA) to automatically separate overlying structures. The proposed method builds on the previously introduced Transcranial Optical Vascular Imaging (TOVI) technique, which exploits the natural transparency of the intact cranial bones of a mouse. Fluorescent image acquisition is performed after intravenous fluorescent tracer administration. Afterwards, FA is used to extract structures with different temporal characteristics from dynamic contrast-enhanced studies without making any a priori assumptions about physiology. The method was validated with a dynamic light phantom based on the Arduino hardware platform and with dynamic fluorescent cerebral hemodynamics data sets. Using the phantom data, FA can separate the various light channels without user intervention. Applied to an image sequence obtained after fluorescent tracer administration, FA extracts valuable information about cerebral blood vessel anatomy and function without a priori assumptions about their anatomy or physiology, while keeping the mouse cranium intact. Unsupervised color-coding based on FA enhances the visibility and discrimination of blood vessels belonging to different compartments. DFI based on FA, especially in the case of transcranial imaging, can be used to separate dynamic structures.
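
    As a toy illustration of the unmixing step (synthetic kinetics and dimensions, not the authors' implementation), pixel time-series from a simulated dynamic sequence can be decomposed with factor analysis into spatial loadings and temporal factors:

```python
# Toy unmixing example (synthetic kinetics and dimensions; not the authors'
# implementation): factor analysis separates pixel time-series into spatial
# loadings and temporal factors without assuming the kinetics in advance.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 120)                        # 120 frames
arterial = np.exp(-((t - 2.0) ** 2))               # fast, early "vessel" kinetics
venous = 1 / (1 + np.exp(-(t - 5.0)))              # slower, delayed kinetics

n_pixels = 1000
mix = rng.random((n_pixels, 2))                    # each pixel mixes both compartments
frames = mix @ np.vstack([arterial, venous]) + 0.05 * rng.normal(size=(n_pixels, t.size))

fa = FactorAnalysis(n_components=2, random_state=0)
spatial_loadings = fa.fit_transform(frames)        # one map per extracted structure
temporal_factors = fa.components_                  # recovered time courses
print(spatial_loadings.shape, temporal_factors.shape)   # (1000, 2) (2, 120)
```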

  16. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  17. Validity and Reliability of Farsi Version of Youth Sport Environment Questionnaire.

    Science.gov (United States)

    Eshghi, Mohammad Ali; Kordi, Ramin; Memari, Amir Hossein; Ghaziasgar, Ahmad; Mansournia, Mohammad-Ali; Zamani Sani, Seyed Hojjat

    2015-01-01

    The Youth Sport Environment Questionnaire (YSEQ) was developed from the Group Environment Questionnaire, a well-known measure of team cohesion. The aim of this study was to adapt and examine the reliability and validity of the Farsi version of the YSEQ. This version was completed by 455 athletes aged 13-17 years. Results of confirmatory factor analysis indicated that a two-factor solution showed a good fit to the data. The results also revealed that the Farsi YSEQ showed high internal consistency, test-retest reliability, and good concurrent validity. This study indicated that the Farsi version of the YSEQ is a valid and reliable measure for assessing team cohesion in sport settings.

  18. Comorbidity predicts poor prognosis in nasopharyngeal carcinoma: Development and validation of a predictive score model

    International Nuclear Information System (INIS)

    Guo, Rui; Chen, Xiao-Zhong; Chen, Lei; Jiang, Feng; Tang, Ling-Long; Mao, Yan-Ping; Zhou, Guan-Qun; Li, Wen-Fei; Liu, Li-Zhi; Tian, Li; Lin, Ai-Hua; Ma, Jun

    2015-01-01

    Background and purpose: The impact of comorbidity on prognosis in nasopharyngeal carcinoma (NPC) is poorly characterized. Material and methods: Using the Adult Comorbidity Evaluation-27 (ACE-27) system, we assessed the prognostic value of comorbidity and developed, validated and confirmed a predictive score model in a training set (n = 658), internal validation set (n = 658) and independent set (n = 652) using area under the receiver operating curve analysis. Results: Comorbidity was present in 40.4% of 1968 patients (mild, 30.1%; moderate, 9.1%; severe, 1.2%). Compared to an ACE-27 score ⩽1, patients with an ACE-27 score >1 in the training set had shorter overall survival (OS) and disease-free survival (DFS) (both P < 0.001), similar results were obtained in the other sets (P < 0.05). In multivariate analysis, ACE-27 score was a significant independent prognostic factor for OS and DFS. The combined risk score model including ACE-27 had superior prognostic value to TNM stage alone in the internal validation set (0.70 vs. 0.66; P = 0.02), independent set (0.73 vs. 0.67; P = 0.002) and all patients (0.71 vs. 0.67; P < 0.001). Conclusions: Comorbidity significantly affects prognosis, especially in stages II and III, and should be incorporated into the TNM staging system for NPC. Assessment of comorbidity may improve outcome prediction and help tailor individualized treatment

  19. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS) in Software Engineering (SE) have supported the achievement of some results in the field. However, as with any other sort of experimental study, it is important to identify and deal with threats to validity, aiming to increase their strength and reinforce confidence in the results. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis to a dataset resulting from the aggregation of data from a quasi-systematic literature review combined with ad-hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerned with SBS in SE according to Cook and Campbell's categories. In addition, 12 verification and validation procedures applicable to SBS were analyzed and organized according to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation-based studies introduce different threats to validity compared with traditional studies. These are not well understood, and therefore it is not easy to identify and mitigate all of them without explicit guidance, such as that provided in this paper.

  20. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    Science.gov (United States)

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.

  1. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance
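
    The split scheme behind the proposed index can be sketched as follows; the two scoring functions are placeholders standing in for the Classical Model's statistical accuracy and information calculations, which are not reproduced here.

```python
# Sketch of the 80% training-set splits used for the proposed index; the two
# scoring functions are placeholders, not an implementation of the Classical
# Model's statistical accuracy and information measures.
from itertools import combinations

calibration_questions = list(range(10))            # hypothetical set of 10 seed questions
train_size = int(0.8 * len(calibration_questions))

def fit_performance_weights(train_ids):
    return {"trained_on": tuple(train_ids)}        # placeholder for fitting expert weights

def accuracy_times_information(weights, test_ids):
    return 1.0                                     # placeholder out-of-sample score

scores = []
for train in combinations(calibration_questions, train_size):
    test = [q for q in calibration_questions if q not in train]
    scores.append(accuracy_times_information(fit_performance_weights(train), test))

out_of_sample_validity_index = sum(scores) / len(scores)
print(f"{len(scores)} splits, index = {out_of_sample_validity_index:.2f}")
```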

  2. Health system context and implementation of evidence-based practices-development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings.

    Science.gov (United States)

    Bergström, Anna; Skeen, Sarah; Duc, Duong M; Blandon, Elmer Zelaya; Estabrooks, Carole; Gustavsson, Petter; Hoa, Dinh Thi Phuong; Källestål, Carina; Målqvist, Mats; Nga, Nguyen Thu; Persson, Lars-Åke; Pervin, Jesmin; Peterson, Stefan; Rahman, Anisur; Selling, Katarina; Squires, Janet E; Tomlinson, Mark; Waiswa, Peter; Wallin, Lars

    2015-08-15

    The gap between what is known and what is practiced results in health service users not benefitting from advances in healthcare, and in unnecessary costs. A supportive context is considered a key element for successful implementation of evidence-based practices (EBP). There were no tools available for the systematic mapping of aspects of organizational context influencing the implementation of EBPs in low- and middle-income countries (LMICs). Thus, this project aimed to develop and psychometrically validate a tool for this purpose. The development of the Context Assessment for Community Health (COACH) tool was premised on the context dimension in the Promoting Action on Research Implementation in Health Services framework, and is a derivative product of the Alberta Context Tool. Its development was undertaken in Bangladesh, Vietnam, Uganda, South Africa and Nicaragua in six phases: (1) defining dimensions and draft tool development, (2) content validity amongst in-country expert panels, (3) content validity amongst international experts, (4) response process validity, (5) translation and (6) evaluation of psychometric properties amongst 690 health workers in the five countries. The tool was validated for use amongst physicians, nurse/midwives and community health workers. The six phases of development resulted in a good fit between the theoretical dimensions of the COACH tool and its psychometric properties. The tool has 49 items measuring eight aspects of context: Resources, Community engagement, Commitment to work, Informal payment, Leadership, Work culture, Monitoring services for action and Sources of knowledge. Aspects of organizational context that were identified as influencing the implementation of EBPs in high-income settings were also found to be relevant in LMICs. However, there were additional aspects of context of relevance in LMICs specifically Resources, Community engagement, Commitment to work and Informal payment. Use of the COACH tool will allow

  3. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  4. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls under the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of the range of applicability (our current definition is only qualitative), for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values.

  5. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues.

    Science.gov (United States)

    Mourya, Devendra T; Yadav, Pragya D; Khare, Ajay; Khan, Anwar H

    2017-10-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  6. Predicting and validating protein interactions using network structure.

    Directory of Open Access Journals (Sweden)

    Pao-Yang Chen

    2008-07-01

    Full Text Available Protein interactions play a vital part in the function of a cell. As experimental techniques for detection and validation of protein interactions are time consuming, there is a need for computational methods for this task. Protein interactions appear to form a network with a relatively high degree of local clustering. In this paper we exploit this clustering by suggesting a score based on triplets of observed protein interactions. The score utilises both protein characteristics and network properties. Our score based on triplets is shown to complement existing techniques for predicting protein interactions, outperforming them on data sets which display a high degree of clustering. The predicted interactions score highly against test measures for accuracy. Compared to a similar score derived from pairwise interactions only, the triplet score displays higher sensitivity and specificity. By looking at specific examples, we show how an experimental set of interactions can be enriched and validated. As part of this work we also examine the effect of different prior databases upon the accuracy of prediction and find that the interactions from the same kingdom give better results than from across kingdoms, suggesting that there may be fundamental differences between the networks. These results all emphasize that network structure is important and helps in the accurate prediction of protein interactions. The protein interaction data set and the program used in our analysis, and a list of predictions and validations, are available at http://www.stats.ox.ac.uk/bioinfo/resources/PredictingInteractions.
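
    A stripped-down structural analogue of the triplet idea (the published score also incorporates protein characteristics, which are omitted here) can be written in a few lines: a candidate pair is scored by how many observed triplets it would close.

```python
# Simplified, purely structural stand-in for the triplet score (the published
# score also uses protein characteristics, omitted here): a candidate pair
# (u, v) is scored by the number of partners already interacting with both.
observed = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}

neighbors = {}
for a, b in observed:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

def triplet_score(u, v):
    """Count common interaction partners of u and v in the observed network."""
    return len(neighbors.get(u, set()) & neighbors.get(v, set()))

print(triplet_score("A", "D"))   # 2: the candidate A-D interaction closes triplets via B and C
```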

  7. Natalizumab treatment reduces fatigue in multiple sclerosis. Results from the TYNERGY trial; a study in the real life setting

    DEFF Research Database (Denmark)

    Svenningsson, Anders; Falk, Eva; Celius, Elisabeth G

    2013-01-01

    . The TYNERGY study aimed to further evaluate the effects of natalizumab treatment on MS-related fatigue. In this one-armed clinical trial including 195 MS patients, natalizumab was prescribed in a real-life setting, and a validated questionnaire, the Fatigue Scale for Motor and Cognitive functions (FSMC......), was used both before and after 12 months of treatment to evaluate a possible change in the fatigue experienced by the patients. In the treated cohort all measured variables, that is, fatigue score, quality of life, sleepiness, depression, cognition, and disability progression were improved from baseline...

  8. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
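
    The general shape of such a correction algorithm can be sketched as follows (the numbers are hypothetical and the linear form is an assumption for illustration; the study's actual coefficients are instrument-specific): readings are regressed against chemical reference values on one sample set, and the correction is then checked on an independent set.

```python
# Hypothetical illustration of a linear correction algorithm (numbers invented;
# real coefficients are instrument-specific): fit analyzer readings against
# chemical reference values, then check the correction on an independent set.
import numpy as np

# fat content (g/dL): Near-IR readings vs. chemical reference (made-up calibration set)
nir_cal = np.array([2.1, 3.0, 3.8, 4.6, 5.5])
ref_cal = np.array([2.4, 3.2, 4.1, 4.8, 5.9])
slope, intercept = np.polyfit(nir_cal, ref_cal, deg=1)   # the correction algorithm

# independent validation samples (e.g. pasteurized milk, also made up)
nir_val = np.array([2.5, 3.9, 5.1])
ref_val = np.array([2.8, 4.2, 5.4])
corrected = slope * nir_val + intercept
bias = np.mean(corrected - ref_val)
print(f"correction: ref ~ {slope:.2f} * NIR + {intercept:.2f}; mean bias {bias:+.2f} g/dL")
```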

  9. A Revalidation of the SET37 Questionnaire for Student Evaluations of Teaching

    Science.gov (United States)

    Mortelmans, Dimitri; Spooren, Pieter

    2009-01-01

    In this study, the authors report on the validity and reliability of a paper-and-pencil instrument called SET37 used for Student Evaluation of Teaching (SET) in higher education. Using confirmatory factor analysis on 2525 questionnaires, a revalidation of the SET37 shows construct and discriminant validity of the 12 dimensions included in the…

  10. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    Directory of Open Access Journals (Sweden)

    Devendra T Mourya

    2017-01-01

    Full Text Available With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  11. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  12. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  13. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  14. LBA-ECO TG-07 Forest Structure Measurements for GLAS Validation: Santarem 2004

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides the results of a GLAS (the Geoscience Laser Altimeter System) forest structure validation survey conducted in Santarem and Sao Jorge, Para...

  15. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    Science.gov (United States)

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LA volume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y=0.98x - 0.045, r=0.90) and the results of numerical modeling (y=0.96x - 0.16, r=0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
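
    In simplified-Bernoulli terms, the decomposition used here amounts to subtracting the convective term estimated from the Doppler velocity from the catheter-measured gradient; the sketch below uses hypothetical numbers purely for illustration.

```python
# Illustration only (simplified Bernoulli convention, hypothetical numbers):
# the inertial component is what remains of the catheter-measured transmitral
# gradient after subtracting the convective term estimated from Doppler velocity.
def inertial_component(measured_gradient_mmhg, doppler_velocity_m_s):
    convective = 4.0 * doppler_velocity_m_s ** 2   # simplified Bernoulli term, mmHg
    return measured_gradient_mmhg - convective

gradient, velocity = 4.0, 0.5                       # made-up example values
inertial = inertial_component(gradient, velocity)
print(f"{inertial:.1f} of the {gradient:.1f} mmHg gradient (~{inertial / gradient:.0%}) is inertial")
```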

  16. Validity and Reliability of Farsi Version of Youth Sport Environment Questionnaire

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Eshghi

    2015-01-01

    Full Text Available The Youth Sport Environment Questionnaire (YSEQ) was developed from the Group Environment Questionnaire, a well-known measure of team cohesion. The aim of this study was to adapt and examine the reliability and validity of the Farsi version of the YSEQ. This version was completed by 455 athletes aged 13–17 years. Results of confirmatory factor analysis indicated that a two-factor solution showed a good fit to the data. The results also revealed that the Farsi YSEQ showed high internal consistency, test-retest reliability, and good concurrent validity. This study indicated that the Farsi version of the YSEQ is a valid and reliable measure for assessing team cohesion in sport settings.

  17. Method Validation Procedure in Gamma Spectroscopy Laboratory

    International Nuclear Information System (INIS)

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, including the management and technical requirements. A set of documents, written procedures and records was prepared to cover the management part. For the technical requirements, internal method validation was carried out through the estimation of trueness, repeatability, minimum detectable activity and combined uncertainty; participation in IAEA proficiency tests ensures external method validation, especially as the gamma spectroscopy laboratory is a member of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). Some of these results are presented in this paper. (author)
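
    One of the validated figures of merit, the minimum detectable activity, is commonly estimated with a Currie-type formula; the sketch below shows that calculation with hypothetical counting parameters and is not necessarily the laboratory's exact procedure.

```python
# Currie-type minimum detectable activity (MDA), one of the validated figures
# of merit; counting parameters below are hypothetical and the laboratory's own
# procedure may differ in detail.
import math

def mda_bq_per_kg(bkg_counts, efficiency, gamma_yield, live_time_s, mass_kg):
    """MDA from the Currie detection limit L_D = 2.71 + 4.65*sqrt(B) counts."""
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return detection_limit_counts / (efficiency * gamma_yield * live_time_s * mass_kg)

# e.g. Cs-137 at 661.7 keV: 400 background counts, 2% efficiency,
# 85% emission probability, 20,000 s live time, 0.5 kg sample
print(f"MDA ~ {mda_bq_per_kg(400, 0.02, 0.85, 20_000, 0.5):.2f} Bq/kg")
```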

  18. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During the recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of the DSF performance will be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up and the measurements procedure for the data sets. The empirical data is composed for the key-functioning modes...

  19. Monitoring electro-magnetic field in urban areas: new set-ups and results

    Energy Technology Data Exchange (ETDEWEB)

    Lubritto, C.; Petraglia, A.; Paribello, G.; Formosi, R.; Rosa, M. de; Vetromile, C.; Palmieri, A.; D' Onofrio, A. [Seconda Universita di Napoli, Dipt. di Scienze Ambientali, Caserta (Italy); Di Bella, G.; Giannini, V. [Vector Group, Roma (Italy)

    2006-07-01

    In this paper two different set-ups for continuous monitoring of electromagnetic field levels are presented: the first (Continuous Time E.M.F. Monitoring System) is based on a network of fixed stations, allowing detailed field monitoring as a function of time; the second (Mobile Measurement Units) uses portable stations mounted on standard bicycles, allowing a positional screening within limited time intervals. For both set-ups, particular attention has been paid to data management, by means of tools such as web geographic information systems (Web-GIS). Moreover, the V.I.C.R.E.M./E.L.F. software has been used for a predictive analysis of the electromagnetic field levels, together with the geo-referenced data coming from the field measurements. These results make clear the need for efficient and correct monitoring and for public information and education in this domain, where misinformation often spreads through the population; the appreciation and assessment of risk does not necessarily follow a rational, technically informed procedure, and judgement is instead often based on personal feeling, derived from a limited, unstructured set of information and expressed through qualitative attributes rather than quantities. (N.C.)

  20. Monitoring electro-magnetic field in urban areas: new set-ups and results

    International Nuclear Information System (INIS)

    Lubritto, C.; Petraglia, A.; Paribello, G.; Formosi, R.; Rosa, M. de; Vetromile, C.; Palmieri, A.; D'Onofrio, A.; Di Bella, G.; Giannini, V.

    2006-01-01

    In this paper two different set-ups for continuous monitoring of electromagnetic field levels are presented: the first (Continuous Time E.M.F. Monitoring System) is based on a network of fixed stations, allowing detailed field monitoring as a function of time; the second (Mobile Measurement Units) uses portable stations mounted on standard bicycles, allowing a positional screening within limited time intervals. For both set-ups, particular attention has been paid to data management, by means of tools such as web geographic information systems (Web-GIS). Moreover, the V.I.C.R.E.M./E.L.F. software has been used for a predictive analysis of the electromagnetic field levels, together with the geo-referenced data coming from the field measurements. These results make clear the need for efficient and correct monitoring and for public information and education in this domain, where misinformation often spreads through the population; the appreciation and assessment of risk does not necessarily follow a rational, technically informed procedure, and judgement is instead often based on personal feeling, derived from a limited, unstructured set of information and expressed through qualitative attributes rather than quantities. (N.C.)

  1. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  2. Assessing the physical service setting: a look at emergency departments.

    Science.gov (United States)

    Steinke, Claudia

    2015-01-01

    To determine the attributes of the physical setting that are important for developing a positive service climate within emergency departments and to validate a measure for assessing physical service design. The design of the physical setting is an important contributing factor in creating a service climate in organizations. Service climate is defined as employee perceptions of the practices, procedures, and behaviors that get rewarded, supported, and expected with regard to customer service and customer service quality. Research has identified antecedents within organizations that promote a positive service climate, which in turn fosters service-oriented behaviors by employees toward clients. The antecedent of the physical setting and its impact on perceptions of service climate has been less commonly explored. Using the concept of the physical service setting (which may be defined as aspects of the physical, built environment that facilitate the delivery of quality service), attributes of the physical setting and their relationship with service climate were explored by means of a quantitative paper survey distributed to emergency nurses (n = 180) throughout a province in Canada. The results highlight the validity and reliability of six scales measuring the physical setting and its relation to service. Respondents gave low ratings to the physical setting of their departments, in addition to low ratings of service climate. Respondents feel that the design of the physical setting in the emergency departments where they work is not conducive to providing quality service to clients. Certain attributes of the physical setting were found to be significant in influencing perceptions of service climate, hence service quality, within the emergency department setting. © The Author(s) 2015.

  3. Urban roughness mapping validation techniques and some first results

    NARCIS (Netherlands)

    Bottema, M; Mestayer, PG

    1998-01-01

    Because of measuring problems related to evaluation of urban roughness parameters, a new approach using a roughness mapping tool has been tested: evaluation of roughness length z(o) and zero displacement z(d) from cadastral databases. Special attention needs to be given to the validation of the

  4. Overview of CSNI separate effects tests validation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Aksan, N. [Paul Scherrer Institute, Villigen (Switzerland); Auria, F.D. [Univ. of Pisa (Italy); Glaeser, H. [Gesellschaft fuer anlagen und Reaktorsicherheit, (GRS), Garching (Germany)] [and others

    1995-09-01

    An internationally agreed separate effects test (SET) validation matrix for thermal-hydraulic system codes has been established by a sub-group of the Task Group on Thermal Hydraulic System Behaviour, as requested by the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 2 on Coolant System Behaviour. The construction of such a matrix is an attempt to collect together, in a systematic way, the best sets of openly available test data for code validation, assessment and improvement, and also for quantitative code assessment with respect to quantifying the uncertainties in the modeling of individual phenomena by the codes. The methodology developed while establishing the CSNI SET validation matrix was an important outcome of this work. In addition, all the choices made from the 187 identified facilities covering the 67 phenomena are examined, together with some discussion of the database.

  5. Development of a set of benchmark problems to verify numerical methods for solving burnup equations

    International Nuclear Information System (INIS)

    Lago, Daniel; Rahnema, Farzad

    2017-01-01

    Highlights: • Description of transmutation chain benchmark problems. • Problems for validating numerical methods for solving burnup equations. • Analytical solutions for the burnup equations. • Numerical solutions for the burnup equations. - Abstract: A comprehensive set of transmutation chain benchmark problems for numerically validating methods for solving burnup equations was created. These benchmark problems were designed to challenge both traditional and modern numerical methods used to solve the complex set of ordinary differential equations used for tracking the change in nuclide concentrations over time due to nuclear phenomena. Given that most burnup solvers are developed for coupling with an established transport solution method, these problems provide a useful resource for testing and validating a burnup equation solver before it is coupled into a lattice or core depletion code. All the relevant parameters for each benchmark problem are described. Results are also provided in the form of reference solutions generated by the Mathematica tool, as well as additional numerical results from MATLAB.
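
    In the same spirit as these benchmarks (though far smaller than the problems in the paper), a three-nuclide decay chain can be solved with a matrix exponential and checked against the analytical Bateman solution; the decay constants and concentrations below are arbitrary.

```python
# A much smaller problem in the same spirit (not one of the paper's benchmarks):
# a three-nuclide decay chain dN/dt = A N solved with a matrix exponential and
# checked against the analytical Bateman solution. Constants are arbitrary.
import numpy as np
from scipy.linalg import expm

l1, l2 = 1.0e-4, 3.0e-5                 # decay constants (1/s); nuclide 3 is stable
A = np.array([[-l1, 0.0, 0.0],
              [ l1, -l2, 0.0],
              [0.0,  l2, 0.0]])
n0 = np.array([1.0e20, 0.0, 0.0])       # initial concentrations
t = 5.0e4                               # seconds

numerical = expm(A * t) @ n0
bateman_parent = n0[0] * np.exp(-l1 * t)
bateman_daughter = n0[0] * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
print(numerical[:2])                    # should match the two analytical values below
print(bateman_parent, bateman_daughter)
```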

  6. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massively parallel (NGS) sequencing technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
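
    To make the validation task concrete, the sketch below shows the kind of checks such a tool performs, written here as a short Python routine rather than through the FastaValidator Java API (whose exact interface is not reproduced here): every record needs a header line and at least one sequence line over an allowed alphabet.

```python
# Minimal illustration of the checks such a validator performs, written as a
# plain Python routine; this is NOT the FastaValidator Java API, whose exact
# interface is not reproduced here.
import re

VALID_SEQ = re.compile(r"^[ACGTUNRYKMSWBDHVacgtunrykmswbdhv*\-]+$")

def validate_fasta(lines):
    header_seen, seq_lines = False, 0
    for lineno, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            if header_seen and seq_lines == 0:
                raise ValueError(f"line {lineno}: header without sequence")
            header_seen, seq_lines = True, 0
        elif not header_seen:
            raise ValueError(f"line {lineno}: sequence data before first header")
        elif not VALID_SEQ.match(line):
            raise ValueError(f"line {lineno}: invalid characters in sequence")
        else:
            seq_lines += 1
    if not header_seen or seq_lines == 0:
        raise ValueError("no complete FASTA record found")

validate_fasta([">seq1", "ACGTACGT", ">seq2", "GGGTTTAA"])
print("ok")
```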

  7. Development and Validation of a Self-Assessment Tool for Albuminuria: Results From the Reasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    Science.gov (United States)

    Muntner, Paul; Woodward, Mark; Carson, April P; Judd, Suzanne E; Levitan, Emily B; Mann, Devin; McClellan, William; Warnock, David G

    2011-01-01

    Background The prevalence of albuminuria in the general population is high, but awareness of it is low. Therefore, we sought to develop and validate a self-assessment tool that allows individuals to estimate their probability of having albuminuria. Study Design Cross-sectional study Setting & Participants The population-based REasons for Geographic And Racial Differences in Stroke (REGARDS) study for model development and the National Health and Nutrition Examination Survey 1999-2004 (NHANES 1999-2004) for model validation. US adults ≥ 45 years of age in the REGARDS study (n=19,697) and NHANES 1999-2004 (n=7,168). Factor Candidate items for the self-assessment tool were collected using a combination of interviewer- and self-administered questionnaires. Outcome Albuminuria was defined as a urinary albumin to urinary creatinine ratio ≥ 30 mg/g in spot samples. Results Eight items were included in the self-assessment tool (age, race, gender, current smoking, self-rated health, and self-reported history of diabetes, hypertension, and stroke). These items provided a c-statistic of 0.709 (95% CI, 0.699 – 0.720) and a good model fit (Hosmer-Lemeshow chi-square p-value = 0.49). In the external validation data set, the c-statistic for discriminating individuals with and without albuminuria using the self-assessment tool was 0.714. Using a threshold of ≥ 10% probability of albuminuria from the self-assessment tool, 36% of US adults ≥ 45 years of age in NHANES 1999-2004 would test positive and be recommended for screening. The sensitivity, specificity, and positive and negative predictive values for albuminuria associated with a probability ≥ 10% were 66%, 68%, 23% and 93%, respectively. Limitations Repeat urine samples were not available to assess the persistence of albuminuria. Conclusions Eight self-report items provide good discrimination for the probability of having albuminuria. This tool may encourage individuals with a high probability to request …
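
    For readers wanting to see how the reported operating characteristics fit together, the back-of-envelope check below recovers approximate predictive values from the quoted sensitivity and specificity via Bayes' rule. The prevalence value of 13% is an assumption chosen for illustration (it roughly reproduces the reported 36% positive rate and the 23%/93% predictive values); it is not a figure quoted in the abstract.

```python
# Hedged back-of-envelope check: how PPV/NPV follow from sensitivity, specificity
# and prevalence. The 13% prevalence is an assumed value for illustration only.
sens, spec, prev = 0.66, 0.68, 0.13

positive_rate = sens * prev + (1 - spec) * (1 - prev)
ppv = sens * prev / positive_rate
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"fraction testing positive: {positive_rate:.2f}")  # ~0.36, close to the abstract
print(f"PPV: {ppv:.2f}  NPV: {npv:.2f}")                  # ~0.23 and ~0.93
```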

  8. Validation of the Self Reporting Questionnaire 20-Item (SRQ-20) for Use in a Low- and Middle-Income Country Emergency Centre Setting

    Science.gov (United States)

    Wyatt, Gail; Williams, John K.; Stein, Dan J.; Sorsdahl, Katherine

    2015-01-01

    Common mental disorders are highly prevalent in emergency centre (EC) patients, yet few brief screening tools have been validated for low- and middle-income country (LMIC) ECs. This study explored the psychometric properties of the SRQ-20 screening tool in South African ECs using the Mini Neuropsychiatric Interview (MINI) as the gold standard comparison tool. Patients (n=200) from two ECs in Cape Town, South Africa were interviewed using the SRQ-20 and the MINI. Internal consistency, screening properties and factorial validity were examined. The SRQ-20 was effective in identifying participants with major depression, anxiety disorders or suicidality and displayed good internal consistency. The optimal cutoff scores were 4/5 and 6/7 for men and women respectively. The factor structure differed by gender. The SRQ-20 is a useful tool for EC settings in South Africa and holds promise for task-shifted approaches to decreasing the LMIC burden of mental disorders. PMID:26957953

  9. Results and validity of renal blood flow measurements using Xenon 133

    International Nuclear Information System (INIS)

    Serres, P.; Danet, B.; Guiraud, R.; Durand, D.; Ader, J.L.

    1975-01-01

    The renal blood flow was measured by external recording of the xenon-133 excretion curve. The study involved 45 patients with permanent high blood pressure and 7 transplant patients. The validity of the method was checked on 10 dogs. From the results it appears that the cortical blood flow, its fraction and the mean flow rate are the most representative renal haemodynamic parameters, from which the repercussions of blood pressure on kidney vascularisation may be established. Experiments are in progress on animals to check the compartment concept by comparing injections into the renal artery and into various kidney tissues in situ.

  10. Toward valid and reliable brain imaging results in eating disorders.

    Science.gov (United States)

    Frank, Guido K W; Favaro, Angela; Marsh, Rachel; Ehrlich, Stefan; Lawson, Elizabeth A

    2018-03-01

    Human brain imaging can help improve our understanding of mechanisms underlying brain function and how they drive behavior in health and disease. Such knowledge may eventually help us to devise better treatments for psychiatric disorders. However, the brain imaging literature in psychiatry and especially eating disorders has been inconsistent, and studies are often difficult to replicate. The extent or severity of extremes of eating and state of illness, which are often associated with differences in, for instance hormonal status, comorbidity, and medication use, commonly differ between studies and likely add to variation across study results. Those effects are in addition to the well-described problems arising from differences in task designs, data quality control procedures, image data preprocessing and analysis or statistical thresholds applied across studies. Which of those factors are most relevant to improve reproducibility is still a question for debate and further research. Here we propose guidelines for brain imaging research in eating disorders to acquire valid results that are more reliable and clinically useful. © 2018 Wiley Periodicals, Inc.

  11. Validation of Code ASTEC with LIVE-L1 Experimental Results

    International Nuclear Information System (INIS)

    Bachrata, Andrea

    2008-01-01

    Severe accidents with core melting are considered at the design stage of Generation 3+ nuclear power plants (NPPs). Moreover, there is an effort to apply severe accident management to operating NPPs. One of the main goals of severe accident mitigation is corium localization and stabilization. The two strategies that fulfil this requirement are in-vessel retention (e.g. AP-600, AP-1000) and ex-vessel retention (e.g. EPR). To study the in-vessel retention scenario, a large experimental programme and integrated codes have been developed. The LIVE-L1 experimental facility studied the formation of melt pools and the melt accumulation in the lower head under different cooling conditions. A new European computer code, ASTEC, is being developed jointly in France and Germany. One of the important steps in ASTEC development in the area of in-vessel retention of corium is its validation against the LIVE-L1 experimental results. Details of the experiment are reported. Results of applying ASTEC (module DIVA) to the analysis of the test are presented. (author)

  12. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  13. Planck intermediate results: IV. the XMM-Newton validation programme for new Planck galaxy clusters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Delabrouille, J.; Ganga, K.

    2013-01-01

    We present the final results from the XMM-Newton validation follow-up of new Planck galaxy cluster candidates. We observed 15 new candidates, detected with signal-to-noise ratios between 4.0 and 6.1 in the 15.5-month nominal Planck survey. The candidates were selected using ancillary data flags d...

  14. Constellation Map: Downstream visualization and interpretation of gene set enrichment results [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Yan Tan

    2015-06-01

    Summary: Gene set enrichment analysis (GSEA) approaches are widely used to identify coordinately regulated genes associated with phenotypes of interest. Here, we present Constellation Map, a tool to visualize and interpret the results when enrichment analyses yield a long list of significantly enriched gene sets. Constellation Map identifies commonalities that explain the enrichment of multiple top-scoring gene sets and maps the relationships between them. Constellation Map can help investigators take full advantage of GSEA and facilitates the biological interpretation of enrichment results. Availability: Constellation Map is freely available as a GenePattern module at http://www.genepattern.org.

  15. Electron collision cross section sets of TMS and TEOS vapours

    Science.gov (United States)

    Kawaguchi, S.; Takahashi, K.; Satoh, K.; Itoh, H.

    2017-05-01

    Reliable and detailed sets of electron collision cross sections for tetramethylsilane [TMS, Si(CH3)4] and tetraethoxysilane [TEOS, Si(OC2H5)4] vapours are proposed. The cross section sets of TMS and TEOS vapours include 16 and 20 kinds of partial ionization cross sections, respectively. Electron transport coefficients, such as electron drift velocity, ionization coefficient, and longitudinal diffusion coefficient, in those vapours are calculated by Monte Carlo simulations using the proposed cross section sets, and the validity of the sets is confirmed by comparing the calculated values of those transport coefficients with measured data. Furthermore, the calculated values of the ionization coefficient in TEOS/O2 mixtures are compared with measured data to confirm the validity of the proposed cross section set.

  16. Excellent cross-cultural validity, intra-test reliability and construct validity of the dutch rivermead mobility index in patients after stroke undergoing rehabilitation

    NARCIS (Netherlands)

    Roorda, Leo D.; Green, John; De Kluis, Kiki R. A.; Molenaar, Ivo W.; Bagley, Pam; Smith, Jane; Geurts, Alexander C. H.

    2008-01-01

    Objective: To investigate the cross-cultural validity of international Dutch-English comparisons when using the Dutch Rivermead Mobility Index (RMI), and the intra-test reliability and construct validity of the Dutch RMI. Methods: Cross-cultural validity was studied in a combined data-set of Dutch

  17. Computer code validation study of PWR core design system, CASMO-3/MASTER-α

    International Nuclear Information System (INIS)

    Lee, K. H.; Kim, M. H.; Woo, S. W.

    1999-01-01

    In this paper, the feasibility of the CASMO-3/MASTER-α nuclear design system was investigated for a commercial PWR core. Validation calculations were performed as follows. Firstly, the accuracy of cross-section generation from the table set using a linear feedback model was estimated. Secondly, the results of CASMO-3/MASTER-α were compared with CASMO-3/NESTLE 5.02 for a few benchmark problems. Microscopic cross sections computed from the table set were almost the same as those from CASMO-3, and there were only small differences between the calculated results of the two code systems. Thirdly, the CASMO-3/MASTER-α calculation was repeated for the Younggwang Unit-3, Cycle-1 core, and the results were compared with the nuclear design report (NDR) and with uncertainty analysis results from KAERI. The uncertainty analysis results were found to be reliable because the results agreed with each other. It was concluded that the use of the nuclear design system CASMO-3/MASTER-α is validated for commercial PWR cores.

  18. Assessment of Random Assignment in Training and Test Sets using Generalized Cluster Analysis Technique

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACĂ

    2011-06-01

    Aim: The appropriateness of the random assignment of compounds to training and validation sets was assessed using the generalized cluster technique. Material and Method: A quantitative structure-activity relationship model using the Molecular Descriptors Family on Vertices was evaluated in terms of the assignment of carboquinone derivatives to training and test sets during the leave-many-out analysis. The assignment of compounds was investigated using five variables: observed anticancer activity and four structure descriptors. Generalized cluster analysis with the K-means algorithm was applied in order to investigate whether the assignment of compounds was proper. The Euclidean distance and maximization of the initial distance, with 10-fold cross-validation, were applied. Results: All five variables included in the analysis proved to have a statistically significant contribution to the identification of clusters. Three clusters were identified, each of them containing carboquinone derivatives belonging to the training as well as the test sets. The observed activity of the carboquinone derivatives proved to be normally distributed in every cluster. The presence of training and test compounds in all clusters identified by generalized cluster analysis with the K-means algorithm, and the distribution of observed activity within clusters, support a proper assignment of compounds to training and test sets. Conclusion: Generalized cluster analysis using the K-means algorithm proved to be a valid method for assessing the random assignment of carboquinone derivatives to training and test sets.
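
    A minimal sketch of the idea described above follows: cluster the pooled data with K-means and check that every cluster contains both training and test compounds. The data are simulated stand-ins for the activity and descriptor variables; the choice of three clusters and a roughly 30% test fraction are assumptions for the example, and scikit-learn is assumed to be available.

```python
# Hedged sketch: K-means clustering used to check whether a random train/test split
# spreads compounds across all clusters. Data and split are simulated, not the
# carboquinone data of the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # stand-in for activity + 4 structure descriptors
is_test = rng.random(100) < 0.3      # random split with ~30% test compounds

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

for k in range(3):
    in_cluster = labels == k
    n_train = int(np.sum(in_cluster & ~is_test))
    n_test = int(np.sum(in_cluster & is_test))
    print(f"cluster {k}: {n_train} training / {n_test} test compounds")

# A split is suspect if some cluster contains only training or only test compounds.
```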

  19. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
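
    The "combined" cross-validation described above pools the held-out samples across the cross-validation loops and evaluates them once, rather than averaging per-fold scores. The generic sketch below shows that pooling with out-of-fold predictions; a plain ridge regression on simulated data stands in for the survival bump-hunting model, so it only illustrates the resampling design, not the recursive peeling method itself.

```python
# Generic sketch of "combined" cross-validation: predictions for each held-out fold
# are collected, and the pooled out-of-fold predictions are evaluated once.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

oof_pred = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    oof_pred[test_idx] = model.predict(X[test_idx])   # combine test samples over loops

# One evaluation on the pooled out-of-fold predictions, not an average of per-fold scores.
rmse = float(np.sqrt(np.mean((y - oof_pred) ** 2)))
print(f"pooled out-of-fold RMSE: {rmse:.3f}")
```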

  20. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components introduces questions of validation and acceptance of the analysis results. We may ask ourselves, "How valid are the answers?" These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter - the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the necessary information to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that, rather than asking how valid or correct the analytical predictions are, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  1. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    … are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, they might represent gaps between bounding polygons of the solids, overlaps, or violations of the 2-manifoldness. Not least due to the floating-point problem in digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. Effects of different tolerance values and their handling are discussed; recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences for the deployment fields of the validated data set.

  2. The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population

    DEFF Research Database (Denmark)

    Boie, Sidsel; Glavind, Julie; Uldbjerg, Niels

    Title: The Childbirth Experience Questionnaire (CEQ) - validation of its use in a Danish population. Introduction: Childbirth experience is arguably as important as measuring birth outcomes such as mode of delivery or perinatal morbidity. A robust, validated, Danish tool for evaluating childbirth experience is lacking. The Childbirth Experience Questionnaire (CEQ) was developed in Sweden in 2010 and validated in Swedish women, but never validated in a Danish setting and population. The purpose of our study was to validate the CEQ as a reliable tool for measuring the childbirth experience in Danish … index of agreement between the two scores. Results: Face validity: All respondents stated that it was easy to understand and complete the questionnaire. Construct validity: Statistically significant higher CEQ scores were …

  3. A Comparison of EQ-5D-3L Index Scores Using Malaysian, Singaporean, Thai, and UK Value Sets in Indonesian Cervical Cancer Patients.

    Science.gov (United States)

    Endarti, Dwi; Riewpaiboon, Arthorn; Thavorncharoensap, Montarat; Praditsitthikorn, Naiyana; Hutubessy, Raymond; Kristina, Susi Ari

    2018-05-01

    To gain insight into the most suitable foreign value set among the Malaysian, Singaporean, Thai, and UK value sets for calculating the EuroQol five-dimensional questionnaire index score (utility) among patients with cervical cancer in Indonesia. Data from 87 patients with cervical cancer recruited from a referral hospital in Yogyakarta province, Indonesia, from an earlier study of health-related quality of life were used in this study. The differences among the utility scores derived from the four value sets were determined using the Friedman test. The psychometric performance of the four value sets against the visual analogue scale (VAS) was assessed. Intraclass correlation coefficients and Bland-Altman plots were used to test the agreement among the utility scores. Spearman ρ correlation coefficients were used to assess convergent validity between utility scores and patients' sociodemographic and clinical characteristics. With respect to known-group validity, the Kruskal-Wallis test was used to examine the differences in utility according to the stages of cancer. There was a significant difference among the utility scores derived from the four value sets, with the Malaysian value set yielding higher utilities than the other three. Utilities obtained from the Malaysian value set also showed better agreement with the VAS than those from the other value sets (based on the intraclass correlation coefficient and Bland-Altman plot results). As for validity, the four value sets showed equivalent psychometric properties in the convergent and known-group validity tests. In the absence of an Indonesian value set, the Malaysian value set is preferable to the other value sets. Further studies on the development of an Indonesian value set need to be conducted. Copyright © 2018. Published by Elsevier Inc.
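
    As a minimal sketch of the agreement analysis mentioned above, the code below computes the Bland-Altman bias and 95% limits of agreement between a utility score and the VAS. The scores are simulated; the sample size of 87 simply mirrors the study's patient count, and the noise level is an arbitrary assumption.

```python
# Minimal Bland-Altman agreement sketch between a value-set utility and the VAS,
# using simulated data in place of the study's patient scores.
import numpy as np

rng = np.random.default_rng(2)
vas = rng.uniform(0.2, 1.0, size=87)               # VAS rescaled to 0-1
utility = vas + rng.normal(scale=0.08, size=87)    # hypothetical value-set utility

mean_scores = (utility + vas) / 2
diff = utility - vas
bias = float(np.mean(diff))
sd = float(np.std(diff, ddof=1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)         # 95% limits of agreement

print(f"bias: {bias:.3f}, limits of agreement: {loa[0]:.3f} to {loa[1]:.3f}")
# In a Bland-Altman plot, `diff` is plotted against `mean_scores` with these three lines.
```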

  4. Automotive RF immunity test set-up analysis : why test results can't compare

    NARCIS (Netherlands)

    Coenen, Mart; Pues, H.; Bousquet, T.

    2011-01-01

    Though the automotive RF emission and RF immunity requirements are highly justifiable, the application of those requirements in a non-intended manner leads to false conclusions and unnecessary redesigns for the electronics involved. When the test results become too dependent upon the test set-up …

  5. Development of SETS-Vision Thematic Learning Tools with an Environmental Care Character (PENGEMBANGAN PERANGKAT PEMBELAJARAN TEMATIK BERVISI SETS BERKARAKTER PEDULI LINGKUNGAN)

    Directory of Open Access Journals (Sweden)

    Dwi Nur Heni

    2015-08-01

    The aim of this research was to develop SETS-vision thematic learning tools with an environmental care character that are valid, effective, and practical. The study was a research and development (R&D) project. The products developed comprise a syllabus, lesson plans, student activity sheets, teaching materials, and evaluation instruments. Expert validation showed that the developed tools are feasible for use, and the tools met the effectiveness criteria: student activities were in the good and very good categories, and the learning outcomes of the experimental class were better than those of the control class. Based on a one-sample t-test of the experimental class learning outcomes, t-calculated = 5.96 > t-table = 1.729; the mean test score of the experimental class reached 82, higher than the control class mean of 65. The normalized gain (N-gain) was 0.48, in the moderate category. Teachers gave positive responses on 14 of the 16 question indicators, and the total student response score was 329, in the very good category. It can be concluded that the developed learning tools meet the criteria of being valid, effective, and practical.

  6. Fostering Tsunami Disaster Response Attitudes through SETS-Vision Science Learning in Grade V of Elementary School (UPAYA PENUMBUHAN SIKAP TANGGAP BENCANA TSUNAMI MELALUI PEMBELAJARAN BERVISI SETS IPA KELAS V SEKOLAH DASAR)

    Directory of Open Access Journals (Sweden)

    Tri Puas Restiadi

    2013-03-01

    … validation, limited-scale testing, and large-scale trials. The validity and practicality data for the learning tools were analyzed with descriptive percentages. Differences in learning outcomes were analyzed with a paired t-test, whereas mastery of student learning outcomes was assessed with a z-test. The results show that the SETS-vision science (IPA) learning tools are valid, with a mean score of 3.69 in the excellent category. The application of SETS-vision science learning is practical in terms of the feasibility of the lesson plans (RPP), with a mean of 3.79 in the excellent category, and student and teacher responses reached the very good category. The application of SETS-vision science learning is effective in achieving classical completeness of learning outcomes ≥ 75% and individual completeness (KKM ≥ 70); the mean learning outcomes of the experimental class were greater than those of the control class and significantly different, because t-calculated > t-table.

  7. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    Science.gov (United States)

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.

  8. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    Science.gov (United States)

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to high-stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.

  9. Rapid prediction of multi-dimensional NMR data sets

    International Nuclear Information System (INIS)

    Gradmann, Sabine; Ader, Christian; Heinrich, Ines; Nand, Deepak; Dittmann, Marc; Cukkemane, Abhishek; Dijk, Marc van; Bonvin, Alexandre M. J. J.; Engelhard, Martin; Baldus, Marc

    2012-01-01

    We present a computational environment for Fast Analysis of multidimensional NMR DAta Sets (FANDAS) that allows assembling multidimensional data sets from a variety of input parameters and facilitates comparing and modifying such “in silico” data sets during the various stages of the NMR data analysis. The input parameters can vary from (partial) NMR assignments directly obtained from experiments to values retrieved from in silico prediction programs. The resulting predicted data sets enable a rapid evaluation of sample labeling in light of spectral resolution and structural content, using standard NMR software such as Sparky. In addition, direct comparison to experimental data sets can be used to validate NMR assignments, distinguish different molecular components, refine structural models or other parameters derived from NMR data. The method is demonstrated in the context of solid-state NMR data obtained for the cyclic nucleotide binding domain of a bacterial cyclic nucleotide-gated channel and on membrane-embedded sensory rhodopsin II. FANDAS is freely available as a web portal under WeNMR (http://www.wenmr.eu/services/FANDAS).

  10. Rapid prediction of multi-dimensional NMR data sets

    Energy Technology Data Exchange (ETDEWEB)

    Gradmann, Sabine; Ader, Christian [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Heinrich, Ines [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Nand, Deepak [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Dittmann, Marc [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Cukkemane, Abhishek; Dijk, Marc van; Bonvin, Alexandre M. J. J. [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands); Engelhard, Martin [Max Planck Institute for Molecular Physiology, Department of Physical Biochemistry (Germany); Baldus, Marc, E-mail: m.baldus@uu.nl [Utrecht University, Faculty of Science, Bijvoet Center for Biomolecular Research (Netherlands)

    2012-12-15

    We present a computational environment for Fast Analysis of multidimensional NMR DAta Sets (FANDAS) that allows assembling multidimensional data sets from a variety of input parameters and facilitates comparing and modifying such 'in silico' data sets during the various stages of the NMR data analysis. The input parameters can vary from (partial) NMR assignments directly obtained from experiments to values retrieved from in silico prediction programs. The resulting predicted data sets enable a rapid evaluation of sample labeling in light of spectral resolution and structural content, using standard NMR software such as Sparky. In addition, direct comparison to experimental data sets can be used to validate NMR assignments, distinguish different molecular components, refine structural models or other parameters derived from NMR data. The method is demonstrated in the context of solid-state NMR data obtained for the cyclic nucleotide binding domain of a bacterial cyclic nucleotide-gated channel and on membrane-embedded sensory rhodopsin II. FANDAS is freely available as a web portal under WeNMR (http://www.wenmr.eu/services/FANDAS).

  11. Verification and validation of decision support software: Expert Choice™ and PCM™

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Q.H.; Martin, J.D.

    1994-11-04

    This report documents the verification and validation of two decision support programs: EXPERT CHOICE™ and PCM™. Both programs use the Analytic Hierarchy Process (AHP) -- or pairwise comparison technique -- developed by Dr. Thomas L. Saaty. In order to provide an independent method for validating the two programs, the pairwise comparison algorithm was implemented in a standard mathematical program. A standard data set -- selecting a car to purchase -- was used with each of the three programs for validation. The results show that both commercial programs performed correctly.
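
    The AHP referred to above derives priority weights from a reciprocal pairwise comparison matrix, commonly via its principal eigenvector, together with a consistency check. The sketch below reproduces that calculation for a small, invented comparison matrix; it is an independent illustration of the algorithm, not the internals of either commercial program.

```python
# Sketch of the AHP pairwise-comparison calculation: priority weights from the
# principal eigenvector of a reciprocal matrix, plus Saaty's consistency ratio.
# The 3x3 comparison values are invented for the example.
import numpy as np

# A[i, j] = how strongly criterion i is preferred over criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

n = A.shape[0]
lambda_max = float(eigvals.real[k])
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index for small matrices
print("weights:", np.round(weights, 3))
print("consistency ratio:", round(ci / ri, 3))   # < 0.1 is conventionally acceptable
```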

  12. New large solar photocatalytic plant: set-up and preliminary results.

    Science.gov (United States)

    Malato, S; Blanco, J; Vidal, A; Fernández, P; Cáceres, J; Trincado, P; Oliveira, J C; Vincent, M

    2002-04-01

    A European industrial consortium called SOLARDETOX has been created as the result of an EC-DGXII BRITE-EURAM-III-financed project on solar photocatalytic detoxification of water. The project objective was to develop a simple, efficient and commercially competitive water-treatment technology, based on compound parabolic collector (CPC) solar collectors and TiO2 photocatalysis, to make easy design and installation possible. The design, set-up and preliminary results of the main project deliverable, the first European industrial solar detoxification treatment plant, are presented. This plant has been designed for the batch treatment of 2 m3 of water with a 100 m2 collector-aperture area and aqueous aerated suspensions of polycrystalline TiO2 irradiated by sunlight. Fully automatic control reduces operation and maintenance manpower. Plant behaviour has been compared (using dichloroacetic acid and cyanide at 50 mg l(-1) initial concentration as model compounds) with the small CPC pilot plants installed at the Plataforma Solar de Almería several years ago. The first results with high-content cyanide (1 g l(-1)) waste water are presented and the plant treatment capacity is calculated.

  13. Considerations for design and use of container challenge sets for qualification and validation of visible particulate inspection.

    Science.gov (United States)

    Melchore, James A; Berdovich, Dan

    2012-01-01

    The major compendia require sterile injectable and ophthalmic drugs to be prepared in a manner that is designed to exclude particulate matter. This requirement is satisfied by testing for subvisual particles in the laboratory and 100% inspection of all containers for the presence of visible particles. Inspection for visible particles is performed in the operations area using one of three methods. Manual inspection is based on human visual acuity, the ability of the inspector to discern between conforming and nonconforming containers, and the ability to remove nonconforming units. Semi-automated inspection is a variation of manual inspection, in which a roller conveyor handles and presents the containers to the human inspector. Fully automated inspection systems perform handling, inspection, and rejection of defective containers. All inspection methods must meet the compendial requirement for sterile drug product to be "essentially free" of visible particulates. Given the random occurrence of particles within the batch, visual detection of a particle in an individual container is probabilistic. The probability of detection for a specific particle is affected by many variables that include product attributes, container size and shape, particle composition and size, and inspection capability. The challenge set is a useful tool to assess particle detection in a product, and it may also be used to evaluate detection of container/closure defects. While the importance of a well-designed challenge set is not always recognized or understood, it serves as the cornerstone for qualification and/or validation of all inspection methods. This article is intended to provide useful information for the design, composition, and use of container challenge sets for particulate inspection studies. Regulations require drug products intended for injection or ophthalmic use to be sterile and free of particles that could harm the patient. This requirement is met by 100% inspection of …

  14. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on radioactive high-level waste disposal and is generally understood to refer to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause for misperceptions and endless speculations. The paper proposes either abandoning the use of this term or agreeing to a definition that would be common to all. (J.S.). 29 refs

  15. Identification of a robust gene signature that predicts breast cancer outcome in independent data sets

    International Nuclear Information System (INIS)

    Korkola, James E; Waldman, Frederic M; Blaveri, Ekaterina; DeVries, Sandy; Moore, Dan H II; Hwang, E Shelley; Chen, Yunn-Yi; Estep, Anne LH; Chew, Karen L; Jensen, Ronald H

    2007-01-01

    Breast cancer is a heterogeneous disease, presenting with a wide range of histologic, clinical, and genetic features. Microarray technology has shown promise in predicting outcome in these patients. We profiled 162 breast tumors using expression microarrays to stratify tumors based on gene expression. A subset of 55 tumors with extensive follow-up was used to identify gene sets that predicted outcome. The predictive gene set was further tested in previously published data sets. We used different statistical methods to identify three gene sets associated with disease free survival. A fourth gene set, consisting of 21 genes in common to all three sets, also had the ability to predict patient outcome. To validate the predictive utility of this derived gene set, it was tested in two published data sets from other groups. This gene set resulted in significant separation of patients on the basis of survival in these data sets, correctly predicting outcome in 62–65% of patients. By comparing outcome prediction within subgroups based on ER status, grade, and nodal status, we found that our gene set was most effective in predicting outcome in ER positive and node negative tumors. This robust gene selection with extensive validation has identified a predictive gene set that may have clinical utility for outcome prediction in breast cancer patients

  16. A high confidence, manually validated human blood plasma protein reference set

    DEFF Research Database (Denmark)

    Schenk, Susann; Schoenhals, Gary J; de Souza, Gustavo

    2008-01-01

    BACKGROUND: The immense diagnostic potential of human plasma has prompted great interest and effort in cataloging its contents, exemplified by the Human Proteome Organization (HUPO) Plasma Proteome Project (PPP) pilot project. Due to challenges in obtaining a reliable blood plasma protein list......-trap-Fourier transform (LTQ-FT) and a linear ion trap-Orbitrap (LTQ-Orbitrap) for mass spectrometry (MS) analysis. Both instruments allow the measurement of peptide masses in the low ppm range. Furthermore, we employed a statistical score that allows database peptide identification searching using the products of two...... consecutive stages of tandem mass spectrometry (MS3). The combination of MS3 with very high mass accuracy in the parent peptide allows peptide identification with orders of magnitude more confidence than that typically achieved. RESULTS: Herein we established a high confidence set of 697 blood plasma proteins...

  17. DESCQA: Synthetic Sky Catalog Validation Framework

    Science.gov (United States)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  18. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    DEFF Research Database (Denmark)

    Kristensen, Solvejg; Sabroe, Svend; Bartels, Paul

    2015-01-01

    PURPOSE: Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations being successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee's perceptions...... of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. MATERIALS AND METHODS: The SAQ...... tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach's alpha (α), and item and subscale scores. RESULTS: Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from...

  19. Development and validation of the Characteristics of Resilience in Sports Teams Inventory.

    OpenAIRE

    Decroos, Steven; Lines, Robin L. J.; Morgan, Paul B. C.; Fletcher, David; Sarkar, Mustafa; Fransen, Katrien; Boen, Filip; Vande Broek, Gert

    2017-01-01

    This multistudy paper reports the development and initial validation of an inventory for the Characteristics of Resilience in Sports Teams (CREST). In 4 related studies, 1,225 athletes from Belgium and the United Kingdom were sampled. The first study provided content validity for an initial item set. The second study explored the factor structure of the CREST, yielding initial evidence but no conclusive results. In contrast, the third and fourth study provided evidence for a 2-factor measure,...

  20. Asthma management in a specialist setting: Results of an Italian Respiratory Society survey.

    Science.gov (United States)

    Braido, Fulvio; Baiardini, Ilaria; Alleri, Pietro; Bacci, Elena; Barbetta, Carlo; Bellocchia, Michela; Benfante, Alida; Blasi, Francesco; Bucca, Caterina; Busceti, Maria Teresa; Centanni, Stefano; Colanardi, Maria Cristina; Contoli, Marco; Corsico, Angelo; D'Amato, Maria; Di Marco, Fabiano; Marco, Dottorini; Ferrari, Marta; Florio, Giovanni; Fois, Alessandro Giuseppe; Foschino Barbaro, Maria Pia; Silvia, Garuti; Girbino, Giuseppe; Grosso, Amelia; Latorre, Manuela; Maniscalco, Sara; Mazza, Francesco; Mereu, Carlo; Molinengo, Giorgia; Ora, Josuel; Paggiaro, Pierluigi; Patella, Vincenzo; Pelaia, Girolamo; Pirina, Pietro; Proietto, Alfio; Rogliani, Paola; Santus, Pierachille; Scichilone, Nicola; Simioli, Francesca; Solidoro, Paolo; Terraneo, Silvia; Zuccon, Umberto; Canonica, Giorgio Walter

    2017-06-01

    Asthma considerably impairs patients' quality of life and increases healthcare costs. Severity, morbidity, and degree of disease control are the major drivers of its clinical and economic impact. National scientific societies are required to monitor the application of international guidelines and to adopt strategies to improve disease control and better allocate resources. The aim was to provide a detailed picture of the characteristics of asthma patients and the modalities of asthma management by specialists in Italy, and to develop recommendations for the daily management of asthma in a specialist setting. A quantitative research program was implemented. Data were collected using an ad hoc questionnaire developed by a group of specialists selected by the Italian Pneumology Society/Italian Respiratory Society. The records of 557 patients were analyzed. In the next few years, specialists are expected to focus their activity on patients with more severe disease and will be responsible for selection of patients for personalized biological therapy; however, only 20% of patients attending Italian specialist surgeries can be considered severe. In 84.4% of cases, the visit was a follow-up visit requested in 82.2% of cases by the specialist him/herself. The Asthma Control Test is used only in 65% of patients. When available, a significant association has been observed between the test score and asthma control as judged by the physician, although concordance was only moderate (κ = 0.68). Asthma was considered uncontrolled by the specialist managing the case in 29.1% of patients; nevertheless, treatment was not stepped up in uncontrolled or partly controlled patients (modified in only 37.2% of patients). The results of this survey support re-evaluation of asthma management by Italian specialists. More resources should be made available for the initial visit and for more severely ill patients. In addition, more extensive use should be made of validated tools, and available drugs should be used …

  1. Psychological Capital: Convergent and discriminant validity of a reconfigured measure

    Directory of Open Access Journals (Sweden)

    Anton Grobler

    2018-04-01

    Background: Although attention has been given to the importance of positivity in the workplace, it has only recently been proposed as a new way in which to focus on organisational behaviour. The psychological resources which meet the criteria for positive organisational behaviour best are hope, self-efficacy, optimism and resilience. Aim: The purpose of this study was to investigate the construct validity of the Psychological Capital Questionnaire (PCQ), with specific reference to its psychometric properties. Setting: The sample included a total of 1749 respondents, 60 each from 30 organisations in South Africa. Methods: A multi-factorial model was statistically explored and confirmed (with exploratory factor analysis and confirmatory factor analysis, respectively). Results: The results support the original conceptualisation and empirically-confirmed factorial composition of Psychological Capital (PsyCap) by four elements, namely Hope, Optimism, Resilience and Self-efficacy. However, the study yielded a three-factor solution, with Hope and Optimism as a combined factor and Resilience and Self-efficacy made up of a reconfigured set of substantively justifiable items (three of the original 24 items were found not to be suitable). The three reconfigured factors showed good psychometric properties, good fit (in support of construct validity) and acceptable levels of convergent and discriminant validity. Recommendations were made for further studies. Conclusion: Based on the results obtained, it seems that the PCQ is a suitable (valid and reliable) instrument for measuring PsyCap. This study could thus serve as a reference for the accurate measurement of PsyCap.

  2. Development and validation of an Argentine set of facial expressions of emotion

    NARCIS (Netherlands)

    Vaiman, M.; Wagner, M.A.; Caicedo, E.; Pereno, G.L.

    2017-01-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion

  3. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL during 1975 to 1977. Individual laboratories used models to calculate daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations

  4. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed

    2010-12-10

    Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As a tractable example, we define the measurable proteomic differences between apparently healthy adult males and females. We chose urine as the body fluid of interest and CE-MS, a thoroughly validated platform technology allowing routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis can only be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.
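
    A hedged sketch of the statistical workflow recommended above is shown below: a two-group Wilcoxon rank-sum test per candidate marker followed by Benjamini-Hochberg adjustment for multiple testing. The intensity matrix is simulated (not CE-MS data), and the group sizes and the number of markers are arbitrary assumptions.

```python
# Sketch: per-marker Wilcoxon rank-sum tests (males vs. females) with
# Benjamini-Hochberg correction for multiple testing, on simulated intensities.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_markers, n_male, n_female = 500, 40, 40
male = rng.lognormal(size=(n_markers, n_male))
female = rng.lognormal(size=(n_markers, n_female))
female[:20] *= 2.0                          # 20 markers with a true sex difference

pvals = np.array([
    mannwhitneyu(male[i], female[i], alternative="two-sided").pvalue
    for i in range(n_markers)
])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(f"markers significant after FDR adjustment: {int(reject.sum())}")
```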

  5. Validation of OPERA3D PCMI Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Jeun, Ji Hoon; Choi, Jae Myung; Yoo, Jong Sung [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of); Cheng, G.; Sim, K. S.; Chassie, Girma [Candu Energy INC.,Ontario (Canada)

    2013-10-15

    This report describes the validation of the OPERA3D code and the validation results that are directly related to PCMI phenomena. OPERA3D was developed for PCMI analysis and validated using in-pile measurement data. The calculated fuel centerline temperature and clad strain show close agreement with the measurement data. Moreover, the 3D FEM fuel model of OPERA3D shows slight hourglassing behavior of the fuel pellet in the contact case. Further optimization will be conducted for future application of the OPERA3D code. A nuclear power plant consists of many complicated systems, and one of the important objectives of all these systems is maintaining nuclear fuel integrity. However, PCMI (Pellet Cladding Mechanical Interaction) phenomena are unavoidable both in currently operating reactors and in next-generation reactors designed for improved safety and economics. To evaluate PCMI behavior, many studies are ongoing to develop 3-dimensional fuel performance evaluation codes. Moreover, these codes are essential for setting safety limits based on best-estimate PCMI predictions for high-burnup fuel.

  6. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    Science.gov (United States)

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  7. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. This method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, that illustrate the use of the method: to estimate if an approximation over- or under-fits the original model; to invalidate an approximation; to rank possible approximations for their qualities. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
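
    The validation test proposed above judges the approximation error against the stochastic variability of the original model. The generic sketch below applies that idea to a toy stochastic logistic model and its deterministic mean-field approximation; the model, parameter values, and acceptance rule (error smaller than one replicate standard deviation) are all assumptions standing in for the vector-borne epidemic models of the paper.

```python
# Generic sketch: compare a deterministic approximation against the spread of
# replicate runs of the original stochastic model it approximates.
import numpy as np

rng = np.random.default_rng(4)
r, K, n0, steps, replicates = 0.1, 1000, 50, 200, 200

def stochastic_run():
    n = n0
    for _ in range(steps):
        births = rng.poisson(r * n)
        deaths = rng.poisson(r * n * n / K)
        n = max(n + births - deaths, 0)
    return n

final = np.array([stochastic_run() for _ in range(replicates)])

# Deterministic (mean-field) approximation of the same process: a discrete logistic map.
n_det = float(n0)
for _ in range(steps):
    n_det += r * n_det * (1 - n_det / K)

error = abs(n_det - final.mean())
spread = final.std(ddof=1)
print(f"approximation error: {error:.1f}, stochastic SD: {spread:.1f}")
print("approximation acceptable" if error < spread else "approximation questionable")
```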

  8. Content validation of the dimensions constituting non-adherence to treatment of arterial hypertension

    Directory of Open Access Journals (Sweden)

    Jose Wicto Pereira Borges

    2013-10-01

    The objective of the study was to validate the content of the dimensions constituting nonadherence to treatment of systemic arterial hypertension. It was a methodological study of content validation. Initially, an integrative review was conducted, which identified four dimensions of nonadherence: person, disease/treatment, health service, and environment. Definitions of these dimensions were evaluated by 17 professionals who were specialists in the area, including nurses, pharmacists and physicians. The Content Validity Index was calculated for each dimension (IVCi) and for the set of dimensions (IVCt), and the binomial test was conducted. The results permitted the validation of the dimensions, with an IVCt of 0.88, demonstrating reasonable systematic comprehension of the phenomenon of nonadherence.
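
    One common way to compute the indices mentioned above is sketched below: the per-dimension index (IVCi) as the proportion of the 17 experts rating a dimension as relevant, and the overall index (IVCt) as the average of the per-dimension values. The expert counts are invented for illustration and merely chosen so that the overall value lands near the reported 0.88.

```python
# Worked sketch of a Content Validity Index calculation with hypothetical ratings.
import numpy as np

dimensions = ["person", "disease/treatment", "health service", "environment"]
# Hypothetical counts of experts (out of 17) who judged each dimension relevant.
relevant_counts = np.array([16, 15, 14, 15])
n_experts = 17

ivc_i = relevant_counts / n_experts      # per-dimension index
ivc_t = ivc_i.mean()                     # overall index (average across dimensions)

for name, score in zip(dimensions, ivc_i):
    print(f"IVCi {name}: {score:.2f}")
print(f"IVCt: {ivc_t:.2f}")              # ~0.88 with these illustrative counts
```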

  9. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    OpenAIRE

    Devendra T Mourya; Pragya D Yadav; Ajay Khare; Anwar H Khan

    2017-01-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being setup in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no ac...

  10. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    International Nuclear Information System (INIS)

    Hoffman, F.O.

    1995-01-01

    The objective of Task 7.1D was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and foodstuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, models employed then to predict the deposition, retention and transport of radionuclides in terrestrial environments employed concepts and databases that were more than a decade old. The extent to which these models have been tested with independent data sets was limited. The data gathered in the former USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through use of international data sets. This project incorporates the activity of Task 7.1D into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains.

  11. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O. [Senes Oak Ridge, Inc., TN (United States). Center for Risk Analysis

    1995-04-01

    The objective of Task 7.1D was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and foodstuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, models employed then to predict the deposition, retention and transport of radionuclides in terrestrial environments employed concepts and databases that were more than a decade old. The extent to which these models have been tested with independent data sets was limited. The data gathered in the former USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through use of international data sets. This project incorporates the activity of Task 7.1D into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains.

  12. Validity and reliability of the Malay version of the Hill-Bone compliance to high blood pressure therapy scale for use in primary healthcare settings in Malaysia: A cross-sectional study.

    Science.gov (United States)

    Cheong, A T; Tong, S F; Sazlina, S G

    2015-01-01

    The Hill-Bone compliance to high blood pressure therapy scale (HBTS) is a useful scale in primary care settings. It has been tested in America, Africa and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of HBTS (HBTS-M) for the Malaysian population. HBTS comprises three subscales assessing compliance to medication, appointment and salt intake. The content validity of HBTS for the local population was agreed through consensus of an expert panel. The 14 items used in the HBTS were adapted to reflect the local situation. It was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by structural and predictive validity, and internal consistency testing in 262 patients with hypertension, who were on antihypertensive agent(s) for at least 1 year in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between HBTS-M total score and blood pressure were performed. The Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by each factor was 23.6%, 10.4% and 9.8%, respectively. However, the internal consistency for each component was suboptimal, with Cronbach's alpha of 0.64, 0.55 and 0.29, respectively. Although there were two components representing medication adherence, the theoretical concepts underlying each component could not be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. HBTS-M did not conform to the structural and predictive validity of the original scale. Its reliability in assessing medication and salt intake adherence would most probably be suboptimal in the Malaysian primary care setting.
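
    For readers wanting to reproduce this kind of analysis, the following sketch computes the Kaiser-Meyer-Olkin statistic and the variance explained by the leading components from an item-response matrix; the data here are random placeholders, not the HBTS-M responses.

    import numpy as np

    def kmo(X):
        """Kaiser-Meyer-Olkin measure of sampling adequacy from a data matrix
        (rows = respondents, columns = items)."""
        R = np.corrcoef(X, rowvar=False)
        inv_R = np.linalg.inv(R)
        # Partial correlations from the inverse correlation matrix.
        d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
        P = -inv_R / d
        off = ~np.eye(R.shape[0], dtype=bool)
        return (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (P[off] ** 2).sum())

    # Illustrative item responses for 262 respondents and 14 items
    # (random placeholders only).
    rng = np.random.default_rng(2)
    X = rng.integers(1, 5, size=(262, 14)).astype(float)

    print(f"KMO = {kmo(X):.3f}")

    # Proportion of variance explained by the first three components,
    # taken from the eigenvalues of the correlation matrix.
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    print("variance explained:", np.round(eigvals[:3] / eigvals.sum(), 3))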

  13. Validity and Reliability of Persian Version of Johns Hopkins Fall Risk Assessment Tool among Aged People

    Directory of Open Access Journals (Sweden)

    hadi hojati

    2018-04-01

    Full Text Available Background & Aim: It is crucial to identify aged patients at risk of falls in clinical settings. The Johns Hopkins Fall Risk Assessment Tool (JHFRAT) is one of the most widely applied international instruments for assessing elderly patients for the risk of falls. The aim of this study was to evaluate the reliability and internal consistency of the JHFRAT. Methods & Materials: In this cross-sectional validation study, WHO's standard protocol was applied for translation and back-translation of the tool. Face and content validity of the tool, and its applicability in clinical settings, were confirmed by ten expert faculty members. In this pilot study, the inclusion criteria were being 60 or more years old, hospitalized within the 8 hours prior to assessment, and in proper cognitive condition as assessed by the MMSE. The subjects of the study were 70 elderly patients (n=70) who were newly hospitalized in Shahroud Emam Hossein Hospital. Data were analyzed using SPSS software (version 16). Internal consistency of the tool was calculated by Cronbach's alpha. Results: According to the results of the study, the Persian version of the JHFRAT was a valid tool for application in clinical settings, with a Cronbach's alpha equal to 0.733. Conclusion: Based on the findings of the current study, it can be concluded that the Persian version of the JHFRAT is a valid and reliable tool for assessing elderly patients on admission in any clinical setting.
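
    A minimal sketch of the internal-consistency computation, assuming an (n patients x n items) score matrix; the item count and scores below are placeholders, not the JHFRAT data.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    # Illustrative ratings for 70 patients on an assumed 7-item version of the
    # tool (random placeholders only).
    rng = np.random.default_rng(3)
    scores = rng.integers(0, 4, size=(70, 7))
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")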

  14. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
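
    A rough sketch of the correction discussed above, assuming simulated data: treated individuals are excluded and the remaining untreated individuals are reweighted by the inverse probability of remaining untreated before the c-index and observed:expected ratio are computed. Variable names and effect sizes are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    # Simulated validation set: a baseline covariate, predicted risks from an
    # (assumed) treatment-naive prognostic model, treatment use that depends on
    # risk, and outcomes lowered by treatment (all placeholders).
    n = 5000
    x = rng.normal(size=n)
    risk = 1 / (1 + np.exp(-(x - 1)))                 # model's predicted risk
    treated = rng.binomial(1, 1 / (1 + np.exp(-(x - 0.5))))
    outcome = rng.binomial(1, np.clip(risk * np.where(treated, 0.6, 1.0), 0, 1))

    # Naive validation ignores treatment use.
    print("naive c-index:", round(roc_auc_score(outcome, risk), 3))

    # IPW: weight untreated individuals by 1 / P(untreated | covariates),
    # then validate on the untreated subset only.
    ps_model = LogisticRegression().fit(x.reshape(-1, 1), treated)
    ps = ps_model.predict_proba(x.reshape(-1, 1))[:, 1]
    untreated = treated == 0
    w = 1.0 / (1.0 - ps[untreated])
    print("IPW c-index:",
          round(roc_auc_score(outcome[untreated], risk[untreated], sample_weight=w), 3))

    # Calibration as an observed:expected ratio in the reweighted untreated subset.
    oe = np.average(outcome[untreated], weights=w) / np.average(risk[untreated], weights=w)
    print("O:E ratio:", round(oe, 3))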

  15. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt®, coupled to the building simulation code, made it possible to determine an adequate set of parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are presented and found to be acceptable.
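
    A minimal sketch of the apparent heat capacity idea referred to above: the PCM's latent heat is folded into a temperature-dependent effective capacity used in an explicit 1-D conduction update. Material properties, melting range and discretization are assumed values, not those of the validated roof model.

    import numpy as np

    def apparent_heat_capacity(T, cp_solid=2000.0, cp_liquid=2000.0,
                               latent=150e3, T_melt=26.0, dT=1.0):
        """Effective capacity [J/(kg.K)]: sensible part plus the latent heat
        spread over a small melting range around T_melt (assumed values)."""
        sensible = np.where(T < T_melt, cp_solid, cp_liquid)
        latent_part = (latent / (dT * np.sqrt(np.pi))) * np.exp(-((T - T_melt) / dT) ** 2)
        return sensible + latent_part

    # Explicit 1-D conduction through a PCM layer driven by a hot outdoor side.
    nx, dx, dt = 50, 0.001, 0.5
    k, rho = 0.2, 800.0                       # conductivity, density (assumed)
    T = np.full(nx, 20.0)
    for _ in range(20000):
        cp = apparent_heat_capacity(T)
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T += dt * k * lap / (rho * cp)
        T[0], T[-1] = 35.0, 20.0              # boundary temperatures
    print("mid-layer temperature:", round(T[nx // 2], 2), "degC")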

  16. Cultural adaptation and validation of an instrument on barriers for the use of research results.

    Science.gov (United States)

    Ferreira, Maria Beatriz Guimarães; Haas, Vanderlei José; Dantas, Rosana Aparecida Spadoti; Felix, Márcia Marques Dos Santos; Galvão, Cristina Maria

    2017-03-02

    to culturally adapt The Barriers to Research Utilization Scale and to analyze the metric validity and reliability properties of its Brazilian Portuguese version. Methodological research conducted by means of the cultural adaptation process (translation and back-translation), face and content validity, construct validity (dimensionality and known groups) and reliability analysis (internal consistency and test-retest). The sample consisted of 335 nurses, of whom 43 participated in the retest phase. The validity of the adapted version of the instrument was confirmed. The scale investigates the barriers to the use of research results in clinical practice. Confirmatory factor analysis demonstrated that the Brazilian Portuguese version of the instrument adequately fits the dimensional structure the scale authors originally proposed. Statistically significant differences were observed among the nurses holding a Master's or Doctoral degree, with characteristics favorable to Evidence-Based Practice, and working at an institution with an organizational culture that targets this approach. The reliability showed a strong correlation (r ranging between 0.77 and 0.84, p<0.001) and the internal consistency was adequate (Cronbach's alpha ranging between 0.77 and 0.82). The Brazilian Portuguese version of The Barriers Scale proved to be valid and reliable in the group studied.

  17. Birth Settings and the Validation of Neonatal Seizures Recorded in Birth Certificates Compared to Medicaid Claims and Hospital Discharge Abstracts Among Live Births in South Carolina, 1996-2013.

    Science.gov (United States)

    Li, Qing; Jenkins, Dorothea D; Kinsman, Stephen L

    2017-05-01

    Objective Neonatal seizures in the first 28 days of life often reflect underlying brain injury or abnormalities, and measure the quality of perinatal care in out-of-hospital births. Using the 2003 revision of birth certificates only, three studies reported more neonatal seizures recorded among home births or planned out-of-hospital births compared to hospital births. However, the validity of recording neonatal seizures or serious neurologic dysfunction across birth settings in birth certificates has not been evaluated. We aimed to validate seizure recording in birth certificates across birth settings using multiple datasets. Methods We examined the checkbox items "seizures" and "seizure or serious neurologic dysfunction" in the 1989 and 2003 revisions of birth certificates in South Carolina from 1996 to 2013. Gold standards were ICD-9-CM codes 779.0, 345.X, and 780.3 in hospital discharge abstracts or Medicaid encounters, considered jointly. Results Sensitivity, positive predictive value, false positive rate, and the kappa statistic of neonatal seizures recording were 7%, 66%, 34%, and 0.12 for the 2003 revision of birth certificates in 547,177 hospital births from 2004 to 2013 and 5%, 33%, 67%, and 0.09 for the 1989 revision in 396,776 hospital births from 1996 to 2003, and 0, 0, 100%, -0.002 among 660 intended home births from 2004 to 2013 and 920 home births from 1996 to 2003, respectively. Conclusions for Practice Despite slight improvement across revisions, South Carolina birth certificates under-reported or falsely reported seizures among hospital births and especially home births. Birth certificates alone should not be used to measure neonatal seizures or serious neurologic dysfunction.
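
    The agreement statistics quoted above can be reproduced from a 2x2 cross-classification; a minimal sketch follows, with the false positive rate taken as the complement of the positive predictive value (as in the abstract) and purely illustrative counts rather than the South Carolina data.

    import numpy as np

    def validity_metrics(tp, fp, fn, tn):
        """Sensitivity, positive predictive value, false positive rate (as
        1 - PPV, the complement used in the abstract) and Cohen's kappa
        from a 2x2 cross-classification of birth certificates against the
        claims/discharge gold standard."""
        sens = tp / (tp + fn)
        ppv = tp / (tp + fp)
        fpr = 1.0 - ppv
        n = tp + fp + fn + tn
        po = (tp + tn) / n
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1 - pe)
        return sens, ppv, fpr, kappa

    # Illustrative counts only (not the study data).
    print([round(v, 3) for v in validity_metrics(tp=35, fp=18, fn=465, tn=546659)])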

  18. Validating Measures of Mathematical Knowledge for Teaching

    Science.gov (United States)

    Kane, Michael

    2007-01-01

    According to Schilling, Blunk, and Hill, the set of papers presented in this journal issue had two main purposes: (1) to use an argument-based approach to evaluate the validity of the tests of mathematical knowledge for teaching (MKT), and (2) to critically assess the author's version of an argument-based approach to validation (Kane, 2001, 2004).…

  19. Validation of Safety-Critical Systems for Aircraft Loss-of-Control Prevention and Recovery

    Science.gov (United States)

    Belcastro, Christine M.

    2012-01-01

    Validation of technologies developed for loss of control (LOC) prevention and recovery poses significant challenges. Aircraft LOC can result from a wide spectrum of hazards, often occurring in combination, which cannot be fully replicated during evaluation. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of hazardous and uncertain conditions, and the validation framework must provide some measure of assurance that the new vehicle safety technologies do no harm (i.e., that they themselves do not introduce new safety risks). This paper summarizes a proposed validation framework for safety-critical systems, provides an overview of validation methods and tools developed by NASA to date within the Vehicle Systems Safety Project, and develops a preliminary set of test scenarios for the validation of technologies for LOC prevention and recovery.

  20. Psychometric properties and longitudinal validation of the self-reporting questionnaire (SRQ-20) in a Rwandan community setting: a validation study

    Directory of Open Access Journals (Sweden)

    van Lammeren Anouk

    2011-08-01

    Full Text Available Abstract Background This study took place to enable the measurement of the effects on mental health of a psychosocial intervention in Rwanda. It aimed to establish the capacities of the Self-Reporting Questionnaire (SRQ-20) to screen for mental disorder and to assess symptom change over time in a Rwandan community setting. Methods The SRQ-20 was translated into Kinyarwanda in a process of forward and back-translation. SRQ-20 data were collected in a Rwandan setting on 418 respondents; a random subsample of 230 respondents was assessed a second time with a three month time interval. Internal reliability was tested using Cronbach's alpha. The optimal cut-off point was determined by calculating Receiver Operating Characteristic curves, using semi-structured clinical interviews as standard in a random subsample of 99 respondents. Subsequently, predictive value, likelihood ratio, and interrater agreement were calculated. The factor structure of the SRQ-20 was determined through exploratory factor analysis. Factorial invariance over time was tested in a multigroup confirmatory factor analysis. Results The reliability of the SRQ-20 in women (α = 0.85) and men (α = 0.81) could be considered good. The instrument performed moderately well in detecting common mental disorders, with an area under the curve (AUC) of 0.76 for women and 0.74 for men. Cut-off scores were different for women (10) and men (8). Factor analysis yielded five factors, explaining 38% of the total variance. The factor structure proved to be time invariant. Conclusions The SRQ-20 can be used as a screener to detect mental disorder in a Rwandan community setting, but cut-off scores need to be adjusted for women and men separately. The instrument also shows longitudinal factorial invariance, which is an important prerequisite for assessing changes in symptom severity. This is a significant finding as in non-western post-conflict settings the relevance of diagnostic categories is questionable. The use of the
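
    A brief sketch of the cut-off determination described above, assuming placeholder SRQ-20 totals and interview diagnoses: the AUC is computed and a cut-off is chosen by Youden's J (running the same code separately per sex would yield sex-specific cut-offs).

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Illustrative SRQ-20 totals and interview diagnoses (placeholders,
    # not the Rwandan data): scores 0-20 and a binary clinical diagnosis.
    rng = np.random.default_rng(5)
    diagnosis = rng.binomial(1, 0.3, size=99)
    scores = rng.poisson(np.where(diagnosis == 1, 11, 6))

    auc = roc_auc_score(diagnosis, scores)
    fpr, tpr, thresholds = roc_curve(diagnosis, scores)
    best_cutoff = thresholds[np.argmax(tpr - fpr)]       # Youden's J
    print(f"AUC = {auc:.2f}, optimal cut-off = {best_cutoff}")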

  1. A set of BAC clones spanning the human genome.

    NARCIS (Netherlands)

    Krzywinski, M.; Bosdet, I.; Smailus, D.; Chiu, R.; Mathewson, C.; Wye, N.; Barber, S.; Brown-John, M.; Chan, S.; Chand, S.; Cloutier, A.; Girn, N.; Lee, D.; Masson, A.; Mayo, M.; Olson, T.; Pandoh, P.; Prabhu, A.L.; Schoenmakers, E.F.P.M.; Tsai, M.Y.; Albertson, D.; Lam, W.W.; Choy, C.O.; Osoegawa, K.; Zhao, S.; Jong, P.J. de; Schein, J.; Jones, S.; Marra, M.A.

    2004-01-01

    Using the human bacterial artificial chromosome (BAC) fingerprint-based physical map, genome sequence assembly and BAC end sequences, we have generated a fingerprint-validated set of 32 855 BAC clones spanning the human genome. The clone set provides coverage for at least 98% of the human

  2. Verification, validation and application of NEPTUNE-CFD to two-phase Pressurized Thermal Shocks

    Energy Technology Data Exchange (ETDEWEB)

    Mérigoux, N., E-mail: nicolas.merigoux@edf.fr [Electricité de France, R&D Division, 6 Quai Watier, 78401 Chatou (France); Laviéville, J.; Mimouni, S.; Guingo, M.; Baudry, C. [Electricité de France, R&D Division, 6 Quai Watier, 78401 Chatou (France); Bellet, S., E-mail: serge.bellet@edf.fr [Electricité de France, Thermal & Nuclear Studies and Projects Division, 12-14 Avenue Dutriévoz, 69628 Villeurbanne (France)

    2017-02-15

    Nuclear Power Plants are subjected to a variety of ageing mechanisms and, at the same time, exposed to potential Pressurized Thermal Shock (PTS) – characterized by a rapid cooling of the Reactor Pressure Vessel (RPV) wall. In this context, NEPTUNE-CFD is developed and used to model two-phase PTS in an industrial configuration, providing the temperature and pressure fields required to assess the integrity of the RPV. Furthermore, when using CFD for nuclear safety demonstration purposes, EDF applies a methodology based on physical analysis, verification, validation and application to industrial scale (V&V), to demonstrate the quality of, and the confidence in, the results obtained. By following this methodology, each step must be proved to be consistent with the others, and with the final goal of the calculations. To this effect, a chart demonstrating how far the validation step of NEPTUNE-CFD covers the PTS application will be drawn. A selection of the code verification and validation cases against different experiments will be described. For consistency of results, a single and mature set of models – resulting from the knowledge acquired during the code development over the last decade – has been used. Building on this development and validation feedback, a methodology has been set up to perform industrial computations. Finally, the guidelines of this methodology based on NEPTUNE-CFD and SYRTHES coupling – to take into account the conjugate heat transfer between liquid and solid – will be presented. A short overview of the engineering approach will be given – starting from the meshing process, up to the post-processing and analysis of results.

  3. Hope Matters: Developing and Validating a Measure of Future Expectations Among Young Women in a High HIV Prevalence Setting in Rural South Africa (HPTN 068).

    Science.gov (United States)

    Abler, Laurie; Hill, Lauren; Maman, Suzanne; DeVellis, Robert; Twine, Rhian; Kahn, Kathleen; MacPhail, Catherine; Pettifor, Audrey

    2017-07-01

    Hope is a future expectancy characterized by an individual's perception that a desirable future outcome can be achieved. Though scales exist to measure hope, they may have limited relevance in low-resource, high-HIV-prevalence settings. We developed and validated a hope scale among young women living in rural South Africa. We conducted formative interviews to identify the key elements of hope. Using items developed from these interviews, we administered the hope scale to 2533 young women enrolled in an HIV-prevention trial. Women endorsed scale items highly and the scale proved to be unidimensional in the sample. Hope scores were significantly correlated with hypothesized psychosocial correlates, with the exception of life stressors. Overall, our hope measure was found to have excellent reliability and to show encouraging preliminary indications of validity in this population. This study presents a promising measure to assess hope among young women in South Africa.

  4. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    Genome-scale metabolic network reconstructions are often affected by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass components in a metabolic model. These autocatalytic sets are well-conserved across all domains of life, and their identification in specific genome-scale reconstructions allows us to draw conclusions about potential inconsistencies in these models. The method is capable of detecting such inconsistencies; thus, the method we report represents a powerful tool to identify inconsistencies in large-scale metabolic networks. AVAILABILITY AND IMPLEMENTATION: The method is available as source code on http://users.minet.uni-jena.de/~m3kach/ASBIG/ASBIG.zip. CONTACT: christoph.kaleta@uni-jena.de

  5. Identification and Validation of ESP Teacher Competencies: A Research Design

    Science.gov (United States)

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  6. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    Science.gov (United States)

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-07

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
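
    A simplified rendition of the reusable-holdout (Thresholdout) idea, with arbitrary threshold and noise parameters rather than the authors' reference implementation: the holdout estimate is released, with added noise, only when it differs noticeably from the training estimate.

    import numpy as np

    rng = np.random.default_rng(6)

    def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
        """Return a reusable-holdout answer for one adaptively chosen query.
        train_vals / holdout_vals are per-sample values of the statistic."""
        t, h = train_vals.mean(), holdout_vals.mean()
        if abs(t - h) < threshold + rng.normal(0, sigma):
            return t                     # training answer is already accurate
        return h + rng.normal(0, sigma)  # release a noised holdout answer

    # Example: estimating adaptively chosen feature-label correlations.
    train = rng.normal(size=(1000, 20))
    holdout = rng.normal(size=(1000, 20))
    labels_tr, labels_ho = rng.choice([-1, 1], 1000), rng.choice([-1, 1], 1000)
    for j in range(3):                   # queries chosen after looking at train
        answer = thresholdout(train[:, j] * labels_tr, holdout[:, j] * labels_ho)
        print(f"query {j}: reported correlation ~ {answer:.3f}")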

  7. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
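
    As a small illustration, the sketch below marks the points of Z^2 whose combined binary digit sum is even; this is one natural two-dimensional analogue of the Thue-Morse sequence recognizable by a finite automaton reading base-2 digits, offered as an assumption rather than the paper's exact construction.

    def in_thue_morse_2d(i, j):
        """Membership in a 2-D Thue-Morse-type automatic subset of Z^2:
        keep (i, j) when the total number of 1-bits of i and j is even."""
        return (bin(i).count("1") + bin(j).count("1")) % 2 == 0

    # Render a small patch of the set; the self-similar block structure that
    # decimations exploit is visible already at this scale.
    for i in range(8):
        print("".join("#" if in_thue_morse_2d(i, j) else "." for j in range(8)))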

  8. Completeness and validity in a national clinical thyroid cancer database

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Mathiesen, Jes Sloth; Krogdahl, Annelise

    2014-01-01

    BACKGROUND: Although a prospective national clinical thyroid cancer database (DATHYRCA) has been active in Denmark since January 1, 1996, no assessment of data quality has been performed. The purpose of the study was to evaluate completeness and data validity in the Danish national clinical thyroid cancer database: DATHYRCA. STUDY DESIGN AND SETTING: National prospective cohort. Denmark; population 5.5 million. Completeness of case ascertainment was estimated by the independent case ascertainment method using three governmental registries as a reference. The reabstracted record method was used to appraise the validity. For validity assessment 100 cases were randomly selected from the DATHYRCA database; medical records were used as a reference. RESULT: The database held 1934 cases of thyroid carcinoma and completeness of case ascertainment was estimated to 90.9%. Completeness of registration...

  9. Basic strategies for valid cytometry using image analysis

    NARCIS (Netherlands)

    Jonker, A.; Geerts, W. J.; Chieco, P.; Moorman, A. F.; Lamers, W. H.; van Noorden, C. J.

    1997-01-01

    The present review provides a starting point for setting up an image analysis system for quantitative densitometry and absorbance or fluorescence measurements in cell preparations, tissue sections or gels. Guidelines for instrumental settings that are essential for the valid application of image

  10. Wind and solar resource data sets

    DEFF Research Database (Denmark)

    Clifton, Andrew; Hodge, Bri-Mathias; Draxl, Caroline

    2017-01-01

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to the wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research. For further resources related to this article, please visit the WIREs website.

  11. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
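
    Capability (1), removal of spurious vectors, is commonly implemented with a normalized median test; the sketch below is a generic version of that idea, offered as an assumption rather than the package's actual algorithm.

    import numpy as np

    def normalized_median_test(u, eps=0.1, threshold=2.0):
        """Flag spurious vectors in a 2-D velocity component field `u` by
        comparing each vector to the median of its 3x3 neighbourhood
        (normalized median test; eps guards against zero fluctuation)."""
        spurious = np.zeros_like(u, dtype=bool)
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                neigh = np.delete(u[i-1:i+2, j-1:j+2].ravel(), 4)
                med = np.median(neigh)
                fluct = np.median(np.abs(neigh - med))
                spurious[i, j] = abs(u[i, j] - med) / (fluct + eps) > threshold
        return spurious

    # Example: a smooth field with two artificial outliers.
    rng = np.random.default_rng(7)
    u = rng.normal(1.0, 0.05, size=(10, 10))
    u[4, 4], u[6, 2] = 9.0, -7.0
    print("flagged at:", np.argwhere(normalized_median_test(u)))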

  12. 20 CFR 404.725 - Evidence of a valid ceremonial marriage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Evidence of a valid ceremonial marriage. 404... DISABILITY INSURANCE (1950- ) Evidence Evidence of Age, Marriage, and Death § 404.725 Evidence of a valid ceremonial marriage. (a) General. A valid ceremonial marriage is one that follows procedures set by law in...

  13. A Study on Guide Sign Validity in Driving Simulator

    Directory of Open Access Journals (Sweden)

    Wei Zhonghua

    2011-12-01

    Full Text Available Guide signs play an important role in informing road users about the road network, and how to design and locate guide signs to increase traffic operation efficiency is a key question for traffic engineers. A driving simulator is a useful device for studying guide signs under controlled process and system conditions. Before guide signs can be studied with a driving simulator, however, the validity of guide signs in the simulator must be established; the results of this experiment are the foundation for further guide sign studies. A simulator calibration procedure for guide signs was set up in this study, and legibility distance was used as the measure of performance to evaluate the validity of guide signs in the driving simulator. Thirty-two participants were recruited. Data mining of the results indicated that legibility distance and speed were inversely related, while legibility distance and the text height of guide signs were positively related. When the speed is 20 km/h, 30 km/h and 40 km/h, the required magnifying power of text height is 4.3, 4.1 and 3.8, respectively, and guide signs show absolute validity in the driving simulator.

  14. Validating the WHO maternal near miss tool: comparing high- and low-resource settings.

    Science.gov (United States)

    Witteveen, Tom; Bezstarosti, Hans; de Koning, Ilona; Nelissen, Ellen; Bloemenkamp, Kitty W; van Roosmalen, Jos; van den Akker, Thomas

    2017-06-19

    WHO proposed the WHO Maternal Near Miss (MNM) tool, classifying women according to several (potentially) life-threatening conditions, to monitor and improve quality of obstetric care. The objective of this study is to analyse merged data of one high- and two low-resource settings where this tool was applied and test whether the tool may be suitable for comparing severe maternal outcome (SMO) between these settings. Using three cohort studies that included SMO cases, during two-year time frames in the Netherlands, Tanzania and Malawi we reassessed all SMO cases (as defined by the original studies) with the WHO MNM tool (five disease-, four intervention- and seven organ dysfunction-based criteria). Main outcome measures were prevalence of MNM criteria and case fatality rates (CFR). A total of 3172 women were studied; 2538 (80.0%) from the Netherlands, 248 (7.8%) from Tanzania and 386 (12.2%) from Malawi. Total SMO detection was 2767 (87.2%) for disease-based criteria, 2504 (78.9%) for intervention-based criteria and 1211 (38.2%) for organ dysfunction-based criteria. Including every woman who received ≥1 unit of blood in low-resource settings as life-threatening, as defined by organ dysfunction criteria, led to more equally distributed populations. In one third of all Dutch and Malawian maternal death cases, organ dysfunction criteria could not be identified from medical records. Applying solely organ dysfunction-based criteria may lead to underreporting of SMO. Therefore, a tool based on defining MNM only upon establishing organ failure is of limited use for comparing settings with varying resources. In low-resource settings, lowering the threshold of transfused units of blood leads to a higher detection rate of MNM. We recommend refined disease-based criteria, accompanied by a limited set of intervention- and organ dysfunction-based criteria to set a measure of severity.

  15. Enhancement of chemical entity identification in text using semantic similarity validation.

    Directory of Open Access Journals (Sweden)

    Tiago Grego

    Full Text Available With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to efficiently retrieve this information. To tackle this issue, text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered by using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities, while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase the precision by 28% for one of the chemical entity identification tools (Whatizit), maintaining in that subset 97% of the correctly identified entities. Our method can be directly used as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve their results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/.
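
    A rough sketch of the scoring step described above: each identified entity receives a single validation score, its average semantic similarity to the other entities found in the same text, and a threshold separates validated entities from outliers. The similarity values and the CHEBI:99999 identifier are placeholders standing in for an ontology-based measure.

    # Placeholder pairwise similarities between ChEBI identifiers found in one
    # text (a real system would compute these from the ChEBI ontology).
    sim = {frozenset(p): s for p, s in {
        ("CHEBI:15377", "CHEBI:16236"): 0.70,
        ("CHEBI:15377", "CHEBI:17234"): 0.60,
        ("CHEBI:16236", "CHEBI:17234"): 0.80,
        ("CHEBI:15377", "CHEBI:99999"): 0.05,  # likely misidentification
        ("CHEBI:16236", "CHEBI:99999"): 0.10,
        ("CHEBI:17234", "CHEBI:99999"): 0.08,
    }.items()}

    entities = ["CHEBI:15377", "CHEBI:16236", "CHEBI:17234", "CHEBI:99999"]

    def validation_score(e):
        """Average similarity of entity e to all other entities in the text."""
        others = [x for x in entities if x != e]
        return sum(sim[frozenset((e, x))] for x in others) / len(others)

    threshold = 0.3
    for e in entities:
        score = validation_score(e)
        label = "validated" if score >= threshold else "outlier"
        print(f"{e}: score={score:.2f} -> {label}")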

  16. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  17. Diagnostic aid to rule out pneumonia in adults with cough and feeling of fever. A validation study in the primary care setting

    Directory of Open Access Journals (Sweden)

    Held Ulrike

    2012-12-01

    Full Text Available Abstract Background We recently reported the derivation of a diagnostic aid to rule out pneumonia in adults presenting with new onset of cough or worsening of chronic cough and increased body temperature. The aim of the present investigation was to validate the diagnostic aid in a new sample of primary care patients. Methods From two group practices in Zurich, we included 110 patients with the main symptoms of cough and a subjective feeling of increased body temperature, with C-reactive protein levels below 50 μg/ml, no dyspnea, and no daily feeling of increased body temperature since the onset of cough. We excluded patients who were prescribed antibiotics at their first consultation. Approximately two weeks after inclusion, practice assistants contacted the participants by phone and asked four questions regarding the course of their complaints. In particular, they asked whether a prescription of antibiotics or hospitalization had been necessary within the last two weeks. Results In 107 of 110 patients, pneumonia could be ruled out with a high degree of certainty, and no prescription of antibiotics was necessary. Three patients were prescribed antibiotics between the time of inclusion in the study and the phone interview two weeks later. Acute rhinosinusitis was diagnosed in one patient, and antibiotics were prescribed to the other two patients because their symptoms had worsened and their CRP levels increased. Use of the diagnostic aid could have missed these two possible cases of pneumonia. These observations correspond to a false negative rate of 1.8% (95% confidence interval: 0.50%-6.4%). Conclusions This diagnostic aid is helpful to rule out pneumonia in patients from a primary care setting. After further validation, application of this aid in daily practice may help to reduce the prescription rate of unnecessary antibiotics in patients with respiratory tract infections.

  18. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    KAUST Repository

    Atallah, Nabil M.

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD and ammonia) as well as an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better overall results than methane-only optimization despite not fully capturing the intermediary outputs. The results from the parameter optimization were validated upon their independent application to the data sets of the second digester.

  19. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    KAUST Repository

    Atallah, Nabil M.; El-Fadel, Mutasem E.; Ghanimeh, Sophia A.; Saikaly, Pascal; Abou Najm, Majdi R.

    2014-01-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD and ammonia) as well as an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better overall results than methane-only optimization despite not fully capturing the intermediary outputs. The results from the parameter optimization were validated upon their independent application to the data sets of the second digester.

  20. Validation of the Gratitude Questionnaire in Filipino Secondary School Students.

    Science.gov (United States)

    Valdez, Jana Patricia M; Yang, Weipeng; Datu, Jesus Alfonso D

    2017-10-11

    Most studies have assessed the psychometric properties of the Gratitude Questionnaire - Six-Item Form (GQ-6) in Western contexts, while very little research has explored the applicability of this scale in non-Western settings. To address this gap, the aim of the study was to examine the factorial validity and gender invariance of the Gratitude Questionnaire in the Philippines through a construct validation approach. A total of 383 Filipino high school students participated in the research. In terms of within-network construct validity, results of confirmatory factor analyses revealed that the five-item version of the questionnaire (GQ-5) had better fit compared to the original six-item version of the gratitude questionnaire. The scores from the GQ-5 also exhibited invariance across gender. Between-network construct validation showed that gratitude was associated with higher levels of academic achievement (β = .46, p < .001), while gratitude was linked to a lower degree of amotivation (β = -.51, p < .001). Theoretical and practical implications are discussed.

  1. Date and acquaintance rape. Development and validation of a set of scales.

    Science.gov (United States)

    Walsh, J F; Devellis, B M; Devellis, R F

    1997-02-01

    Increasing recognition of the prevalence of date/acquaintance rape (DAR) in the US, especially among college women, has led to an understanding that the techniques needed to fend off attacks from friends and acquaintances differ from those used to prevent rape by strangers. This study developed and tested the reliability and validity of the following DAR constructs: perceived vulnerability (underestimation of vulnerability discourages adequate self-protection), self-efficacy, relational priority (neglecting self-interest to save a relationship), rape myth acceptance (subscribing to myths about rape allows women to avoid facing their own vulnerability), and commitment to self-defense. These constructs were also correlated with scales measuring masculinity, self-esteem, and degree of belief in a "just world." Data were gathered to test these constructs via a questionnaire administered to 800 female undergraduate dormitory residents (47% response rate). Analysis of the data allowed refinement of 50 items into 25 items that constitute reliable scales of perceived vulnerability, self-efficacy, and self-determination and a marginally reliable scale of victim-blaming (rape myth). Support was found for 5/6 predicted correlations among DAR scales and 3/5 hypothesized correlations between DAR scales and convergent/discriminant validity scales. Research into this rape prevention tool will continue.

  2. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  3. In-Drift Microbial Communities Model Validation Calculations

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  4. In-Drift Microbial Communities Model Validation Calculation

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  5. In-Drift Microbial Communities Model Validation Calculations

    International Nuclear Information System (INIS)

    Jolley, D.M.

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  6. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    International Nuclear Information System (INIS)

    D.M. Jolley

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is effected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000) As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  7. Validation of WIMS-AECL/(MULTICELL)/RFSP system by the results of phase-B test at Wolsung-II unit

    Energy Technology Data Exchange (ETDEWEB)

    Hong, In Seob; Min, Byung Joo; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-03-01

    The objective of this study is the validation of the WIMS-AECL lattice code, which has been proposed as a substitute for the POWDERPUFS-V (PPV) code. For the validation of this code, the WIMS-AECL/(MULTICELL)/RFSP (lattice calculation/(incremental cross-section calculation)/core calculation) code system has been used for the post-simulation of the Phase-B physics test at the Wolsung-II unit. This code system had been used for the Wolsong-I and Point Lepreau reactors, but after a few modifications of the WIMS-AECL input values for Wolsong-II, the results of the WIMS-AECL/RFSP code calculations are much improved over the old ones. Most of the results show good estimates, except for the moderator temperature coefficient test, whose verification remains to be done as further work. 6 figs., 15 tabs. (Author)

  8. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in ''American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors,'' ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard

  9. Field assessment of balance in 10 to 14 year old children, reproducibility and validity of the Nintendo Wii board

    Science.gov (United States)

    2014-01-01

    Background Because body proportions in childhood are different to those in adulthood, children have a relatively higher centre of mass location. This biomechanical difference and the fact that children’s movements have not yet fully matured result in different sway performances in children and adults. When assessing static balance, it is essential to use objective, sensitive tools, and these types of measurement have previously been performed in laboratory settings. However, the emergence of technologies like the Nintendo Wii Board (NWB) might allow balance assessment in field settings. As the NWB has only been validated and tested for reproducibility in adults, the purpose of this study was to examine reproducibility and validity of the NWB in a field setting, in a population of children. Methods Fifty-four 10–14 year-olds from the CHAMPS-Study DK performed four different balance tests: bilateral stance with eyes open (1), unilateral stance on dominant (2) and non-dominant leg (3) with eyes open, and bilateral stance with eyes closed (4). Three rounds of the four tests were completed with the NWB and with a force platform (AMTI). To assess reproducibility, an intra-day test-retest design was applied with a two-hour break between sessions. Results Bland-Altman plots supplemented by Minimum Detectable Change (MDC) and concordance correlation coefficient (CCC) demonstrated satisfactory reproducibility for the NWB and the AMTI (MDC: 26.3-28.2%, CCC: 0.76-0.86) using Centre Of Pressure path Length as measurement parameter. Bland-Altman plots demonstrated satisfactory concurrent validity between the NWB and the AMTI, supplemented by satisfactory CCC in all tests (CCC: 0.74-0.87). The ranges of the limits of agreement in the validity study were comparable to the limits of agreement of the reproducibility study. Conclusion Both NWB and AMTI have satisfactory reproducibility for testing static balance in a population of children. Concurrent validity of NWB compared

  10. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the German Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62 – 0.64; disattenuated r = 0.78 – 0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001 – 0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015 – 0.042). Conclusions Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827
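
    The disattenuated correlations quoted above follow from the standard correction for attenuation, which divides an observed correlation by the geometric mean of the two measures' reliabilities. A minimal sketch of that calculation (the reliability values below are illustrative assumptions, not figures reported in the study):

      import math

      def disattenuate(r_xy, rel_x, rel_y):
          # Spearman's correction for attenuation: the observed correlation divided
          # by the geometric mean of the two instruments' reliabilities.
          return r_xy / math.sqrt(rel_x * rel_y)

      # Illustrative only: an observed r of 0.63 with assumed reliabilities of
      # 0.70 (single item) and 0.90 (SWLS) gives a corrected r of about 0.79,
      # in the range reported above (0.78-0.80).
      print(round(disattenuate(0.63, 0.70, 0.90), 2))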

  11. Validating a perceptual distraction model using a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user's perceived distraction caused by audio-on-audio interference. Originally, the distraction model was trained with music targets and interferers using a simple loudspeaker setup, consisting of only two...... sound zones within the sound-zone system. Thus, validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. The results show that the model performance is equally good in both zones, i.e., with both speech- on-music and music-on-speech stimuli...

  12. Using secondary data sets in health care management : opportunities and challenges

    OpenAIRE

    Buttigieg, Sandra; Annual Meeting of the American Academy of Management

    2013-01-01

    The importance of secondary data sets within the medical services and management sector is discussed. Secondary data sets, which are readily available and thus considerably reduce costs, can also provide accurate, valid and reliable evidence.

  13. The Danish anal sphincter rupture questionnaire: Validity and reliability

    DEFF Research Database (Denmark)

    Due, Ulla; Ottesen, Marianne

    2008-01-01

    Objective. To revise, validate and test for reliability an anal sphincter rupture questionnaire in relation to construct, content and face validity. Setting and background. Since 1996 women with anal sphincter rupture (ASR) at one of the public university hospitals in Copenhagen, Denmark have bee...

  14. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  15. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    Science.gov (United States)

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis of studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies were conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants ( N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers on a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator highly depends on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point at a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  16. Validation and evaluation of common large-area display set (CLADS) performance specification

    Science.gov (United States)

    Hermann, David J.; Gorenflo, Ronald L.

    1998-09-01

    Battelle is under contract with Warner Robins Air Logistics Center to design a Common Large Area Display Set (CLADS) for use in multiple Command, Control, Communications, Computers, and Intelligence (C4I) applications that currently use 19-inch Cathode Ray Tubes (CRTs). Battelle engineers have built and fully tested pre-production prototypes of the CLADS design for AWACS, and are completing pre-production prototype displays for three other platforms simultaneously. With the CLADS design, any display technology that can be packaged to meet the form, fit, and function requirements defined by the Common Large Area Display Head Assembly (CLADHA) performance specification is a candidate for CLADS applications. This technology-independent feature reduces the risk of CLADS development, permits lifelong technology insertion upgrades without unnecessary redesign, and addresses many of the obsolescence problems associated with COTS technology-based acquisition. Performance and environmental testing was performed on the AWACS CLADS and continues on other platforms as a part of the performance specification validation process. A simulator assessment and flight assessment were successfully completed for the AWACS CLADS, and lessons learned from these assessments are being incorporated into the performance specifications. Draft CLADS specifications were released to potential display integrators and manufacturers for review in 1997, and the final version of the performance specifications is scheduled to be released to display integrators and manufacturers in May, 1998. Initial USAF applications include replacements for the E-3 AWACS color monitor assembly, E-8 Joint STARS graphics display unit, and ABCCC airborne color display. Initial U.S. Navy applications include the E-2C ACIS display. For these applications, reliability and maintainability are key objectives. The common design will reduce the cost of operation and maintenance by an estimated 3.3M per year on E-3 AWACS

  17. Brazilian Portuguese version of the Revised Fibromyalgia Impact Questionnaire (FIQR-Br): cross-cultural validation, reliability, and construct and structural validation.

    Science.gov (United States)

    Lupi, Jaqueline Basilio; Carvalho de Abreu, Daniela Cristina; Ferreira, Mariana Candido; Oliveira, Renê Donizeti Ribeiro de; Chaves, Thais Cristina

    2017-08-01

    This study aimed to culturally adapt and validate the Revised Fibromyalgia Impact Questionnaire (FIQR) to Brazilian Portuguese, by the use of analysis of internal consistency, reliability, and construct and structural validity. A total of 100 female patients with fibromyalgia participated in the validation process of the Brazilian Portuguese version of the FIQR (FIQR-Br). The intraclass correlation coefficient (ICC) was used for statistical analysis of reliability (test-retest), Cronbach's alpha for internal consistency, Pearson's rank correlation for construct validity, and confirmatory factor analysis (CFA) for structural validity. Excellent levels of reliability were verified, with ICC greater than 0.75 for all questions and domains of the FIQR-Br. For internal consistency, alpha values greater than 0.70 for the items and domains of the questionnaire were observed. Moderate (0.40–0.70) correlations were observed for the scores of domains and total score between the FIQR-Br and FIQ-Br. The structure of the three domains of the FIQR-Br was confirmed by CFA. The results of this study suggest that the FIQR-Br is a reliable and valid instrument for assessing fibromyalgia-related impact, and support its use in clinical settings and research. The structure of the three domains of the FIQR-Br was also confirmed. Implications for Rehabilitation Fibromyalgia is a chronic musculoskeletal disorder characterized by widespread and diffuse pain, fatigue, sleep disturbances, and depression. The disease significantly impairs patients' quality of life and can be highly disabling. To be used in multicenter research efforts, the Revised Fibromyalgia Impact Questionnaire (FIQR) must be cross-culturally validated and psychometrically tested. This paper will make available a new version of the FIQR-Br since another version already exists, but there are concerns about its measurement properties. The availability of an instrument adapted to and validated for Brazilian

  18. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  19. In-vessel core degradation code validation matrix

    International Nuclear Information System (INIS)

    Haste, T.J.; Adroguer, B.; Gauntt, R.O.; Martinez, J.A.; Ott, L.J.; Sugimoto, J.; Trambauer, K.

    1996-01-01

    The objective of the current Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The scope of the review covers PWR and BWR designs of Western origin: the coverage of phenomena extends from the initial heat-up through to the introduction of melt into the lower plenum. Concerning fission product behaviour, the effect of core degradation on fission product release is considered. The report provides brief overviews of the main LWR severe accident sequences and of the dominant phenomena involved. The experimental database is summarised. These data are cross-referenced against a condensed set of the phenomena and test condition headings presented earlier, judging the results against a set of selection criteria and identifying key tests of particular value. The main conclusions and recommendations are listed. (K.A.)

  20. The Development of a Novel, Validated, Rapid and Simple Method for the Detection of Sarcocystis fayeri in Horse Meat in the Sanitary Control Setting.

    Science.gov (United States)

    Furukawa, Masato; Minegishi, Yasutaka; Izumiyama, Shinji; Yagita, Kenji; Mori, Hideto; Uemura, Taku; Etoh, Yoshiki; Maeda, Eriko; Sasaki, Mari; Ichinose, Kazuya; Harada, Seiya; Kamata, Yoichi; Otagiri, Masaki; Sugita-Konishi, Yoshiko; Ohnishi, Takahiro

    2016-01-01

    Sarcocystis fayeri (S. fayeri) is a newly identified causative agent of foodborne disease that is associated with the consumption of raw horse meat. The testing methods prescribed by the Ministry of Health, Labour and Welfare of Japan are time consuming and require the use of expensive equipment and a high level of technical expertise. Accordingly, these methods are not suitable for use in the routine sanitary control setting to prevent outbreaks of foodborne disease. In order to solve these problems, we have developed a new, rapid and simple testing method using LAMP, which takes only 1 hour to perform and which does not involve the use of any expensive equipment or expert techniques. For the validation of this method, an inter-laboratory study was performed among 5 institutes using 10 samples infected with various concentrations of S. fayeri. The results of the inter-laboratory study demonstrated that our LAMP method could detect S. fayeri at concentrations greater than 10^4 copies/g. Thus, this new method could be useful in screening for S. fayeri as a routine sanitary control procedure.

  1. Does the number of choice sets matter? Results from a web survey applying a discrete choice experiment

    DEFF Research Database (Denmark)

    Bech, Mickael; Kjær, Trine; Lauridsen, Jørgen Trankjær

    2011-01-01

    choice sets presented to each respondent on response rate, self-reported choice certainty, perceived choice difficulty, willingness-to-pay (WTP) estimates, and response variance. A sample of 1053 respondents was exposed to 5, 9 or 17 choice sets in a DCE eliciting preferences for dental services. Our...... results showed no differences in response rates and no systematic differences in the respondents' self-reported perception of the uncertainty of their DCE answers. There were some differences in WTP estimates suggesting that estimated preferences are to some extent context-dependent, but no differences...... in standard deviations for WTP estimates or goodness-of-fit statistics. Respondents exposed to 17 choice sets had somewhat higher response variance compared to those exposed to 5 choice sets, indicating that cognitive burden may increase with the number of choice sets beyond a certain threshold. Overall, our...

  2. FORENSIC-CLINICAL INTERVIEW: RELIABILITY AND VALIDITY FOR THE EVALUATION OF PSYCHOLOGICAL INJURY

    Directory of Open Access Journals (Sweden)

    Francisca Fariña

    2013-01-01

    Full Text Available Forensic evaluation of psychological injury involves the use of a multimethod approach, i.e., a psychometric instrument, normally the MMPI-2, and a clinical interview. In terms of the clinical interview, the traditional clinical interview (e.g., SCID) is not valid for forensic settings as it does not fulfil the triple objective of forensic evaluation: diagnosis of psychological injury in terms of Post Traumatic Stress Disorder (PTSD), a differential diagnosis of feigning, and establishing a causal relationship between allegations of intimate partner violence (IPV) and psychological injury. To meet this requirement, Arce and Fariña (2001) created the forensic-clinical interview based on two techniques that do not contaminate the contents, i.e., reinstating the contexts and free recall, and a methodic categorical system of content analysis for the diagnosis of psychological injury and a differential diagnosis of feigning. The reliability and validity of the forensic-clinical interview designed for the forensic evaluation of psychological injury were assessed in 51 genuine cases of IPV and 54 mock victims of IPV who were evaluated using a forensic-clinical interview and the MMPI-2. The results revealed that the forensic-clinical interview was a reliable instrument (α = .85 for diagnostic criteria of psychological injury, and α = .744 for feigning strategies). Moreover, the results corroborated the predictive validity (the diagnosis of PTSD was similar to the expected rate), the convergent validity (the diagnosis of PTSD in the interview strongly correlated with the Pk Scale of the MMPI-2), and the discriminant validity (the diagnosis of PTSD in the interview did not correlate with the Pk Scale in feigners). The feigning strategies (differential diagnosis) also showed convergent validity (high correlation with the scales and indices of the MMPI-2 for the measure of feigning) and discriminant validity (no genuine victim was classified as a feigner).

  3. Validation of multi-body modelling methodology for reconfigurable underwater robots

    DEFF Research Database (Denmark)

    Nielsen, M.C.; Eidsvik, O. A.; Blanke, Mogens

    2016-01-01

    This paper investigates the problem of employing reconfigurable robots in an underwater setting. The main result presented is the experimental validation of a modelling methodology for a system consisting of N dynamically connected robots with heterogeneous dynamics. Two distinct types...... of experiments are performed: a series of hydrostatic free-decay tests and a series of open-loop trajectory tests. The results are compared to a simulation based on the modelling methodology. The modelling methodology shows promising results for usage with systems composed of reconfigurable underwater modules...

  4. Field assessment of balance in 10 to 14 year old children, reproducibility and validity of the Nintendo Wii board.

    Science.gov (United States)

    Larsen, Lisbeth Runge; Jørgensen, Martin Grønbech; Junge, Tina; Juul-Kristensen, Birgit; Wedderkopp, Niels

    2014-06-10

    Because body proportions in childhood are different to those in adulthood, children have a relatively higher centre of mass location. This biomechanical difference and the fact that children's movements have not yet fully matured result in different sway performances in children and adults. When assessing static balance, it is essential to use objective, sensitive tools, and these types of measurement have previously been performed in laboratory settings. However, the emergence of technologies like the Nintendo Wii Board (NWB) might allow balance assessment in field settings. As the NWB has only been validated and tested for reproducibility in adults, the purpose of this study was to examine reproducibility and validity of the NWB in a field setting, in a population of children. Fifty-four 10-14 year-olds from the CHAMPS-Study DK performed four different balance tests: bilateral stance with eyes open (1), unilateral stance on dominant (2) and non-dominant leg (3) with eyes open, and bilateral stance with eyes closed (4). Three rounds of the four tests were completed with the NWB and with a force platform (AMTI). To assess reproducibility, an intra-day test-retest design was applied with a two-hour break between sessions. Bland-Altman plots supplemented by Minimum Detectable Change (MDC) and concordance correlation coefficient (CCC) demonstrated satisfactory reproducibility for the NWB and the AMTI (MDC: 26.3-28.2%, CCC: 0.76-0.86) using Centre Of Pressure path Length as measurement parameter. Bland-Altman plots demonstrated satisfactory concurrent validity between the NWB and the AMTI, supplemented by satisfactory CCC in all tests (CCC: 0.74-0.87). The ranges of the limits of agreement in the validity study were comparable to the limits of agreement of the reproducibility study. Both NWB and AMTI have satisfactory reproducibility for testing static balance in a population of children. Concurrent validity of NWB compared with AMTI was satisfactory. Furthermore, the
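
    The two reproducibility statistics used above, the minimum detectable change (MDC) and the concordance correlation coefficient (CCC), can be computed from test-retest data with their standard definitions. A rough sketch with hypothetical centre-of-pressure path lengths (the numbers and the ICC are invented for illustration, not data from the study):

      import numpy as np

      def lins_ccc(x, y):
          # Lin's concordance correlation coefficient between two measurement sets.
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.cov(x, y, bias=True)[0, 1]
          return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      def mdc95(sd, icc):
          # Minimum detectable change at the 95% level from a standard deviation
          # and a reliability coefficient (e.g. an ICC).
          sem = sd * np.sqrt(1 - icc)
          return 1.96 * np.sqrt(2) * sem

      # Hypothetical test-retest COP path lengths (cm) for a handful of children.
      test = [62.1, 80.4, 55.0, 71.3, 90.2]
      retest = [60.5, 83.0, 57.2, 69.9, 88.1]
      print(round(lins_ccc(test, retest), 2))
      print(round(mdc95(np.std(test, ddof=1), icc=0.85), 1))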

  5. Validation of an air–liquid interface toxicological set-up using Cu, Pd, and Ag well-characterized nanostructured aggregates and spheres

    International Nuclear Information System (INIS)

    Svensson, C. R.; Ameer, S. S.; Ludvigsson, L.; Ali, N.; Alhamdow, A.; Messing, M. E.; Pagels, J.; Gudmundsson, A.; Bohgard, M.; Sanfins, E.; Kåredal, M.; Broberg, K.; Rissler, J.

    2016-01-01

    Systems for studying the toxicity of metal aggregates on the airways are normally not suited for evaluating the effects of individual particle characteristics. This study validates a set-up for toxicological studies of metal aggregates using an air–liquid interface approach. The set-up used a spark discharge generator capable of generating aerosol metal aggregate particles and sintered near-spheres. The set-up also contained an exposure chamber, the Nano Aerosol Chamber for In Vitro Toxicity (NACIVT). The system facilitates online characterization capabilities of mass mobility, mass concentration, and number size distribution to determine the exposure. By dilution, the desired exposure level was controlled. Primary and cancerous airway cells were exposed to copper (Cu), palladium (Pd), and silver (Ag) aggregates, 50–150 nm in median diameter. The aggregates were composed of primary particles <10 nm in diameter. For Cu and Pd, an exposure of sintered aerosol particles was also produced. The doses of the particles were expressed as particle numbers, masses, and surface areas. For the Cu, Pd, and Ag aerosol particles, a range of mass surface concentrations on the air–liquid interface of 0.4–10.7, 0.9–46.6, and 0.1–1.4 µg/cm², respectively, were achieved. Viability was measured by WST-1 assay, cytokines (IL-6, IL-8, TNF-α, MCP) by Luminex technology. Statistically significant effects and dose response on cytokine expression were observed for SAEC cells after exposure to Cu, Pd, or Ag particles. Also, a positive dose response was observed for SAEC viability after Cu exposure. For A549 cells, statistically significant effects on viability were observed after exposure to Cu and Pd particles. The set-up produced a stable flow of aerosol particles with an exposure and dose expressed in terms of number, mass, and surface area. Exposure-related effects on the airway cellular models could be asserted.

  6. Validation of an air–liquid interface toxicological set-up using Cu, Pd, and Ag well-characterized nanostructured aggregates and spheres

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, C. R., E-mail: christian.svensson@design.lth.se [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden); Ameer, S. S. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Ludvigsson, L. [Lund University, Department of Physics, Solid State Physics (Sweden); Ali, N.; Alhamdow, A. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Messing, M. E. [Lund University, Department of Physics, Solid State Physics (Sweden); Pagels, J.; Gudmundsson, A.; Bohgard, M. [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden); Sanfins, E. [Atomic Energy Commission (CEA), Institute of Emerging Diseases and Innovative Therapies (iMETI), Division of Prions and Related Diseases - SEPIA (France); Kåredal, M.; Broberg, K. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Rissler, J. [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden)

    2016-04-15

    Systems for studying the toxicity of metal aggregates on the airways are normally not suited for evaluating the effects of individual particle characteristics. This study validates a set-up for toxicological studies of metal aggregates using an air–liquid interface approach. The set-up used a spark discharge generator capable of generating aerosol metal aggregate particles and sintered near-spheres. The set-up also contained an exposure chamber, the Nano Aerosol Chamber for In Vitro Toxicity (NACIVT). The system facilitates online characterization capabilities of mass mobility, mass concentration, and number size distribution to determine the exposure. By dilution, the desired exposure level was controlled. Primary and cancerous airway cells were exposed to copper (Cu), palladium (Pd), and silver (Ag) aggregates, 50–150 nm in median diameter. The aggregates were composed of primary particles <10 nm in diameter. For Cu and Pd, an exposure of sintered aerosol particles was also produced. The doses of the particles were expressed as particle numbers, masses, and surface areas. For the Cu, Pd, and Ag aerosol particles, a range of mass surface concentrations on the air–liquid interface of 0.4–10.7, 0.9–46.6, and 0.1–1.4 µg/cm², respectively, were achieved. Viability was measured by WST-1 assay, cytokines (IL-6, IL-8, TNF-α, MCP) by Luminex technology. Statistically significant effects and dose response on cytokine expression were observed for SAEC cells after exposure to Cu, Pd, or Ag particles. Also, a positive dose response was observed for SAEC viability after Cu exposure. For A549 cells, statistically significant effects on viability were observed after exposure to Cu and Pd particles. The set-up produced a stable flow of aerosol particles with an exposure and dose expressed in terms of number, mass, and surface area. Exposure-related effects on the airway cellular models could be asserted.

  7. Quality of life and hormone use: new validation results of MRS scale

    Directory of Open Access Journals (Sweden)

    Heinemann Lothar AJ

    2006-05-01

    Full Text Available Abstract Background The Menopause Rating Scale is a health-related Quality of Life scale developed in the early 1990s and step-by-step validated since then. Recently the MRS scale was validated as an outcome measure for hormone therapy. The suspicion, however, was expressed that the data were too optimistic due to methodological problems of the study. A new study became available to check how founded this suspicion was. Method An open post-marketing study of 3282 women with pre- and post-treatment data of the self-administered version of the MRS scale was analyzed to evaluate the capacity of the scale to detect hormone-treatment-related effects. The main results were then compared with the old study where the interview-based version of the MRS scale was used. Results The hormone-therapy-related improvement of complaints relative to the baseline score was about or less than 30% in total or domain scores, whereas it exceeded 30% improvement in the old study. Similarly, the relative improvement after therapy, stratified by the degree of severity at baseline, was lower in the new than in the old study, but had the same slope. Although we cannot exclude different treatment effects with the study method used, this supports our hypothesis that the individual MRS interviews performed by the physician biased the results towards over-estimation of the treatment effects. This hypothesis is underlined by the degree of concordance of the physician's assessment and the patient's perception of treatment success (MRS results): sensitivity (correct prediction of a positive assessment by the treating physician from the MRS) and specificity (correct prediction of a negative assessment by the physician) were lower than the results obtained with the interview-based MRS scale in the previous publication. Conclusion The study confirmed evidence for the capacity of the MRS scale to measure treatment effects on quality of life across the full range of severity of

  8. Failure mode and effects analysis outputs: are they valid?

    Directory of Open Access Journals (Sweden)

    Shebl Nada

    2012-06-01

    Full Text Available Abstract Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: · Face validity: by comparing the FMEA participants’ mapped processes with observational work. · Content validity: by presenting the FMEA findings to other healthcare professionals. · Criterion validity: by comparing the FMEA findings with data reported on the trust’s incident report database. · Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust’s incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA’s methodology for scoring failures, there were discrepancies

  9. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
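
    The registration algorithms named above (CPD, TPS-RPM and the others) are published methods with their own implementations; the sketch below only illustrates how the reported evaluation metrics (TRE, RMS and Hausdorff distance) might be computed once a registration has produced a moved point set. All point coordinates are hypothetical.

      import numpy as np
      from scipy.spatial.distance import cdist, directed_hausdorff

      def rms_error(moved, target):
          # RMS of nearest-neighbour distances from each moved point to the target set.
          d = cdist(moved, target).min(axis=1)
          return float(np.sqrt(np.mean(d ** 2)))

      def hausdorff(moved, target):
          # Symmetric Hausdorff distance between the two point sets.
          return max(directed_hausdorff(moved, target)[0],
                     directed_hausdorff(target, moved)[0])

      def tre(moved_landmarks, target_landmarks):
          # Target registration error: mean distance between corresponding fiducials.
          diff = np.asarray(moved_landmarks, float) - np.asarray(target_landmarks, float)
          return float(np.mean(np.linalg.norm(diff, axis=1)))

      # Hypothetical 3-D contour points (mm): a source set and a noisy "registered" set.
      rng = np.random.default_rng(0)
      source = rng.random((200, 3)) * 50
      target = source + rng.normal(scale=1.0, size=source.shape)
      print(rms_error(source, target), hausdorff(source, target))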

  10. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    International Nuclear Information System (INIS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest. (paper)

  11. THE GLOBAL TANDEM-X DEM: PRODUCTION STATUS AND FIRST VALIDATION RESULTS

    Directory of Open Access Journals (Sweden)

    M. Huber

    2012-07-01

    Full Text Available The TanDEM-X mission will derive a global digital elevation model (DEM) with satellite SAR interferometry. Two radar satellites (TerraSAR-X and TanDEM-X) will map the Earth with a resolution and accuracy corresponding to an absolute height error of 10 m and a relative height error of 2 m for 90% of the data. In order to fulfill the height requirements, in general two global coverages are acquired and processed. Besides the final TanDEM-X DEM, an intermediate DEM with reduced accuracy is produced after the first coverage is completed. The last step in the whole workflow for generating the TanDEM-X DEM is the calibration of remaining systematic height errors and the merging of single acquisitions into 1°x1° DEM tiles. In this paper the current status of generating the intermediate DEM and first validation results based on GPS tracks, laser scanning DEMs, SRTM data and ICESat points are shown for different test sites.
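
    The 10 m absolute and 2 m relative height requirements refer to errors met for 90% of the data. As a simplified illustration only (not the mission's exact error definitions, which specify relative errors point-to-point over given horizontal scales), 90th-percentile statistics against reference heights such as GPS tracks or ICESat points could be computed along the following lines, with made-up numbers:

      import numpy as np

      def height_error_90(dem_heights, reference_heights):
          # Difference DEM minus reference; 90th percentile of the absolute error,
          # plus the same percentile after removing the mean bias as a crude
          # stand-in for a relative (bias-free) error measure.
          err = np.asarray(dem_heights, float) - np.asarray(reference_heights, float)
          abs90 = np.percentile(np.abs(err), 90)
          rel90 = np.percentile(np.abs(err - err.mean()), 90)
          return abs90, rel90

      # Hypothetical DEM samples and reference heights in metres.
      dem = np.array([101.2, 98.7, 250.4, 143.9, 77.1])
      ref = np.array([100.0, 99.5, 248.8, 145.2, 76.0])
      print(height_error_90(dem, ref))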

  12. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    Full Text Available The application of games for behavioral change has seen a surge in popularity but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value whereas most quantitative findings from randomized controlled trials (RCT) are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games, we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we will apply this model to develop guidelines for setting up validation methods for serious games. This way, we offer game designers and researchers handles on how to develop tailor-made validation methods.

  13. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident
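
    The risk priority number criticised in both versions of this record is conventionally the product of three ordinal scores; a tiny sketch (scores are hypothetical) shows how easily two very different failure modes can tie, one reason multiplying ordinal scales is considered mathematically dubious:

      def rpn(severity, probability, detectability):
          # Conventional FMEA risk priority number: the product of three ordinal
          # scores, which is exactly the practice the study questions.
          return severity * probability * detectability

      # Two hypothetical failure modes with very different profiles tie on RPN.
      print(rpn(10, 2, 3), rpn(3, 10, 2))  # both evaluate to 60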

  14. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    Science.gov (United States)

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane-only optimization, despite not capturing the intermediary outputs exactly. The results from the parameter optimization were validated by their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.
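
    ADM1 itself is far too large to reproduce here, but the multi-objective idea described above, combining normalised errors of several indicators into one calibration objective, can be sketched with a toy model. The model, parameters and indicator names below are invented for illustration only:

      import numpy as np
      from scipy.optimize import minimize

      def weighted_objective(params, model, observations, weights):
          # Weighted sum of normalised RMSEs over several performance indicators
          # (in the study: methane, pH, acetate, total COD and ammonia).
          total = 0.0
          for name, weight in weights.items():
              sim = model(params, name)
              obs = np.asarray(observations[name], float)
              rmse = np.sqrt(np.mean((sim - obs) ** 2))
              total += weight * rmse / (np.std(obs) + 1e-12)
          return total

      # Toy stand-in for a digester model: two parameters scale two indicators.
      t = np.linspace(0, 10, 50)
      def toy_model(params, name):
          k1, k2 = params
          return {"methane": k1 * t, "ammonia": k2 * np.sqrt(t + 1)}[name]

      obs = {"methane": 0.8 * t, "ammonia": 1.5 * np.sqrt(t + 1)}
      res = minimize(weighted_objective, x0=[1.0, 1.0],
                     args=(toy_model, obs, {"methane": 0.5, "ammonia": 0.5}),
                     method="Nelder-Mead")
      print(res.x)  # converges towards [0.8, 1.5]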

  15. Translating and validating a Training Needs Assessment tool into Greek

    Directory of Open Access Journals (Sweden)

    Hicks Carolyn M

    2007-05-01

    Full Text Available Abstract Background The translation and cultural adaptation of widely accepted, psychometrically tested tools is regarded as an essential component of effective human resource management in the primary care arena. The Training Needs Assessment (TNA) is a widely used, valid instrument, designed to measure professional development needs of health care professionals, especially in primary health care. This study aims to describe the translation, adaptation and validation of the TNA questionnaire into the Greek language and discuss possibilities of its use in primary care settings. Methods A modified version of the English self-administered questionnaire consisting of 30 items was used. Internationally recommended methodology, mandating forward translation, backward translation, reconciliation and pretesting steps, was followed. Tool validation included assessing item internal consistency, using the alpha coefficient of Cronbach. Reproducibility (test–retest reliability) was measured by the kappa correlation coefficient. Criterion validity was calculated for selected parts of the questionnaire by correlating respondents' research experience with relevant research item scores. An exploratory factor analysis highlighted how the items group together, using a Varimax (oblique) rotation and subsequent Cronbach's alpha assessment. Results The psychometric properties of the Greek version of the TNA questionnaire for nursing staff employed in primary care were good. Internal consistency of the instrument was very good, Cronbach's alpha was found to be 0.985 (p 1.0, KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy = 0.680 and Bartlett's test of sphericity, p < 0.001. Conclusion The translated and adapted Greek version is comparable with the original English instrument in terms of validity and reliability and it is suitable to assess professional development needs of nursing staff in Greek primary care settings.

  16. Translating and validating a Training Needs Assessment tool into Greek

    Science.gov (United States)

    Markaki, Adelais; Antonakis, Nikos; Hicks, Carolyn M; Lionis, Christos

    2007-01-01

    Background The translation and cultural adaptation of widely accepted, psychometrically tested tools is regarded as an essential component of effective human resource management in the primary care arena. The Training Needs Assessment (TNA) is a widely used, valid instrument, designed to measure professional development needs of health care professionals, especially in primary health care. This study aims to describe the translation, adaptation and validation of the TNA questionnaire into Greek language and discuss possibilities of its use in primary care settings. Methods A modified version of the English self-administered questionnaire consisting of 30 items was used. Internationally recommended methodology, mandating forward translation, backward translation, reconciliation and pretesting steps, was followed. Tool validation included assessing item internal consistency, using the alpha coefficient of Cronbach. Reproducibility (test – retest reliability) was measured by the kappa correlation coefficient. Criterion validity was calculated for selected parts of the questionnaire by correlating respondents' research experience with relevant research item scores. An exploratory factor analysis highlighted how the items group together, using a Varimax (oblique) rotation and subsequent Cronbach's alpha assessment. Results The psychometric properties of the Greek version of the TNA questionnaire for nursing staff employed in primary care were good. Internal consistency of the instrument was very good, Cronbach's alpha was found to be 0.985 (p 1.0, KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy = 0.680 and Bartlett's test of sphericity, p < 0.001. Conclusion The translated and adapted Greek version is comparable with the original English instrument in terms of validity and reliability and it is suitable to assess professional development needs of nursing staff in Greek primary care settings. PMID:17474989
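
    Cronbach's alpha, the internal-consistency statistic reported in both versions of this record, has a simple closed form; a minimal sketch with invented Likert responses (not data from the study):

      import numpy as np

      def cronbach_alpha(item_scores):
          # Cronbach's alpha from a respondents-by-items score matrix:
          # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
          items = np.asarray(item_scores, float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Hypothetical responses of six people to four questionnaire items (1-5 Likert).
      scores = [[4, 5, 4, 5],
                [2, 2, 3, 2],
                [5, 5, 5, 4],
                [3, 3, 2, 3],
                [4, 4, 4, 5],
                [1, 2, 1, 2]]
      print(round(cronbach_alpha(scores), 3))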

  17. Adaptation and validation of the Alzheimer's Disease Assessment Scale - Cognitive (ADAS-Cog) in a low-literacy setting in sub-Saharan Africa.

    Science.gov (United States)

    Paddick, Stella-Maria; Kisoli, Aloyce; Mkenda, Sarah; Mbowe, Godfrey; Gray, William Keith; Dotchin, Catherine; Ogunniyi, Adesola; Kisima, John; Olakehinde, Olaide; Mushi, Declare; Walker, Richard William

    2017-08-01

    This study aimed to assess the feasibility of a low-literacy adaptation of the Alzheimer's Disease Assessment Scale - Cognitive (ADAS-Cog) for use in rural sub-Saharan Africa (SSA) for interventional studies in dementia. No such adaptations currently exist. Tanzanian and Nigerian health professionals adapted the ADAS-Cog by consensus. Validation took place in a cross-sectional sample of 34 rural-dwelling older adults with mild/moderate dementia alongside 32 non-demented controls in Tanzania. Participants were oversampled for lower educational level. Inter-rater reliability was conducted by two trained raters in 22 older adults (13 with dementia) from the same population. Assessors were blind to diagnostic group. Median ADAS-Cog scores were 28.75 (interquartile range (IQR), 22.96-35.54) in mild/moderate dementia and 12.75 (IQR 9.08-16.16) in controls. The area under the receiver operating characteristic curve (AUC) was 0.973 (95% confidence interval (CI) 0.936-1.00) for dementia. Internal consistency was high (Cronbach's α 0.884) and inter-rater reliability was excellent (intra-class correlation coefficient 0.905, 95% CI 0.804-0.964). The low-literacy adaptation of the ADAS-Cog had good psychometric properties in this setting. Further evaluation in similar settings is required.

  18. Achieving external validity in home advantage research: generalizing crowd noise effects

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2014-06-01

    Full Text Available Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials’ decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity, with decision-making in a laboratory setting bearing little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research, then discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment conducted in a real tournament setting offer some confirmation of the validity of laboratory experimental studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed.

  19. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution.

    Science.gov (United States)

    Zermoglio, Paula F; Guralnick, Robert P; Wieczorek, John R

    2016-01-01

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data.
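
    The study's vetting was done by humans, but the "computer-aided approaches" it anticipates can be as simple as fuzzy string matching of verbatim names against a list of currently valid names. A minimal sketch (the name list and threshold are arbitrary examples, not the study's method):

      import difflib

      def suggest_valid_name(verbatim, valid_names, cutoff=0.85):
          # Return the closest currently valid name for a possibly misspelled
          # verbatim name string, or None if nothing is similar enough.
          matches = difflib.get_close_matches(verbatim, valid_names, n=1, cutoff=cutoff)
          return matches[0] if matches else None

      valid = ["Puma concolor", "Lynx rufus", "Canis latrans"]
      print(suggest_valid_name("Puma concolour", valid))  # -> "Puma concolor"
      print(suggest_valid_name("Felis sp.", valid))       # -> None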

  20. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution.

    Directory of Open Access Journals (Sweden)

    Paula F Zermoglio

    Full Text Available Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data.

  1. On the Validity of Student Evaluation of Teaching: The State of the Art

    Science.gov (United States)

    Spooren, Pieter; Brockx, Bert; Mortelmans, Dimitri

    2013-01-01

    This article provides an extensive overview of the recent literature on student evaluation of teaching (SET) in higher education. The review is based on the SET meta-validation model, drawing upon research reports published in peer-reviewed journals since 2000. Through the lens of validity, we consider both the more traditional research themes in…

  2. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    Science.gov (United States)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged it is often not among the most popular tasks in ocean modelling. In order to ease the validation work a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time-series at stations, vertical profiles, surface fields or along track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process - ranging from read-in procedures of datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. read-in procedures, forms a module in which all available functions of this particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks, new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned for each validation task in user specific settings, which are externally stored in so-called namelists and gather all information of the used datasets as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre. Hereby the performance of any new product
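
    The toolbox described above is implemented in MATLAB; purely as an illustration of the modular idea, a registry that keeps metric functions separate from how model output and observations are read in might look like this in Python (station names and numbers are invented):

      import numpy as np

      # Registry of metric functions: the comparison step stays independent of the
      # read-in procedures, mirroring the toolbox's modular design.
      METRICS = {
          "bias": lambda m, o: float(np.mean(m - o)),
          "rmse": lambda m, o: float(np.sqrt(np.mean((m - o) ** 2))),
          "corr": lambda m, o: float(np.corrcoef(m, o)[0, 1]),
      }

      def validate(model_series, obs_series, metric_names):
          m = np.asarray(model_series, float)
          o = np.asarray(obs_series, float)
          return {name: METRICS[name](m, o) for name in metric_names}

      # Hypothetical sea surface temperature time series at one Baltic station (degC).
      model = [6.1, 6.4, 7.0, 7.8, 8.3]
      obs = [6.0, 6.6, 6.9, 7.5, 8.6]
      print(validate(model, obs, ["bias", "rmse", "corr"]))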

  3. A Comparison of Heuristics with Modularity Maximization Objective using Biological Data Sets

    Directory of Open Access Journals (Sweden)

    Pirim Harun

    2016-01-01

    Full Text Available Finding groups of objects exhibiting similar patterns is an important data analytics task. Many disciplines have their own terminology, such as cluster, group, clique, or community, for defining similar objects in a set. Adopting the term community, many exact and heuristic algorithms have been developed to find the communities of interest in available data sets. Here, three heuristic algorithms for finding communities are compared using five gene expression data sets. The heuristics share a common objective function of maximizing the modularity, a quality measure of a partition and a reflection of objects' relevance in communities. Partitions generated by the heuristics are compared with the real ones using the adjusted Rand index, one of the most commonly used external validation measures. The paper discusses the results of the partitions on the mentioned biological data sets.
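
    The two quantities named above (modularity as the internal objective and the adjusted Rand index as the external validation measure) can be reproduced with standard libraries. The sketch below is illustrative only, not the paper's code: a toy graph and one greedy modularity-maximizing heuristic stand in for the gene-expression networks and the three heuristics compared in the study.

    ```python
    # Minimal sketch (assumptions: an undirected graph with known reference labels;
    # the toy graph and variable names are illustrative, not from the paper).
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity
    from sklearn.metrics import adjusted_rand_score

    gene_graph = nx.karate_club_graph()                       # stand-in similarity network
    true_labels = [gene_graph.nodes[n]["club"] for n in gene_graph.nodes]

    # One modularity-maximizing heuristic (greedy agglomeration).
    communities = greedy_modularity_communities(gene_graph)
    Q = modularity(gene_graph, communities)

    # Convert community sets into a label vector aligned with node order.
    pred_labels = np.empty(gene_graph.number_of_nodes(), dtype=int)
    for k, members in enumerate(communities):
        for node in members:
            pred_labels[node] = k

    # External validation against the reference partition.
    ari = adjusted_rand_score(true_labels, pred_labels)
    print(f"modularity Q = {Q:.3f}, adjusted Rand index = {ari:.3f}")
    ```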

  4. Guide to verification and validation of the SCALE-4 radiation shielding software

    Energy Technology Data Exchange (ETDEWEB)

    Broadhead, B.L.; Emmett, M.B.; Tang, J.S.

    1996-12-01

    Whenever a decision is made to newly install the SCALE radiation shielding software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the radiation shielding software in a version of SCALE-4. This report provides documentation of sample problems which are recommended for use in the V&V of the SCALE-4 system for all releases. The results reported in this document are from the SCALE-4.2P version which was run on an IBM RS/6000 work-station. These results verify that the SCALE-4 radiation shielding software has been correctly installed and is functioning properly. A set of problems for use by other shielding codes (e.g., MCNP, TWOTRAN, MORSE) performing similar V&V are discussed. A validation has been performed for XSDRNPM and MORSE-SGC6 utilizing SASI and SAS4 shielding sequences and the SCALE 27-18 group (27N-18COUPLE) cross-section library for typical nuclear reactor spent fuel sources and a variety of transport package geometries. The experimental models used for the validation were taken from two previous applications of the SASI and SAS4 methods.

  5. Guide to verification and validation of the SCALE-4 radiation shielding software

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Emmett, M.B.; Tang, J.S.

    1996-12-01

    Whenever a decision is made to newly install the SCALE radiation shielding software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the radiation shielding software in a version of SCALE-4. This report provides documentation of sample problems which are recommended for use in the V&V of the SCALE-4 system for all releases. The results reported in this document are from the SCALE-4.2P version which was run on an IBM RS/6000 work-station. These results verify that the SCALE-4 radiation shielding software has been correctly installed and is functioning properly. A set of problems for use by other shielding codes (e.g., MCNP, TWOTRAN, MORSE) performing similar V&V are discussed. A validation has been performed for XSDRNPM and MORSE-SGC6 utilizing SASI and SAS4 shielding sequences and the SCALE 27-18 group (27N-18COUPLE) cross-section library for typical nuclear reactor spent fuel sources and a variety of transport package geometries. The experimental models used for the validation were taken from two previous applications of the SASI and SAS4 methods.

  6. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system to classify dengue viral infection severity to patients in different settings. Methods: The developed scoring system derived from 777 patients from three tertiary-care hospitals was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. Percentage of correct classification, underestimation, and overestimation was compared. The score discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data were different from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine

  7. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    Science.gov (United States)

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  8. Satisfaction with information provided to Danish cancer patients: validation and survey results.

    Science.gov (United States)

    Ross, Lone; Petersen, Morten Aagaard; Johnsen, Anna Thit; Lundstrøm, Louise Hyldborg; Groenvold, Mogens

    2013-11-01

    To validate five items (CPWQ-inf) regarding satisfaction with information provided to cancer patients from health care staff, assess the prevalence of dissatisfaction with this information, and identify factors predicting dissatisfaction. The questionnaire was validated by patient-observer agreement and cognitive interviews. The prevalence of dissatisfaction was assessed in a cross-sectional sample of all cancer patients in contact with hospitals during the past year in three Danish counties. The validation showed that the CPWQ performed well. Between 3 and 23% of the 1490 participating patients were dissatisfied with each of the measured aspects of information. The highest level of dissatisfaction was reported regarding the guidance, support and help provided when the diagnosis was given. Younger patients were consistently more dissatisfied than older patients. The brief CPWQ performs well for survey purposes. The survey depicts the heterogeneous patient population encountered by hospital staff and showed that younger patients probably had higher expectations or a higher need for information and that those with more severe diagnoses/prognoses require extra care in providing information. Four brief questions can efficiently assess information needs. With increasing demands for information, a wide range of innovative initiatives is needed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Design and validation of an observational instrument for technical and tactical actions in beach volleyball

    Directory of Open Access Journals (Sweden)

    José Manuel Palao

    2015-06-01

    Full Text Available Technical and tactical actions determine performance in beach volleyball. This research develops and tests an instrument to monitor and evaluate the manner of execution and efficacy of the actions in beach volleyball. The purpose of this paper was to design and validate an observational instrument to analyze technical and tactical actions in beach volleyball. The instrument collects information regarding: a) information about the match (context), b) information about game situations, c) information about technical situations (serve, reception, set, attack, block, and court defense) in relation to player execution, role, manner of execution, execution zone, and efficacy, and d) information about the result of the play (win-lose and the way the point is obtained). Instrument design and validation was done in seven stages: a) review of literature and consultation of experts; b) pilot observation and data analysis; c) expert review of instrument (qualitative and quantitative evaluation); d) observer training test; e) expert review of instrument (content validity); f) measurement of the ability of the instrument to discriminate the result of the set; and g) measurement of the ability of the instrument to differentiate between competition age groups. The results show that the instrument allows for obtaining objective and valid information about the players and team from offensive and defensive technical and tactical actions, as well as indirectly from physical actions. The instrument can be used, in its entirety or partially, for researching and coaching purposes.

  10. The measurement of collaboration within healthcare settings: a systematic review of measurement properties of instruments.

    Science.gov (United States)

    Walters, Stephen John; Stern, Cindy; Robertson-Malt, Suzanne

    2016-04-01

    Register of Controlled Trials, Emerald Fulltext, MD Consult Australia, PsycARTICLES, Psychology and Behavioural Sciences Collection, PsycINFO, Informit Health Databases, Scopus, UpToDate and Web of Science. The search for unpublished studies included EThOS (Electronic Thesis Online Service), Index to Theses and ProQuest- Dissertations and Theses. The assessment of methodological quality of the included studies was undertaken using the COSMIN checklist which is a validated tool that assesses the process of design and validation of healthcare measurement instruments. An Excel spreadsheet version of COSMIN was developed for data collection which included a worksheet for extracting participant characteristics and interpretability data. Statistical pooling of data was not possible for this review. Therefore, the findings are presented in a narrative form including tables and figures to aid in data presentation. To make a synthesis of the assessments of methodological quality of the different studies, each instrument was rated by accounting for the number of studies performed with an instrument, the appraisal of methodological quality and the consistency of results between studies. Twenty-one studies of 12 instruments were included in the review. The studies were diverse in their theoretical underpinnings, target population/setting and measurement objectives. Measurement objectives included: investigating beliefs, behaviors, attitudes, perceptions and relationships associated with collaboration; measuring collaboration between different levels of care or within a multi-rater/target group; assessing collaboration across teams; or assessing internal participation of both teams and patients.Studies produced validity or interpretability data but none of the studies assessed all validity and reliability properties. However, most of the included studies produced a factor structure or referred to prior factor analysis. A narrative synthesis of the individual study factor structures was

  11. Reliability of the International Spinal Cord Injury Musculoskeletal Basic Data Set

    DEFF Research Database (Denmark)

    Baunsgaard, C B; Chhabra, H S; Harvey, L A

    2016-01-01

    STUDY DESIGN: Psychometric study. OBJECTIVES: To determine the intra- and inter-rater reliability and content validity of the International Spinal Cord Injury (SCI) Musculoskeletal Basic Data Set (ISCIMSBDS). SETTING: Four centers with one in each of the countries in Australia, England, India and...

  12. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration…

  13. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  14. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converters performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of WEC-Sim code, and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  15. Content validity and reliability of test of gross motor development in Chilean children

    Directory of Open Access Journals (Sweden)

    Marcelo Cano-Cappellacci

    2015-01-01

    Full Text Available ABSTRACT OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, transversal, non-experimental validity and reliability study. Four translators, three experts, and 92 Chilean children aged five to 10 years, students from a primary school in Santiago, Chile, participated. The Committee of Experts carried out translation, back-translation and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index, in 2013. In addition, a pilot implementation was carried out to determine the reliability of the test in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when the bat was replaced with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied in the Chilean population. The reliability of this test is within the appropriate parameters and its use could be recommended in this population after the establishment of normative data, setting a further precedent for the validation in other Latin American countries.
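
    For reference, the content validity index reported above is a simple proportion-of-agreement statistic. The sketch below shows one common way to compute item-level and scale-level CVI; the expert ratings are invented for illustration and are not the study's data.

    ```python
    # Hedged sketch: item-level content validity index (I-CVI), defined here as the
    # proportion of experts rating an item 3 or 4 on a 4-point relevance scale.
    import numpy as np

    ratings = np.array([          # rows = experts, columns = test items (invented)
        [4, 3, 4, 2],
        [4, 4, 3, 3],
        [3, 4, 4, 2],
    ])

    i_cvi = (ratings >= 3).mean(axis=0)      # agreement per item
    s_cvi_ave = i_cvi.mean()                 # scale-level CVI (average method)
    print("I-CVI per item:", i_cvi, "S-CVI/Ave:", round(s_cvi_ave, 2))
    # Items with I-CVI >= 0.80 would pass the 0.80 threshold used in the study.
    ```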

  16. Validation of the Care-Related Quality of Life Instrument in different study settings : findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS)

    NARCIS (Netherlands)

    Lutomski, J. E.; van Exel, N. J. A.; Kempen, G. I. J. M.; van Charante, E. P. Moll; den Elzen, W. P. J.; Jansen, A. P. D.; Krabbe, P. F. M.; Steunenberg, B.; Steyerberg, E. W.; Rikkert, M. G. M. Olde; Melis, R. J. F.

    PURPOSE: Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs.

  17. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS)

    NARCIS (Netherlands)

    Lutomski, J.E.; Exel, N.J. van; Kempen, G.I.; Charante, E.P. Moll van; Elzen, W.P. den; Jansen, A.P.; Krabbe, P.F.M.; Steunenberg, B.; Steyerberg, E.W.; Olde Rikkert, M.G.M.; Melis, R.J.F.

    2015-01-01

    PURPOSE: Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs.

  18. Signal validation in nuclear power plants using redundant measurements

    International Nuclear Information System (INIS)

    Glockler, O.; Upadhyaya, B.R.; Morgenstern, V.M.

    1989-01-01

    This paper discusses the basic principles of a multivariable signal validation software system utilizing redundant sensor readings of process variables in nuclear power plants (NPPs). The technique has been tested in numerical experiments, and was applied to actual data from a pressurized water reactor (PWR). The simultaneous checking within one redundant measurement set, and the cross-checking among redundant measurement sets of dissimilar process variables, results in an algorithm capable of detecting and isolating bias-type errors. A case in point occurs when a majority of the direct redundant measurements of more than one process variable fail simultaneously; such common-mode or correlated failures can be detected by the developed approach. 5 refs

  19. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit on clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate further the LKB model, by applying different published parameter sets for the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving an equivalent uniform dose (EUD) higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)
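
    For context, the LKB NTCP model discussed here is usually written in the textbook form below, where v_i and D_i are the dose-volume histogram bins and n, m, and TD50 are the fitted parameters whose published sets are being compared; this is the generic formulation, not a new derivation from the letter.

    ```latex
    % Standard Lyman-Kutcher-Burman formulation (generic textbook form; the
    % specific parameter sets compared in the letter are not reproduced here).
    \[
      \mathrm{gEUD} = \Big(\sum_i v_i\, D_i^{1/n}\Big)^{\!n}, \qquad
      t = \frac{\mathrm{gEUD} - TD_{50}}{m \cdot TD_{50}}, \qquad
      \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^2/2}\, dx
    \]
    ```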

  20. Development and validation of an app-based cell counter for use in the clinical laboratory setting

    Directory of Open Access Journals (Sweden)

    Alexander C Thurman

    2015-01-01

    Full Text Available Introduction: For decades cellular differentials have been generated exclusively on analog tabletop cell counters. With the advent of tablet computers, digital cell counters - in the form of mobile applications ("apps") - now represent an alternative to analog devices. However, app-based counters have not been widely adopted by clinical laboratories, perhaps owing to a presumed decrease in count accuracy related to the lack of tactile feedback inherent in a touchscreen interface. We herein provide the first systematic evidence that digital cell counters function similarly to standard tabletop units. Methods: We developed an app-based cell counter optimized for use in the clinical laboratory setting. Paired counts of 188 peripheral blood smears and 62 bone marrow aspirate smears were performed using our app-based counter and a standard analog device. Differences between paired data sets were analyzed using the correlation coefficient, Student's t-test for paired samples and Bland-Altman plots. Results: All counts showed excellent agreement across all users and touch screen devices. With the exception of peripheral blood basophils (r = 0.684), differentials generated for the measured cell categories within the paired data sets were highly correlated (all r ≥ 0.899). Results of paired t-tests did not reach statistical significance for any cell type (all P > 0.05), and Bland-Altman plots showed a narrow spread of the difference about the mean without evidence of significant outliers. Conclusions: Our analysis suggests that no systematic differences exist between cellular differentials obtained via app-based or tabletop counters and that agreement between these two methods is excellent.
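
    The Bland-Altman comparison used above summarizes paired methods by the mean difference (bias) and the 95% limits of agreement. A minimal sketch follows; the paired percentages are invented, not the study's counts.

    ```python
    # Hedged sketch of a Bland-Altman summary for paired differentials
    # (app-based vs. analog tabletop counts); all values are illustrative.
    import numpy as np

    app      = np.array([62.0, 55.3, 70.1, 48.9, 66.4])   # % neutrophils, app counter
    tabletop = np.array([61.2, 56.0, 69.5, 50.1, 65.8])   # same slides, analog counter

    diff = app - tabletop
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                          # half-width of limits of agreement
    print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
    ```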

  1. Development, Validation, and Implementation of a Clinic Nurse Staffing Guideline.

    Science.gov (United States)

    Deeken, Debra Jean; Wakefield, Douglas; Kite, Cora; Linebaugh, Jeanette; Mitchell, Blair; Parkinson, Deidre; Misra, Madhukar

    2017-10-01

    Ensuring that the level of nurse staffing used to care for patients is appropriate to the setting and service intensity is essential for high-quality and cost-effective care. This article describes the development, validation, and implementation of the clinic technical skills permission list developed specifically to guide nurse staffing decisions in physician clinics of an academic medical center. Results and lessons learned in using this staffing guideline are presented.

  2. Validation of a semi-quantitative Food Frequency Questionnaire for Argentinean adults.

    Directory of Open Access Journals (Sweden)

    Mahshid Dehghan

    Full Text Available BACKGROUND: The Food Frequency Questionnaire (FFQ) is the most commonly used method for ranking individuals based on long term food intake in large epidemiological studies. The validation of an FFQ for specific populations is essential as food consumption is culture dependent. The aim of this study was to develop a Semi-quantitative Food Frequency Questionnaire (SFFQ) and evaluate its validity and reproducibility in estimating nutrient intake in urban and rural areas of Argentina. METHODS/PRINCIPAL FINDINGS: Overall, 256 participants in the Argentinean arm of the ongoing Prospective Urban and Rural Epidemiological study (PURE) were enrolled for development and validation of the SFFQ. One hundred individuals participated in the SFFQ development. The other 156 individuals completed the SFFQs on two occasions, four 24-hour Dietary Recalls (24DRs) in urban, and three 24DRs in rural areas during a one-year period. Correlation coefficients (r) and de-attenuated correlation coefficients between 24DRs and SFFQ were calculated for macro and micro-nutrients. The level of agreement between the two methods was evaluated using classification into same and extreme quartiles and the Bland-Altman method. The reproducibility of the SFFQ was assessed by Pearson correlation coefficients and Intra-class Correlation Coefficients (ICC). The SFFQ consists of 96 food items. In both urban and rural settings de-attenuated correlations exceeded 0.4 for most of the nutrients. The classification into the same and adjacent quartiles was more than 70% for urban and 60% for rural settings. The Pearson correlation between two SFFQs varied from 0.30-0.56 and 0.32-0.60 in urban and rural settings, respectively. CONCLUSION: Our results showed that this SFFQ had moderate relative validity and reproducibility for macro and micronutrients in relation to the comparison method and can be used to rank individuals based on habitual nutrient intake.
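
    The de-attenuated correlations mentioned above correct the crude FFQ-versus-recall correlation for within-person variation in the reference 24-hour recalls. A minimal sketch of the commonly used Rosner-Willett style correction follows; the numbers are illustrative, and the exact correction applied in the study may differ.

    ```python
    # Hedged sketch of de-attenuation: the observed correlation is inflated by
    # sqrt(1 + lambda/n), where lambda is the within- to between-person variance
    # ratio of the 24-hour recalls and n is the number of recalls per person.
    import math

    r_observed = 0.35      # crude correlation between SFFQ and mean of 24DRs (invented)
    lambda_ratio = 1.8     # within-/between-person variance ratio from the 24DRs (invented)
    n_recalls = 4          # four 24DRs per participant in the urban setting

    r_deattenuated = r_observed * math.sqrt(1 + lambda_ratio / n_recalls)
    print(f"de-attenuated r = {r_deattenuated:.2f}")
    ```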

  3. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically, the thesis prescribes methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper…

  4. Verification and Validation Issues in Systems of Systems

    Directory of Open Access Journals (Sweden)

    Eric Honour

    2013-11-01

    Full Text Available The cutting edge in systems development today is in the area of "systems of systems" (SoS): large networks of inter-related systems that are developed and managed separately, but that also perform collective activities. Such large systems typically involve constituent systems operating with different life cycles, often with uncoordinated evolution. The result is an ever-changing SoS in which adaptation and evolution replace the older engineering paradigm of "development". This short paper presents key thoughts about verification and validation in this environment. Classic verification and validation methods rely on having (a) a basis of proof, in requirements and in operational scenarios, and (b) a known system configuration to be proven. However, with constant SoS evolution, management of both requirements and system configurations is problematic. Often, it is impossible to maintain a valid set of requirements for the SoS due to the ongoing changes in the constituent systems. Frequently, it is even difficult to maintain a vision of the SoS operational use as users find new ways to adapt the SoS. These features of the SoS result in significant challenges for system proof. In addition to discussing the issues, the paper also indicates some of the solutions that are currently used to prove the SoS.

  5. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, C., E-mail: hansec@uw.edu [PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Columbia University, New York, New York 10027 (United States); Victor, B.; Morgan, K.; Hossack, A.; Sutherland, D. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); Jarboe, T.; Nelson, B. A. [HIT-SI Group, University of Washington, Seattle, Washington 98195 (United States); PSI-Center, University of Washington, Seattle, Washington 98195 (United States); Marklin, G. [PSI-Center, University of Washington, Seattle, Washington 98195 (United States)

    2015-05-15

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
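
    Numerically, the Biorthogonal Decomposition of a probes-by-time signal matrix is a singular value decomposition, and agreement between two data sets can be summarized by overlaps of the leading spatial modes. The sketch below illustrates that idea with synthetic data; the overlap shown is a generic choice, not necessarily one of the three metrics defined in the paper.

    ```python
    # Hedged sketch: BD of a probe-array signal matrix via SVD, plus a simple
    # spatial-mode overlap between "experiment" and "simulation". Array shapes
    # and the synthetic data are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_probes, n_times = 192, 2000                      # 192 surface magnetic probes
    experiment = rng.standard_normal((n_probes, n_times))
    simulation = experiment + 0.1 * rng.standard_normal((n_probes, n_times))

    # Columns of U are spatial modes (topos); rows of Vt are temporal modes (chronos).
    U_exp, s_exp, _ = np.linalg.svd(experiment, full_matrices=False)
    U_sim, s_sim, _ = np.linalg.svd(simulation, full_matrices=False)

    # Overlap of the first k spatial modes (1.0 means identical mode structure).
    k = 4
    overlap = np.abs(np.diag(U_exp[:, :k].T @ U_sim[:, :k]))
    print("per-mode spatial overlap:", np.round(overlap, 3))
    ```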

  6. Content validation applied to job simulation and written examinations

    International Nuclear Information System (INIS)

    Saari, L.M.; McCutchen, M.A.; White, A.S.; Huenefeld, J.C.

    1984-08-01

    The application of content validation strategies in work settings has become increasingly popular over the last few years, perhaps spurred by an acknowledgment in the courts of content validation as a method for validating employee selection procedures (e.g., Bridgeport Guardians v. Bridgeport Police Dept., 1977). Since criterion-related validation is often difficult to conduct, content validation methods should be investigated as an alternative for determining job-related selection procedures. However, there is not yet consensus among scientists and professionals concerning how content validation should be conducted. This may be because there is a lack of clear-cut operations for conducting content validation for different types of selection procedures. The purpose of this paper is to discuss two content validation approaches being used for the development of a licensing examination that involves a job simulation exam and a written exam. These represent variations in methods for applying content validation. 12 references

  7. Design description and validation results for the IFMIF High Flux Test Module as outcome of the EVEDA phase

    Directory of Open Access Journals (Sweden)

    F. Arbeiter

    2016-12-01

    Full Text Available During the Engineering Validation and Engineering Design Activities (EVEDA) phase (2007-2014) of the International Fusion Materials Irradiation Facility (IFMIF), an advanced engineering design of the High Flux Test Module (HFTM) has been developed with the objective to facilitate the controlled irradiation of steel samples in the high flux area directly behind the IFMIF neutron source. The development process included manufacturing techniques, CAD, and neutronic, thermal-hydraulic and mechanical analyses, complemented by a series of validation activities. Validation included manufacturing of 1:1 parts and mockups, test of prototypes in the FLEX and HELOKA-LP helium loops of KIT for verification of the thermal and mechanical properties, and irradiation of specimen-filled capsule prototypes in the BR2 test reactor. The prototyping activities were backed by several R&D studies addressing focused issues like handling of liquid NaK (as filling medium) and insertion of Small Specimen Test Technique (SSTT) specimens into the irradiation capsules. This paper provides an up-to-date design description of the HFTM irradiation device, and reports on the achieved performance criteria related to the requirements. Results of the validation activities are accounted for and the most important issues for further development are identified.

  8. Validation of the Italian Tinnitus Questionnaire Short Form (TQ 12-I) as a Brief Test for the Assessment of Tinnitus-Related Distress: Results of a Cross-Sectional Multicenter Study

    Directory of Open Access Journals (Sweden)

    Roland Moschen

    2018-01-01

    Full Text Available Objectives: The use of reliable and valid psychometric tools to assess subjectively experienced distress due to tinnitus is broadly recommended. The purpose of the study was the validation of the Italian version of the Tinnitus Questionnaire 12-item short form (TQ 12-I) as a brief test for the assessment of patient-reported tinnitus-related distress. Design: Cross-sectional multicenter questionnaire study. Setting: Tinnitus Center, European Hospital (Rome), the Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital (Piacenza), and the Department of Audiology and Phoniatry, "Mater Domini" University Hospital (Catanzaro). Participants: One hundred and forty-three outpatients with tinnitus treated at one of the participating medical centers. Main Outcome Measures: Tinnitus Questionnaire Short Form (TQ 12-I), compared to the Tinnitus Handicap Inventory (THI), Brief Symptom Inventory (BSI), and Short Form (SF-36) Health Survey. Results: Our factor analysis revealed a two-factor solution (health anxiety, cognitive distress), accounting for 53.5% of the variance. Good internal consistency for the total score (α = 0.86) and both factors (α = 0.79–0.87) was found. Moderate correlations with the THI (r = 0.65, p < 0.001) indicated good convergent validity. Tinnitus distress was further correlated to increased psychological distress (r = 0.31, p < 0.001) and reduced emotional well-being (r = -0.34, p < 0.001). Conclusion: The study clearly showed that the TQ 12-I is a reliable and valid instrument to assess tinnitus-related distress which can be used in clinical practice as well as for research.
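
    The internal-consistency values quoted above are Cronbach's alpha. A minimal sketch of the standard computation follows, using an invented response matrix (not the study data) sized like the TQ 12-I sample.

    ```python
    # Hedged sketch: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance
    # of the total score). The simulated 143 x 12 response matrix is illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    severity = rng.normal(1.0, 0.8, size=(143, 1))                 # latent distress per patient
    items = np.clip(np.rint(severity + rng.normal(0, 0.5, size=(143, 12))), 0, 3)

    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_var / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
    ```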

  9. A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.

    Science.gov (United States)

    Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L

    2015-12-01

    Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features, the data source and two metrics derived from the location of each disease occurrence. The location of disease occurrence provides information on the probability of disease occurrence at that location based on environmental and socioeconomic factors and the distance within or outside the current known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results inform the training data set for the next set of predictors, as well as going to the pathogen distribution model. The new supervised learning process has been implemented within our live site and is
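
    The "cascade of ensembles comprising logistic regressors" described above can be illustrated with a two-layer sketch: each layer assigns validation scores to records it is confident about and passes the remainder on, ultimately to expert review. The features, thresholds, and data below are synthetic placeholders and not the authors' implementation.

    ```python
    # Hedged sketch of a two-layer cascade of logistic regressors for scoring
    # geolocated disease reports; all data, feature meanings, and thresholds are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Three input features per report: source type, environmental probability of
    # occurrence at the location, distance from the known disease extent (synthetic).
    X = rng.random((500, 3))
    y = (X[:, 1] + 0.3 * rng.standard_normal(500) > 0.5).astype(int)   # expert scores

    layer1 = LogisticRegression().fit(X, y)
    p1 = layer1.predict_proba(X)[:, 1]
    resolved1 = (p1 < 0.2) | (p1 > 0.8)            # records layer 1 scores confidently

    # Layer 2 sees the original features plus layer 1's score and handles the rest.
    X2 = np.column_stack([X, p1])
    layer2 = LogisticRegression().fit(X2, y)
    p2 = layer2.predict_proba(X2[~resolved1])[:, 1]
    resolved2 = (p2 < 0.2) | (p2 > 0.8)

    auto_scored = (resolved1.sum() + resolved2.sum()) / len(y)
    print(f"fraction auto-scored by the cascade: {auto_scored:.0%}")
    ```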

  10. A study on construction, validation and determination of normalization of adolescents depression scale

    Directory of Open Access Journals (Sweden)

    Khadijeh Babakhani

    2014-01-01

    Full Text Available This paper presents an empirical investigation to construct, validate, and determine normalization factors associated with an adolescents' depression scale. The study is performed among 750 randomly selected guidance and high school students, 364 male and 386 female, who live in the city of Zanjan, Iran. The validity coefficients with the Beck Depression Inventory (BDI) and the Simpson-Angus Scale (SAS), and the divergent validity coefficient with the Coopersmith self-esteem inventory, are 0.72, 0.37 and -0.71, respectively. Results suggest that the adolescents' depression test is a reliable and valid tool for assessing depression, with utility in both research and clinical settings and counseling centers. In addition, the results of the correlation test indicate there are some meaningful differences between depression levels of female and male students. In fact, our survey indicates that female students have more depression than male students do (F-value = 33.06, Sig. = 0.000). In addition, there are some meaningful differences between depression levels in various educational levels (F-value = 8.59, Sig. = 0.000). However, the study does not find sufficient evidence to believe there is any meaningful correlation between educational backgrounds and gender.

  11. Development and validation of a stock addiction inventory (SAI).

    Science.gov (United States)

    Youn, HyunChul; Choi, Jung-Seok; Kim, Dai-Jin; Choi, Sam-Wook

    2016-01-01

    Investing in financial markets is promoted and protected by the government as an essential economic activity, but it can turn into a gambling addiction problem. Until now, few scales have been widely used to identify gambling addicts in financial markets. This study aimed to develop a self-rating scale to distinguish them. In addition, the reliability and validity of the stock addiction inventory (SAI) were demonstrated. A set of questionnaires, including the SAI, the South Oaks Gambling Screen (SOGS), and the DSM-5 diagnostic criteria for gambling disorder, was completed by 1005 participants. Factor analysis, internal consistency testing, t tests, analysis of variance, and partial correlation analysis were conducted to verify the reliability and validity of the SAI. The factor analysis results showed that the final SAI consists of two factors and nine items. The internal consistency and concurrent validity of the SAI were verified. The Cronbach's α for the total scale was 0.892, and the SAI and its factors were significantly correlated with the SOGS. This study developed a specific scale for financial market investments or trading; this scale proved to be reliable and valid. Our scale expands the understanding of gambling addiction in financial markets and provides a diagnostic reference.

  12. Fire Intensity Data for Validation of the Radiative Transfer Equation

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, Thomas K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jernigan, Dann A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A set of experiments and test data are outlined in this report that provides radiation intensity data for the validation of models for the radiative transfer equation. The experiments were performed with lightly-sooting liquid hydrocarbon fuels that yielded fully turbulent fires (2 m diameter). In addition, supplemental measurements of air flow and temperature, fuel temperature and burn rate, and flame surface emissive power, wall heat, and flame height and width provide a complete set of boundary condition data needed for validation of models used in fire simulations.

  13. Validity of Management Control Topoi – Towards Constructivist Pragmatism

    DEFF Research Database (Denmark)

    Nørreklit, Lennart; Nørreklit, Hanne; Israelsen, Poul

    2006-01-01

    For decades, management accounting research paradigms have been in competition without reaching any apparent closure, and struggles to bridge the gap between knowledge and doing have not been successful either. This paper argues that this state of affairs is due to an insufficient understanding of reality, which is rooted in the management accounting paradigms. The paper establishes a concept of reality as an integrated set of conditions for actions and argues that, without such a concept, the issue of validity cannot be addressed: management accounting and control only provide valid results in practice if they incorporate the four aspects of the world of human life: facts, logic, values and communication. On the basis of these aspects, some predominant research paradigms are subsequently analysed and, using a case study, the paper shows how the four dimensions are integrated in the practice…

  14. Fast detection of vascular plaque in optical coherence tomography images using a reduced feature set

    Science.gov (United States)

    Prakash, Ammu; Ocana Macias, Mariano; Hewko, Mark; Sowa, Michael; Sherif, Sherif

    2018-03-01

    Vascular plaque can be detected in optical coherence tomography (OCT) images by using the full set of 26 Haralick textural features and a standard K-means clustering algorithm. However, the use of the full set of 26 textural features is computationally expensive and may not be feasible for real-time implementation. In this work, we identified a reduced set of 3 textural features that characterizes vascular plaque and used a generalized Fuzzy C-means clustering algorithm. Our work involves three steps: 1) the reduction of the full set of 26 textural features to a reduced set of 3 textural features using a genetic algorithm (GA) optimization method, 2) the implementation of an unsupervised generalized clustering algorithm (Fuzzy C-means) on the reduced feature space, and 3) the validation of our results using histology and actual photographic images of vascular plaque. Our results show an excellent match with histology and actual photographic images of vascular tissue. Therefore, our results could provide an efficient pre-clinical tool for the detection of vascular plaque in real-time OCT imaging.
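
    The clustering step named above can be illustrated with a standard Fuzzy C-means update applied to a matrix of three selected texture features per image region. The sketch below covers the clustering only (the GA-based feature selection is not reproduced), and all values are synthetic.

    ```python
    # Hedged sketch of standard Fuzzy C-means on a reduced 3-feature texture matrix
    # (one row per OCT region). Feature values and parameters are illustrative.
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1 per row
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted cluster centers
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)))                      # standard membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    X = np.random.default_rng(2).random((200, 3))      # 3 selected Haralick-type features
    centers, memberships = fuzzy_c_means(X, c=2)
    labels = memberships.argmax(axis=1)                # hard labels: e.g. plaque vs. normal
    ```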

  15. An ancestry informative marker set for determining continental origin: validation and extension using human genome diversity panels

    Directory of Open Access Journals (Sweden)

    Gregersen Peter K

    2009-07-01

    Full Text Available Abstract Background Case-control genetic studies of complex human diseases can be confounded by population stratification. This issue can be addressed using panels of ancestry informative markers (AIMs) that can provide substantial population substructure information. Previously, we described a panel of 128 SNP AIMs that were designed as a tool for ascertaining the origins of subjects from Europe, Sub-Saharan Africa, the Americas, and East Asia. Results In this study, genotypes from Human Genome Diversity Panel populations were used to further evaluate a 93 SNP AIM panel, a subset of the 128 AIM set, for distinguishing continental origins. Using both model-based and relatively model-independent methods, we here confirm the ability of this AIM set to distinguish diverse population groups that were not previously evaluated. This study included multiple population groups from Oceania, South Asia, East Asia, Sub-Saharan Africa, North and South America, and Europe. In addition, the 93 AIM set provides population substructure information that can, for example, distinguish Arab and Ashkenazi from Northern European population groups and Pygmy from other Sub-Saharan African population groups. Conclusion These data provide additional support for using the 93 AIM set to efficiently identify continental subject groups for genetic studies, to identify study population outliers, and to control for admixture in association studies.

  16. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Science.gov (United States)

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen a rapid development in the field of forest landscape modeling, fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  17. Detection of Overreported Psychopathology with the MMPI-2 RF Form Validity Scales

    Science.gov (United States)

    Sellbom, Martin; Bagby, R. Michael

    2010-01-01

    We examined the utility of the validity scales on the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2 RF; Ben-Porath & Tellegen, 2008) to detect overreported psychopathology. This set of validity scales includes a newly developed scale and revised versions of the original MMPI-2 validity scales. We…

  18. Validating a dance-specific screening test for balance: preliminary results from multisite testing.

    Science.gov (United States)

    Batson, Glenna

    2010-09-01

    Few dance-specific screening tools adequately capture balance. The aim of this study was to administer and modify the Star Excursion Balance Test (oSEBT) to examine its utility as a balance screen for dancers. The oSEBT involves standing on one leg while lightly targeting with the opposite foot to the farthest distance along eight spokes of a star-shaped grid. This task simulates dance in the spatial pattern and movement quality of the gesturing limb. The oSEBT was validated for distance on athletes with history of ankle sprain. Thirty-three dancers (age 20.1 +/- 1.4 yrs) participated from two contemporary dance conservatories (UK and US), with or without a history of lower extremity injury. Dancers were verbally instructed (without physical demonstration) to execute the oSEBT and four modifications (mSEBT): timed (speed), timed with cognitive interference (answering questions aloud), and sensory disadvantaging (foam mat). Stepping strategies were tracked and performance strategies video-recorded. Unlike the oSEBT results, distances reached were not significant statistically (p = 0.05) or descriptively (i.e., shorter) for either group. Performance styles varied widely, despite sample homogeneity and instructions to control for strategy. Descriptive analysis of mSEBT showed an increased number of near-falls and decreased timing on the injured limb. Dancers appeared to employ variable strategies to keep balance during this test. Quantitative analysis is warranted to define balance strategies for further validation of SEBT modifications to determine its utility as a balance screening tool.

  19. Validation of natural language processing to extract breast cancer pathology procedures and results

    Directory of Open Access Journals (Sweden)

    Arika E Wieneke

    2015-01-01

    Full Text Available Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports for development (2910, 90%) and evaluation (324, 10%) purposes using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e. not breast-related). Common procedures and results were identified correctly (e.g. invasive ductal) with 95.5% precision and 94.0% sensitivity, but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad of a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of the gold standard data on rare findings and wide variation in pathology text. Focusing on individual, common elements and improving pathology text report standardization may improve performance.

  20. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  1. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information, such as a standard for judging an individual's functional recovery or predicting falls. The development of a tool for balance testing that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on its reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants took part in tests of inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment.
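
    The reliability statistics reported above (ICC and SEM) can be computed from a subjects-by-raters matrix. The sketch below uses the common ICC(2,1) form (two-way random effects, absolute agreement, single measures) and SEM = SD * sqrt(1 - ICC), with invented COP path-length values; the exact ICC model used in the study may differ.

    ```python
    # Hedged sketch: ICC(2,1) from two-way ANOVA mean squares, plus SEM.
    # The 5 x 2 matrix of COP path lengths is illustrative, not the study's data.
    import numpy as np

    def icc_2_1(Y):
        n, k = Y.shape
        grand = Y.mean()
        msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)    # between subjects
        msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)    # between raters
        sse = ((Y - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))                               # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    Y = np.array([[52.1, 50.8], [61.3, 63.0], [48.7, 49.5], [70.2, 68.4], [55.0, 56.1]])
    icc = icc_2_1(Y)
    sem = Y.std(ddof=1) * np.sqrt(1 - icc)
    print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f}")
    ```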

  2. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam University; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.
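
    The re-sampling procedures mentioned above amount to drawing samples of the uncertain inputs, pushing each one through the forward calculation, and summarizing the spread of the resulting feature. A minimal Monte Carlo sketch with an illustrative stand-in for the forward model (all distributions and values are assumptions, not taken from the report):

      import numpy as np

      def forward_model(stiffness, damping):
          # Stand-in for an expensive finite element forward calculation.
          return np.sqrt(stiffness) / (2 * np.pi) * np.exp(-damping)

      rng = np.random.default_rng(42)
      n_samples = 5_000
      stiffness = rng.normal(1.0e4, 5.0e2, n_samples)   # N/m, assumed distribution
      damping = rng.uniform(0.01, 0.05, n_samples)      # dimensionless, assumed range

      feature = forward_model(stiffness, damping)        # propagated output feature
      lo, hi = np.percentile(feature, [2.5, 97.5])
      print(f"mean = {feature.mean():.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")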

  3. Nursing Minimum Data Sets for documenting nutritional care for adults in primary healthcare: a scoping review.

    Science.gov (United States)

    Håkonsen, Sasja Jul; Pedersen, Preben Ulrich; Bjerrum, Merete; Bygholm, Ann; Peters, Micah D J

    2018-01-01

    Multiple databases (PubMed, CINAHL, Embase, Scopus, Swemed+, MedNar, CDC, MEDION, Health Technology Assessment Database, TRIP database, NTIS, ProQuest Dissertations and Theses, Google Scholar, Current Contents) were searched from their inception to September 2016. The results from the studies were extracted using pre-developed extraction tools for all three questions, and have been presented narratively and with figures to support the text. Twenty-nine nutritional screening tools that were validated within a primary care setting, and two documents on consensus statements regarding expert opinion, were identified. No studies on the patients' or relatives' views were identified. The nutritional screening instruments had been validated solely in over-55 populations. Construct validity was the type of validation most frequently used in the validation process, covering a total of 25 of the 29 tools. Two studies were identified in relation to the third review question. These two documents are both consensus statement documents developed by experts within the geriatric and nutritional care field. Overall, experts find it appropriate to: i) conduct a comprehensive geriatric assessment, ii) use a validated nutritional screening instrument, and iii) conduct a history and clinical diagnosis, physical examination and dietary assessment when assessing the nutritional status of primarily elderly patients in primary health care.

  4. Endogenous protein "barcode" for data validation and normalization in quantitative MS analysis.

    Science.gov (United States)

    Lee, Wooram; Lazar, Iulia M

    2014-07-01

    Quantitative proteomic experiments with mass spectrometry detection are typically conducted using stable isotope labeling and label-free quantitation approaches. Proteins with housekeeping functions and stable expression levels, such as actin, tubulin, and glyceraldehyde-3-phosphate dehydrogenase, are frequently used as endogenous controls. Recent studies have shown that the expression level of such common housekeeping proteins is, in fact, dependent on various factors such as cell type, cell cycle, or disease status and can change in response to a biochemical stimulation. Such phenomena can, therefore, substantially compromise their use for data validation, alter the interpretation of results, and lead to erroneous conclusions. In this work, we advance the concept of a protein "barcode" for data normalization and validation in quantitative proteomic experiments. The barcode comprises a novel set of proteins that was generated from cell cycle experiments performed with MCF7, an estrogen receptor positive breast cancer cell line, and MCF10A, a nontumorigenic immortalized breast cell line. The protein set was selected from a list of ~3700 proteins identified in different cellular subfractions and cell cycle stages of MCF7/MCF10A cells, based on the stability of spectral count data generated with an LTQ ion trap mass spectrometer. A total of 11 proteins qualified as endogenous standards for the nuclear barcode and 62 for the cytoplasmic barcode. The validation of the protein sets was performed with a complementary SKBR3/Her2+ cell line.
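
    The barcode proteins are chosen for the stability of their spectral counts across subfractions and cell cycle stages; the study's exact criterion is not reproduced here. One plausible screen, sketched with made-up counts, keeps proteins whose spectral counts show a low coefficient of variation across conditions:

      import pandas as pd

      # Hypothetical spectral counts: rows = proteins, columns = cell-cycle stages (illustrative only).
      counts = pd.DataFrame(
          {"G1": [120, 45, 8], "S": [118, 60, 22], "G2M": [121, 38, 3]},
          index=["PROT_A", "PROT_B", "PROT_C"],
      )

      cv = counts.std(axis=1, ddof=1) / counts.mean(axis=1)   # coefficient of variation per protein
      stable = counts[cv < 0.10]                               # illustrative stability cut-off
      print(stable.index.tolist())                             # candidate endogenous standards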

  5. Nutrition screening tools: Does one size fit all? A systematic review of screening tools for the hospital setting

    NARCIS (Netherlands)

    van Bokhorst-de van der Schueren, M.A.E.; Guaitoli, P.R.; Jansma, E.P.; de Vet, H.C.W.

    2014-01-01

    Background & aims: Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study construct or criterion validity and predictive validity of nutrition screening tools for the general hospital setting. Methods: A systematic review of

  6. Validation of public health competencies and impact variables for low- and middle-income countries

    Science.gov (United States)

    2014-01-01

    Background The number of Master of Public Health (MPH) programmes in low- and middle-income countries (LMICs) is increasing, but questions have been raised regarding the relevance of their outcomes and impacts on context. Although processes for validating public health competencies have taken place in recent years in many high-income countries, validation in LMICs is needed. Furthermore, impact variables of MPH programmes in the workplace and in society have not been developed. Method A set of public health competencies and impact variables in the workplace and in society was designed using the competencies and learning objectives of six participating institutions offering MPH programmes in or for LMICs, and the set of competencies of the Council on Linkages Between Academia and Public Health Practice as a reference. The resulting competencies and impact variables differ from those of the Council on Linkages in scope and emphasis on social determinants of health, context specificity and intersectoral competencies. A modified Delphi method was used in this study to validate the public health competencies and impact variables; experts and MPH alumni from China, Vietnam, South Africa, Sudan, Mexico and the Netherlands reviewed them and made recommendations. Results The competencies and variables were validated across two Delphi rounds, first with public health experts (N = 31) from the six countries, then with MPH alumni (N = 30). After the first expert round, competencies and impact variables were refined based on the quantitative results and qualitative comments. Both rounds showed high consensus, more so for the competencies than the impact variables. The response rate was 100%. Conclusion This is the first time that public health competencies have been validated in LMICs across continents. It is also the first time that impact variables of MPH programmes have been proposed and validated in LMICs across continents. The high degree of consensus between

  7. European Portuguese adaptation and validation of dilemmas used to assess moral decision-making.

    Science.gov (United States)

    Fernandes, Carina; Gonçalves, Ana Ribeiro; Pasion, Rita; Ferreira-Santos, Fernando; Paiva, Tiago Oliveira; Melo E Castro, Joana; Barbosa, Fernando; Martins, Isabel Pavão; Marques-Teixeira, João

    2018-03-01

    Objective To adapt and validate a widely used set of moral dilemmas to European Portuguese, which can be applied to assess decision-making. Moreover, the classical formulation of the dilemmas was compared with a more focused moral probe. Finally, a shorter version of the moral scenarios was tested. Methods The Portuguese version of the set of moral dilemmas was tested in 53 individuals from several regions of Portugal. In a second study, an alternative way of questioning on moral dilemmas was tested in 41 participants. Finally, the shorter version of the moral dilemmas was tested in 137 individuals. Results Results evidenced no significant differences between English and Portuguese versions. Also, asking whether actions are "morally acceptable" elicited less utilitarian responses than the original question, although without reaching statistical significance. Finally, all tested versions of moral dilemmas exhibited the same pattern of responses, suggesting that the fundamental elements to the moral decision-making were preserved. Conclusions We found evidence of cross-cultural validity for moral dilemmas. However, the moral focus might affect utilitarian/deontological judgments.

  8. European Portuguese adaptation and validation of dilemmas used to assess moral decision-making

    Directory of Open Access Journals (Sweden)

    Carina Fernandes

    2018-04-01

    Full Text Available Abstract Objective To adapt and validate a widely used set of moral dilemmas to European Portuguese, which can be applied to assess decision-making. Moreover, the classical formulation of the dilemmas was compared with a more focused moral probe. Finally, a shorter version of the moral scenarios was tested. Methods The Portuguese version of the set of moral dilemmas was tested in 53 individuals from several regions of Portugal. In a second study, an alternative way of questioning on moral dilemmas was tested in 41 participants. Finally, the shorter version of the moral dilemmas was tested in 137 individuals. Results Results evidenced no significant differences between English and Portuguese versions. Also, asking whether actions are “morally acceptable” elicited less utilitarian responses than the original question, although without reaching statistical significance. Finally, all tested versions of moral dilemmas exhibited the same pattern of responses, suggesting that the fundamental elements to the moral decision-making were preserved. Conclusions We found evidence of cross-cultural validity for moral dilemmas. However, the moral focus might affect utilitarian/deontological judgments.

  9. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored how the cord blood and the adult peripheral blood reference data sets affect the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts, both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.
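
    Reference-based adjustment of this kind estimates each whole-blood sample's cell-type proportions from the reference methylation signatures, commonly via constrained regression (e.g., Houseman-style approaches). A minimal sketch using non-negative least squares on made-up beta values (not the 450K reference data):

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical reference matrix: methylation beta values of 4 CpGs x 3 isolated cell types.
      reference = np.array([
          [0.90, 0.10, 0.50],
          [0.20, 0.80, 0.40],
          [0.70, 0.30, 0.10],
          [0.10, 0.60, 0.90],
      ])
      sample = np.array([0.55, 0.45, 0.40, 0.50])   # whole cord blood profile at the same CpGs

      weights, _ = nnls(reference, sample)           # non-negative cell-type weights
      proportions = weights / weights.sum()          # normalize to proportions summing to 1
      print(proportions.round(2))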

  10. Family-centred services in the Netherlands : validating a self-report measure for paediatric service providers

    NARCIS (Netherlands)

    Siebes, RC; Ketelaar, M; Wijnroks, L; van Schie, PE; Nijhuis, Bianca J G; Vermeer, A; Gorter, JW

    Objective: To validate the Dutch translation of the Canadian Measure of Processes of Care for Service Providers questionnaire (MPOC-SP) for use in paediatric rehabilitation settings in the Netherlands. Design: The construct validity, content validity, face validity, and reliability of the Dutch

  11. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    OpenAIRE

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS ...

  12. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    consumption. Simultaneous measurement of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily be implemented in most simulation software packages to increase the validity of the simulation outcomes.
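
    A regression model of this kind maps observed indoor and outdoor conditions to the chosen TRV set-point, so the fitted coefficients can drive set-point changes inside a building simulation. A minimal ordinary-least-squares sketch with invented predictors and values (the paper's actual covariates are not listed in this record):

      import numpy as np

      # Hypothetical observations: [outdoor temperature (degC), indoor relative humidity (%)].
      x = np.array([[-2.0, 45.0], [5.0, 40.0], [12.0, 38.0], [18.0, 35.0]])
      y = np.array([22.5, 21.8, 21.0, 20.4])                    # chosen TRV set-point (degC)

      design = np.column_stack([np.ones(len(x)), x])             # intercept + predictors
      coef, *_ = np.linalg.lstsq(design, y, rcond=None)          # ordinary least squares fit
      print(dict(zip(["intercept", "t_out", "rh_in"], coef.round(3))))

      # Predicted set-point for a new condition, usable inside a simulation time step.
      print(float(np.array([1.0, 8.0, 42.0]) @ coef))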

  13. Trait and state anxiety across academic evaluative contexts: development and validation of the MTEA-12 and MSEA-12 scales.

    Science.gov (United States)

    Sotardi, Valerie A

    2018-05-01

    Educational measures of anxiety focus heavily on students' experiences with tests yet overlook other assessment contexts. In this research, two brief multiscale questionnaires were developed and validated to measure trait evaluation anxiety (MTEA-12) and state evaluation anxiety (MSEA-12) for use in various assessment contexts in non-clinical, educational settings. The research included a cross-sectional analysis of self-report data using authentic assessment settings in which evaluation anxiety was measured. Instruments were tested using a validation sample of 241 first-year university students in New Zealand. Scale development included component structures for state and trait scales based on existing theoretical frameworks. Analyses using confirmatory factor analysis and descriptive statistics indicate that the scales are reliable and structurally valid. Multivariate general linear modeling using subscales from the MTEA-12, MSEA-12, and student grades suggests adequate criterion-related validity. Initial predictive validity was supported, with one relevant MTEA-12 factor explaining between 21% and 54% of the variance in three MSEA-12 factors. Results document the MTEA-12 and MSEA-12 as reliable measures of trait and state dimensions of evaluation anxiety for test and writing contexts. Initial estimates suggest the scales have promising validity, and recommendations for further validation are outlined.

  14. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    Science.gov (United States)

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Sluggish cognitive tempo and attention-deficit/hyperactivity disorder (ADHD) inattention in the home and school contexts: Parent and teacher invariance and cross-setting validity.

    Science.gov (United States)

    Burns, G Leonard; Becker, Stephen P; Servera, Mateu; Bernad, Maria Del Mar; García-Banda, Gloria

    2017-02-01

    This study examined whether sluggish cognitive tempo (SCT) and attention-deficit/hyperactivity disorder (ADHD) inattention (IN) symptoms demonstrated cross-setting invariance and unique associations with symptom and impairment dimensions across settings (i.e., home SCT and ADHD-IN uniquely predicting school symptom and impairment dimensions, and vice versa). Mothers, fathers, primary teachers, and secondary teachers rated SCT, ADHD-IN, ADHD-hyperactivity/impulsivity (HI), oppositional defiant disorder (ODD), anxiety, depression, academic impairment, social impairment, and peer rejection dimensions for 585 Spanish 3rd-grade children (53% boys). Within-setting (i.e., mothers, fathers; primary, secondary teachers) and cross-settings (i.e., home, school) invariance was found for both SCT and ADHD-IN. From home to school, higher levels of home SCT predicted lower levels of school ADHD-HI and higher levels of school academic impairment after controlling for home ADHD-IN, whereas higher levels of home ADHD-IN predicted higher levels of school ADHD-HI, ODD, anxiety, depression, academic impairment, and peer rejection after controlling for home SCT. From school to home, higher levels of school SCT predicted lower levels of home ADHD-HI and ODD and higher levels of home anxiety, depression, academic impairment, and social impairment after controlling for school ADHD-IN, whereas higher levels of school ADHD-IN predicted higher levels of home ADHD-HI, ODD, and academic impairment after controlling for school SCT. Although SCT at home and school was able to uniquely predict symptom and impairment dimensions in the other setting, SCT at school was a better predictor than ADHD-IN at school of psychopathology and impairment at home. Findings provide additional support for SCT's validity relative to ADHD-IN. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Validation of Non-Invasive Waste Assay System (Gamma Box Counter) Performance at AECL Whiteshell Laboratories - 13136

    International Nuclear Information System (INIS)

    Attas, E.M.; Bialas, E.; Rhodes, M.J.

    2013-01-01

    Low-level radioactive waste (LLW) in solid form, resulting from decommissioning and operations activities at AECL's Whiteshell Laboratories (WL), is packaged in B-25 and B-1000 standard waste containers and characterized before it is shipped to an on-site interim storage facility, pending AECL decisions on long term management of its LLW. Assay of the waste packages before shipment contributes to an inventory of the interim storage facility and provides data to support acceptance at a future repository. A key characterization step is a gamma spectrometric measurement carried out under standard conditions using an automated, multi-detector Waste Assay System (WAS), purchased from Antech Corporation. A combination of ORTEC gamma acquisition software and custom software is used in this system to incorporate multiple measurements from two collimated high-resolution detectors. The software corrects the intensities of the gamma spectral lines for geometry and attenuation, and generates a table of calculated activities or limits of detection for a user-defined list of radioisotopes that may potentially be present. Validation of WAS performance was a prerequisite to routine operation. Documentation of the validation process provides assurance of the quality of the results produced, which may be needed one or two decades after they were generated. Aspects of the validation included setting up a quality control routine, measurements of standard point sources in reproducible positions, study of the gamma background, optimization of user-selectable software parameters, investigation of the effect of non-uniform distribution of materials and radionuclides, and comparison of results with measurements made using other gamma detector systems designed to assay bulk materials. The following key components of the validation process have been established. A daily quality control routine has been instituted, to verify stability of the gamma detector operation and the background levels

  17. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex, human performance model scenario that tested alternate operator roles and responsibilities in response to a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace System's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components and environmental features, as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level and decomposes it into finer-grained levels. The first task that is completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  18. Validation of quality indicators for the organization of palliative care: a modified RAND Delphi study in seven European countries (the Europall project).

    Science.gov (United States)

    Woitha, Kathrin; Van Beek, Karen; Ahmed, Nisar; Jaspers, Birgit; Mollard, Jean M; Ahmedzai, Sam H; Hasselaar, Jeroen; Menten, Johan; Vissers, Kris; Engels, Yvonne

    2014-02-01

    Validated quality indicators can help health-care professionals to evaluate their medical practices in a comparative manner to deliver optimal clinical care. No international set of quality indicators to measure the organizational aspects of palliative care settings exists. To develop and validate a set of structure and process indicators for palliative care settings in Europe, a two-round modified RAND Delphi process was conducted to rate the clarity and usefulness of a previously developed set of 110 quality indicators. In total, 20 multi-professional palliative care teams from centers of excellence in seven European countries participated. In total, 56 quality indicators were rated as useful. These valid quality indicators concerned the following domains: the definition of a palliative care service (2 quality indicators), accessibility to palliative care (16 quality indicators), specific infrastructure to deliver palliative care (8 quality indicators), symptom assessment tools (1 quality indicator), specific personnel in palliative care services (9 quality indicators), documentation methodology of clinical data (14 quality indicators), evaluation of quality and safety procedures (1 quality indicator), reporting of clinical activities (1 quality indicator), and education in palliative care (4 quality indicators). The modified RAND Delphi process resulted in 56 international face-validated quality indicators to measure and compare organizational aspects of palliative care. These quality indicators, aimed at assessing and improving the organization of palliative care, will be pilot tested in palliative care settings all over Europe and be used in the EU FP7 funded IMPACT project.

  19. Validation of the Spanish version of the Index of Spouse Abuse.

    Science.gov (United States)

    Plazaola-Castaño, Juncal; Ruiz-Pérez, Isabel; Escribà-Agüir, Vicenta; Jiménez-Martín, Juan Manuel; Hernández-Torres, Elisa

    2009-04-01

    Partner violence against women is a major public health problem. Although there are currently a number of validated screening and diagnostic tools that can be used to evaluate this type of violence, such tools are not available in Spain. The aim of this study is to analyze the validity and reliability of the Spanish version of the Index of Spouse Abuse (ISA). A cross-sectional study was carried out in 2005 in two health centers in Granada, Spain, in 390 women between 18 and 70 years old. Analyses of the factorial structure, internal consistency, test-retest reliability, and construct validity were conducted. Cutoff points for each subscale were also defined. For the construct validity analysis, the SF-36 perceived general health dimension, the Rosenberg Self-Esteem Scale and the Goldberg 12-item General Health Questionnaire were included. The psychometric analysis shows that the instrument has good internal consistency, reproducibility, and construct validity. The scale is useful for the analysis of partner violence against women in both a research setting and a healthcare setting.

  20. Construction and validation of the Self-care Assessment Instrument for patients with type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Simonize Cunha Barreto de Mendonça

    Full Text Available ABSTRACT Objective: to construct and validate the contents of the Self-care Assessment instrument for patients with type 2 diabetes mellitus. Method: methodological study, based on Orem's General Theory of Nursing. The empirical categories and the items of the instrument were elucidated through a focus group. The content validation process was performed by seven specialists and the semantic analysis by 14 patients. The Content Validity Indices of the items, ≥0.78, and of the scale, ≥0.90, were considered excellent. Results: the instrument contains 131 items in six dimensions corresponding to the health deviation self-care requisites. Regarding the maintenance, a Content Validity Index of 0.98 was obtained for the full set of items, and, regarding the relevance, Content Validity Indices ≥0.80 were obtained for the majority of the assessed psychometric criteria. Conclusion: the instrument showed evidence of content validity.

  1. Clinical validation of an epigenetic assay to predict negative histopathological results in repeat prostate biopsies.

    Science.gov (United States)

    Partin, Alan W; Van Neste, Leander; Klein, Eric A; Marks, Leonard S; Gee, Jason R; Troyer, Dean A; Rieger-Christ, Kimberly; Jones, J Stephen; Magi-Galluzzi, Cristina; Mangold, Leslie A; Trock, Bruce J; Lance, Raymond S; Bigley, Joseph W; Van Criekinge, Wim; Epstein, Jonathan I

    2014-10-01

    The DOCUMENT multicenter trial in the United States validated the performance of an epigenetic test as an independent predictor of prostate cancer risk to guide decision making for repeat biopsy. Confirming an increased negative predictive value could help avoid unnecessary repeat biopsies. We evaluated the archived, cancer negative prostate biopsy core tissue samples of 350 subjects from a total of 5 urological centers in the United States. All subjects underwent repeat biopsy within 24 months with a negative (controls) or positive (cases) histopathological result. Centralized blinded pathology evaluation of the 2 biopsy series was performed in all available subjects from each site. Biopsies were epigenetically profiled for GSTP1, APC and RASSF1 relative to the ACTB reference gene using quantitative methylation specific polymerase chain reaction. Predetermined analytical marker cutoffs were used to determine assay performance. Multivariate logistic regression was used to evaluate all risk factors. The epigenetic assay resulted in a negative predictive value of 88% (95% CI 85-91). In multivariate models correcting for age, prostate specific antigen, digital rectal examination, first biopsy histopathological characteristics and race the test proved to be the most significant independent predictor of patient outcome (OR 2.69, 95% CI 1.60-4.51). The DOCUMENT study validated that the epigenetic assay was a significant, independent predictor of prostate cancer detection in a repeat biopsy collected an average of 13 months after an initial negative result. Due to its 88% negative predictive value adding this epigenetic assay to other known risk factors may help decrease unnecessary repeat prostate biopsies. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
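
    The negative predictive value quoted above is a simple function of the 2x2 cross-tabulation of assay result against repeat-biopsy outcome; the adjusted odds ratio additionally comes from multivariate logistic regression. A sketch of the unadjusted quantities with invented counts (not the DOCUMENT cohort):

      # Hypothetical 2x2 table: epigenetic assay result vs. cancer on repeat biopsy.
      true_neg, false_neg = 220, 30   # assay-negative men without / with cancer
      true_pos, false_pos = 40, 60    # assay-positive men with / without cancer

      npv = true_neg / (true_neg + false_neg)                       # negative predictive value
      odds_ratio = (true_pos * true_neg) / (false_pos * false_neg)  # unadjusted OR (cross-product)
      print(f"NPV = {npv:.2f}, unadjusted OR = {odds_ratio:.2f}")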

  2. Results from the radiometric validation of Sentinel-3 optical sensors using natural targets

    Science.gov (United States)

    Fougnie, Bertrand; Desjardins, Camille; Besson, Bruno; Bruniquel, Véronique; Meskini, Naceur; Nieke, Jens; Bouvet, Marc

    2016-09-01

    The recently launched SENTINEL-3 mission measures sea surface topography, sea/land surface temperature, and ocean/land surface colour with high accuracy. The mission provides data continuity with the ENVISAT mission through acquisitions by multiple sensing instruments. Two of them, OLCI (Ocean and Land Colour Imager) and SLSTR (Sea and Land Surface Temperature Radiometer) are optical sensors designed to provide continuity with Envisat's MERIS and AATSR instruments. During the commissioning, in-orbit calibration and validation activities are conducted. Instruments are in-flight calibrated and characterized primarily using on-board devices which include diffusers and black body. Afterward, vicarious calibration methods are used in order to validate the OLCI and SLSTR radiometry for the reflective bands. The calibration can be checked over dedicated natural targets such as Rayleigh scattering, sunglint, desert sites, Antarctica, and tentatively deep convective clouds. Tools have been developed and/or adapted (S3ETRAC, MUSCLE) to extract and process Sentinel-3 data. Based on these matchups, it is possible to provide an accurate checking of many radiometric aspects such as the absolute and interband calibrations, the trending correction, the calibration consistency within the field-of-view, and more generally this will provide an evaluation of the radiometric consistency for various type of targets. Another important aspect will be the checking of cross-calibration between many other instruments such as MERIS and AATSR (bridge between ENVISAT and Sentinel-3), MODIS (bridge to the GSICS radiometric standard), as well as Sentinel-2 (bridge between Sentinel missions). The early results, based on the available OLCI and SLSTR data, will be presented and discussed.

  3. Health Services OutPatient Experience questionnaire: factorial validity and reliability of a patient-centered outcome measure for outpatient settings in Italy

    Directory of Open Access Journals (Sweden)

    Coluccia A

    2014-09-01

    Full Text Available Anna Coluccia, Fabio Ferretti, Andrea Pozza; Department of Medical Sciences, Surgery and Neurosciences, Santa Maria alle Scotte University Hospital, University of Siena, Siena, Italy. Purpose: The patient-centered approach to health care does not seem to be sufficiently developed in the Italian context, which is still characterized by the biomedical model. In addition, there is a lack of validated outcome measures to assess outpatient experience as an aspect common to a variety of settings. The current study aimed to evaluate the factorial validity, reliability, and invariance across sex of the Health Services OutPatient Experience (HSOPE) questionnaire, a short ten-item measure of patient-centeredness for Italian adult outpatients. The rationale for unidimensionality of the measure was that it could cover global patient experience as a process common to patients with a variety of diseases and irrespective of the phase of treatment course. Patients and methods: The HSOPE was completed by 1,532 adult outpatients (51% females, mean age 59.22 years, standard deviation 16.26) receiving care in ten facilities at the Santa Maria alle Scotte University Hospital of Siena, Italy. The sample represented all the age cohorts: twelve percent were young adults, 57% were adults, and 32% were older adults. Exploratory and confirmatory factor analyses were conducted to evaluate factor structure. Reliability was evaluated as internal consistency using Cronbach’s α. Factor invariance was assessed through multigroup analyses. Results: Both exploratory and confirmatory analyses suggested a clearly defined unidimensional structure of the measure, with all ten items having salient loadings on a single factor. Internal consistency was excellent (α = 0.95). Indices of model fit supported a single-factor structure for both male and female outpatient groups. Young adult outpatients had significantly lower scores on perceived patient-centeredness relative to older adults. No

  4. Towards validation of ammonia (NH3) measurements from the IASI satellite

    Science.gov (United States)

    Van Damme, M.; Clarisse, L.; Dammers, E.; Liu, X.; Nowak, J. B.; Clerbaux, C.; Flechard, C. R.; Galy-Lacaux, C.; Xu, W.; Neuman, J. A.; Tang, Y. S.; Sutton, M. A.; Erisman, J. W.; Coheur, P. F.

    2015-03-01

    Limited availability of ammonia (NH3) observations is currently a barrier for effective monitoring of the nitrogen cycle. It prevents a full understanding of the atmospheric processes in which this trace gas is involved and therefore impedes determining its related budgets. Since the end of 2007, the Infrared Atmospheric Sounding Interferometer (IASI) satellite has been observing NH3 from space at a high spatio-temporal resolution. This valuable data set, already used by models, still needs validation. We present here a first attempt to validate IASI-NH3 measurements using existing independent ground-based and airborne data sets. The yearly distributions reveal similar patterns between ground-based and space-borne observations and highlight the scarcity of local NH3 measurements as well as their spatial heterogeneity and lack of representativity. By comparison with monthly resolved data sets in Europe, China and Africa, we show that IASI-NH3 observations are in fair agreement, but they are characterized by a smaller variation in concentrations. The use of hourly and airborne data sets to compare with IASI individual observations allows investigations of the impact of averaging as well as the representativity of independent observations for the satellite footprint. The importance of considering the latter and the added value of densely located airborne measurements at various altitudes to validate IASI-NH3 columns are discussed. Perspectives and guidelines for future validation work on NH3 satellite observations are presented.

  5. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence for their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators used for surgical skills training were validated at the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to have been tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. In this way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  6. Obtaining valid laboratory data in clinical trials conducted in resource diverse settings: lessons learned from a microbicide phase III clinical trial.

    Directory of Open Access Journals (Sweden)

    Tania Crucitti

    2010-10-01

    Full Text Available Over the last decade several phase III microbicide trials have been conducted in developing countries. However, laboratories in resource constrained settings do not always have the experience, infrastructure, and capacity to deliver laboratory data meeting the high standards of clinical trials. This paper describes the design and outcomes of a laboratory quality assurance program which was implemented during a phase III clinical trial evaluating the efficacy of the candidate microbicide Cellulose Sulfate 6% (CS) [1]. In order to assess the effectiveness of CS for HIV and STI prevention, a phase III clinical trial was conducted in 5 sites: 3 in Africa and 2 in India. The trial sponsor identified an International Central Reference Laboratory (ICRL), responsible for the design and management of a quality assurance program, which would guarantee the reliability of laboratory data. The ICRL provided advice on the tests, assessed local laboratories, organized trainings, conducted supervision visits, performed re-tests, and prepared control panels. Local laboratories were provided with control panels for HIV rapid tests and the Chlamydia trachomatis/Neisseria gonorrhoeae (CT/NG) amplification technique. Aliquots from the respective control panels were tested by local laboratories and compared with results obtained at the ICRL. Overall, good results were observed. However, discordances between the ICRL and site laboratories were identified for HIV and CT/NG results. One particular site experienced difficulties with HIV rapid testing shortly after study initiation. At all sites, DNA contamination was identified as a cause of invalid CT/NG results. Both problems were detected and solved in a timely manner. Through immediate feedback, guidance and repeated training of laboratory staff, additional inaccuracies were prevented. Quality control guidelines, when applied in field laboratories, ensured the reliability and validity of final study data. It is essential that sponsors

  7. Validation of a checklist to assess ward round performance in internal medicine

    DEFF Research Database (Denmark)

    Nørgaard, Kirsten; Ringsted, Charlotte; Dolmans, Diana

    2004-01-01

    BACKGROUND: Ward rounds are an essential responsibility for doctors in hospital settings. Tools for guiding and assessing trainees' performance of ward rounds are needed. A checklist was developed for that purpose for use with trainees in internal medicine. OBJECTIVE: To assess the content and construct validity of the task-specific checklist. METHODS: To determine content validity, a questionnaire was mailed to 295 internists. They were requested to give their opinion on the relevance of each item included on the checklist and to indicate the comprehensiveness of the checklist. To determine construct validity, an observer assessed 4 groups of doctors during performance of a complete ward round (n = 32). The nurse who accompanied the doctor on rounds made a global assessment of the performance. RESULTS: The response rate to the questionnaire was 80.7%. The respondents found that all 10 items...

  8. Influence of different process settings conditions on the accuracy of micro injection molding simulations: an experimental validation

    DEFF Research Database (Denmark)

    Tosello, Guido; Gava, Alberto; Hansen, Hans Nørgaard

    2009-01-01

    Currently available software packages exhibit poor accuracy when performing micro injection molding (µIM) simulations. However, with an appropriate set-up of the processing conditions, the quality of the results can be improved. The effects on the simulation results of different and alternative process conditions are investigated, namely the nominal injection speed, as well as the use of the experimentally measured cavity filling time and cavity injection pressure evolution as input data. In addition, the sensitivity of the results to the quality of the rheological data is analyzed. Simulated results are compared with experiments in terms of flow front position at the part and micro-feature levels, as well as cavity filling time measurements.

  9. Wide Angle Imaging Lidar (WAIL): Theory of Operation and Results from Cross-Platform Validation at the ARM Southern Great Plains Site

    Science.gov (United States)

    Polonsky, I. N.; Davis, A. B.; Love, S. P.

    2004-05-01

    WAIL was designed to determine physical and geometrical characteristics of optically thick clouds using the off-beam component of the lidar return that can be accurately modeled within the 3D photon diffusion approximation. The theory shows that the WAIL signal depends not only on the cloud optical characteristics (phase function, extinction and scattering coefficients) but also on the outer thickness of the cloud layer. This makes it possible to estimate the mean optical and geometrical thicknesses of the cloud. The comparison with Monte Carlo simulation demonstrates the high accuracy of the diffusion approximation for moderately to very dense clouds. During operation WAIL is able to collect a complete data set from a cloud every few minutes, with averaging over a horizontal scale of a kilometer or so. In order to validate WAIL's ability to deliver cloud properties, the LANL instrument was deployed as a part of the THickness from Off-beam Returns (THOR) validation IOP. The goal was to probe clouds above the SGP CART site at night in March 2002 from below (WAIL and ARM instruments) and from NASA's P3 aircraft (carrying THOR, the GSFC counterpart of WAIL) flying above the clouds. The permanent cloud instruments we used to compare with the results obtained from WAIL were ARM's laser ceilometer, micro-pulse lidar (MPL), millimeter-wavelength cloud radar (MMCR), and microwave radiometer (MWR). The comparison shows that, in spite of an unusually low cloud ceiling, an unfavorable observation condition for WAIL's present configuration, cloud properties obtained from the new instrument are in good agreement with their counterparts obtained by other instruments. So WAIL can duplicate, at least for single-layer clouds, the cloud products of the MWR and MMCR together. But WAIL does this with green laser light, which is far more representative than microwaves of photon transport processes at work in the climate system.

  10. CosmoQuest:Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  11. Building a Practically Useful Theory of Goal Setting and Task Motivation.

    Science.gov (United States)

    Locke, Edwin A.; Latham, Gary P.

    2002-01-01

    Summarizes 35 years of empirical research on goal-setting theory, describing core findings of the theory, mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. Explains the external validity and practical significance of goal setting theory,…

  12. Validation of Housing Standards Addressing Accessibility

    DEFF Research Database (Denmark)

    Helle, Tina

    2013-01-01

    The aim was to explore the use of an activity-based approach to determine the validity of a set of housing standards addressing accessibility. This included examination of the frequency and the extent of accessibility problems among older people with physical functional limitations who used ... participant groups were examined. Performing well-known kitchen activities was associated with accessibility problems for all three participant groups, in particular those using a wheelchair. The overall validity of the housing standards examined was poor. Observing older people interacting with realistic environments while performing real everyday activities seems to be an appropriate method for assessing accessibility problems.

  13. Internal construct validity of the Shirom-Melamed Burnout Questionnaire (SMBQ)

    OpenAIRE

    Lundgren-Nilsson Åsa; Jonsdottir Ingibjörg H; Pallant Julie; Ahlborg Gunnar

    2012-01-01

    Abstract Background Burnout is a mental condition defined as a result of continuous and long-term stress exposure, particularly related to psychosocial factors at work. This paper seeks to examine the psychometric properties of the Shirom-Melamed Burnout Questionnaire (SMBQ) to validate its use in a clinical setting. Methods Data from both a clinical (n = 319) and a general population (n = 319) sample of health care and social insurance workers were included in the study. Data were analysed using bo...

  14. Geophysical validation of MIPAS-ENVISAT operational ozone data

    Directory of Open Access Journals (Sweden)

    U. Cortesi

    2007-09-01

    , using common criteria for the selection of individual validation data sets, and similar methods for the comparisons. This enabled merging the individual results from a variety of independent reference measurements of proven quality (i.e. with a well characterized error budget) into an overall evaluation of MIPAS O3 data quality, having both statistical strength and the widest spatial and temporal coverage. Collocated measurements from ozone sondes and ground-based lidar and microwave radiometers of the Network for the Detection of Atmospheric Composition Change (NDACC) were selected to carry out comparisons with time series of MIPAS O3 partial columns and to identify groups of stations and time periods with a uniform pattern of ozone differences, which were subsequently used for a vertically resolved statistical analysis. The results of the comparison are classified according to synoptic and regional systems and to altitude intervals, showing a generally good agreement within the comparison error bars in the upper and middle stratosphere. Significant differences emerge in the lower stratosphere and are only partly explained by the larger contributions of horizontal and vertical smoothing differences and of collocation errors to the total uncertainty. Further results obtained from a purely statistical analysis of the same data set from NDACC ground-based lidar stations, as well as from additional ozone soundings at middle latitudes and from NDACC ground-based FTIR measurements, confirm the validity of MIPAS O3 profiles down to the lower stratosphere, with evidence of larger discrepancies at the lowest altitudes. The validation against O3 VMR profiles using collocated observations performed by other satellite sensors (SAGE II, POAM III, ODIN-SMR, ACE-FTS, HALOE, GOME) and ECMWF assimilated ozone fields leads to consistent results that are to a great extent compatible with those obtained from the comparison with ground-based measurements

  15. Uncertainties and understanding of experimental and theoretical results regarding reactions forming heavy and superheavy nuclei

    Science.gov (United States)

    Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.

    2018-02-01

    Experimental and theoretical results for the fusion probability P_CN of reactants in the entrance channel and the survival probability W_sur of the compound nucleus against fission during deexcitation, for compound nuclei formed in heavy-ion collisions, are discussed. The theoretical results for a set of nuclear reactions leading to the formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of P_CN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of assumptions and procedures for the analysis of experimental data, and also the limits of validity of theoretical results obtained by the use of phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.
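
    The two probabilities discussed here enter the schematic factorization of the evaporation-residue cross section commonly used in this field (the notation below is assumed, not quoted from the paper): the capture cross section in the entrance channel is weighted by the fusion and survival probabilities and summed over angular momentum:

      \sigma_{\mathrm{ER}}(E) \simeq \sum_{\ell}\, \sigma_{\mathrm{cap}}(E,\ell)\; P_{\mathrm{CN}}(E,\ell)\; W_{\mathrm{sur}}(E,\ell)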

  16. Validity of instruments to assess students' travel and pedestrian safety

    Directory of Open Access Journals (Sweden)

    Baranowski Tom

    2010-05-01

    Full Text Available Abstract Background: Safe Routes to School (SRTS) programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Methods: Fourth grade students completed a brief written survey on how they got to school that day with set responses. Test-retest reliability was obtained 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' report. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations between the research assistants' ratings and those of the Principal Investigator (PI), and intraclass correlations (ICC) across research assistant ratings. Results: The SRTS travel survey had high test-retest reliability (κ = 0.97, n = 96, p ... Conclusions: These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work.

  17. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  18. [MusiQol: international questionnaire investigating quality of life in multiple sclerosis: validation results for the German subpopulation in an international comparison].

    Science.gov (United States)

    Flachenecker, P; Vogel, U; Simeoni, M C; Auquier, P; Rieckmann, P

    2011-10-01

    The existing health-related quality of life questionnaires on multiple sclerosis (MS) only partially reflect the patient's point of view on the reduction of activities of daily living, and their development and validation were not performed in different languages. That is what prompted the development of the Multiple Sclerosis International Quality of Life (MusiQoL) questionnaire as an international multidimensional measurement instrument. This paper presents this new development and the results of the German subgroup versus the total international sample. A total of 1,992 MS patients from 15 countries, including 209 German patients, took part in the study between January 2004 and February 2005. The patients completed the MusiQoL at baseline and at 21±7 days, as well as a symptom-related checklist and the SF-36 short-form survey. Demographic, history and MS classification data were also collected. Reproducibility, sensitivity, and convergent and discriminant validity were analysed. Convergent and discriminant validity and reproducibility were satisfactory for all dimensions of the MusiQoL. The dimensional scores correlated moderately but significantly with the SF-36 scores, and showed a discriminant validity in terms of gender, socioeconomic status and health status that was more pronounced in the overall population than in the German subpopulation. The highest correlations were observed between the MusiQoL dimension of activities of daily living and the Expanded Disability Status Scale (EDSS). The results of this study confirm the validity and reliability of the MusiQoL as an instrument for measuring the quality of life of German and international MS patients.

  19. Validation of reference genes for RT-qPCR analysis in Herbaspirillum seropedicae.

    Science.gov (United States)

    Pessoa, Daniella Duarte Villarinho; Vidal, Marcia Soares; Baldani, José Ivo; Simoes-Araujo, Jean Luiz

    2016-08-01

    The RT-qPCR technique needs a validated set of reference genes to ensure the consistency of gene expression results. Expression stabilities of 9 genes from Herbaspirillum seropedicae, strain HRC54, grown with different carbon sources were calculated using geNorm and NormFinder, and the gene rpoA showed the best stability values. Copyright © 2016 Elsevier B.V. All rights reserved.
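
    For readers unfamiliar with the stability measures named above, geNorm's M value for a candidate gene is the average standard deviation of its log2 expression ratios against every other candidate across samples, with lower M meaning a more stable reference gene. The sketch below implements that definition on an invented expression matrix; it is not the authors' pipeline, and the gene names are placeholders.

```python
# Minimal geNorm-style stability ranking on an invented expression matrix.
import numpy as np

rng = np.random.default_rng(0)
# Rows = samples (growth conditions), columns = candidate reference genes.
genes = ["rpoA", "gyrA", "recA", "16S"]
expression = rng.lognormal(mean=5.0, sigma=0.3, size=(12, len(genes)))
log2_expr = np.log2(expression)

def genorm_m(log2_expr):
    """Average pairwise variation (stdev of log2 ratios) for each gene."""
    n_genes = log2_expr.shape[1]
    m_values = []
    for j in range(n_genes):
        pairwise_sd = [np.std(log2_expr[:, j] - log2_expr[:, k], ddof=1)
                       for k in range(n_genes) if k != j]
        m_values.append(np.mean(pairwise_sd))
    return np.array(m_values)

for gene, m in sorted(zip(genes, genorm_m(log2_expr)), key=lambda t: t[1]):
    print(f"{gene}: M = {m:.3f}")  # lower M = more stable reference gene
```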

  20. Uncertainty propagation applied to multi-scale thermal-hydraulics coupled codes. A step towards validation

    Energy Technology Data Exchange (ETDEWEB)

    Geffray, Clotaire Clement

    2017-03-20

    The work presented here constitutes an important step towards validating the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid metal cooled pool-type facilities. First, a set of methods suited to uncertainty and sensitivity analysis and to validation activities, with regard to the specific constraints of working with coupled and expensive-to-run codes, is proposed. Then, these methods are applied to the ATHLET - ANSYS CFX model of the TALL-3D facility. Several transients performed at the latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.

  1. A fuzzy set preference model for market share analysis

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
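
    As a toy illustration of the general idea, and not the authors' model, linguistic ratings collected on an ordinal scale can be encoded with triangular fuzzy membership functions and combined linearly into an overall preference score. The terms, attribute weights and ratings below are invented.

```python
# Toy sketch: linguistic ratings -> triangular fuzzy memberships -> overall preference.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Membership functions for linguistic preference terms on a 1..7 ordinal scale.
terms = {"poor": (1, 1, 4), "acceptable": (2, 4, 6), "excellent": (4, 7, 7)}

def fuzzify(rating):
    return {term: triangular(rating, *abc) for term, abc in terms.items()}

# Invented attribute ratings and weights for one respondent and one product.
ratings = {"price": 3, "quality": 6, "styling": 5}
weights = {"price": 0.5, "quality": 0.3, "styling": 0.2}

# Score each attribute from its term memberships (poor=0, acceptable=0.5, excellent=1),
# then combine linearly, mirroring a conjoint-style additive preference model.
term_value = {"poor": 0.0, "acceptable": 0.5, "excellent": 1.0}
overall = 0.0
for attr, rating in ratings.items():
    mu = fuzzify(rating)
    total = sum(mu.values()) or 1.0
    attr_score = sum(term_value[t] * m for t, m in mu.items()) / total
    overall += weights[attr] * attr_score

print(f"overall preference score = {overall:.2f}")  # 0 = worst, 1 = best
```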

  2. Three Dimensional (3D) Lumbar Vertebrae Data Set

    Directory of Open Access Journals (Sweden)

    H. Bennani

    2016-08-01

    Full Text Available 3D modelling can be used for a variety of purposes, including biomedical modelling for orthopaedic or anatomical applications. Low back pain is prevalent in society, yet few validated 3D models of the lumbar spine exist to facilitate assessment. We therefore created a 3D surface data set for lumbar vertebrae from human vertebrae. Models of 86 lumbar vertebrae were constructed using an inexpensive method involving image capture by digital camera and reconstruction of 3D models via an image-based technique. The reconstruction method was validated using a laser-based arm scanner and measurements derived from real vertebrae using electronic callipers. Results show a mean relative error of 5.2% between image-based models and real vertebrae, a mean relative error of 4.7% between image-based and arm-scanning models, and that 95% of vertex errors are less than 3.5 millimetres, with a median of 1.1 millimetres. The accuracy of the method indicates that the generated models could be useful for biomechanical modelling or 3D visualisation of the spine.

  3. A Cross-Cultural Approach to Speech-Act-Sets: The Case of Apologies

    Directory of Open Access Journals (Sweden)

    Válková Silvie

    2014-07-01

    Full Text Available The aim of this paper is to contribute to the validity of recent research into speech act theory by advocating the idea that with some of the traditional speech acts, their overt language manifestations that emerge from corpus data remind us of ritualised scenarios of speech-act-sets rather than single acts, with configurations of core and peripheral units reflecting the socio-cultural norms of the expectations and culture-bound values of a given language community. One of the prototypical manifestations of speech-act-sets, apologies, will be discussed to demonstrate a procedure which can be used to identify, analyse, describe and cross-culturally compare the validity of speech-act-set theory and provide evidence of its relevance for studying the English-Czech interface in this particular domain of human interaction.

  4. Furthering our Understanding of Land Surface Interactions using SVAT modelling: Results from SimSphere's Validation

    Science.gov (United States)

    North, Matt; Petropoulos, George; Ireland, Gareth; Rendal, Daisy; Carlson, Toby

    2015-04-01

    With current predicted climate change, there is an increased requirement to gain knowledge on the terrestrial biosphere for numerous agricultural, hydrological and meteorological applications. To this end, Soil Vegetation Atmospheric Transfer (SVAT) models are quickly becoming the preferred scientific tool to monitor, at fine temporal and spatial resolutions, detailed information on numerous parameters associated with Earth system interactions. Validation of any model is critical to assess its accuracy, generality and realism in distinctive ecosystems, and subsequently acts as an important step before its operational distribution. In this study, the SimSphere SVAT model has been validated at fifteen different sites of the FLUXNET network, where model performance was statistically evaluated by directly comparing the model predictions against in situ data for cloud-free days with a high energy balance closure. Specific focus is given to the model's ability to simulate parameters associated with the energy balance, namely Shortwave Incoming Solar Radiation (Rg), Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H), Air Temperature at 1.3m (Tair 1.3m) and Air Temperature at 50m (Tair 50m). Comparisons were performed for a number of distinctive ecosystem types and for 150 days in total, using in situ data from ground observational networks acquired from the year 2011 alone. The model's coherence with reality was evaluated on the basis of a series of statistical parameters including RMSD, R2, scatter, bias, MAE, the Nash index, slope and intercept. Results showed good to very good agreement between predicted and observed datasets, particularly so for LE, H, Tair 1.3m and Tair 50m, where mean error distribution values indicated excellent model performance. Due to systematic underestimation, poorer simulation accuracies were exhibited for Rg and Rnet, yet all values reported are still analogous to other validatory studies of its kind. Overall, the model
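
    The agreement statistics listed above (RMSD, R2, bias, MAE, Nash index, slope and intercept) are standard and can be computed directly from paired predicted and observed series. The sketch below only illustrates the formulas on invented values; it is not the study's evaluation code.

```python
# Minimal sketch of the agreement statistics used to compare model output
# with in situ observations (invented data, formulas only).
import numpy as np

observed = np.array([310.0, 425.0, 512.0, 198.0, 605.0, 470.0])   # e.g. Rnet, W/m^2
predicted = np.array([295.0, 440.0, 530.0, 210.0, 580.0, 455.0])

residuals = predicted - observed
bias = residuals.mean()
mae = np.abs(residuals).mean()
rmsd = np.sqrt((residuals ** 2).mean())

# Squared correlation and Nash-Sutcliffe efficiency of the 1:1 comparison.
ss_res = ((observed - predicted) ** 2).sum()
ss_tot = ((observed - observed.mean()) ** 2).sum()
nash = 1.0 - ss_res / ss_tot
r2 = np.corrcoef(observed, predicted)[0, 1] ** 2

# Least-squares slope and intercept of predicted vs. observed.
slope, intercept = np.polyfit(observed, predicted, 1)

print(f"bias={bias:.1f}  MAE={mae:.1f}  RMSD={rmsd:.1f}  "
      f"R2={r2:.3f}  Nash={nash:.3f}  slope={slope:.2f}  intercept={intercept:.1f}")
```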

  5. Selection and validation of a set of reliable reference genes for quantitative RT-PCR studies in the brain of the Cephalopod Mollusc Octopus vulgaris

    Directory of Open Access Journals (Sweden)

    Biffali Elio

    2009-07-01

    Full Text Available Abstract Background Quantitative real-time polymerase chain reaction (RT-qPCR) is valuable for studying the molecular events underlying physiological and behavioral phenomena. Normalization of real-time PCR data is critical for reliable mRNA quantification. Here we identify reference genes to be utilized in RT-qPCR experiments to normalize and monitor the expression of target genes in the brain of the cephalopod mollusc Octopus vulgaris, an invertebrate. Such an approach is novel for this taxon and of advantage in future experiments given the complexity of the behavioral repertoire of this species when compared with its relatively simple neural organization. Results We chose 16S and 18S rRNA, actB, EEF1A, tubA and ubi as candidate reference genes (housekeeping genes, HKG). The expression of 16S and 18S was highly variable and did not meet the requirements of candidate HKG. The expression of the other genes was almost stable and uniform among samples. We analyzed the expression of the HKG in two different sets of animals using tissues taken from the central nervous system (brain parts) and mantle (here considered as control tissue) by BestKeeper, geNorm and NormFinder. We found that HKG expression differed considerably with respect to brain area and octopus sample in an HKG-specific manner. However, when the mantle is treated as control tissue and the entire central nervous system is considered, NormFinder revealed tubA and ubi as the most suitable HKG pair. These two genes were utilized to evaluate the relative expression of the genes FoxP, creb, dat and TH in O. vulgaris. Conclusion We analyzed the expression profiles of some genes here identified for O. vulgaris by applying RT-qPCR analysis for the first time in cephalopods. We validated candidate reference genes and found the expression of ubi and tubA to be the most appropriate to evaluate the expression of target genes in the brain of different octopuses. Our results also underline the

  6. Funding for the 2nd IAEA technical meeting on fusion data processing, validation and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Greenwald, Martin

    2017-06-02

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with the view of extrapolation needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics of the meeting include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems & equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing & pattern recognition; experimental design and synthetic diagnostics; and data management.

  7. Protein-energy malnutrition in the rehabilitation setting: Evidence to improve identification.

    Science.gov (United States)

    Marshall, Skye

    2016-04-01

    Methods of identifying malnutrition in the rehabilitation setting require further examination so that patient outcomes may be improved. The purpose of this narrative review was to: (1) examine the defining characteristics of malnutrition, starvation, sarcopenia and cachexia; (2) review the validity of nutrition screening tools and nutrition assessment tools in the rehabilitation setting; and (3) determine the prevalence of malnutrition in the rehabilitation setting by geographical region and method of diagnosis. A narrative review was conducted drawing upon international literature. Starvation represents one form of malnutrition. Inadequate energy and protein intake is the critical factor in the aetiology of malnutrition, which is distinct from sarcopenia and cachexia. Eight nutrition screening tools and two nutrition assessment tools have been evaluated for criterion validity in the rehabilitation setting, and consideration must be given to the resources of the facility and the patient group in order to select the appropriate tool. The prevalence of malnutrition in the rehabilitation setting ranges from 14% to 65% worldwide, with the highest prevalence reported in rural, European and Australian settings. Malnutrition is highly prevalent in the rehabilitation setting, and consideration must be given to the patient group when determining the most appropriate method of identification so that resources may be used efficaciously and the chance of misdiagnosis minimised. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Therapeutic drug monitoring of nevirapine in resource-limited settings.

    NARCIS (Netherlands)

    L'homme, R.F.A.; Muro, E.P.; Droste, J.A.H.; Wolters, L.R.; Ewijk-Beneken Kolmer, E.W.J. van; Schimana, W.; Burger, D.M.

    2008-01-01

    BACKGROUND: We developed a simple and inexpensive thin-layer chromatography (TLC) assay for semiquantitative detection of saliva concentrations of nevirapine in resource-limited settings. The method was validated in an African target population. METHODS: Paired plasma and saliva nevirapine

  9. Development and external validation of a new PTA assessment scale

    Directory of Open Access Journals (Sweden)

    Jacobs Bram

    2012-08-01

    Full Text Available Abstract Background Post-traumatic amnesia (PTA) is a key symptom of traumatic brain injury (TBI). Accurate assessment of PTA is imperative in guiding clinical decision making. Our aim was to develop and externally validate a short, examiner-independent and practical PTA scale by selecting the most discriminative items from existing scales and using a three-word memory test. Methods Mild, moderate and severe TBI patients and control subjects were assessed in two separate cohorts, one for derivation and one for validation, using a questionnaire comprising items from existing PTA scales. We tested which individual items best discriminated between TBI patients and controls, as measured by sensitivity and specificity. We then created our PTA scale based on these results. This new scale was externally evaluated for its discriminative value using Receiver Operating Characteristic (ROC) analysis and compared to existing PTA scales. Results The derivation cohort included 126 TBI patients and 31 control subjects; the validation cohort consisted of 132 patients and 30 controls. A set of seven items was eventually selected to comprise the new PTA scale: age, name of hospital, time, day of week, month, mode of transport and recall of three words. This scale demonstrated adequate discriminative values compared to existing PTA scales on three consecutive administrations in the validation cohort. Conclusion We introduce a valid, practical and examiner-independent PTA scale, which is suitable for mild TBI patients at the emergency department and yet still valuable for the follow-up of more severely injured TBI patients.

  10. Learning from biomedical linked data to suggest valid pharmacogenes.

    Science.gov (United States)

    Dalleau, Kevin; Marzougui, Yassine; Da Silva, Sébastien; Ringot, Patrice; Ndiaye, Ndeye Coumba; Coulet, Adrien

    2017-04-20

    A standard task in pharmacogenomics research is identifying genes that may be involved in drug response variability, i.e., pharmacogenes. Because genomic experiments tend to generate many false positives, computational approaches based on the use of background knowledge have been proposed. Until now, only molecular networks or the biomedical literature were used, whereas many other resources are available. We propose here to consume a diverse and larger set of resources using linked data related either to genes, drugs or diseases. One of the advantages of linked data is that they are built on a standard framework that facilitates the joint use of various sources, and thus facilitates considering features of various origins. We propose a selection and linkage of data sources relevant to pharmacogenomics, including for example DisGeNET and ClinVar. We use machine learning to identify and prioritize pharmacogenes that are the most probably valid, considering the selected linked data. This identification relies on the classification of gene-drug pairs as either pharmacogenomically associated or not, and was experimented with two machine learning methods, random forest and graph kernel, whose results are compared in this article. We assembled a set of linked data relative to pharmacogenomics, comprising 2,610,793 triples, coming from six distinct resources. Learning from these data, random forest enables identifying valid pharmacogenes with an F-measure of 0.73 under 10-fold cross-validation, whereas the graph kernel achieves an F-measure of 0.81. A list of top candidates proposed by both approaches is provided, and how it was obtained is discussed.
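
    The classification set-up described, scoring gene-drug pairs with an F-measure under 10-fold cross-validation, can be sketched with scikit-learn as below. The feature matrix and labels are random placeholders standing in for features extracted from the linked data, and the graph-kernel variant is not shown.

```python
# Sketch: classify gene-drug pairs as pharmacogenomically associated or not,
# scoring a random forest with the F-measure under 10-fold cross-validation.
# X and y are random placeholders for linked-data-derived features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 30))          # 500 gene-drug pairs, 30 features each
y = rng.integers(0, 2, size=500)        # 1 = associated, 0 = not associated

clf = RandomForestClassifier(n_estimators=300, random_state=0)
f1_scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print(f"mean F-measure over 10 folds: {f1_scores.mean():.2f}")
```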

  11. Statistical validity of using ratio variables in human kinetics research.

    Science.gov (United States)

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It is recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.

  12. Health promoting behaviors in adolescence: validation of the Portuguese version of the Adolescent Lifestyle Profile.

    Science.gov (United States)

    Sousa, Pedro; Gaspar, Pedro; Fonseca, Helena; Hendricks, Constance; Murdaugh, Carolyn

    2015-01-01

    Reliable and valid instruments are essential for understanding health-promoting behaviors in adolescents. This study analyzed the psychometric properties of the Portuguese version of the Adolescent Lifestyle Profile (ALP). A linguistic and cultural translation of the ALP was conducted with 236 adolescents from two different settings: a community (n=141) and a clinical setting (n=95). Internal consistency reliability and confirmatory factor analysis were performed. Results showed an adequate fit to data, yielding a 36-item, seven-factor structure (CMIN/DF=1.667, CFI=0.807, GFI=0.822, RMR=0.051, RMSEA=0.053, PNFI=0.575, PCFI=0.731). The ALP presented a high internal consistency (α=0.866), with the subscales presenting moderate reliability values (from 0.492 to 0.747). The highest values were in Interpersonal Relations (3.059±0.523) and Positive Life Perspective (2.985±0.588). Some gender differences were found. Findings showed that adolescents from the clinic reported an overall healthier lifestyle than those from the community setting (2.598±0.379 vs. 2.504±0.346; t=1.976, p=0.049). The ALP Portuguese version is a psychometrically reliable, valid, and useful measurement instrument for assessing health-promoting lifestyles in adolescence. The ALP is cross-culturally validated and can decisively contribute to a better understanding of adolescent health promotion needs. Additional research is needed to evaluate the instrument's predictive validity, as well as its clinical relevance for practice and research. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
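
    Internal consistency figures such as the α = 0.866 reported above come from Cronbach's alpha, which needs only the respondent-by-item score matrix. The sketch below applies the standard formula to invented Likert responses, not to the ALP data.

```python
# Cronbach's alpha for a set of questionnaire items (invented Likert data).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
# 40 respondents answering 6 items on a 1-4 scale, with a shared trait component.
trait = rng.integers(1, 5, size=(40, 1))
responses = np.clip(trait + rng.integers(-1, 2, size=(40, 6)), 1, 4)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```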

  13. The development, validation and initial results of an integrated model for determining the environmental sustainability of biogas production pathways

    NARCIS (Netherlands)

    Pierie, Frank; van Someren, Christian; Benders, René M.J.; Bekkering, Jan; van Gemert, Wim; Moll, Henri C.

    2016-01-01

    Biogas produced through Anaerobic Digestion can be seen as a flexible and storable energy carrier. However, the environmental sustainability and efficiency of biogas production is not fully understood. Within this article the use, operation, structure, validation, and results of a model for the

  14. The Danish anal sphincter rupture questionnaire: Validity and reliability

    DEFF Research Database (Denmark)

    Due, Ulla; Ottesen, Marianne

    2008-01-01

    Objective. To revise, validate and test for reliability an anal sphincter rupture questionnaire in relation to construct, content and face validity. Setting and background. Since 1996 women with anal sphincter rupture (ASR) at one of the public university hospitals in Copenhagen, Denmark have been...... main questions but one. Two questions needed further explanation. Seven women made minor errors. Conclusion. The validated Danish questionnaire has a good construct, content and face validity. It is a well accepted, reliable, simple and clinically relevant screening tool. It reveals physical problems...... offered pelvic floor muscle examination and instruction by a specialist physiotherapist. In relation to that, a non-validated questionnaire about anal and urinary incontinence was to be answered six months after childbirth. Method. The original questionnaire was revised and a pilot test was performed...

  15. K-means clustering versus validation measures: a data-distribution perspective.

    Science.gov (United States)

    Xiong, Hui; Wu, Junjie; Chen, Jian

    2009-04-01

    K-means is a well-known and widely used partitional clustering method. While there are considerable research efforts to characterize the key features of the K-means clustering algorithm, further investigation is needed to understand how data distributions can have impact on the performance of K-means clustering. To that end, in this paper, we provide a formal and organized study of the effect of skewed data distributions on K-means clustering. Along this line, we first formally illustrate that K-means tends to produce clusters of relatively uniform size, even if input data have varied "true" cluster sizes. In addition, we show that some clustering validation measures, such as the entropy measure, may not capture this uniform effect and provide misleading information on the clustering performance. Viewed in this light, we provide the coefficient of variation (CV) as a necessary criterion to validate the clustering results. Our findings reveal that K-means tends to produce clusters in which the variations of cluster sizes, as measured by CV, are in a range of about 0.3-1.0. Specifically, for data sets with large variation in "true" cluster sizes (e.g., CV > 1.0), K-means reduces the variation in resultant cluster sizes to less than 1.0. In contrast, for data sets with small variation in "true" cluster sizes (e.g., CV < 0.3), K-means increases the variation in resultant cluster sizes to greater than 0.3. In other words, in both cases K-means produces clustering results that deviate from the "true" cluster distributions.
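
    The coefficient of variation of the resulting cluster sizes is simple to compute alongside any K-means run. The sketch below, using scikit-learn on synthetic blobs with deliberately unequal sizes, illustrates the check proposed above rather than the paper's experiments.

```python
# Sketch: run K-means and report the coefficient of variation (CV) of the
# resulting cluster sizes, as a sanity check on the uniform-size tendency.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with deliberately skewed "true" cluster sizes (CV of true sizes > 1).
X, _ = make_blobs(n_samples=[600, 80, 20], centers=None, cluster_std=1.0, random_state=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
sizes = np.bincount(labels)
cv = sizes.std(ddof=1) / sizes.mean()
print(f"resultant cluster sizes: {sizes.tolist()}, CV = {cv:.2f}")
```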

  16. Achieving external validity in home advantage research: generalizing crowd noise effects.

    Science.gov (United States)

    Myers, Tony D

    2014-01-01

    Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity, with decision-making in a laboratory setting bearing little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research. It then discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment, conducted in a real tournament setting, offer a level of confirmation of the findings of laboratory studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed.

  17. CFD Validation Studies for Hypersonic Flow Prediction

    Science.gov (United States)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serve as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involves Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with a 30-degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp double cone with a fore-cone angle of 25 degrees and an aft-cone angle of 55 degrees. Both sets of experiments involve 30-degree compressions. The location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement. The numerical simulations also show a significant influence of Reynolds number on the extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  18. Test re-test reliability and construct validity of the star-track test of manual dexterity

    DEFF Research Database (Denmark)

    Kildebro, Niels; Amirian, Ilda; Gögenur, Ismail

    2015-01-01

    Objectives. We wished to determine test re-test reliability and construct validity of the star-track test of manual dexterity. Design. Test re-test reliability was examined in a controlled study. Construct validity was tested in a blinded randomized crossover study. Setting. The study was performed...... at a university hospital in Denmark. Participants. A total of 11 subjects for test re-test and 20 subjects for the construct validity study were included. All subjects were healthy volunteers. Intervention. The test re-test trial had two measurements with 2 days pause in between. The interventions...... in the construct validity study included baseline measurement, intervention 1: fatigue, intervention 2: stress, and intervention 3: fatigue and stress. There was a 2 day pause between each intervention. Main outcome measure. An integrated measure of completion time and number of errors was used. Results. All...

  19. Use of Crowdsourcing to Assess the Ecological Validity of Perceptual-Training Paradigms in Dysarthria.

    Science.gov (United States)

    Lansford, Kaitlin L; Borrie, Stephanie A; Bystricky, Lukas

    2016-05-01

    It has been documented in laboratory settings that familiarizing listeners with dysarthric speech improves intelligibility of that speech. If these findings can be replicated in real-world settings, the ability to improve communicative function by focusing on communication partners has major implications for extending clinical practice in dysarthria rehabilitation. An important step toward development of a listener-targeted treatment approach requires establishment of its ecological validity. To this end, the present study leveraged the mechanism of crowdsourcing to determine whether perceptual-training benefits achieved by listeners in the laboratory could be elicited in an at-home computer-based scenario. Perceptual-training data (i.e., intelligibility scores from a posttraining transcription task) were collected from listeners in 2 settings: the laboratory and the crowdsourcing website Amazon Mechanical Turk. Consistent with previous findings, results revealed a main effect of training condition (training vs. control) on intelligibility scores. There was, however, no effect of training setting (Mechanical Turk vs. laboratory). Thus, the perceptual benefit achieved via Mechanical Turk was comparable to that achieved in the laboratory. This study provides evidence regarding the ecological validity of perceptual-training paradigms designed to improve intelligibility of dysarthric speech, thereby supporting their continued advancement as a listener-targeted treatment option.

  20. Temporal and Geographic variation in the validity and internal consistency of the Nursing Home Resident Assessment Minimum Data Set 2.0.

    Science.gov (United States)

    Mor, Vincent; Intrator, Orna; Unruh, Mark Aaron; Cai, Shubing

    2011-04-15

    The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007. We used person-level linked MDS and Medicare denominator and all institutional claim files, including inpatient (hospital and skilled nursing facilities), for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes, and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS-based multi-item scales and examined the predictive validity of an MDS-based severity measure with respect to one-year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission. Each year there were some 800,000 new admissions directly from hospital to U.S. nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record and only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of
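
    Sensitivity and positive predictive value of an MDS diagnosis against a claims-based reference reduce to counts in a two-by-two table. The sketch below shows the two formulas on invented counts; it is not derived from the study data.

```python
# Sensitivity and positive predictive value (PPV) of a diagnosis flag on the MDS,
# taking the Medicare hospital claim as the reference standard (invented counts).
true_positives = 840    # flagged on the MDS and present on the claim
false_positives = 160   # flagged on the MDS but absent from the claim
false_negatives = 210   # present on the claim but missed by the MDS

sensitivity = true_positives / (true_positives + false_negatives)
ppv = true_positives / (true_positives + false_positives)
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```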

  1. A validated set of tool pictures with matched objects and non-objects for laterality research.

    Science.gov (United States)

    Verma, Ark; Brysbaert, Marc

    2015-01-01

    Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.

  2. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including the 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
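
    The core operation in any level set scheme is advecting a signed-distance function with the flow velocity. The sketch below performs a first-order upwind update of a circular interface in a uniform velocity field on a fixed Cartesian grid, as a minimal stand-in for the adaptive, collocated scheme described above.

```python
# Minimal level set advection sketch: move a circular interface (phi = 0 contour)
# through a uniform velocity field with a first-order upwind scheme.
import numpy as np

n = 128
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

# Signed distance to a circle of radius 0.15 centred at (0.3, 0.5).
phi = np.sqrt((X - 0.3) ** 2 + (Y - 0.5) ** 2) - 0.15

u, v = 0.5, 0.0                       # uniform velocity field
dt = 0.5 * dx / max(abs(u), abs(v))   # CFL-limited time step

def upwind_gradient(phi, dx, vel, axis):
    """One-sided difference chosen according to the sign of the velocity."""
    backward = (phi - np.roll(phi, 1, axis=axis)) / dx
    forward = (np.roll(phi, -1, axis=axis) - phi) / dx
    return backward if vel > 0 else forward

for _ in range(100):
    phi_x = upwind_gradient(phi, dx, u, axis=0)
    phi_y = upwind_gradient(phi, dx, v, axis=1)
    phi = phi - dt * (u * phi_x + v * phi_y)

# The zero contour has been transported downstream by roughly 100 * dt * u.
print("grid points near the interface at x =", X[np.abs(phi) < dx][::20])
```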

  3. Is the Job Satisfaction Survey a good tool to measure job satisfaction amongst health workers in Nepal? Results of a validation analysis.

    Science.gov (United States)

    Batura, Neha; Skordis-Worrall, Jolene; Thapa, Rita; Basnyat, Regina; Morrison, Joanna

    2016-07-27

    Job satisfaction is an important predictor of an individual's intention to leave the workplace. It is increasingly being used to consider the retention of health workers in low-income countries. However, the determinants of job satisfaction vary in different contexts, and it is important to use measurement methods that are contextually appropriate. We identified a measurement tool developed by Paul Spector, and used mixed methods to assess its validity and reliability in measuring job satisfaction among maternal and newborn health workers (MNHWs) in government facilities in rural Nepal. We administered the tool to 137 MNHWs and collected qualitative data from 78 MNHWs, and district and central level stakeholders to explore definitions of job satisfaction and factors that affected it. We calculated a job satisfaction index for all MNHWs using quantitative data and tested for validity, reliability and sensitivity. We conducted qualitative content analysis and compared the job satisfaction indices with qualitative data. Results from the internal consistency tests offer encouraging evidence of the validity, reliability and sensitivity of the tool. Overall, the job satisfaction indices reflected the qualitative data. The tool was able to distinguish levels of job satisfaction among MNHWs. However, the work environment and promotion dimensions of the tool did not adequately reflect local conditions. Further, community fit was found to impact job satisfaction but was not captured by the tool. The relatively high incidence of missing responses may suggest that responding to some statements was perceived as risky. Our findings indicate that the adapted job satisfaction survey was able to measure job satisfaction in Nepal. However, it did not include key contextual factors affecting job satisfaction of MNHWs, and as such may have been less sensitive than a more inclusive measure. The findings suggest that this tool can be used in similar settings and populations, with the

  4. Pooled results from five validation studies of dietary self-report instruments using recovery biomarkers for potassium and sodium intake

    Science.gov (United States)

    We have pooled data from five large validation studies of dietary self-report instruments that used recovery biomarkers as referents to assess food frequency questionnaires (FFQs) and 24-hour recalls. We reported on total potassium and sodium intakes, their densities, and their ratio. Results were...

  5. Adding support to cross-cultural emotional assessment: Validation of the International Affective Picture System in a Chilean sample

    Directory of Open Access Journals (Sweden)

    Rocío Mayol Troncoso

    2011-05-01

    Full Text Available The present study aimed to obtain a valid set of images of the International Affective Picture System (Lang, Bradley, & Cuthbert, 2005), a widely used instrument in emotion research, in a Chilean sample, as well as to compare these results with those obtained in the US study in order to contribute to its cross-cultural validation. A sample of 135 college students rated 188 pictures on the valence and arousal dimensions according to standard instructions. The results showed the expected organization of affectivity, with the main variations between sexes in valence judgments, and differences between countries in the arousal dimension. It is concluded that the Chilean adaptation of the IAPS is consistent with previous evidence, adding support to its cross-cultural validity.

  6. Study of Validity Criteria for Radionuclide-Analysis of Low- and Intermediate-Level Radioactive Waste

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Uk; Baek, Hyun Suk; Jeong, Sung Yeop [Sungwoo E and T Co., Hanam (Korea, Republic of); Shin, Seung Kyu [Korea Radioactive waste Management Corporation, Gyeongju (Korea, Republic of)

    2013-05-15

    A literature survey on the deviation of the measuring equipment and a statistical analysis of the measured data of domestic LILW were performed in order to set quantitative evaluation criteria for comparing the results of the tests and inspections. This study provided an opportunity to increase the credibility and reassure the validity of the Waste Acceptance Criteria (WAC). Through statistical analysis of the measurement deviation, comparing repository inspection with generator self-testing, quantitative acceptance criteria were set depending on the specific activity of Co-60 and Cs-137. The acceptance criterion is the relative bias of the KRMC result to the generator result, set from a low of 50% to a high of 150% for Co-60 and from a low of 30% to 250% for Cs-137. Because the statistical analysis results of the waste drum assay do not sufficiently represent the whole range specified in the WAC, additional research that includes characteristic analysis of LILW generated at other sites should be done.

  7. Validation of Power Output for the WIND Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    King, J.; Clifton, A.; Hodge, B. M.

    2014-09-01

    Renewable energy integration studies require wind data sets of high quality with realistic representations of the variability, ramping characteristics, and forecast performance for current wind power plants. The Wind Integration National Data Set (WIND) Toolkit is meant to be an update for and expansion of the original data sets created for the weather years from 2004 through 2006 during the Western Wind and Solar Integration Study and the Eastern Wind Integration Study. The WIND Toolkit expands these data sets to include the entire continental United States, increasing the total number of sites represented, and it includes the weather years from 2007 through 2012. In addition, the WIND Toolkit has a finer resolution for both the temporal and geographic dimensions. Three separate data sets will be created: a meteorological data set, a wind power data set, and a forecast data set. This report describes the validation of the wind power data set.

  8. Virtual facial expressions of emotions: An initial concomitant and construct validity study.

    Directory of Open Access Journals (Sweden)

    Christian eJoyal

    2014-09-01

    Full Text Available Abstract. Background. Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities, both for fundamental and clinical research. For instance, virtual faces allow real-time human-computer retroactions between physiological measures and the virtual agent. Objectives. The goal of this study was to initially assess concomitant and construct validity of a newly developed set of virtual faces expressing 6 fundamental emotions (happiness, surprise, anger, sadness, fear, or disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eyes and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Results. Emotions expressed by each set of stimuli were similarly recognized, both by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times in eye regions from male and female participants. Conclusion. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-Computer Interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.

  9. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    Science.gov (United States)

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits
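
    Cross-validation studies of genomic prediction typically report accuracy as the correlation between predicted and observed phenotypes in the validation folds, and bias as the regression slope of observed on predicted. The ridge-regression sketch below on simulated marker data illustrates those two statistics; it does not reproduce any of the reviewed methods in particular.

```python
# Sketch: cross-validated accuracy and bias of a ridge-regression genomic predictor
# on simulated marker genotypes and phenotypes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n_individuals, n_markers, n_qtl = 400, 2000, 50

genotypes = rng.integers(0, 3, size=(n_individuals, n_markers)).astype(float)  # 0/1/2 allele counts
qtl_effects = np.zeros(n_markers)
qtl_effects[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(size=n_qtl)
genetic_values = genotypes @ qtl_effects
phenotypes = genetic_values + rng.normal(scale=genetic_values.std(), size=n_individuals)  # h2 ~ 0.5

predicted = np.zeros(n_individuals)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(genotypes):
    model = Ridge(alpha=1000.0).fit(genotypes[train], phenotypes[train])
    predicted[test] = model.predict(genotypes[test])

accuracy = np.corrcoef(predicted, phenotypes)[0, 1]
bias = np.polyfit(predicted, phenotypes, 1)[0]   # slope of observed on predicted; 1 = unbiased
print(f"accuracy = {accuracy:.2f}, bias (slope) = {bias:.2f}")
```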

  10. A Novel Control Algorithm Expressions Set for not Negligible Resistive Parameters PM Brushless AC Motors

    Directory of Open Access Journals (Sweden)

    Renato RIZZO

    2012-08-01

    Full Text Available This paper deals with Permanent Magnet Brushless Motors. In particular, a new set of control algorithm expressions is proposed that takes into account the resistive parameters of the motor, unlike simplified models of this type of motor in which these parameters are usually neglected. The control is set up and an analysis of the performance is reported in the paper, where the new expressions are validated with reference to a particularly compact motor prototype intended for application in tram propulsion drives. The results are presented in the last part of the paper.

  11. Validation of dispersion model of RTARC-DSS based on "KIT" field experiments

    International Nuclear Information System (INIS)

    Duran, J.

    2000-01-01

    The aim of this study is to present the performance of the Gaussian dispersion model RTARC-DSS (Real Time Accident Release Consequences - Decision Support System) on the 'KIT' field experiments. The Model Validation Kit is a collection of three experimental data sets from the Kincaid, Copenhagen and Lillestrom campaigns, and the supplementary Indianapolis experimental campaign, accompanied by software for model evaluation. The validation of the model has been performed on the basis of the maximum arc-wise concentrations, using the bootstrap resampling procedure to estimate the variation of the model residuals. Validation was performed for short-range distances (about 1-10 km; maximum for the Kincaid data set: 50 km from the source). The model evaluation procedure and the amount of relative over- or under-prediction of the model are discussed. (author)
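
    A bootstrap over the paired maximum arc-wise concentrations yields a confidence interval for an over- or under-prediction statistic such as the fractional bias. The sketch below resamples invented observed and modelled maxima and is only meant to show the resampling step named above.

```python
# Sketch: bootstrap confidence interval for the fractional bias of maximum
# arc-wise concentrations (invented observed and modelled values).
import numpy as np

rng = np.random.default_rng(3)
observed = rng.lognormal(mean=1.0, sigma=0.5, size=60)              # observed arc maxima
modelled = observed * rng.lognormal(mean=0.1, sigma=0.4, size=60)   # model over-predicts slightly

def fractional_bias(obs, mod):
    return 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(observed), size=len(observed))   # resample paired maxima
    boot.append(fractional_bias(observed[idx], modelled[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"fractional bias = {fractional_bias(observed, modelled):.2f}, "
      f"95% bootstrap CI = [{low:.2f}, {high:.2f}]")
```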

  12. Validation of the German version of the Ford Insomnia Response to Stress Test.

    Science.gov (United States)

    Dieck, Arne; Helbig, Susanne; Drake, Christopher L; Backhaus, Jutta

    2018-06-01

    The purpose of this study was to assess the psychometric properties of a German version of the Ford Insomnia Response to Stress Test in groups with and without sleep problems. Three studies were analysed. Data set 1 was based on an initial screening for a sleep training program (n = 393), data set 2 was based on a study to test the test-retest reliability of the Ford Insomnia Response to Stress Test (n = 284) and data set 3 was based on a study to examine the influence of competitive sport on sleep (n = 37). Data sets 1 and 2 were used to test internal consistency, factor structure, convergent validity, discriminant validity and test-retest reliability of the Ford Insomnia Response to Stress Test. Content validity was tested using data set 3. Cronbach's alpha of the Ford Insomnia Response to Stress Test was good (α = 0.80) and test-retest reliability was satisfactory (r = 0.72). Overall, the one-factor model showed the best fit. Furthermore, significant positive correlations between the Ford Insomnia Response to Stress Test and impaired sleep quality, depression and stress reactivity were in line with the expectations regarding convergent validity. Subjects with sleep problems had significantly higher scores on the Ford Insomnia Response to Stress Test than subjects without sleep problems (P …). Subjects with high scores on the Ford Insomnia Response to Stress Test had significantly lower sleep quality (P = 0.01), demonstrating that vulnerability for stress-induced sleep disturbances accompanies poorer sleep quality in stressful episodes. The findings show that the German version of the Ford Insomnia Response to Stress Test is a reliable and valid questionnaire to assess the vulnerability to stress-induced sleep disturbances. © 2017 European Sleep Research Society.

  13. Set size influences the relationship between ANS acuity and math performance: a result of different strategies?

    Science.gov (United States)

    Dietrich, Julia Felicitas; Nuerk, Hans-Christoph; Klein, Elise; Moeller, Korbinian; Huber, Stefan

    2017-08-29

    Previous research has proposed that the approximate number system (ANS) constitutes a building block for later mathematical abilities. Therefore, numerous studies investigated the relationship between ANS acuity and mathematical performance, but results are inconsistent. Properties of the experimental design have been discussed as a potential explanation of these inconsistencies. In the present study, we investigated the influence of set size and presentation duration on the association between non-symbolic magnitude comparison and math performance. Moreover, we focused on strategies reported as an explanation for these inconsistencies. In particular, we employed a non-symbolic magnitude comparison task and asked participants how they solved the task. We observed that set size was a significant moderator of the relationship between non-symbolic magnitude comparison and math performance, whereas presentation duration of the stimuli did not moderate this relationship. This supports the notion that specific design characteristics contribute to the inconsistent results. Moreover, participants reported different strategies including numerosity-based, visual, counting, calculation-based, and subitizing strategies. Frequencies of these strategies differed between different set sizes and presentation durations. However, we found no specific strategy, which alone predicted arithmetic performance, but when considering the frequency of all reported strategies, arithmetic performance could be predicted. Visual strategies made the largest contribution to this prediction. To conclude, the present findings suggest that different design characteristics contribute to the inconsistent findings regarding the relationship between non-symbolic magnitude comparison and mathematical performance by inducing different strategies and additional processes.

  14. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network … are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch …, as for example a transmission line. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized.

  15. A systematic review and meta-analysis of the criterion validity of nutrition assessment tools for diagnosing protein-energy malnutrition in the older community setting (the MACRo study).

    Science.gov (United States)

    Marshall, Skye; Craven, Dana; Kelly, Jaimon; Isenring, Elizabeth

    2017-10-12

    Malnutrition is a significant barrier to healthy and independent ageing in older adults who live in their own homes, and accurate diagnosis is a key step in managing the condition. However, there has not been sufficient systematic review or pooling of existing data regarding malnutrition diagnosis in the geriatric community setting. The current paper was conducted as part of the MACRo (Malnutrition in the Ageing Community Review) Study and seeks to determine the criterion (concurrent and predictive) validity and reliability of nutrition assessment tools in making a diagnosis of protein-energy malnutrition in the general older adult community. A systematic literature review was undertaken using six electronic databases in September 2016. Studies in any language were included which measured malnutrition via a nutrition assessment tool in adults ≥65 years living in their own homes. Data relating to the predictive validity of tools were analysed via meta-analyses. GRADE was used to evaluate the body of evidence. There were 6412 records identified, of which 104 potentially eligible records were screened via full text. Eight papers were included; two which evaluated the concurrent validity of the Mini Nutritional Assessment (MNA) and Subjective Global Assessment (SGA) and six which evaluated the predictive validity of the MNA. The quality of the body of evidence for the concurrent validity of both the MNA and SGA was very low. The quality of the body of evidence for the predictive validity of the MNA in detecting risk of death was moderate (RR: 1.92 [95% CI: 1.55-2.39]; P < 0.00001; n = 2013 participants; n = 4 studies; I²: 0%). The quality of the body of evidence for the predictive validity of the MNA in detecting risk of poor physical function was very low (SMD: 1.02 [95% CI: 0.24-1.80]; P = 0.01; n = 4046 participants; n = 3 studies; I²: 89%). Due to the small number of studies identified and no evaluation of the predictive validity of tools other than
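
    The pooled risk ratio and I² quoted above follow from standard inverse-variance pooling of log risk ratios. The sketch below applies a fixed-effect version of that calculation to invented study-level estimates, not to the MNA studies pooled in the review.

```python
# Fixed-effect inverse-variance meta-analysis of risk ratios (invented studies).
import numpy as np

# Study-level risk ratios and 95% confidence intervals (hypothetical values).
rr = np.array([1.8, 2.1, 1.6, 2.4])
ci_low = np.array([1.2, 1.4, 1.0, 1.5])
ci_high = np.array([2.7, 3.2, 2.6, 3.8])

log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE of log RR from the CI width
w = 1.0 / se ** 2                                       # inverse-variance weights

pooled_log_rr = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
q = np.sum(w * (log_rr - pooled_log_rr) ** 2)           # Cochran's Q
i_squared = 0.0 if q == 0 else max(0.0, (q - (len(rr) - 1)) / q) * 100.0

print(f"pooled RR = {np.exp(pooled_log_rr):.2f} "
      f"[{np.exp(pooled_log_rr - 1.96 * pooled_se):.2f}, "
      f"{np.exp(pooled_log_rr + 1.96 * pooled_se):.2f}], I^2 = {i_squared:.0f}%")
```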

  16. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  17. Validation study of the early onset schizophrenia diagnosis in the Danish Psychiatric Central Research Register

    DEFF Research Database (Denmark)

    Vernal, Ditte Lammers; Stenstrøm, Anne Dorte; Staal, Nina

    2018-01-01

    on classification. Compared to diagnoses made in outpatient settings, EOS diagnoses during hospitalizations were more likely to be valid and had fewer registration errors. Diagnosed in inpatient settings, EOS diagnoses are reliable and valid for register-based research. Schizophrenia diagnosed in children...... and adolescents in outpatient settings were found to have a high number of false-positives, both due to registration errors and diagnostic practice. Utilizing this knowledge, it is possible to reduce the number of false-positives in register-based research of EOS....

  18. Development and Validation of the Minnesota Borderline Personality Disorder Scale

    Science.gov (United States)

    Bornovalova, Marina A.; Hicks, Brian M.; Patrick, Christopher J.; Iacono, William G.; McGue, Matt

    2011-01-01

    Although large epidemiological data sets can inform research on the etiology and development of borderline personality disorder (BPD), they rarely include BPD measures. In some cases, however, proxy measures can be constructed using instruments already in these data sets. In this study, the authors developed and validated a self-report measure of…

  19. Am I getting an accurate picture: a tool to assess clinical handover in remote settings?

    Directory of Open Access Journals (Sweden)

    Malcolm Moore

    2017-11-01

    Full Text Available Abstract Background Good clinical handover is critical to safe medical care. Little research has investigated handover in rural settings. In a remote setting where nurses and medical students give telephone handover to an aeromedical retrieval service, we developed a tool by which the receiving clinician might assess the handover, and investigated factors impacting on the reliability and validity of that assessment. Methods Researchers consulted with clinicians to develop an assessment tool, based on the ISBAR handover framework, combining validity evidence and the existing literature. The tool was applied ‘live’ by receiving clinicians and from recorded handovers by academic assessors. The tool’s performance was analysed using generalisability theory. Receiving clinicians and assessors provided feedback. Results Reliability for assessing a call was good (G = 0.73 with 4 assessments). The scale had a single factor structure with good internal consistency (Cronbach’s alpha = 0.8). The group mean for the global score for nurses and students was 2.30 (SD 0.85) out of a maximum 3.0, with no difference between these sub-groups. Conclusions We have developed and evaluated a tool to assess high-stakes handover in a remote setting. It showed good reliability and was easy for working clinicians to use. Further investigation and use is warranted beyond this setting.
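
The abstract reports internal consistency (Cronbach's alpha = 0.8) for the handover assessment tool. A minimal sketch of how such an alpha is computed from an item-score matrix; the ratings below are invented, not the study data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ISBAR-style item ratings (rows: handover calls, columns: items).
ratings = np.array([
    [2, 3, 2, 3, 2],
    [1, 2, 1, 2, 1],
    [3, 3, 3, 2, 3],
    [2, 2, 2, 2, 2],
    [1, 1, 2, 1, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```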

  20. k0IAEA software validation at CDTN/CNEN, Brazil, using certified reference materials

    International Nuclear Information System (INIS)

    Menezes, M.A.B.C.; Jacimovic, R.

    2007-01-01

    The IAEA distributed the k0-IAEA software package to several laboratories. The Laboratory for Neutron Activation Analysis at CDTN/CNEN (Centro de Desenvolvimento da Tecnologia Nuclear/Comissao Nacional de Energia Nuclear), Belo Horizonte, Brazil, acquired the k0-IAEA software package during the Workshop on Nuclear Data for Activation Analysis, 2005, held at the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy. This paper describes the procedure carried out at the local laboratory to validate the k0-IAEA software package. After the software was set up according to the guidelines, the procedure followed at CDTN/CNEN to validate the k0-IAEA software was to analyse several reference materials. The overall results indicated that the k0-IAEA software is working properly. (author)
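
Validation against certified reference materials is usually summarised by comparing measured and certified concentrations within their combined uncertainties. A hedged sketch using zeta scores; the element values are hypothetical and the zeta-score criterion is an assumption, not necessarily the statistic applied at CDTN/CNEN.

```python
import numpy as np

def zeta_scores(measured, u_measured, certified, u_certified):
    """Zeta-score comparison of measured vs. certified concentrations,
    combining laboratory and certificate uncertainties in quadrature.
    |zeta| <= 2 is commonly read as agreement within uncertainties."""
    measured, certified = np.asarray(measured), np.asarray(certified)
    u_comb = np.sqrt(np.asarray(u_measured) ** 2 + np.asarray(u_certified) ** 2)
    return (measured - certified) / u_comb

# Hypothetical element concentrations in mg/kg (not the CDTN/CNEN results).
elements  = ["As", "Co", "Cr", "Fe", "Zn"]
measured  = [5.2, 10.8, 40.5, 1020.0, 31.0]
u_meas    = [0.4,  0.6,  2.5,   45.0,  2.0]
certified = [5.0, 10.3, 39.0, 1000.0, 29.5]
u_cert    = [0.3,  0.4,  2.0,   30.0,  1.5]

for el, z in zip(elements, zeta_scores(measured, u_meas, certified, u_cert)):
    flag = "OK" if abs(z) <= 2 else "check"
    print(f"{el}: zeta = {z:+.2f} ({flag})")
```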

  1. Empirical model development and validation with dynamic learning in the recurrent multilayer perceptron

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural network-based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted.

  2. Validation of a Predictive Model for Survival in Metastatic Cancer Patients Attending an Outpatient Palliative Radiotherapy Clinic

    International Nuclear Information System (INIS)

    Chow, Edward; Abdolell, Mohamed; Panzarella, Tony; Harris, Kristin; Bezjak, Andrea; Warde, Padraig; Tannock, Ian

    2009-01-01

    Purpose: To validate a predictive model for survival of patients attending a palliative radiotherapy clinic. Methods and Materials: We described previously a model that had good predictive value for survival of patients referred during 1999 (1). The six prognostic factors (primary cancer site, site of metastases, Karnofsky performance score, and the fatigue, appetite and shortness-of-breath items from the Edmonton Symptom Assessment Scale) identified in this training set were extracted from the prospective database for the year 2000. We generated a partial score whereby each prognostic factor was assigned a value proportional to its prognostic weight. The sum of the partial scores for each patient was used to construct a survival prediction score (SPS). Patients were also grouped according to the number of these risk factors (NRF) that they possessed. The probability of survival at 3, 6, and 12 months was generated. The models were evaluated for their ability to predict survival in this validation set with appropriate statistical tests. Results: The median survival and survival probabilities of the training and validation sets were similar when separated into three groups using both SPS and NRF methods. There was no statistical difference in the performance of the SPS and NRF methods in survival prediction. Conclusion: Both the SPS and NRF models for predicting survival in patients referred for palliative radiotherapy have been validated. The NRF model is preferred because it is simpler and avoids the need to remember the weightings among the prognostic factors
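
A minimal sketch of the two scoring schemes compared above: a weighted survival prediction score (SPS) versus a simple count of risk factors (NRF). The weights and factor names are placeholders, not the published model's values.

```python
# Hypothetical prognostic weights -- the published model's actual partial
# scores are not reproduced here.
WEIGHTS = {
    "unfavourable_primary_site": 3,
    "metastases_other_than_bone": 2,
    "low_karnofsky_score": 4,
    "severe_fatigue": 1,
    "poor_appetite": 1,
    "shortness_of_breath": 1,
}

def survival_prediction_score(patient):
    """Sum of weighted partial scores (SPS method)."""
    return sum(w for factor, w in WEIGHTS.items() if patient.get(factor, False))

def number_of_risk_factors(patient):
    """Simple count of risk factors present (NRF method)."""
    return sum(1 for factor in WEIGHTS if patient.get(factor, False))

patient = {
    "unfavourable_primary_site": True,
    "metastases_other_than_bone": False,
    "low_karnofsky_score": True,
    "severe_fatigue": True,
    "poor_appetite": False,
    "shortness_of_breath": False,
}
print("SPS =", survival_prediction_score(patient))
print("NRF =", number_of_risk_factors(patient))
```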

  3. Refining and validating the Social Interaction Anxiety Scale and the Social Phobia Scale.

    Science.gov (United States)

    Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G; McCabe, Randi E; Rowa, Karen; Antony, Martin M

    2009-01-01

    The Social Interaction Anxiety Scale and Social Phobia Scale are companion measures for assessing symptoms of social anxiety and social phobia. The scales have good reliability and validity across several samples; however, exploratory and confirmatory factor analyses have yielded solutions comprising substantially different item content and factor structures. These discrepancies are likely the result of analyzing items from each scale either separately or simultaneously. The current investigation sets out to assess items from those scales, both simultaneously and separately, using exploratory and confirmatory factor analyses in an effort to resolve the factor structure. Participants consisted of a clinical sample (n = 353; 54% women) and an undergraduate sample (n = 317; 75% women) who completed the Social Interaction Anxiety Scale and Social Phobia Scale, along with additional fear-related measures to assess convergent and discriminant validity. A three-factor solution with a reduced set of items was found to be most stable, irrespective of whether the items from each scale are assessed together or separately. Items from the Social Interaction Anxiety Scale represented one factor, whereas items from the Social Phobia Scale represented two other factors. Initial support for scale and factor validity, along with implications and recommendations for future research, is provided. (c) 2009 Wiley-Liss, Inc.

  4. Development and validation of a smartphone addiction scale (SAS).

    Directory of Open Access Journals (Sweden)

    Min Kwon

    Full Text Available OBJECTIVE: The aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated. METHODS: A total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS. RESULTS: Based on the factor analysis results, the subscale "disturbance of reality testing" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS. CONCLUSIONS: This study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.

  5. Construct and Concurrent Validation of a New Resistance Intensity Scale for Exercise with Thera-Band® Elastic Bands

    Directory of Open Access Journals (Sweden)

    Juan C. Colado, Xavier Garcia-Masso, N. Travis Triplett, Joaquin Calatayud, Jorge Flandez, David Behm, Michael E. Rogers

    2014-12-01

    Full Text Available The construct and concurrent validity of the Thera-Band Perceived Exertion Scale for Resistance Exercise with elastic bands (EB) was examined. Twenty subjects performed two separate sets of 15 repetitions of both frontal and lateral raise exercise over two sessions. The criterion variables were myoelectric activity and heart rate. One set was performed with an elastic band grip width that permitted 15 maximum repetitions in the selected exercise, and another set was performed with a grip width 50% more than the 15RM grip. Following the final repetition of each set, active muscle (AM) and overall body (O) ratings of perceived exertion (RPE) were collected from the Thera-Band® resistance exercise scale and the OMNI-Resistance Exercise Scale of perceived exertion with Thera-Band® resistance bands (OMNI-RES EB). Construct validity was established by correlating the RPE from the OMNI-RES EB with the Thera-Band RPE scale using regression analysis. The results showed significant differences (p ≤ 0.05) in myoelectric activity, heart rate, and RPE scores between the low- and high-intensity sets. The intraclass correlation coefficients for active muscle and overall RPE scale scores were 0.67 and 0.58, respectively. There was a positive linear relationship between the RPE from the OMNI-RES EB and the Thera-Band scale. Validity coefficients for the RPE AM were r² = 0.87 and ranged from r² = 0.76 to 0.85 for the RPE O. Therefore, the Thera-Band Perceived Exertion Scale for Resistance Exercise can be used for monitoring elastic band exercise intensity. This would allow the training dosage to be better controlled within and between sessions. Moreover, the construct and concurrent validity indicates that the OMNI-RES EB measures similar properties of exertion as the Thera-Band RPE scale during elastic resistance exercise.
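
Concurrent validity here rests on correlating and regressing one RPE scale on the other. A small sketch of that analysis with invented paired ratings (not the study's 20 subjects):

```python
import numpy as np
from scipy import stats

# Hypothetical paired RPE ratings (OMNI-RES EB vs. Thera-Band scale), one set
# per subject; the real study used 20 subjects and two intensity conditions.
omni      = np.array([4, 6, 5, 7, 8, 3, 6, 9, 5, 7], dtype=float)
theraband = np.array([3, 5, 4, 6, 7, 2, 5, 8, 4, 6], dtype=float)

# Concurrent validity: linear association between the two scales.
slope, intercept, r, p, se = stats.linregress(omni, theraband)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p:.3g}")
print(f"Thera-Band RPE ~ {intercept:.2f} + {slope:.2f} * OMNI-RES EB")
```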

  6. Initial validation of the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in children and adolescents with chronic diseases: acceptability and comprehensibility in low-income settings

    Directory of Open Access Journals (Sweden)

    Bauer Gabriela

    2008-08-01

    Full Text Available Abstract Background To validate the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in Argentinean children and adolescents with chronic conditions and to assess the impact of socio-demographic characteristics on the instrument's comprehensibility and acceptability. Reliability, known-groups validity, and convergent validity were tested. Methods Consecutive sample of 287 children with chronic conditions and 105 healthy children, ages 2–18, and their parents. Chronically ill children were: (1) attending outpatient clinics and (2) had one of the following diagnoses: stem cell transplant, chronic obstructive pulmonary disease, HIV/AIDS, cancer, end stage renal disease, complex congenital cardiopathy. Patients and adult proxies completed the PedsQL™ 4.0 and an overall health status assessment. Physicians were asked to rate degree of health status impairment. Results The PedsQL™ 4.0 was feasible (only 9 children, all 5 to 7 year-olds, could not complete the instrument), easy to administer, completed without, or with minimal, help by most children and parents, and required a brief administration time (average 5–6 minutes). People living below the poverty line and/or with low literacy needed more help to complete the instrument. Cronbach's alpha internal consistency values for the total and subscale scores exceeded 0.70 for self-reports of children over 8 years old and parent-reports of children over 5 years of age. Reliability of proxy-reports of 2–4 year-olds was low but improved when school items were excluded. Internal consistency for 5–7 year-olds was low (α range = 0.28–0.76). Construct validity was good. Child self-report and parent proxy-report PedsQL™ 4.0 scores were moderately but significantly correlated (ρ = 0.39, p …). Conclusion Results suggest that the Argentinean Spanish PedsQL™ 4.0 is suitable for research purposes in the public health setting for children over 8 years old and parents of children over 5 years old.

  7. THE DEVELOPMENT AND USE OF A MODEL TO PREDICT SUSTAINABILITY OF CHANGE IN HEALTH CARE SETTINGS.

    Science.gov (United States)

    Molfenter, Todd; Ford, James H; Bhattacharya, Abhik

    2011-01-01

    Innovations adopted through organizational change initiatives are often not sustained, leading to diminished quality, productivity, and consumer satisfaction. Research explaining variance in the use of adopted innovations in health care settings is sparse, suggesting the need for a theoretical model to guide research and practice. In this article, we describe the development of a hybrid conjoint decision-theoretic model designed to predict the sustainability of organizational change in health care settings. An initial test of the model's predictive validity using expert-scored hypothetical profiles resulted in an r-squared value of .77. The test of this model offers a theoretical base for future research on the sustainability of change in health care settings.

  8. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is highly important that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of validation depends on the difficulty level of skills training and possible consequences when skills are

  9. Detecting Motor Impairment in Early Parkinson’s Disease via Natural Typing Interaction With Keyboards: Validation of the neuroQWERTY Approach in an Uncontrolled At-Home Setting

    Science.gov (United States)

    Ledesma-Carbayo, María J; Butterworth, Ian; Matarazzo, Michele; Montero-Escribano, Paloma; Puertas-Martín, Verónica; Gray, Martha L

    2018-01-01

    Background Parkinson’s disease (PD) is the second most prevalent neurodegenerative disease and one of the most common forms of movement disorder. Although there is no known cure for PD, existing therapies can provide effective symptomatic relief. However, optimal titration is crucial to avoid adverse effects. Today, decision making for PD management is challenging because it relies on subjective clinical evaluations that require a visit to the clinic. This challenge has motivated recent research initiatives to develop tools that can be used by nonspecialists to assess psychomotor impairment. Among these emerging solutions, we recently reported the neuroQWERTY index, a new digital marker able to detect motor impairment in an early PD cohort through the analysis of the key press and release timing data collected during a controlled in-clinic typing task. Objective The aim of this study was to extend the in-clinic implementation to an at-home implementation by validating the applicability of the neuroQWERTY approach in an uncontrolled at-home setting, using the typing data from subjects’ natural interaction with their laptop to enable remote and unobtrusive assessment of PD signs. Methods We implemented the data-collection platform and software to enable access and storage of the typing data generated by users while using their computer at home. We recruited a total of 60 participants; of these participants 52 (25 people with Parkinson’s and 27 healthy controls) provided enough data to complete the analysis. Finally, to evaluate whether our in-clinic-built algorithm could be used in an uncontrolled at-home setting, we compared its performance on the data collected during the controlled typing task in the clinic and the results of our method using the data passively collected at home. Results Despite the randomness and sparsity introduced by the uncontrolled setting, our algorithm performed nearly as well in the at-home data (area under the receiver operating
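
The neuroQWERTY approach summarises key press/release timing into features and evaluates discrimination with an ROC curve. A simplified, hypothetical sketch (synthetic timestamps, a single hold-time feature); it is not the published nQi algorithm.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hold_time_features(press_times, release_times):
    """Summarise key hold times (release minus press) for one typing session."""
    hold = np.asarray(release_times) - np.asarray(press_times)
    return np.array([hold.mean(), hold.std(), np.median(hold)])

rng = np.random.default_rng(1)

def simulate_session(extra_variability):
    """Synthetic key press/release timestamps in seconds; purely illustrative."""
    presses = np.cumsum(rng.uniform(0.08, 0.35, size=300))
    holds = rng.normal(0.10 + extra_variability, 0.02 + extra_variability, size=300).clip(0.03)
    return hold_time_features(presses, presses + holds)

# Label 1 = simulated "impaired" typing with longer, more variable hold times.
X = np.array([simulate_session(0.0) for _ in range(30)] +
             [simulate_session(0.03) for _ in range(30)])
y = np.array([0] * 30 + [1] * 30)

# Score sessions with one feature (hold-time standard deviation) and report
# the area under the ROC curve, the metric quoted in such evaluations.
print("AUC =", round(roc_auc_score(y, X[:, 1]), 2))
```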

  10. The fish sexual development test: an OECD test guideline proposal with possible relevance for environmental risk assessment. Results from the validation programme

    DEFF Research Database (Denmark)

    Holbech, Henrik; Brande-Lavridsen, Nanna; Kinnberg, Karin Lund

    2010-01-01

    The Fish Sexual Development Test (FSDT) has gone through two validations as an OECD test guideline for the detection of endocrine active chemicals with different modes of action. The validation has been finalized on four species: Zebrafish (Danio rerio), Japanese medaka (Oryzias latipes), three s...... as a population relevant endpoint and the results of the two validation rounds will be discussed in relation to environmental risk assessment and species selection....... for histology. For all three methods, the fish parts were numbered and histology could therefore be linked to the vitellogenin concentration in individual fish. The two core endocrine relevant endpoints were vitellogenin concentrations and phenotypic sex ratio. Change in the sex ratio is presented...

  11. Brazilian Portuguese Validated Version of the Cardiac Anxiety Questionnaire

    Science.gov (United States)

    Sardinha, Aline; Nardi, Antonio Egidio; de Araújo, Claudio Gil Soares; Ferreira, Maria Cristina; Eifert, Georg H.

    2013-01-01

    Background Cardiac Anxiety (CA) is the fear of cardiac sensations, characterized by recurrent anxiety symptoms, in patients with or without cardiovascular disease. The Cardiac Anxiety Questionnaire (CAQ) is a tool to assess CA, already adapted to Portuguese but not yet validated. Objective This paper presents the three phases of the validation studies of the Brazilian CAQ. Methods To extract the factor structure and assess the reliability of the CAQ (phase 1), 98 patients with coronary artery disease were recruited. The aim of phase 2 was to explore the convergent and divergent validity. Fifty-six patients completed the CAQ, along with the Body Sensations Questionnaire (BSQ) and the Social Phobia Inventory (SPIN). To determine the discriminative validity (phase 3), we compared the CAQ scores of two subgroups formed with patients from phase 1 (n = 98), according to the diagnoses of panic disorder and agoraphobia, obtained with the MINI - Mini International Neuropsychiatric Interview. Results A 2-factor solution was the most interpretable (46.4% of the variance). Subscales were named "Fear and Hypervigilance" (n = 9; alpha = 0.88) and "Avoidance" (n = 5; alpha = 0.82). Significant correlation was found between factor 1 and the BSQ total score (p < 0.01), but not with factor 2. SPIN factors showed significant correlations with CAQ subscales (p < 0.01). In phase 3, "Cardiac with panic" patients scored significantly higher in CAQ factor 1 (t = -3.42; p < 0.01, CI = -1.02 to -0.27), and higher, but not significantly different, in factor 2 (t = -1.98; p = 0.51, CI = -0.87 to 0.00). Conclusions These results provide a definitive validated Brazilian version of the CAQ, adequate for clinical and research settings. PMID:24145391

  12. The Management Advisory Committee of the Inspection Validation Centre seventh report

    International Nuclear Information System (INIS)

    1990-07-01

    The Management Advisory Committee of the Inspection Validation Centre (IVC/MAC) was set up to review the policy, scope, procedure and operation of the Inspection Validation Centre (IVC), to supervise its operation and to advise and report to the United Kingdom Atomic Energy Authority (UKAEA) appropriately. The IVC was established at the UKAEA Risley Laboratory, to validate the procedures, personnel and equipment proposed by Nuclear Electric for use in the ultrasonic inspection at various stages of the fabrication, erection and operation of the Sizewell 'B' Pressurized Water Reactor (PWR) reactor pressure vessel (RPV) and such other components as are identified by the utility. It is operated by the UKAEA to work as an independent organisation under contract to Nuclear Electric, and results are reported to Nuclear Electric together with the conclusions of the Centre in relation to the validation of individual techniques. At the meetings of the IVC/MAC, the progress on the manufacture of the pressure vessel is also outlined by the PWR Project Director. The vessel has now undergone the final stress relief and post-hydro inspection and is due to be delivered to the Sizewell site before the end of 1990. (author)

  13. Leachate analysis of glass from black and white and color television sets

    Directory of Open Access Journals (Sweden)

    Radovan Kukla

    2012-01-01

    Full Text Available The aim of this work was to determine the content of selected elements in the glass from color and black and white television (TV) sets. The amount of TV sets taken back in the Czech Republic increases annually, which is associated with higher production of waste glass. Currently there are 1.4 television sets per household, and this number is expected to increase in the future because of higher living standards and the new technologies used. Because of the composition of the waste glass, its treatment or landfilling may present a threat to the environment. One of the indicators of pollution from waste glass is leachate analysis, which can show the content of hazardous substances in the waste glass that can be released to the environment. A qualitative analysis of leachate samples was carried out with a UV-VIS spectrophotometer. The results showed the concentrations of potentially hazardous substances contained in the leachate samples, in particular aluminum, cadmium, chromium, copper, molybdenum, nickel, lead, tin and zinc. The results of the analyses of the aqueous extract of the glass were compared with the limits specified in the currently valid legislation. Based on the results, it is clear that in the case of landfilling of the glass from television sets there is a possibility of contamination of landfill leachate by the elements present in the glass.

  14. [Validity and reliability of the Spanish EQ-5D-Y proxy version].

    Science.gov (United States)

    Gusi, N; Perez-Sousa, M A; Gozalo-Delgado, M; Olivares, P R

    2014-10-01

    A proxy version of the EQ-5D-Y, a questionnaire to evaluate the Health Related Quality of Life (HRQoL) in children and adolescents, has recently been developed. There are currently no data on the validity and reliability of this tool. The objective of this study was to analyze the validity and reliability of the EQ-5D-Y proxy version. A core set of self-report tools, including the Spanish version of the EQ-5D-Y, was administered to a group of Spanish children and adolescents drawn from the general population. A similar core set of internationally standardized proxy tools, including the EQ-5D-Y proxy version, was administered to their parents. Test-retest reliability was determined, and correlations with other generic measurements of HRQoL were calculated. Additionally, known group validity was examined by comparing groups with a priori expected differences in HRQoL. The agreement between the self-report and proxy version responses was also calculated. A total of 477 children and adolescents and their parents participated in the study. One week later, 158 participants completed the EQ-5D-Y/EQ-5D-Y proxy to facilitate reliability analysis. Agreement between the test-retest scores was higher than 88% for both the EQ-5D-Y self-report and proxy versions. Correlations with other health measurements showed similar convergent validity to that observed in the international EQ-5D-Y. Agreement between the self-report and proxy versions ranged from 72.9% to 97.1%. The results provide preliminary evidence of the reliability and validity of the EQ-5D-Y proxy version. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  15. Thermal-Hydraulic Results for the Boiling Water Reactor Dry Cask Simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Durbin, Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lindgren, Eric R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    single assembly geometry with well-controlled boundary conditions simplified interpretation of results. Two different arrangements of ducting were used to mimic conditions for aboveground and belowground storage configurations for vertical, dry cask systems with canisters. Transverse and axial temperature profiles were measured throughout the test assembly. The induced air mass flow rate was measured for both the aboveground and belowground configurations. In addition, the impact of cross-wind conditions on the belowground configuration was quantified. Over 40 unique data sets were collected and analyzed for these efforts. Fourteen data sets for the aboveground configuration were recorded for powers and internal pressures ranging from 0.5 to 5.0 kW and 0.3 to 800 kPa absolute, respectively. Similarly, fourteen data sets were logged for the belowground configuration starting at ambient conditions and concluding with thermal-hydraulic steady state. Over thirteen tests were conducted using a custom-built wind machine. The results documented in this report highlight a small, but representative, subset of the available data from this test series. This addition to the dry cask experimental database signifies a substantial addition of first-of-a-kind, high-fidelity transient and steady-state thermal-hydraulic data sets suitable for CFD model validation.

  16. [Reliability and validity of the Braden Scale for predicting pressure sore risk].

    Science.gov (United States)

    Boes, C

    2000-12-01

    For more accurate and objective pressure sore risk assessment, various risk assessment tools have been developed, mainly in the USA and Great Britain. The Braden Scale for Predicting Pressure Sore Risk is one such example. By means of a literature analysis of German- and English-language texts referring to the Braden Scale, the scientific quality criteria of reliability and validity are traced, and consequences for application of the scale in Germany are demonstrated. Analysis of 4 reliability studies shows an exclusive focus on interrater reliability. Further, even though the 19 validity studies examined cover many different settings, such examination is limited to the criteria of sensitivity and specificity (accuracy). Sensitivity and specificity levels range from 35% to 100%. The recommended cut-off points range from 10 to 19 points. The studies prove not to be comparable with each other. Furthermore, distortions which affect the accuracy of the scale can be found in these studies. The results of the analysis presented here show insufficient proof of reliability and validity in the American studies. In Germany, the Braden Scale has not yet been tested against scientific criteria. Such testing is needed before using the scale in different German settings. In the course of such testing, the construction and study procedures of the American studies can be used as a basis, as can the problems identified in the analysis presented here.
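
Validity in these studies is expressed as sensitivity and specificity at a chosen cut-off. A minimal sketch of that computation for Braden-style scores, using invented scores and outcomes:

```python
import numpy as np

def sens_spec(scores, developed_sore, cutoff):
    """Sensitivity and specificity of a Braden-style score at a given cut-off.
    Lower Braden scores indicate higher risk, so 'test positive' is score <= cutoff."""
    scores = np.asarray(scores)
    developed_sore = np.asarray(developed_sore, dtype=bool)
    test_positive = scores <= cutoff
    sensitivity = np.mean(test_positive[developed_sore])     # among true cases
    specificity = np.mean(~test_positive[~developed_sore])   # among non-cases
    return sensitivity, specificity

# Hypothetical scores and outcomes, not data from the reviewed studies.
scores = [12, 15, 18, 20, 9, 14, 16, 22, 11, 19]
sore   = [ 1,  1,  0,  0, 1,  0,  1,  0,  1,  0]
for cutoff in (16, 18):
    sens, spec = sens_spec(scores, sore, cutoff)
    print(f"cut-off {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```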

  17. COVERS Neonatal Pain Scale: Development and Validation

    Directory of Open Access Journals (Sweden)

    Ivan L. Hand

    2010-01-01

    Full Text Available Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile, and the Neonatal Infant Pain Scale), as well as the COVERS Scale, depending upon gestational age. Premature infant testing resulted in similar pain assessments using the COVERS and PIPP scales with an r=0.84. For the full-term infants, the COVERS scale and NIPS scale resulted in similar pain assessments with an r=0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.

  18. Initial Verification and Validation Assessment for VERA

    Energy Technology Data Exchange (ETDEWEB)

    Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States); Athe, Paridhi [North Carolina State Univ., Raleigh, NC (United States); Jones, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hetzler, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sieger, Matt [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-04-01

    The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each attribute, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.
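
The PCMM assessment assigns a maturity score from zero to three to each of the eight attributes per challenge problem. A small sketch of tabulating such scores; the numbers below are placeholders, not the CASL assessment results.

```python
# Minimal sketch of tabulating Predictive Capability Maturity Model (PCMM)
# scores (0-3) per attribute for each challenge problem; scores are invented.
PCMM_ATTRIBUTES = [
    "Representation and Geometric Fidelity",
    "Physics and Material Model Fidelity",
    "Software Quality Assurance and Engineering",
    "Code Verification",
    "Solution Verification",
    "Separate Effects Model Validation",
    "Integral Effects Model Validation",
    "Uncertainty Quantification",
]

scores = {  # hypothetical maturity scores, not the assessment's findings
    "CIPS": [2, 2, 3, 2, 1, 2, 1, 1],
    "DNB":  [2, 1, 3, 2, 1, 1, 1, 0],
    "PCI":  [1, 1, 3, 2, 1, 1, 0, 0],
}

for problem, vals in scores.items():
    summary = ", ".join(f"{attr.split()[0]}={v}" for attr, v in zip(PCMM_ATTRIBUTES, vals))
    print(f"{problem}: mean maturity {sum(vals) / len(vals):.1f} ({summary})")
```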

  19. Best practice strategies for validation of micro moulding process simulation

    DEFF Research Database (Denmark)

    Costa, Franco; Tosello, Guido; Whiteside, Ben

    2009-01-01

    The use of simulation for injection moulding design is a powerful tool which can be used up-front to avoid costly tooling modifications and reduce the number of mould trials. However, the accuracy of the simulation results depends on many component technologies and information, some of which can be easily controlled or known by the simulation analyst and others which are not easily known. For this reason, experimental validation studies are an important tool for establishing best practice methodologies for use during analysis set-up on all future design projects. During the validation studies, detailed information about the moulding process is gathered and used to establish these methodologies, whereas in routine design projects these methodologies are relied on to provide efficient but reliable working practices. Data analysis and simulations on preliminary micro-moulding experiments have

  20. Predicting nonrecovery among whiplash patients in the emergency room and in an insurance company setting.

    Science.gov (United States)

    Rydman, Eric; Ponzer, Sari; Ottosson, Carin; Järnbert-Pettersson, Hans

    2017-04-01

    To construct and validate a prediction instrument for early identification of patients with a high risk of delayed recovery after whiplash injuries (PPS-WAD) in an insurance company setting. Prospective cohort study. On the basis of a historic cohort (n = 130) of patients with a whiplash injury identified in an emergency room (ER, model-building set), we used logistic regression to construct an instrument consisting of two demographic variables (i.e. questions on educational level and work status) and the patient-rated physical and mental status during the acute phase to predict self-reported nonrecovery after 6 months. We evaluated the instrument's ability to predict nonrecovery in a new cohort (n = 204) of patients originating from an insurance company setting (IC, validation set). The prediction instrument had low reproducibility when the setting was changed from the ER cohort to the IC cohort. The overall percentage of correct predictions of nonrecovery in the ER cohort was 78% compared with 62% in the IC cohort. The sensitivity and specificity in relation to nonrecovery were both 78% in the ER cohort. The sensitivity and specificity in the insurance company setting were lower, 67% and 50%, respectively. Clinical decision rules need validation before they are used in a new setting. An instrument consisting of four questions that identified patients with a high risk of nonrecovery after a whiplash injury very well in the emergency room was not as useful in an insurance company setting. The importance and type of the risk factors for not recovering probably differ between the settings, as well as the individuals.
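
The instrument above was built with logistic regression in one setting and validated in another, where sensitivity and specificity dropped. A hedged sketch of that workflow on synthetic data; the variable names and effect sizes are assumptions, not the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)

def simulate_cohort(n, shift=0.0):
    """Synthetic cohort: education, work status, physical and mental status.
    'shift' loosely mimics a change of setting (ER vs. insurance company)."""
    X = np.column_stack([
        rng.integers(0, 2, n),          # low education (0/1)
        rng.integers(0, 2, n),          # not working (0/1)
        rng.normal(5, 2, n),            # physical status rating (0-10)
        rng.normal(5, 2, n),            # mental status rating (0-10)
    ])
    logit = -3 + 1.0 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + (0.3 + shift) * X[:, 3]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

X_er, y_er = simulate_cohort(130)               # model-building set (ER)
X_ic, y_ic = simulate_cohort(204, shift=-0.2)   # validation set in a new setting (IC)

model = LogisticRegression().fit(X_er, y_er)

for name, X, y in [("ER", X_er, y_er), ("IC", X_ic, y_ic)]:
    tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
    print(f"{name}: sensitivity {tp / (tp + fn):.0%}, specificity {tn / (tn + fp):.0%}")
```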

  1. Development and validation of a smartphone addiction scale (SAS).

    Science.gov (United States)

    Kwon, Min; Lee, Joon-Yeop; Won, Wang-Youn; Park, Jae-Woo; Min, Jung-Ah; Hahn, Changtae; Gu, Xinyu; Choi, Ji-Hye; Kim, Dai-Jin

    2013-01-01

    The aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated. A total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS. Based on the factor analysis results, the subscale "disturbance of reality testing" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS. This study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.

  2. Cooperate to Validate. Observal-Net Experts' Report on Validation of Non-Formal and Informal Learning (VNIL) 2013

    Science.gov (United States)

    Weber Guisan, Saskia; Voit, Janine; Lengauer, Sonja; Proinger, Eva; Duvekot, Ruud; Aagaard, Kirsten

    2014-01-01

    The present publication is one of the outcomes of the OBSERVAL-NET project (followup of the OBSERVAL project). The main aim of OBSERVAL-NET was to set up a stakeholder centric network of organisations supporting the validation of non-formal and informal learning in Europe based on the formation of national working groups in the 8 participating…

  3. Cooperate to Validate: OBSERVAL-NET Experts' Report on Validation of Non-Formal and Informal Learning (VNIL) 2013

    Science.gov (United States)

    Weber Guisan, Saskia; Voit, Janine; Lengauer, Sonja; Proinger, Eva; Duvekot, Ruud; Aagaard, Kirsten

    2014-01-01

    The present publication is one of the outcomes of the OBSERVAL-NET project (follow-up of the OBSERVAL project). The main aim of OBSERVAL-NET was to set up a stakeholder-centric network of organisations supporting the validation of non-formal and informal learning in Europe based on the formation of national working groups in the 8 participating…

  4. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds (P valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
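
Content validation studies of this kind are usually summarised with a Content Validity Index. A minimal sketch of item-level and scale-level CVI from expert relevance ratings; the ratings are invented, not the 44 experts' data.

```python
import numpy as np

def content_validity_index(ratings):
    """Item-level and scale-level CVI from Likert relevance ratings (1-4).
    An item counts as 'relevant' when rated 3 or 4; I-CVI is the proportion
    of experts giving a relevant rating, and S-CVI/Ave is the mean I-CVI."""
    ratings = np.asarray(ratings)          # shape: (n_experts, n_items)
    relevant = ratings >= 3
    i_cvi = relevant.mean(axis=0)
    return i_cvi, i_cvi.mean()

# Hypothetical ratings from 6 experts on 5 algorithm items (not study data).
ratings = [
    [4, 3, 4, 2, 4],
    [4, 4, 3, 3, 4],
    [3, 4, 4, 2, 3],
    [4, 3, 4, 3, 4],
    [4, 4, 4, 2, 4],
    [3, 4, 3, 3, 4],
]
i_cvi, s_cvi = content_validity_index(ratings)
print("I-CVI per item:", np.round(i_cvi, 2), "S-CVI/Ave:", round(s_cvi, 2))
```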

  5. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different

  6. Results of a questionnaire survey on window setting and FOV on CT images of examinations

    International Nuclear Information System (INIS)

    Anzui, Maya; Asano, Kazushige; Goto, Takahiro; Sekitani, Toshinori; Tsujioka, Katsumi; Kawaguchi, Daisuke; Narumi, Tatsuki

    2008-01-01

    The use of CT as a general examination has spread widely and is even used in small institutions. However, it is difficult to determine the current situation of each institution. Therefore, we employed a questionnaire to investigate the current situation of a variety of institutions. From the results of the questionnaire, we determined that the window setting was difficult for beginner technologists. In addition, in many institutions, radiological technologists did not always use the same display field of view (FOV) for the same patient. From this questionnaire, we were able to determine the present conditions in each institution. We consider these results very useful. (author)

  7. On the dimensionality of organizational justice: a construct validation of a measure.

    Science.gov (United States)

    Colquitt, J A

    2001-06-01

    This study explores the dimensionality of organizational justice and provides evidence of construct validity for a new justice measure. Items for this measure were generated by strictly following the seminal works in the justice literature. The measure was then validated in 2 separate studies. Study 1 occurred in a university setting, and Study 2 occurred in a field setting using employees in an automobile parts manufacturing company. Confirmatory factor analyses supported a 4-factor structure to the measure, with distributive, procedural, interpersonal, and informational justice as distinct dimensions. This solution fit the data significantly better than a 2- or 3-factor solution using larger interactional or procedural dimensions. Structural equation modeling also demonstrated predictive validity for the justice dimensions on important outcomes, including leader evaluation, rule compliance, commitment, and helping behavior.

  8. Cross-cultural adaptation and validation of the Manchester Foot Pain and Disability Index into Spanish.

    Science.gov (United States)

    Gijon-Nogueron, Gabriel; Ndosi, Mwidimi; Luque-Suarez, Alejandro; Alcacer-Pitarch, Begonya; Munuera, Pedro Vicente; Garrow, Adam; Redmond, Anthony C

    2014-03-01

    The Manchester Foot Pain and Disability Index (MFPDI) is a self-assessment 19-item questionnaire developed in the UK to measure foot pain and disability. This study aimed at conducting cross-cultural adaptation and validation of the MFPDI for use in Spain. Principles of good practice for the translation and cultural adaptation process for patient-reported outcome measures were followed in the MFPDI adaptation into Spanish. The cross-cultural validation involved Rasch analysis of pooled data sets from Spain and the UK. The Spanish data set comprised 338 patients (five used in the adaptation phase and 333 in the cross-cultural validation phase); mean age (SD) = 55.2 (16.7) years, and 248 (74.5%) were female. A UK data set (n = 682) was added in the cross-cultural validation phase; mean age (SD) = 51.6 (15.2) years, and 416 (61.0%) were female. A preliminary analysis of the 17-item MFPDI revealed significant local dependency of items, causing significant deviation from the Rasch model. Grouping all items into testlets and re-analysing the MFPDI as a 3-testlet scale resulted in an adequate fit to the Rasch model, χ²(df) = 15.945 (12), p = 0.194, with excellent reliability and unidimensionality. Lack of cross-cultural invariance was evident on the functional and personal appearance testlets. Splitting the affected testlets discounted the cross-cultural bias and satisfied requirements of the Rasch model. Subsequently, the MFPDI was calibrated into interval-level scales, fully adjusted to allow parametric analyses and cross-cultural data comparisons when required. Rasch analysis has confirmed that the MFPDI is a robust 3-subscale measure of foot pain, function and appearance in both its English and Spanish versions.

  9. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method

    Directory of Open Access Journals (Sweden)

    Daniel Ramos

    2017-02-01

    Full Text Available The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5–12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing the fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim, [1]), although the reader should keep in mind that different feature extraction algorithms and different AFIS systems used may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validation of the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, contrary to the LRs that are shared.
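
Validation of LR methods is commonly reported with accuracy metrics computed over same-source and different-source comparisons, such as the log-likelihood-ratio cost (Cllr). A sketch with hypothetical LR values (not the published data set); Cllr is one common choice, not necessarily the only criterion used in [1].

```python
import numpy as np

def cllr(lr_same_source, lr_different_source):
    """Log-likelihood-ratio cost (Cllr), a standard accuracy metric when
    validating LR methods: lower is better, ~1.0 is an uninformative system."""
    lr_ss = np.asarray(lr_same_source, dtype=float)
    lr_ds = np.asarray(lr_different_source, dtype=float)
    penalty_ss = np.mean(np.log2(1 + 1 / lr_ss))   # LRs for true same-source pairs
    penalty_ds = np.mean(np.log2(1 + lr_ds))       # LRs for true different-source pairs
    return 0.5 * (penalty_ss + penalty_ds)

# Hypothetical LR values for mark/print comparisons (not the published data).
lr_same = [120.0, 35.0, 8.0, 400.0, 2.5]
lr_diff = [0.01, 0.2, 0.05, 0.8, 0.002]
print(f"Cllr = {cllr(lr_same, lr_diff):.3f}")
```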

  10. Moral Judgment Reloaded: A Moral Dilemma validation study

    Directory of Open Access Journals (Sweden)

    Julia F. Christensen

    2014-07-01

    Full Text Available We propose a revised set of moral dilemmas for studies on moral judgment. We selected a total of 46 moral dilemmas available in the literature and fine-tuned them in terms of four conceptual factors (Personal Force, Benefit Recipient, Evitability, and Intention) and methodological aspects of the dilemma formulation (word count, expression style, question formats) that have been shown to influence moral judgment. Second, we obtained normative codings of arousal and valence for each dilemma showing that emotional arousal in response to moral dilemmas depends crucially on the factors Personal Force, Benefit Recipient, and Intentionality. Third, we validated the dilemma set confirming that people's moral judgment is sensitive to all four conceptual factors, and to their interactions. Results are discussed in the context of this field of research, outlining also the relevance of our RT effects for the Dual Process account of moral judgment. Finally, we suggest tentative theoretical avenues for future testing, particularly stressing the importance of the factor Intentionality in moral judgment. Additionally, due to the importance of cross-cultural studies in the quest for universals in human moral cognition, we provide the new set of dilemmas in six languages (English, French, German, Spanish, Catalan, and Danish). The norming values provided here refer to the Spanish dilemma set.

  11. Moral judgment reloaded: a moral dilemma validation study

    Science.gov (United States)

    Christensen, Julia F.; Flexas, Albert; Calabrese, Margareta; Gut, Nadine K.; Gomila, Antoni

    2014-01-01

    We propose a revised set of moral dilemmas for studies on moral judgment. We selected a total of 46 moral dilemmas available in the literature and fine-tuned them in terms of four conceptual factors (Personal Force, Benefit Recipient, Evitability, and Intention) and methodological aspects of the dilemma formulation (word count, expression style, question formats) that have been shown to influence moral judgment. Second, we obtained normative codings of arousal and valence for each dilemma showing that emotional arousal in response to moral dilemmas depends crucially on the factors Personal Force, Benefit Recipient, and Intentionality. Third, we validated the dilemma set confirming that people's moral judgment is sensitive to all four conceptual factors, and to their interactions. Results are discussed in the context of this field of research, outlining also the relevance of our RT effects for the Dual Process account of moral judgment. Finally, we suggest tentative theoretical avenues for future testing, particularly stressing the importance of the factor Intentionality in moral judgment. Additionally, due to the importance of cross-cultural studies in the quest for universals in human moral cognition, we provide the new set of dilemmas in six languages (English, French, German, Spanish, Catalan, and Danish). The norming values provided here refer to the Spanish dilemma set. PMID:25071621

  12. Experimental validation of thermal design of top shield for a pool type SFR

    International Nuclear Information System (INIS)

    Aithal, Sriramachandra; Babu, V. Rajan; Balasubramaniyan, V.; Velusamy, K.; Chellapandi, P.

    2016-01-01

    Highlights: • Overall thermal design of top shield in a SFR is experimentally verified. • Air jet cooling is effective in ensuring the temperatures limits for top shield. • Convection patterns in narrow annulus are in line with published CFD results. • Wire mesh insulation ensures gradual thermal gradient at top portion of main vessel. • Under loss of cooling scenario, sufficient time is available for corrective action. - Abstract: An Integrated Top Shield Test Facility towards validation of thermal design of top shield for a pool type SFR has been conceived, constructed & commissioned. Detailed experiments were performed in this experimental facility having full-scale features. Steady state temperature distribution within the facility is measured for various heater plate temperatures in addition to simulating different operating states of the reactor. Following are the important observations (i) jet cooling system is effective in regulating the roof slab bottom plate temperature and thermal gradient across roof slab simulating normal operation of reactor, (ii) wire mesh insulation provided in roof slab-main vessel annulus is effective in obtaining gradual thermal gradient along main vessel top portion and inhibiting the setting up of cellular convection within annulus and (iii) cellular convection with four distinct convective cells sets in the annular gap between roof slab and small rotatable plug measuring ∼ϕ4 m in diameter & gap width varying from 16 mm to 30 mm. Repeatability of results is also ensured during all the above tests. The results presented in this paper are expected to provide reference data for validation of thermal hydraulic models in addition to serving as design validation of the jet cooling system for pool type SFR.

  13. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need for additional guidance, monitoring, or treatment changes. PMID:28890908

  14. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to 'visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.
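
    As a toy illustration of discretising a planar Gaussian field on a square mesh (not one of the paper's four schemes), the following Python sketch samples a smoothed white-noise field and counts the mesh edges crossed by its nodal set; the smoothing kernel and mesh size are arbitrary choices made for the example.

      # Toy discretisation of a smooth planar Gaussian field: sample correlated
      # Gaussian values on a square mesh and locate the nodal set from sign
      # changes between neighbouring mesh points.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      n, sigma = 256, 8.0                     # mesh points per side, smoothing scale
      field = gaussian_filter(rng.standard_normal((n, n)), sigma)

      signs = np.sign(field)
      # A mesh edge is crossed by the nodal line when its two endpoints differ in sign.
      crossings = (signs[:-1, :] != signs[1:, :]).sum() + (signs[:, :-1] != signs[:, 1:]).sum()
      print("mesh edges crossed by the nodal set:", int(crossings))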

  15. Veggie ISS Validation Test Results and Produce Consumption

    Science.gov (United States)

    Massa, Gioia; Hummerick, Mary; Spencer, LaShelle; Smith, Trent

    2015-01-01

    The Veggie vegetable production system flew to the International Space Station (ISS) in the spring of 2014. The first set of plants, Outredgeous red romaine lettuce, was grown, harvested, frozen, and returned to Earth in October. Ground control and flight plant tissue was sub-sectioned for microbial analysis, anthocyanin antioxidant phenolic analysis, and elemental analysis. Microbial analysis was also performed on samples swabbed on orbit from plants, Veggie bellows, and plant pillow surfaces, on water samples, and on samples of roots, media, and wick material from two returned plant pillows. Microbial levels of plants were comparable to ground controls, with some differences in community composition. The range in aerobic bacterial plate counts between individual plants was much greater in the ground controls than in flight plants. No pathogens were found. Anthocyanin concentrations were the same between ground and flight plants, while antioxidant and phenolic levels were slightly higher in flight plants. Elements varied, but key target elements for astronaut nutrition were similar between ground and flight plants. Aerobic plate counts of the flight plant pillow components were significantly higher than ground controls. Surface swab samples showed low microbial counts, with most below detection limits. Flight plant microbial levels were less than bacterial guidelines set for non-thermostabilized food and near or below those for fungi. These guidelines are not for fresh produce but are the closest approximate standards. Forward work includes the development of standards for space-grown produce. A produce consumption strategy for Veggie on ISS includes pre-flight assessments of all crops to down-select candidates, wiping flight-grown plants with sanitizing food wipes, and regular Veggie hardware cleaning and microbial monitoring. Produce could then be consumed by astronauts; however, some plant material would be reserved and returned for analysis. Implementation of

  16. Psychometric properties of the Postgraduate Hospital Educational Environment Measure in an Iranian hospital setting

    Directory of Open Access Journals (Sweden)

    Shahrzad Shokoohi

    2014-08-01

    Background: Students’ perceptions of the educational environment are an important construct in assessing and enhancing the quality of medical training programs. Reliable and valid measurement, however, can be problematic – especially as instruments developed and tested in one culture are translated for use in another. Materials and method: This study sought to explore the psychometric properties of the Postgraduate Hospital Educational Environment Measure (PHEEM) for use in an Iranian hospital training setting. We translated the instrument into Persian and ensured its content validity by back translation and expert review prior to administering it to 127 residents of Urmia University of Medical Science. Results: Overall internal consistency of the translated measure was good (α = 0.94). Principal components analysis revealed five factors accounting for 52.8% of the variance. Conclusion: The Persian version of the PHEEM appears to be a reliable and potentially valid instrument for use in Iranian medical schools and may find favor in evaluating the educational environments of residency programs nationwide.
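
    For readers unfamiliar with the internal-consistency statistic reported above, the following Python sketch computes Cronbach's alpha for a respondents-by-items matrix. The data are synthetic and only the formula is standard; the 127 × 40 shape merely echoes the study's sample size and a PHEEM-like item count.

      # Cronbach's alpha for a respondents-by-items score matrix (synthetic data).
      import numpy as np

      def cronbach_alpha(scores: np.ndarray) -> float:
          """scores: array of shape (n_respondents, n_items)."""
          k = scores.shape[1]
          item_variances = scores.var(axis=0, ddof=1).sum()
          total_variance = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_variances / total_variance)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(127, 1))                  # one common factor
      items = latent + 0.5 * rng.normal(size=(127, 40))   # 40 correlated items
      print(f"alpha = {cronbach_alpha(items):.2f}")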

  17. The validity and reliability of value-added and target-setting procedures with special reference to Key Stage 3

    OpenAIRE

    Moody, Ian Robin

    2003-01-01

    The validity of value-added systems of measurement is crucially dependent upon there being a demonstrably unambiguous relationship between the so-called baseline, or intake measures, and any subsequent measure of performance at a later stage. The reliability of such procedures is dependent on the relationships between these two measures being relatively stable over time. A number of questions arise with regard to both the validity and reliability of value-added procedures at any level in educ...

  18. Validation of a survey tool for use in cross-cultural studies

    Directory of Open Access Journals (Sweden)

    Costa FA

    2008-09-01

    There is a need for tools to measure the information patients need in order for healthcare professionals in general, and pharmacists in particular, to communicate effectively and play an active part in the way patients manage their medicines. Previous research has developed and validated constructs to measure patients’ desires for information and their perceptions of how useful their medicines are. It is important to develop these tools for use in different settings and countries so that best practice is shared and is based on the best available evidence. Objectives: This project sought to validate a survey tool measuring the “Extent of Information Desired” (EID), the “Perceived Utility of Medicines” (PUM), and the “Anxiety about Illness” (AI) that had been previously translated for use with Portuguese patients. Methods: The scales were validated in a patient sample of 596: construct validity was explored with factor analysis (PCA) and internal consistency was analysed using Cronbach’s alpha. Criterion validity was explored by correlating scores with the AI scale and with patients’ perceived health status. Discriminatory power was assessed using ANOVA. Temporal stability was explored in a sub-sample of patients who responded at two time points, using a t-test to compare their mean scores. Results: Construct validity results indicated the need to remove one item from the Perceived Harm of Medicines (PHM) and Perceived Benefit of Medicines (PBM) scales for use in a Portuguese sample, and to abandon the tolerance scale. Internal consistency was high for the EID, PBM, and AI scales (alpha>0.600) and acceptable for the PHM scale (alpha=0.536). All scales, except the EID, were consistent over time (p>0.05; p<0.01). All the scales tested showed good discriminatory power. The comparison of the AI scale with the SF-36 indicated good criterion validity (p<0.05). Conclusion: The translated tool was valid and reliable in Portuguese patients – excluding the Tolerance
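
    Two of the checks described above – temporal stability via a paired t-test on a retest sub-sample, and criterion validity via correlation with another measure – can be sketched in a few lines of Python. The data below are synthetic and the sub-sample size is arbitrary, so the numbers do not reproduce the study's results.

      # Synthetic illustration of temporal stability and criterion validity checks.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      time1 = rng.normal(3.5, 0.8, size=60)            # scale scores at first administration
      time2 = time1 + rng.normal(0.0, 0.3, size=60)    # retest scores for the same respondents

      t_stat, p_stability = stats.ttest_rel(time1, time2)
      print(f"test-retest: t = {t_stat:.2f}, p = {p_stability:.3f}")   # p > 0.05 suggests stability

      anxiety = rng.normal(2.5, 0.7, size=60)                          # AI-like scale scores
      health = -0.6 * anxiety + rng.normal(0.0, 0.5, size=60)          # perceived health status
      r, p_criterion = stats.pearsonr(anxiety, health)
      print(f"criterion validity: r = {r:.2f}, p = {p_criterion:.3f}")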

  19. Psychometric Evaluation of the Revised Michigan Diabetes Knowledge Test (V.2016) in Arabic: Translation and Validation

    Science.gov (United States)

    Alhaiti, Ali Hassan; Alotaibi, Alanod Raffa; Jones, Linda Katherine; DaCosta, Cliff

    2016-01-01

    Objective. To translate the revised Michigan Diabetes Knowledge Test into the Arabic language and examine its psychometric properties. Setting. Of the 139 participants recruited through King Fahad Medical City in Riyadh, Saudi Arabia, 34 agreed to the second-round sample for retesting purposes. Methods. The translation process followed the World Health Organization's guidelines for the translation and adaptation of instruments. All translations were examined for their validity and reliability. Results. The translation process revealed excellent results throughout all stages. The Arabic version received 0.75 for internal consistency via Cronbach's alpha test and showed excellent test-retest reliability, with a mean intraclass correlation coefficient of 0.90. It also received positive content validity index scores. The item-level content validity index for all instrument scales fell between 0.83 and 1, with a mean scale-level index of 0.96. Conclusion. The Arabic version is proven to be a reliable and valid measure of patients' knowledge that is ready to be used in clinical practice. PMID:27995149
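
    The content validity index figures quoted above follow the usual definition: the item-level index (I-CVI) is the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, and the scale-level index (S-CVI/Ave) is the mean of the item-level indices. A minimal Python sketch with made-up expert ratings:

      # Item-level and scale-level content validity indices from made-up expert
      # ratings on a 4-point relevance scale (3 or 4 counts as "relevant").
      ratings = [                 # rows = items, columns = one rating per expert
          [4, 4, 3, 4, 4, 3],
          [3, 4, 4, 4, 3, 4],
          [4, 2, 4, 3, 4, 4],
      ]

      i_cvi = [sum(r >= 3 for r in item) / len(item) for item in ratings]
      s_cvi_ave = sum(i_cvi) / len(i_cvi)
      print("I-CVI per item:", [round(v, 2) for v in i_cvi])   # [1.0, 1.0, 0.83]
      print(f"S-CVI/Ave = {s_cvi_ave:.2f}")                    # 0.94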

  20. Palliative Sedation: Reliability and Validity of Sedation Scales

    NARCIS (Netherlands)

    Arevalo Romero, J.; Brinkkemper, T.; van der Heide, A.; Rietjens, J.A.; Ribbe, M.W.; Deliens, L.; Loer, S.A.; Zuurmond, W.W.A.; Perez, R.S.G.M.

    2012-01-01

    Context: Observer-based sedation scales have been used to provide a measurable estimate of the comfort of nonalert patients in palliative sedation. However, their usefulness and appropriateness in this setting have not been demonstrated. Objectives: To study the reliability and validity of