WorldWideScience

Sample records for validation dirk tasche

  1. Dirk Bakker 1947 – 2009

    CERN Multimedia

    2009-01-01

    Dirk Bakker, far left, with his colleagues in the former AB-CO group during a test of new prototype consoles for the CERN Control Centre in 2005. It was with great sorrow that we learnt of the death of our colleague Dirk, taken too quickly by an incurable illness against which he fought with courage and dignity. Dirk arrived at CERN on 16 April 1972 and spent nearly 36 years in the accelerator sector. He was instrumental in distributing the audio and video signals between the various accelerators and the control rooms and in deploying TV transmissions on the many screens all over the site. He was also the key person organizing the installations in the control rooms, and his recent contributions to the CERN Control Centre (CCC) were exemplary. All the users of his services knew Dirk as an indispensable expert whose knowledge and professionalism were always appreciated. His discretion and pride in his work made Dirk a ...

  2. Pädevuskeskne õpe / Dirk van Vierssen

    Index Scriptorium Estoniae

    Vierssen, Dirk van

    2002-01-01

    Police training in Estonia is being reorganized. The consultant is Dirk van Vierssen, a specialist from the Dutch Police Training Centre and a doctor of educational sciences. The substance of the reform is a transition from qualification-centred training to competence-based training. On the differences between qualification-centred and competence-based training / transcribed from tape by Raivo Juurak

  3. Essays in theoretical physics in honour of Dirk Ter Haar

    CERN Document Server

    Parry, W E

    2013-01-01

    Essays in Theoretical Physics: In Honour of Dirk ter Haar is devoted to Dirk ter Haar, detailing the breadth of Dirk's interest in physics. The book contains 15 chapters, with some chapters elucidating stellar dynamics with non-classical integrals; a mean-field treatment of charge density waves in a strong magnetic field; electrodynamics of two-dimensional (surface) superconductors; and the Bethe Ansatz and exact solutions of the Kondo and related magnetic impurity models. Other chapters focus on probing the interiors of neutron stars; macroscopic quantum tunneling; unitary transformation meth

  4. Jodi / Dirk Paesmans ; interv. Tilman Baumgärtel

    Index Scriptorium Estoniae

    Paesmans, Dirk

    2006-01-01

    On the artist duo working under the name Jodi since 1994, formed by Dirk Paesmans and Joan Heemskerk, who live in Barcelona. In 1999 Jodi received the Webby Award in the art category. In a telephone interview from 2001, D. Paesmans speaks about the works "OSS****" and "SOD", the versions made of the computer games "Quake" and "Wolfenstein", the "fake browsers", and his interest in creating artistic software and in modifying existing programs

  5. The design and implementation of the DIRK system for dosemeter issue and record keeping

    International Nuclear Information System (INIS)

    Kendall, G.M.; Kay, P.; Saw, G.M.A.; Salmon, L.; Carter, C.D.; Law, D.V.

    1983-05-01

    DIRK, the computerised system which the National Radiological Protection Board employs for its Personal Monitoring Service, is described. DIRK is also used to store the data for the National Registry for Radiation Workers and could support the Central Index of Dose Information should this be set up. The general principles of the design of DIRK, as well as a detailed description of the system, are included in the report. DIRK is based on a set of interlocked index sequential files manipulated by PL/1 programs. Data compaction techniques are used to reduce by a factor of ten the size of the files stored on magnetic disk. Security of the database is most important and two levels of security have been implemented. Table driven techniques are used for updating the database. A specially designed free-format language is used for specifying changes. Statistics, sorted listings of selected data and summaries are provided by a general purpose program for this type of operation. However, it has still been necessary to write a number of special purpose programs for some particular needs of DIRK users. The final section of the report describes the experiences gained during the planning, implementation and maintenance of DIRK. The importance of liaison with the eventual users of the system is emphasised. (author)
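    The report does not give DIRK's actual compaction scheme, so the following is only a generic sketch, in Python rather than the report's PL/1, of one technique of this kind: packing dose readings as fixed-point deltas in two-byte fields instead of text, which can shrink record files by roughly the reported order of magnitude. The helper names and the 0.01 mSv resolution are assumptions for illustration, not taken from the report.

    ```python
    # Illustrative sketch only: fixed-point delta packing of a dose history.
    # The field width (2 bytes) and resolution (0.01 mSv) are assumed values.
    import struct

    def pack_doses(doses_msv):
        """Encode a worker's dose history as 2-byte fixed-point deltas."""
        out = bytearray()
        prev = 0
        for d in doses_msv:
            delta = round(d * 100) - prev          # 0.01 mSv resolution
            out += struct.pack(">h", delta)        # 2 bytes per reading
            prev = round(d * 100)
        return bytes(out)

    def unpack_doses(blob):
        total, doses = 0, []
        for (delta,) in struct.iter_unpack(">h", blob):
            total += delta
            doses.append(total / 100)
        return doses

    history = [0.12, 0.15, 0.15, 0.31]
    packed = pack_doses(history)
    assert unpack_doses(packed) == history
    print(len(packed), "bytes packed vs", len(str(history)), "bytes as text")
    ```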

  6. The design and implementation of the DIRK system for dosemeter issue and record keeping

    CERN Document Server

    Kendall, G M; Kay, P; Law, D V; Salmon, L; Saw, G M A

    1983-01-01

    DIRK, the computerised system which the National Radiological Protection Board employs for its Personal Monitoring Service, is described. DIRK is also used to store the data for the National Registry for Radiation Workers and could support the Central Index of Dose Information should this be set up. The general principles of the design of DIRK, as well as a detailed description of the system, are included in the report. DIRK is based on a set of interlocked index sequential files manipulated by PL/1 programs. Data compaction techniques are used to reduce by a factor of ten the size of the files stored on magnetic disk. Security of the database is most important and two levels of security have been implemented. Table driven techniques are used for updating the database. A specially designed free-format language is used for specifying changes. Statistics, sorted listings of selected data and summaries are provided by a general purpose program for this type of operation. However, it has still been necessary to w...

  7. Inzicht door onderdompeling Een reactie op Bart Van de Putte, Henk de Smaele en Dirk Jan Wolffram

    Directory of Open Access Journals (Sweden)

    Jan Hein Furnée

    2014-09-01

    Giving a detailed account of the social history of The Hague’s most prominent sites of civilised leisure – the gentlemen’s clubs, the zoo, the Royal Theatre and the seaside resort of Scheveningen – Plaatsen van beschaafd vertier demonstrates how the constant struggle for social in- and exclusion structured the daily lives of upper and middle class men and women in The Hague in the nineteenth century. In response to Bart Van de Putte, Jan Hein Furnée argues that extensive quantitative analyses of ‘class’ and ‘social class’ show that objective class stratifications based on wealth and/or occupation are important tools, but at most semi-finished products for historical research. Furnée fully agrees with Henk de Smaele’s objection that his study would have benefitted from a more in-depth reflection on the ways in which shifting patterns in women’s freedom of movement in urban spaces were related to their political and economic emancipation. In response to Dirk Jan Wolffram, Furnée repeats some examples given in his book that show how political practices in places of leisure impacted upon local and national politics, even though this did not directly contribute to a linear process of increasing political participation and representation. On the basis of a detailed analysis of the social history of the gentlemen’s and burgher clubs, the zoo, the Royal Theatre and the seaside resort of Scheveningen, Plaatsen van beschaafd vertier demonstrates how the constant struggle over social inclusion and exclusion dominated the daily lives of men and women from the upper and middle classes in nineteenth-century The Hague. In response to Bart Van de Putte, Jan Hein Furnée argues that thorough quantitative analyses of ‘class’ and ‘social class’ show that objective social stratifications based on wealth and/or occupation are admittedly very useful and even necessary for historical research, but ultimately only a ...

  8. Review: Dirk Michel (2009). Politisierung und Biographie. Politische Einstellungen deutscher Zionisten und Holocaustüberlebender [Political Socialization and Biography: German Zionists and Holocaust Survivors and Their Political Attitudes]

    Directory of Open Access Journals (Sweden)

    Susanne Bressan

    2012-07-01

    How do extraordinary experiences, especially during childhood and adolescence, affect political attitudes? Most studies focusing on political movements only implicitly address the connection between biographical experiences and political attitudes. Moreover, a detailed understanding of these impacts often remains merely hypothetical. Biographical studies increasingly address the relationship between politics and biography through empirical and hermeneutic approaches. For his doctoral thesis, Dirk MICHEL conducted autobiographical narrative interviews with 20 Jewish Israelis. Based on their extraordinary biographical experiences, MICHEL categorized the interviewees into two groups—the "German Zionists" and the "German Holocaust survivors." He then conducted semi-structured interviews with each of the participants with the aim of analyzing their political attitudes. However, the conceptual categorization of the interviewees, the empirical investigation of the research question and the subsequent analysis all challenge the underpinning theoretical and methodological concepts of the study. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1203165

  9. IN MEMORIAM DOCTOR DIRK FOK VAN SLOOTEN

    Directory of Open Access Journals (Sweden)

    DIRK FOK VAN SLOOTEN

    2015-11-01

    In the midst of his work Van Slooten has been suddenly called away at the relatively early age of 61. It was known that his heart was not too good, but it was expected that living a quiet life he would be able to finish his life's work, the monograph of the Malaysian Dipterocarpaceae, to which he had been able since 1951 to devote all his time and concentration undisturbed by other duties. The striving towards the completion of this work on the most important family of Malaysian forest trees always occupied his mind and had been to a large extent the main object of his life. Van Slooten's ambition was to produce careful work, meticulous in all details. This made him a slow worker, but at the same time one of the trustworthy kind. This trend towards perfectionism expressed itself equally in the preliminaries and routine work towards his objective. Through his method of working, progress was steady but unfortunately relatively slow. Other factors beyond his control added to this result. Besides delays due to World War II, Van Slooten performed many other official duties in the same earnest way in which he carried out his research work. Any spontaneity and opportunism he had in his character was suppressed through his orderliness. Only in exceptional and very urgent circumstances would he make decisions à l'improviste. It is of course questionable whether one can deduce a man's character from his published writings. Whether this thesis be accepted as a generality or not, it is certain that it held for Van Slooten. His care for details, for straightforwardness, for trying to find the truth in his work found a remarkable parallel in his office work, and his private life. He wanted things to be clean and orderly. Even on excursions, which he made surprisingly seldom, his clothes were as speckless as they could possibly be in the circumstances.

  10. Keskklassi meistrivõistlused / Wolfgang König, Dirk Branke

    Index Scriptorium Estoniae

    König, Wolfgang

    2012-01-01

    A big comparison test in which 15 family cars are measured against each other: Audi A4 2.0 TDI, BMW 318d, Mercedes-Benz C 200 CDI, Škoda Superb 2.0 TDI, VW Passat 2.0 TDI, Citroën C5 HDi 140, Hyundai i40 1.7 CRDi, Ford Mondeo 2.0 TDCi, Kia Optima 1.7 CRDi, Mazda 6 2.2 MZR-CD, Opel Insignia 2.0 CDTI, Renault Laguna dCi 150, Peugeot 508 HDi 140, Volvo S60 D3, Seat Exeo 2.0 TDI

  11. On implementation of the EU – Ukraine / Dirk Hartman

    Index Scriptorium Estoniae

    Hartman, Dirk

    2014-01-01

    On the implementation of the association agreement between the European Union and its member states and Ukraine (political dialogue; justice, freedom and security; economic cooperation; trade; nuclear energy and renewable energy sources)

  12. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  13. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The discussion focuses on the validation plan for one code, FACTAR, for application in assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  14. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of an existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  15. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  16. Venemaa on meie enda loodud oht / Noam Chomsky ; interv. Dirk Hoyer

    Index Scriptorium Estoniae

    Chomsky, Noam, 1928-

    2008-01-01

    The well-known American opinion leader speaks about freedom of speech and of the press in the USA, attitudes towards the Iraq war, the selling of the products of the public relations industry and of the party leaderships during the presidential election campaign, the catastrophic state of the health care system in the USA, the choice between the presidential candidates Barack Obama and John McCain, the militarization of space, attitudes towards September 11, Russia as a threat to Baltic security, NATO as a guarantor of security, and energy resources as instruments of intimidation

  17. Both Islam and Christianity Invite to Tolerance: A Commentary on Dirk Baier.

    Science.gov (United States)

    Salamati, Payman; Naji, Zohrehsadat; Koutlaki, Sofia A; Rahimi-Movaghar, Vafa

    2015-12-01

    Baier recently published an interesting original article in the Journal of Interpersonal Violence. He compared violent behavior (VB) between Christians and Muslims and concluded that religiosity was not a protecting factor against violence and that Muslim religiosity was positively associated with increased VB. We appreciate the author's enormous efforts in researching such an issue of relevance to today's world. However, in our view, the article has methodological weaknesses in terms of participants, instruments, and statistical analyses, which we examine in detail. Therefore, Baier's results should be interpreted more cautiously. Although interpersonal violence may sometimes be observable among Muslims, we do not attribute this to Islam's teachings. In our opinion, both Islam and Christianity invite to tolerance, peace, and friendship. So, the comparison of such differences and the drawing of conclusions that may reflect negatively on specific religious groups need better defined research, taking into consideration other basic variables in different communities. © The Author(s) 2014.

  18. Development of a novel active muzzle brake for an artillery weapon system / Dirk Johannes Downing

    OpenAIRE

    Downing, Dirk Johannes

    2002-01-01

    A conventional muzzle brake is a baffle device located at some distance in front of the muzzle exit of a gun. The purpose of a muzzle brake is to alleviate the force on the weapon platform by diverting a portion of the muzzle gas resulting in a forward impulse being exerted on the recoiling parts of the weapon. A very efficient muzzle brake unfortunately gives rise to an excessive overpressure in the crew environment due to the deflection of the emerging shock waves. The novel ...

  19. Regstellende aksie, aliënasie en die nie-aangewese groep / Dirk Johannes Hermann

    OpenAIRE

    Hermann, Dirk Johannes

    2006-01-01

    Affirmative action is a central concept in South African politics and the workplace. The Employment Equity Act divides society into a designated group (blacks, women and people with disabilities) and a non-designated group (white men and white women). In this study, the influence of affirmative action on alienation of the non-designated group was investigated. Guidelines were also developed for employers in order to lead the non-designated group from a state of alienation to th...

  20. Lesson 6: Signature Validation

    Science.gov (United States)

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.
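    As a generic aside, the cryptographic half of checking an electronic signature can be sketched as follows. This is only an illustration of signature verification in general, not the CROMERR Signature Validation Process itself, and it assumes the third-party Python cryptography package and an arbitrarily chosen Ed25519 key.

    ```python
    # Generic signature-verification sketch; not the CROMERR process.
    # Assumes: pip install cryptography
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    signer = ed25519.Ed25519PrivateKey.generate()
    document = b"copy of record submitted under an electronic signature"
    signature = signer.sign(document)

    verifier = signer.public_key()
    try:
        verifier.verify(signature, document)   # raises if signature is invalid
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")
    ```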

  1. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    ... to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time series forecasting and modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight ... is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated ...
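    A minimal sketch of the contrast the abstract draws, assuming scikit-learn and synthetic data: performance assessed by re-sampling one data set (cross-validation, which the PPV argue against relying on) versus performance assessed against a genuinely separate test set that carries its own sampling errors. Model and data choices here are illustrative assumptions.

    ```python
    # Contrast re-sampling validation with independent test-set validation.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

    # One-data-set re-sampling (cross-validation):
    cv_r2 = cross_val_score(Ridge(), X, y, cv=5).mean()

    # Validation against data held out entirely from model building:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    test_r2 = Ridge().fit(X_tr, y_tr).score(X_te, y_te)

    print(f"cross-validated R^2: {cv_r2:.3f}  test-set R^2: {test_r2:.3f}")
    ```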

  2. Validity in Qualitative Evaluation

    OpenAIRE

    Vasco Lub

    2015-01-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often a subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of con...

  3. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid

  4. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, interest in computational models and simulations has increased considerably during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety...

  5. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  6. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  7. Validity in Qualitative Evaluation

    Directory of Open Access Journals (Sweden)

    Vasco Lub

    2015-12-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often a subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of connecting them with aspects of evaluation in social policy. It argues that different purposes of qualitative evaluations can be linked with different scientific paradigms and perspectives, thus transcending unproductive paradigmatic divisions as well as providing a flexible yet rigorous validity framework for researchers and reviewers of qualitative evaluations.

  8. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
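    For orientation, the brute-force leave-one-out loop whose replicated training sessions motivate LULOO might look as follows. This sketch assumes scikit-learn and substitutes a linear regressor for the paper's neural networks; it does not reproduce the linear-unlearning approximation itself.

    ```python
    # Brute-force leave-one-out CV: one full refit per example (the cost
    # that LULOO's linear unlearning is designed to avoid).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

    errors = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i              # hold out example i
        model = LinearRegression().fit(X[mask], y[mask])
        errors.append((model.predict(X[i:i + 1])[0] - y[i]) ** 2)

    print("LOO mean squared error:", np.mean(errors))
    ```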

  9. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
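    A minimal example of a code verification benchmark based on a manufactured solution, in the spirit the paper recommends: pick u(x) = sin(pi x), derive the source term for -u'' = f, and check that a second-order solver exhibits the expected observed order of accuracy. The solver, grid sizes and exact form of the check are illustrative assumptions, not taken from the paper.

    ```python
    # Method-of-manufactured-solutions verification of a 1-D Poisson solver.
    import numpy as np

    def solve_poisson(n):
        """Second-order finite differences for -u'' = f on (0,1), u(0)=u(1)=0."""
        x = np.linspace(0.0, 1.0, n + 1)
        h = 1.0 / n
        f = np.pi**2 * np.sin(np.pi * x[1:-1])       # manufactured source term
        A = (np.diag(2.0 * np.ones(n - 1))
             - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h**2
        u = np.linalg.solve(A, f)
        return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))  # error vs exact u

    e1, e2 = solve_poisson(32), solve_poisson(64)
    print("observed order of accuracy:", np.log2(e1 / e2))  # ~2 if code is right
    ```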

  10. Transient FDTD simulation validation

    OpenAIRE

    Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando

    2010-01-01

    In computational electromagnetic simulations, most validation methods developed to date are meant to be used in the frequency domain. However, frequency-domain EMC analysis of a system is often not enough to evaluate the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing a rapid and objective quantification of the simulation results.

  11. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  12. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  13. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  14. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.
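    The suite's actual cases and results are not reproduced here, but the comparison logic such a suite needs can be sketched generically: flag any case whose calculated-minus-benchmark difference exceeds the combined experimental and Monte Carlo uncertainty. All names and numbers below are hypothetical placeholders, not MCNP results.

    ```python
    # Hypothetical criticality-benchmark comparison (illustrative values only).
    cases = [  # name, benchmark k-eff, its 1-sigma, calculated k-eff, MC 1-sigma
        ("godiva-like", 1.0000, 0.0010, 0.9992, 0.0003),
        ("solution-tank", 1.0000, 0.0035, 1.0061, 0.0004),
    ]
    for name, k_bench, u_bench, k_calc, u_calc in cases:
        combined = (u_bench**2 + u_calc**2) ** 0.5   # combined 1-sigma
        n_sigma = abs(k_calc - k_bench) / combined
        status = "OK" if n_sigma <= 3 else "INVESTIGATE"
        print(f"{name:14s} C-E = {k_calc - k_bench:+.4f} ({n_sigma:.1f} sigma) {status}")
    ```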

  15. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description

  16. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
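    As a hedged illustration of the first step described above, counting realizations that conform with the validation data, the following sketch uses synthetic heads, an RMSE conformity measure and an arbitrary threshold; the report's five actual metrics and its decision tree are not reproduced.

    ```python
    # Count stochastic realizations that conform with validation data.
    # All data, the RMSE measure and the 1.0 threshold are assumed for
    # illustration, not taken from the report.
    import numpy as np

    rng = np.random.default_rng(42)
    observed = rng.normal(10.0, 0.5, size=20)             # field validation data
    realizations = rng.normal(10.2, 0.5, size=(100, 20))  # stochastic model runs

    rmse = np.sqrt(((realizations - observed) ** 2).mean(axis=1))
    acceptable = int((rmse < 1.0).sum())
    print(f"{acceptable}/100 realizations conform with the validation data")
    ```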

  17. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    The application of games for behavioral change has seen a surge in popularity, but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value, whereas most quantitative findings from randomized controlled trials (RCTs) are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we will apply this model to develop guidelines for setting up validation methods for serious games. This way, we offer game designers and researchers handles on how to develop tailor-made validation methods.

  18. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    ... to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions. The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included...

  19. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  20. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified, and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis of experimental neurobiology.

  1. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  2. The dialogic validation

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2005-01-01

    This paper is inspired by dialogism and the title is a paraphrase of Bakhtin's (1981) "The Dialogic Imagination". The paper investigates how dialogism can inform the process of validating inquiry-based qualitative research. The paper stems from a case study on the role of recognition...

  3. A valid licence

    NARCIS (Netherlands)

    Spoolder, H.A.M.; Ingenbleek, P.T.M.

    2010-01-01

    Dr Hans Spoolder and Dr Paul Ingenbleek, of Wageningen University and Research Centres, share their thoughts on improving farm animal welfare in Europe. At the presentation of the European Strategy 2020 on 3rd March, President Barroso emphasised the need for...

  4. The Chimera of Validity

    Science.gov (United States)

    Baker, Eva L.

    2013-01-01

    Background/Context: Education policy over the past 40 years has focused on the importance of accountability in school improvement. Although much of the scholarly discourse around testing and assessment is technical and statistical, understanding of validity by a non-specialist audience is essential as long as test results drive our educational…

  5. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed, and of the result obtained, i.e., the converted system. This document provides an...

  6. Validation and test report

    DEFF Research Database (Denmark)

    Pedersen, Jens Meldgaard; Andersen, T. Bull

    2012-01-01

    As a consequence of extensive movement artefacts seen during dynamic contractions, the following validation and test report investigates the physiological responses to a static contraction in a standing and a supine position. Eight subjects performed static contractions of the ankle...

  7. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are...

  8. Validity and Fairness

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    This paper presents the author's critique of Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…

  9. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  10. Flight code validation simulator

    Science.gov (United States)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  11. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking of the compilation of the software via distributed teams of rotating shifters; monitoring of and follow-up on bug reports by the shifter teams; and periodic software cleaning weeks to improve the quality of the offline software further.

  12. CIPS Validation Data Plan

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2012-03-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for the CIPS VDP implementation. The main reason and motivation for this task to be carried out at this time in the VUQ FA is to bring together (i) knowledge of the modern view of and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  13. CIPS Validation Data Plan

    International Nuclear Information System (INIS)

    Dinh, Nam

    2012-01-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for the CIPS VDP implementation. The main reason and motivation for this task to be carried out at this time in the VUQ FA is to bring together (i) knowledge of the modern view of and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  14. Validating MEDIQUAL Constructs

    Science.gov (United States)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs through the different media users in help desk service. In previous research, only two end-users' constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, which are based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, measures of the same construct obtained using different methods show relatively high correlations, while measures of constructs that are expected to differ show low correlations; and 2) regression analysis shows that the five MEDIQUAL constructs have statistically significant effects on media users' satisfaction in help desk service.

  15. DDML Schema Validation

    Science.gov (United States)

    2016-02-08

    XML schemas govern DDML instance documents. For information about XML, refer to RCC 125-15, XML Style Guide. Figure 4 provides an XML snippet of a... We have documented three main types of information. User Stories: a user story describes a specific requirement of the schema in the terms of a... instance document is a schema-valid XML file that completely describes the information in the test case in a manner that satisfies the user story
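    A generic illustration of the underlying operation, validating an instance document against an XML schema, is sketched below assuming the third-party lxml package; the element names are invented for the example and are not actual DDML.

    ```python
    # Schema-validate an XML instance document (invented schema, not DDML).
    # Assumes: pip install lxml
    from lxml import etree

    schema_doc = etree.XML(b"""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="measurement">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="name" type="xs:string"/>
            <xs:element name="rate" type="xs:positiveInteger"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>""")
    schema = etree.XMLSchema(schema_doc)

    instance = etree.XML(b"<measurement><name>temp</name><rate>50</rate></measurement>")
    print("schema-valid:", schema.validate(instance))   # True for this instance
    ```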

  16. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in ''American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors,'' ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard

  17. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about an instrument. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and whether the set of items represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 documented content validity, doing so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item for relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 were retained and the rest discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity

  18. Validering av Evolution 220

    OpenAIRE

    Krakeli, Tor-Arne

    2013-01-01

    - A new spectrophotometer (Evolution 220, Thermo Scientific) has been purchased for BioLab Nofima. In that connection, a validation has been carried out involving calibration standards from the manufacturer and a test for normal distribution (t-test) on two methods (total phosphorus, tryptophan). This validation found the Evolution 220 to be an acceptable alternative to the spectrophotometer already in use (Helios Beta). Owing to some instrument limitations, the relevant an...

  19. Simulation Validation for Societal Systems

    National Research Council Canada - National Science Library

    Yahja, Alex

    2006-01-01

    .... There are however, substantial obstacles to validation. The nature of modeling means that there are implicit model assumptions, a complex model space and interactions, emergent behaviors, and uncodified and inoperable simulation and validation knowledge...

  20. Audit Validation Using Ontologies

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2015-01-01

    Full Text Available Requirements to increase quality audit processes in enterprises are defined. It substantiates the need for assessment and management audit processes using ontologies. Sets of rules, ways to assess the consistency of rules and behavior within the organization are defined. Using ontologies are obtained qualifications that assess the organization's audit. Elaboration of the audit reports is a perfect algorithm-based activity characterized by generality, determinism, reproducibility, accuracy and a well-established. The auditors obtain effective levels. Through ontologies obtain the audit calculated level. Because the audit report is qualitative structure of information and knowledge it is very hard to analyze and interpret by different groups of users (shareholders, managers or stakeholders. Developing ontology for audit reports validation will be a useful instrument for both auditors and report users. In this paper we propose an instrument for validation of audit reports contain a lot of keywords that calculates indicators, a lot of indicators for each key word there is an indicator, qualitative levels; interpreter who builds a table of indicators, levels of actual and calculated levels.

  1. Peaminister : Euroopa Liiduga ühinemise otsustab rahvas / Mart Laar ; interv. Dirk Koch, tõlk. Margus Enno

    Index Scriptorium Estoniae

    Laar, Mart, 1960-

    2000-01-01

    Also published in: Järva Teataja, 17 Oct., p. 2. The Estonian Prime Minister answers Der Spiegel's questions concerning Estonia's accession negotiations with the EU, the role of the Estonian people in making the accession decision, and Estonia's wish to become a NATO member. Published in abridged form. Author: Isamaaliit

  2. Jona se “opstanding uit die dood”: Perspektiewe op die “opstandings-geloof” vanuit die Ou Testament

    Directory of Open Access Journals (Sweden)

    Dirk J. Human

    2004-10-01

    The Jonah novelette tends to be one of the First Testament’s primary witnesses on the resurrection faith. This faith portrays the omnipotent power of God over all other threatening powers of death and chaos, be they human or divine. Only God can raise the dead from death. Jonah’s resurrection from death illustrates how Yahweh alone is responsible for this endeavour. This article focuses on Jonah’s prayer (2:3-10). It argues that the reader is persuaded to see Jonah’s flight from Yahweh and his commission ultimately leading to his ending up behind the bars of death (2:7b). Embedded in fictitious and mythological descriptions is Yahweh who delivered Jonah from the pit of death, namely Sheol (2:7c). Resurrection faith narratives in the Second Testament confirm these perspectives in the First Testament.

  3. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)

  4. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  5. Site characterization and validation

    International Nuclear Information System (INIS)

    Olsson, O.; Eriksson, J.; Falk, L.; Sandberg, E.

    1988-04-01

    The borehole radar investigation program of the SCV-site (Site Characterization and Validation) has comprised single hole reflection measurements with centre frequencies of 22, 45, and 60 MHz. The radar range obtained in the single hole reflection measurements was approximately 100 m for the lower frequency (22 MHz) and about 60 m for the centre frequency 45 MHz. In the crosshole measurements transmitter-receiver separations from 60 to 200 m have been used. The radar investigations have given a three-dimensional description of the structure at the SCV-site. A generalized model of the site has been produced which includes three major zones, four minor zones and a circular feature. These features are considered to be the most significant at the site. Smaller features than the ones included in the generalized model certainly exist but no additional features comparable to the three major zones are thought to exist. The results indicate that the zones are not homogeneous but rather that they are highly irregular, containing parts of considerably increased fracturing and parts where their contrast to the background rock is quite small. The zones appear to be approximately planar, at least at the scale of the site. At a smaller scale the zones can appear quite irregular. (authors)

  6. Spare Items validation

    International Nuclear Information System (INIS)

    Fernandez Carratala, L.

    1998-01-01

    There is increasing difficulty in purchasing safety-related spare items with certifications by manufacturers that maintain the original qualifications of the equipment of destination. The main reasons are, on top of the logical evolution of technology applied to newly manufactured components, the closing of nuclear-specific production lines and the evolution of manufacturers' quality systems, originally based on nuclear codes and standards, towards conventional industry standards. To face this problem, for many years different dedication processes have been implemented to verify whether a commercial-grade item is acceptable for use in safety-related applications. In the same way, due to our particular position regarding spare part supplies, mainly from markets other than the American one, C.N. Trillo has developed a methodology called Spare Items Validation. This methodology, originally based on dedication processes, is not a single process but a group of coordinated processes involving engineering, quality and management activities. These are performed on the spare item itself, its design control, its fabrication and its supply, to allow its use in destinations with specific requirements. The scope of application is not only focused on safety-related items, but also on components of complex design, high cost or relevance to plant reliability. The implementation at C.N. Trillo has mainly been carried out by merging, modifying and making the most of processes and activities which were already being performed in the company. (Author)

  7. SHIELD verification and validation report

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    This document outlines the verification and validation effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system code. Along with its predecessors, SHIELD has been in use at the Savannah River Site (SRS) for more than ten years. During this time the code has been extensively tested and a variety of validation documents have been issued. The primary function of this report is to specify the features and capabilities for which SHIELD is to be considered validated, and to reference the documents that establish the validation

  8. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

    This study reports on conducted experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations for the detection of surface-breaking cracks are compared with experiments in pulse-echo mode on surface-breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate with thickness 36 mm, and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies 1, 2 and 4 MHz and angles 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total, 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able to, within certain limits, perform simulations that are close to experiments. It may be concluded that: For corner echoes, the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more for some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ too much. The tip diffraction echoes also agree well in general. For some cases, where the defect is close to the probe, the amplitudes differ more than 10-15 dB, but for all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  9. Cleaning Validation of Fermentation Tanks

    DEFF Research Database (Denmark)

    Salo, Satu; Friis, Alan; Wirtanen, Gun

    2008-01-01

    Reliable test methods for checking cleanliness are needed to evaluate and validate the cleaning process of fermentation tanks. Pilot-scale tanks were used to test the applicability of various methods for this purpose. The methods found to be suitable for validation of the cleanliness were visual...

  10. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  11. Validity in SSM: neglected areas

    NARCIS (Netherlands)

    Pala, O.; Vennix, J.A.M.; Mullekom, T.L. van

    2003-01-01

    Contrary to the prevailing notion in hard OR, in soft system methodology (SSM), validity seems to play a minor role. The primary reason for this is that SSM models are of a different type, they are not would-be descriptions of real-world situations. Therefore, establishing their validity, that is

  12. The Consequences of Consequential Validity.

    Science.gov (United States)

    Mehrens, William A.

    1997-01-01

    There is no agreement at present about the importance or meaning of the term "consequential validity." It is important that the authors of revisions to the "Standards for Educational and Psychological Testing" recognize the debate and relegate discussion of consequences to a context separate from the discussion of validity.…

  13. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  14. The measurement of instrumental ADL: content validity and construct validity

    DEFF Research Database (Denmark)

    Avlund, K; Schultz-Larsen, K; Kreiner, S

    1993-01-01

    ... showed that 14 items could be combined into two qualitatively different additive scales. The IADL-measure complies with demands for content validity, distinguishes between what the elderly actually do and what they are capable of doing, and is a good discriminator among the group of elderly persons who do not depend on help. It is also possible to add the items in a valid way. However, to obtain valid IADL-scales, we omitted items that were highly relevant to especially elderly women, such as house-work items. We conclude that the criteria employed for this IADL-measure are somewhat contradictory.

  15. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    Full Text Available The author first exposes a complement of a previous test about convergent validity, then a construct validity test and finally an external validity test of the David Liberman algorithm. The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA 1) in an external comparison (to other methods), and 2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test exposes the concepts underlying the DLA, their operationalization and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation with the investigation of a more extended sample.

  16. Validation of EAF-2005 data

    International Nuclear Information System (INIS)

    Kopecky, J.

    2005-01-01

    Full text: The validation procedures applied to the EAF-2003 starter file, which led to the production of the EAF-2005 library, are described. The results, in terms of reactions with assigned quality scores in EAF-2005, are given. Further, the extensive validation against recent integral data is discussed, together with the status of the final report 'Validation of EASY-2005 using integral measurements'. Finally, the novel 'cross section trend analysis' is presented with some examples of its use. This action will lead to the release of the improved library EAF-2005.1 at the end of 2005, which shall be used as the starter file for EAF-2007. (author)

  17. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  18. Validation of Autonomous Space Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — System validation addresses the question "Will the system do the right thing?" When system capability includes autonomy, the question becomes more pointed. As NASA...

  19. Magnetic Signature Analysis & Validation System

    National Research Council Canada - National Science Library

    Vliet, Scott

    2001-01-01

    The Magnetic Signature Analysis and Validation (MAGSAV) System is a mobile platform that is used to measure, record, and analyze the perturbations to the earth's ambient magnetic field caused by objects such as armored vehicles...

  20. Mercury and Cyanide Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  1. ICP-MS Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  2. Contextual Validity in Hybrid Logic

    DEFF Research Database (Denmark)

    Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin

    2013-01-01

    interpretations. Moreover, such indexicals give rise to a special kind of validity—contextual validity—that interacts with ordinary logical validity in interesting and often unexpected ways. In this paper we model these interactions by combining standard techniques from hybrid logic with insights from the work... of Hans Kamp and David Kaplan. We introduce a simple proof rule, which we call the Kamp Rule, and first we show that it is all we need to take us from logical validities involving now to contextual validities involving now too. We then go on to show that this deductive bridge is strong enough to carry us... to contextual validities involving yesterday, today and tomorrow as well...

  3. MARS Validation Plan and Status

    International Nuclear Information System (INIS)

    Ahn, Seung-hoon; Cho, Yong-jin

    2008-01-01

    The KINS Reactor Thermal-hydraulic Analysis System (KINS-RETAS) under development is directed toward a realistic analysis approach of best-estimate (BE) codes and realistic assumptions. In this system, MARS is the pivotal code, providing the BE thermal-hydraulic (T-H) response of the core and reactor coolant system to various operational transients and accident conditions. As required for other BE codes, qualification is essential to ensure reliable and reasonable accuracy for a targeted MARS application. Validation is a key element of code qualification, and determines the capability of a computer code in predicting the major phenomena expected to occur. The MARS validation was made by its developer KAERI, on the basic premise that its backbone code RELAP5/MOD3.2 is well qualified against analytical solutions, test and operational data. A screening was made to select the test data for MARS validation; some models transplanted from RELAP5, if already validated and found to be acceptable, were screened out from assessment. This seems reasonable, but does not demonstrate whether code adequacy complies with the software QA guidelines. In particular, there may be much difficulty in validating life-cycle products such as code updates or modifications. This paper presents the plan for MARS validation, and the current implementation status

  4. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on radioactive high-level waste disposal and is generally understood to be related to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible; this goal is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculations. The paper proposes either abandoning the use of this term or agreeing to a definition which would be common to all. (J.S.). 29 refs

  5. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)

  6. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  7. The validation of an infrared simulation system

    CSIR Research Space (South Africa)

    De Waal, A

    2013-08-01

    Full Text Available theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques...

  8. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)

  9. Rapid Robot Design Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies will create a comprehensive software infrastructure for rapid validation of robot designs. The software will support push-button validation...

  10. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This document will be a living document that will track CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  11. All Validity Is Construct Validity. Or Is It?

    Science.gov (United States)

    Kane, Michael

    2012-01-01

    Paul E. Newton's article on the consensus definition of validity tackles a number of big issues and makes a number of strong claims. I agreed with much of what he said, and I disagreed with a number of his claims, but I found his article to be consistently interesting and thought provoking (whether I agreed or not). I will focus on three general…

  12. Validering av vattenkraftmodeller i ARISTO

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master's thesis was carried out to validate hydropower models of a turbine governor, a Kaplan turbine and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model to data from a real hydropower plant. The comparison was made to see how the models could generate simulation result ...

  13. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.

  14. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

    Full Text Available The article presents a review of English-language publications concerning the history and the current state of differential psychological assessment of the validity of testimonies produced by child and adolescent victims of crimes. The topicality of the problem in Russia is high due to the tendency of Russian specialists to use methodical means and instruments developed abroad in this sphere for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA) by means of Criteria-Based Content Analysis (CBCA) and a Validity Checklist is described. The results of laboratory and field studies of the validity of CBCA criteria on the basis of child and adult witnesses are discussed. The data display a good differentiating capacity of the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. New perspective developments in the field of methods for differentiating witness statements based on real experience from fictional ones are noted. The conclusion is drawn that empirical studies and special work on the adaptation and development of new approaches should precede their implementation into Russian criminal investigation and forensic assessment practice

  15. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  16. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology guides us in detecting the failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be well differentiated: Sensitivity analysis, which can be made with a DSA, differential sensitivity analysis, and with an MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis, made in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings, Esp., is presented, studying the behaviour of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs

  17. Validity of Management Control Topoi

    DEFF Research Database (Denmark)

    Nørreklit, Lennart; Nørreklit, Hanne; Israelsen, Poul

    2004-01-01

    The validity of research and company topoi for constructing/analyzing relaity is analyzed as the integration of the four aspects (dimensions): fact, possibility (logic), value and comunication. Main stream, agency theory and social constructivism are critizied for reductivism (incomplete integrat...

  18. NVN 5694 intralaboratory validation. Feasibility study for interlaboratory validation

    International Nuclear Information System (INIS)

    Voors, P.I.; Baard, J.H.

    1998-11-01

    Within the project NORMSTAR 2 a number of Dutch prenormative protocols have been defined for radioactivity measurements. Some of these protocols, e.g. the Dutch prenormative protocol NVN 5694, titled 'Methods for radiochemical determination of polonium-210 and lead-210', have not been validated, either by intralaboratory or by interlaboratory studies. Validation studies are conducted within the framework of the programme 'Normalisatie en Validatie van Milieumethoden 1993-1997' (Standardization and Validation of test methods for environmental parameters) of the Dutch Ministry of Housing, Physical Planning and the Environment (VROM). The aims of this study were (a) a critical evaluation of the protocol, (b) investigation of the feasibility of an interlaboratory study, and (c) the interlaboratory validation of NVN 5694. The evaluation of the protocol resulted in a list of deficiencies varying from missing references to incorrect formulae. From the survey by interview it appeared that for each type of material there are 4 to 7 laboratories willing to participate in an interlaboratory validation study. This reflects the situation in 1997. Consequently, if 4 or 6 (the minimal number) laboratories participate and each laboratory analyses 3 subsamples, the uncertainty in the repeatability standard deviation is 49 or 40%, respectively. If the ratio of the reproducibility standard deviation to the repeatability standard deviation is equal to 1 or 2, then the uncertainty in the reproducibility standard deviation increases from 42 to 67% and from 34 to 52% for 4 or 6 laboratories, respectively. The intralaboratory validation was established on four different types of materials. Three types of materials (milk powder, condensate and filter) were prepared in the laboratory using the raw material and certified Pb-210 solutions, and one (sediment) was obtained from the IAEA. The ECN-prepared reference materials were used after testing for homogeneity. The pre-normative protocol can

  19. An information architecture for validating courseware

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with the courseware specif...

  20. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  1. Construct Validity: Advances in Theory and Methodology

    OpenAIRE

    Strauss, Milton E.; Smith, Gregory T.

    2009-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review f...

  2. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.

  3. Validation of New Cancer Biomarkers

    DEFF Research Database (Denmark)

    Duffy, Michael J; Sturgeon, Catherine M; Söletormos, Georg

    2015-01-01

    BACKGROUND: Biomarkers are playing increasingly important roles in the detection and management of patients with cancer. Despite an enormous number of publications on cancer biomarkers, few of these biomarkers are in widespread clinical use. CONTENT: In this review, we discuss the key steps...... in advancing a newly discovered cancer candidate biomarker from pilot studies to clinical application. Four main steps are necessary for a biomarker to reach the clinic: analytical validation of the biomarker assay, clinical validation of the biomarker test, demonstration of clinical value from performance...... of the biomarker test, and regulatory approval. In addition to these 4 steps, all biomarker studies should be reported in a detailed and transparent manner, using previously published checklists and guidelines. Finally, all biomarker studies relating to demonstration of clinical value should be registered before...

  4. The validated sun exposure questionnaire

    DEFF Research Database (Denmark)

    Køster, B; Søndergaard, J; Nielsen, J B

    2017-01-01

    Few questionnaires used in monitoring sun-related behavior have been tested for validity. We established the criterion validity of a questionnaire developed for monitoring population sun-related behavior. During May-August 2013, 664 Danes wore a personal electronic UV-dosimeter for one week...... that measured the outdoor time and dose of erythemal UVR exposure. In the following week, they answered a questionnaire on their sun-related behavior in the measurement week. Outdoor time measured by dosimetry correlated strongly with both outdoor time and the developed exposure scale measured...... in the questionnaire. Exposure measured in SED by dosimetry correlated strongly with the exposure scale. In a linear regression model of UVR (SED) received, 41 percent of the variation was explained by skin type, age, week of participation and the exposure scale, with the exposure scale as the main contributor...

  5. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: The forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages...

  6. Drive: Theory and Construct Validation.

    Science.gov (United States)

    Siegling, Alex B; Petrides, K V

    2016-01-01

    This article explicates the theory of drive and describes the development and validation of two measures. A representative set of drive facets was derived from an extensive corpus of human attributes (Study 1). Operationalised using an International Personality Item Pool version (the Drive:IPIP), a three-factor model was extracted from the facets in two samples and confirmed on a third sample (Study 2). The multi-item IPIP measure showed congruence with a short form, based on single-item ratings of the facets, and both demonstrated cross-informant reliability. Evidence also supported the measures' convergent, discriminant, concurrent, and incremental validity (Study 3). Based on very promising findings, the authors hope to initiate a stream of research in what is argued to be a rather neglected niche of individual differences and non-cognitive assessment.

  7. Validation of nursing management diagnoses.

    Science.gov (United States)

    Morrison, R S

    1995-01-01

    Nursing management diagnosis based on nursing and management science, merges "nursing diagnosis" and "organizational diagnosis". Nursing management diagnosis is a judgment about nursing organizational problems. The diagnoses provide a basis for nurse manager interventions to achieve outcomes for which a nurse manager is accountable. A nursing organizational problem is a discrepancy between what should be happening and what is actually happening that prevents the goals of nursing from being accomplished. The purpose of this study was to validate 73 nursing management diagnoses identified previously in 1992: 71 of the 72 diagnoses were considered valid by at least 70% of 136 participants. Diagnoses considered to have high priority for future research and development were identified by summing the mean scores for perceived frequency of occurrence and level of disruption. Further development of nursing management diagnoses and testing of their effectiveness in enhancing decision making is recommended.

  8. Validation of radiation sterilization process

    International Nuclear Information System (INIS)

    Kaluska, I.

    2007-01-01

    The standards for quality management systems recognize that, for certain processes used in manufacturing, the effectiveness of the process cannot be fully verified by subsequent inspection and testing of the product. Sterilization is an example of such a process. For this reason, sterilization processes are validated for use, the performance of the sterilization process is monitored routinely, and the equipment is maintained according to ISO 13485. Different aspects of this standard are presented

  9. Satellite imager calibration and validation

    CSIR Research Space (South Africa)

    Vhengani, L

    2010-10-01

    Full Text Available The success or failure of any earth observation mission depends on the quality of its data. To achieve optimum levels of reliability most sensors are calibrated pre-launch. However... techniques specific to South Africa.

  10. Microservices Validation: Methodology and Implementation

    OpenAIRE

    Savchenko, D.; Radchenko, G.

    2015-01-01

    Due to the wide spread of cloud computing, the question of the architecture, design and implementation of cloud applications has become pressing. The microservice model describes the design and development of loosely coupled cloud applications where computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservic...

  11. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the

  12. Multimicrophone Speech Dereverberation: Experimental Validation

    Directory of Open Access Journals (Sweden)

    Marc Moonen

    2007-05-01

    Full Text Available Dereverberation is required in various speech processing applications such as handsfree telephony and voice-controlled systems, especially when the signals being processed were recorded in a moderately or highly reverberant environment. In this paper, we compare a number of classical and more recently developed multimicrophone dereverberation algorithms, and validate the different algorithmic settings by means of two performance indices and a speech recognition system. It is found that some of the classical solutions obtain a moderate signal enhancement. More advanced subspace-based dereverberation techniques, on the other hand, fail to enhance the signals despite their high computational load.

  13. Physical standards and valid calibration

    International Nuclear Information System (INIS)

    Smith, D.B.

    1975-01-01

    The desire for improved nuclear material safeguards has led to the development and use of a number of techniques and instruments for the nondestructive assay (NDA) of special nuclear material. Sources of potential bias in NDA measurements are discussed and methods of eliminating the effects of bias in assay results are suggested. Examples are given of instruments in which these methods have been successfully applied. The results of careful attention to potential sources of assay bias are a significant reduction in the number and complexity of standards required for valid instrument calibration, and more credible assay results. (auth)

  14. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  15. Static Validation of Security Protocols

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, P.

    2005-01-01

    We methodically expand protocol narrations into terms of a process algebra in order to specify some of the checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demonstrate that these techniques ...... suffice to identify several authentication flaws in symmetric and asymmetric key protocols such as Needham-Schroeder symmetric key, Otway-Rees, Yahalom, Andrew secure RPC, Needham-Schroeder asymmetric key, and Beller-Chang-Yacobi MSR...

  16. Software for validating parameters retrieved from satellite

    Digital Repository Service at National Institute of Oceanography (India)

    Muraleedharan, P.M.; Sathe, P.V.; Pankajakshan, T.

    -channel Scanning Microwave Radiometer (MSMR) onboard the Indian satellite Oceansat-1 during 1999-2001 were validated using this software as a case study. The program has several added advantages over the conventional method of validation that involves strenuous...

  17. Congruent Validity of the Rathus Assertiveness Schedule.

    Science.gov (United States)

    Harris, Thomas L.; Brown, Nina W.

    1979-01-01

    The validity of the Rathus Assertiveness Schedule (RAS) was investigated by correlating it with the six Class I scales of the California Psychological Inventory on a sample of undergraduate students. Results supported the validity of the RAS. (JKS)

  18. CFD validation experiments for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.

  19. A CFD validation roadmap for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments would provide the needed validation data.

  20. MODIS Hotspot Validation over Thailand

    Directory of Open Access Journals (Sweden)

    Veerachai Tanpipat

    2009-11-01

    To ensure the quality and precision of remote sensing MODIS hotspot (also known as active fire) products for forest fire control and management in Thailand, an increased level of confidence is needed. Accuracy assessment of MODIS hotspots utilizing field survey data validation is described. A quantitative evaluation of MODIS hotspot products has been carried out since the 2007 forest fire season. The carefully chosen hotspots were scattered throughout the country and within the protected areas of the National Parks and Wildlife Sanctuaries. Three areas were selected as test sites for validation guidelines. Both ground and aerial field surveys were conducted in this study by the Forest Fire Control Division, National Park, Wildlife and Plant Conservation Department, Ministry of Natural Resources and Environment, Thailand. High accuracies of 91.84%, 95.60% and 97.53% were observed for the 2007, 2008 and 2009 fire seasons, resulting in increased confidence in the use of MODIS hotspots for forest fire control and management in Thailand.

  1. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

    The ASTEC code, developed by IRSN and GRS, is aimed at providing an integral code for the simulation of the whole course of severe accidents in Light-Water Reactors. ASTEC is a complex system of codes for reactor safety assessment. In this validation, only the thermal-hydraulic module of the ASTEC code is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental program, which investigate the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The numerical results obtained are analyzed and compared to the experiments. The comparison shows a good agreement between experiments and calculations. (author)

  2. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  3. Expert system validation in prolog

    Science.gov (United States)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 code has been implemented to check for irrelevance, subsumption, duplication, deadends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary to the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
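
    For orientation only (not from the record): the connection-graph check described above can be sketched in a few lines of Python, with Prolog rules reduced to hypothetical (conditions, conclusion) pairs. An arc links rule A to rule B when A's conclusion matches one of B's conditions, and cycle checking is a traversal of that graph.

```python
# Toy sketch of a connection-graph cycle checker (not EVA's code).

def build_connection_graph(rules):
    """rules: dict mapping rule name -> (set of conditions, conclusion)."""
    graph = {name: [] for name in rules}
    for a, (_, conclusion) in rules.items():
        for b, (conditions, _) in rules.items():
            if conclusion in conditions:
                graph[a].append(b)   # arc: a's conclusion feeds b
    return graph

def find_cycles(graph):
    """Depth-first search collecting elementary cycles."""
    cycles = []
    def dfs(node, path):
        for succ in graph[node]:
            if succ == path[0]:
                cycles.append(path + [succ])
            elif succ not in path:
                dfs(succ, path + [succ])
    for start in graph:
        dfs(start, [start])
    # Keep one representative per rotation of the same cycle.
    seen, unique = set(), []
    for c in cycles:
        key = frozenset(c)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

rules = {
    "r1": ({"a"}, "b"),
    "r2": ({"b"}, "c"),
    "r3": ({"c"}, "a"),   # r1 -> r2 -> r3 -> r1 is a cycle
}
print(find_cycles(build_connection_graph(rules)))
```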

  4. Validity and Reliability in Social Science Research

    Science.gov (United States)

    Drost, Ellen A.

    2011-01-01

    In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…

  5. Validity Semantics in Educational and Psychological Assessment

    Science.gov (United States)

    Hathcoat, John D.

    2013-01-01

    The semantics, or meaning, of validity is a fluid concept in educational and psychological testing. Contemporary controversies surrounding this concept appear to stem from the proper location of validity. Under one view, validity is a property of score-based inferences and entailed uses of test scores. This view is challenged by the…

  6. Validation of the Child Sport Cohesion Questionnaire

    Science.gov (United States)

    Martin, Luc J.; Carron, Albert V.; Eys, Mark A.; Loughead, Todd

    2013-01-01

    The purpose of the present study was to test the validity evidence of the Child Sport Cohesion Questionnaire (CSCQ). To accomplish this task, convergent, discriminant, and known-group difference validity were examined, along with factorial validity via confirmatory factor analysis (CFA). Child athletes (N = 290, M_age = 10.73 plus or…

  7. The Role of Generalizability in Validity.

    Science.gov (United States)

    Kane, Michael

    The relationship between generalizability and validity is explained, making four important points. The first is that generalizability coefficients provide upper bounds on validity. The second point is that generalization is one step in most interpretive arguments, and therefore, generalizability is a necessary condition for the validity of these…
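
    The upper-bound claim in this record corresponds to a classical psychometric result, restated here for orientation (this formulation is ours, not quoted from the paper): an observed validity coefficient cannot exceed the square roots of the reliability (generalizability) coefficients of the measures involved,

$$ r_{XY} \;\le\; \sqrt{\rho_{XX'}}\,\sqrt{\rho_{YY'}} \;\le\; \sqrt{\rho_{XX'}}, $$

    where $r_{XY}$ is the correlation between test score $X$ and criterion $Y$, and $\rho_{XX'}$, $\rho_{YY'}$ are the respective reliabilities. Low generalizability therefore caps the validity attainable for any interpretation built on those scores.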

  8. Site characterization and validation - Inflow to the validation drift

    International Nuclear Information System (INIS)

    Harding, W.G.C.; Black, J.H.

    1992-01-01

    Hydrogeological experiments have had an essential role in the characterization of the drift site on the Stripa project. This report focuses on the methods employed and the results obtained from inflow experiments performed on the excavated drift in stage 5 of the SCV programme. Inflows were collected in sumps on the floor and in plastic sheeting on the upper walls and ceiling, and measured by means of differential humidity of ventilated air at the bulkhead. Detailed evaporation experiments were also undertaken on uncovered areas of the excavated drift. The inflow distribution was determined on the basis of a system of roughly equal-sized grid rectangles. The results have highlighted the overriding importance of fractures in the supply of water to the drift site. The validation drift experiment has revealed that in excess of 99% of inflow comes from a 5 m section corresponding to the 'H' zone, and that as much as 57% was observed coming from a single grid square (267). There was considerable heterogeneity even within the 'H' zone, with 38% of such sample areas yielding no flow at all. Model predictions in stage 4 underestimated the very substantial declines in inflow observed in the validation drift when compared to the SDE; this was especially so in the 'good' rock areas. Increased drawdowns in the drift have generated less flow and reduced head responses in nearby boreholes by a similar proportion. This behaviour has been the focus of considerable study in the latter part of the SCV project, and a number of potential processes have been proposed. These include 'transience', stress redistribution resulting from the creation of the drift, chemical precipitation, blast-induced dynamic unloading and related gas intrusion, and degassing. (au)

  9. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. In this way, adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  10. Convergent Validity of the PUTS

    Directory of Open Access Journals (Sweden)

    Valerie Cathérine Brandt

    2016-04-01

    Premonitory urges are a cardinal feature of Gilles de la Tourette syndrome. Severity of premonitory urges can be assessed with the Premonitory Urge for Tic Disorders Scale (PUTS). However, convergent validity of the measure has been difficult to assess due to the lack of other urge measures. We investigated the relationship between average real-time urge intensity, assessed by an in-house developed real-time urge monitor measuring urge intensity continuously for 5 min on a visual analogue scale, and general urge intensity assessed by the PUTS in 22 adult Tourette patients (mean age 29.8 ± 10.3; 19 male). Additionally, underlying factors of premonitory urges assessed by the PUTS were investigated in the adult sample using factor analysis and were replicated in 40 children and adolescents diagnosed with Tourette syndrome (mean age 12.05 ± 2.83 SD; 31 male). Cronbach's alpha for the PUTS10 was acceptable (α = .79) in the adult sample. Convergent validity between average real-time urge intensity scores (as assessed with the real-time urge monitor) and the 10-item version of the PUTS (r = .64) and the 9-item version of the PUTS (r = .66) was good. A factor analysis including the 10 items of the PUTS and average real-time urge intensity scores revealed three factors. One factor included the average real-time urge intensity score and appeared to measure urge intensity, while the other two factors can be assumed to reflect the (sensory) quality of urges and subjective control, respectively. The factor structure of the 10 PUTS items alone was replicated in a sample of children and adolescents. The results indicate that convergent validity between the PUTS and the real-time urge assessment monitor is good. Furthermore, the results suggest that the PUTS might assess more than one dimension of urges, and it may be worthwhile developing different sub-scales of the PUTS assessing premonitory urges in terms of intensity and quality, as well as subjectively…

  11. Turbine-99 unsteady simulations - Validation

    International Nuclear Information System (INIS)

    Cervantes, M J; Andersson, U; Loevgren, H M

    2010-01-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities for validating unsteady numerical simulations in Kaplan draft tubes. The present work presents simulations of the Turbine-99 test case with time-dependent, angular-resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  12. Ultrasonic techniques validation on shell

    International Nuclear Information System (INIS)

    Navarro, J.; Gonzalez, E.

    1998-01-01

    Due to the results obtained in several international RRTs during the 1980s, it has been necessary to prove the effectiveness of NDT techniques. For this reason it has been imperative to verify the goodness of the inspection procedure on different mock-ups, representative of the inspection area and containing real defects. Prior to the revision of the inspection procedure, and with the aim of updating the techniques used, it is good practice to perform different scans on the mock-ups until validation is achieved. It is at this point that all the parameters of the inspection at hand are defined: transducer, step, scan direction... and, what is more important, it is demonstrated that the technique to be used for the area requiring inspection is suitable to evaluate the degradation phenomena that could appear. (Author)

  13. Turbine-99 unsteady simulations - Validation

    Science.gov (United States)

    Cervantes, M. J.; Andersson, U.; Lövgren, H. M.

    2010-08-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities for validating unsteady numerical simulations in Kaplan draft tubes. The present work presents simulations of the Turbine-99 test case with time-dependent, angular-resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  14. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  15. Cable SGEMP Code Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Ballard, William Parker [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Center for CA Weapons Systems Engineering

    2013-05-01

    This report compares data taken on the Modular Bremsstrahlung Simulator using copper-jacketed (cujac) cables with calculations using the RHSD-RA Cable SGEMP analysis tool. The tool relies on CEPXS/ONBFP to perform radiation transport in a series of 1D slices through the cable, and then uses a Green's function technique to evaluate the expected current drive on the center conductor. The data were obtained in 2003 as part of a Cabana verification and validation experiment using 1-D geometries, but were not evaluated until now. The agreement between data and model is not adequate unless gaps between the dielectric and outer conductor (ground) are assumed, and these gaps are large compared with what is believed to be in the actual cable.

  16. Comparative Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The scope of this subtask is to perform a comparative validation of building simulation software for buildings with a double skin façade. The outline of the results in the comparative validation identifies the areas where no correspondence is achieved, i.e. calculation of the air flow rate. The conclusion is that the comparative validation can be regarded as the main argument to continue the validation of building simulation software for buildings with a double skin façade with the empirical validation test cases.

  17. Active Transportation Demand Management (ATDM) Trajectory Level Validation

    Data.gov (United States)

    Department of Transportation — The ATDM Trajectory Validation project developed a validation framework and a trajectory computational engine to compare and validate simulated and observed vehicle...

  18. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on the reliability of NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed over a decade, together with the Dept. of Mechanics at Chalmers Univ. of Technology, and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, at a later stage, compare with other software when all data are made public and available. The comparison has been with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the…
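
    As a reading aid (the amplitudes below are invented, not benchmark data), the "deviation less than 2 dB" criterion is simply a log-ratio of calibrated signal amplitudes:

```python
import numpy as np

# Hypothetical peak amplitudes for three defects, simulated vs. measured,
# both normalized to the same front-surface calibration signal.
a_sim = np.array([0.82, 0.47, 0.31])
a_exp = np.array([0.75, 0.51, 0.28])

deviation_db = 20.0 * np.log10(a_sim / a_exp)
print(deviation_db.round(2))                 # [ 0.78 -0.71  0.88]
print(np.all(np.abs(deviation_db) < 2.0))    # the "within 2 dB" criterion
```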

  19. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are…

  20. Validation of the Danish PAROLE lexicon (unpublished)

    DEFF Research Database (Denmark)

    Møller, Margrethe; Christoffersen, Ellen

    2000-01-01

    This validation is based on the Danish PAROLE lexicon dated June 20, 1998, downloaded on March 16, 1999. Subsequently, the developers of the lexicon have informed us that they have been revising the lexicon, in particular the morphological level. Morphological entries were originally generated automatically from a machine-readable version of the Official Danish Spelling Dictionary (Retskrivningsordbogen 1986, in the following RO86), and this resulted in some overgeneration, which the developers started eliminating after submitting the Danish PAROLE lexicon for validation. The present validation is, however, based on the January 1997 version of the lexicon. The validation as such complies with the specifications described in ELRA validation manuals for lexical data, i.e. Underwood and Navaretta: "A Draft Manual for the Validation of Lexica, Final Report" [Underwood & Navaretta 1997] and Braasch: "A…

  1. Reliability and validity in a nutshell.

    Science.gov (United States)

    Bannigan, Katrina; Watson, Roger

    2009-12-01

    To explore and explain the different concepts of reliability and validity as they relate to measurement instruments in social science and health care. There are different concepts contained in the terms reliability and validity, these are often explained poorly, and there is often confusion between them. To develop some clarity about reliability and validity, a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability comprises the concepts of internal consistency, stability and equivalence. Validity comprises the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant validity. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.
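
    To illustrate one of the listed concepts (internal consistency), a minimal Cronbach's alpha computation in Python; the item scores below are invented for the example:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Five respondents answering three Likert-type items (hypothetical data).
scores = np.array([[4, 5, 4], [2, 3, 3], [5, 5, 4], [3, 2, 2], [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))   # 0.886 -> acceptable consistency
```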

  2. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
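
    A minimal sketch of the sampling idea mentioned above, under assumed input distributions (the model and all numbers are placeholders, not taken from the presentation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo propagation of input variability through a stand-in model
# y = x1^2 / x2; the distributions below are assumptions for illustration.
n = 100_000
x1 = rng.normal(10.0, 0.5, n)     # measured input: mean 10, std 0.5
x2 = rng.uniform(1.8, 2.2, n)     # poorly known input: bounded interval

y = x1 ** 2 / x2                  # evaluate the model on every sample

print(f"mean = {y.mean():.2f}, std = {y.std(ddof=1):.2f}")
print("95% interval:", np.percentile(y, [2.5, 97.5]).round(2))
```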

  3. Neutron flux control systems validation

    International Nuclear Information System (INIS)

    Hascik, R.

    2003-01-01

    In nuclear installations, the main requirement is to ensure the corresponding level of nuclear safety in all operating conditions. From the nuclear safety point of view, commissioning and start-up after reactor refuelling are an appropriate period for safety system verification. In this paper, the methodology, performance and results of the validation of neutron flux measurement systems are presented. Standard neutron flux measuring chains incorporated into the reactor protection and control system are used. A standard neutron flux measuring chain contains a detector, a preamplifier, wiring to the data acquisition unit, the data acquisition unit, wiring to the display at the control room, and the display at the control room. During a reactor outage, only the data acquisition unit and the wiring and displaying at the reactor control room are verified. It is impossible to verify the detector, preamplifier and wiring to the data acquisition unit during reactor refuelling owing to the low power. Adjustment and accurate functionality of these chains are confirmed by start-up rate (SUR) measurement during start-up tests after refuelling of the reactors. This measurement has a direct impact on nuclear safety and increases the operational nuclear safety level. A brief description of each measuring system is given. Results are illustrated with measurements performed at Bohunice NPP during reactor start-up tests. Main failures and their elimination are described. (Authors)
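
    For orientation, a standard reactor-kinetics relation (not taken from the record): with a stable reactor period $\tau$ in seconds, the start-up rate in decades per minute is

$$ \mathrm{SUR} = \frac{60}{\tau \ln 10} \approx \frac{26.06}{\tau}, $$

    so the neutron level evolves as $n(t) = n_0 \, 10^{\mathrm{SUR}\, t}$ with $t$ in minutes. Comparing the SUR reported by each measuring chain during a controlled power rise is what confirms the chains' adjustment in the start-up tests described above.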

  4. ISOTHERMAL AIR INGRESS VALIDATION EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Chang H Oh; Eung S Kim

    2011-09-01

    Idaho National Laboratory carried out air ingress experiments as part of validating computational fluid dynamics (CFD) calculations. An isothermal test loop was designed and set up to understand the stratified-flow phenomenon, which governs the initial air flow into the lower plenum of the very high temperature gas cooled reactor (VHTR) when a large break loss-of-coolant accident occurs. The unique flow characteristics were focused on the VHTR air-ingress accident, in particular the flow visualization of the stratified flow in the inlet pipe to the vessel lower plenum of General Atomics' Gas Turbine-Modular Helium Reactor (GT-MHR). Brine and sucrose were used as heavy fluids, and water was used to represent a light fluid, mimicking the counter-current flow driven by the density difference between the simulant fluids. The density ratios were varied between 0.87 and 0.98. This experiment clearly showed that a stratified flow between simulant fluids was established even for very small density differences. The CFD calculations were compared with experimental data. A grid sensitivity study on the CFD models was also performed using the Richardson extrapolation and the grid convergence index method to establish the numerical accuracy of the CFD calculations. As a result, the calculated current speed showed very good agreement with the experimental data, indicating that the current CFD methods are suitable for predicting density-gradient stratified flow phenomena in the air-ingress accident.
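
    For orientation (made-up numbers, not the INL data), the Richardson-extrapolation and grid convergence index (GCI) procedure cited in the abstract works roughly as follows for three systematically refined grids:

```python
import numpy as np

# Illustrative GCI calculation in the style of Roache's method, using
# hypothetical current speeds from coarse, medium and fine grids (m/s).
f3, f2, f1 = 0.210, 0.224, 0.230   # coarse, medium, fine solutions
r = 2.0                             # grid refinement ratio

p = np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)   # observed order
f_exact = f1 + (f1 - f2) / (r**p - 1)                 # Richardson extrapolation

Fs = 1.25                                             # safety factor (3 grids)
gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1)      # fine-grid GCI

print(f"order p = {p:.2f}, extrapolated = {f_exact:.4f}, "
      f"GCI = {100 * gci_fine:.2f}%")   # p = 1.22, GCI about 2.4%
```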

  5. CTF Validation and Verification Manual

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Dances, Christopher A. [Pennsylvania State Univ., University Park, PA (United States); Magedanz, Jeffrey W. [Pennsylvania State Univ., University Park, PA (United States); Jernigan, Caleb [Holtec International, Marlton, NJ (United States); Kelly, Joeseph [U.S. Nuclear Regulatory Commission (NRC), Rockville, MD (United States); Toptan, Aysenur [North Carolina State Univ., Raleigh, NC (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [Pennsylvania State Univ., University Park, PA (United States); Palmtag, Scott [Core Physics, Inc., Cary, NC (United States); Gehin, Jess C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-05-25

    Coolant-Boiling in Rod Arrays-Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for Light Water Reactor (LWR) analysis. It uses a two-fluid, three-field (i.e., fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of nine conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and has been used and modified by several institutions over the last several decades. COBRA-TF is also used at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG), where it has been improved and updated, subsequently becoming the PSU RDFMG version of COBRA-TF (CTF). One part of the improvement process is validating the methods in CTF. This document seeks to provide a level of certainty and confidence in the predictive capabilities of the code for the scenarios it was designed to model: rod bundle geometries with operating conditions representative of prototypical Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs) in both normal and accident conditions. This is done by modeling a variety of experiments that simulate these scenarios and then presenting a qualitative and quantitative analysis of the results that demonstrates the accuracy with which CTF is capable of capturing specific quantities of interest.

  6. Seismic Data Gathering and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-02-01

    Three recent earthquakes in the last seven years have exceeded their design basis earthquake values (so it is implied that damage to SSCs should have occurred). These seismic events were recorded at North Anna (August 2011, detailed information provided in [Virginia Electric and Power Company Memo]), Fukushima Daiichi and Daini (March 2011 [TEPCO 1]), and Kashiwazaki-Kariwa (2007, [TEPCO 2]). However, seismic walkdowns at some of these plants indicate that very little damage occurred to safety class systems and components due to the seismic motion. This report presents seismic data gathered for two of the three events mentioned above and recommends a path for using those data for two purposes. One purpose is to determine what margins exist in current industry-standard seismic soil-structure interaction (SSI) tools. The second purpose is to use the data to validate seismic site response tools and SSI tools. The gathered data represent free-field soil and in-structure acceleration time histories. Gathered data also include elastic and dynamic soil properties and structural drawings. Gathering data and comparing them with existing models has the potential to identify areas of uncertainty that should be removed from current seismic analysis and SPRA approaches. Removing uncertainty (to the extent possible) from SPRAs will allow NPP owners to make decisions on where to reduce risk. Once a realistic understanding of seismic response is established for a nuclear power plant (NPP), decisions on needed protective measures, such as SI, can be made.

  7. Validation for chromatographic and electrophoretic methods

    OpenAIRE

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  8. Redundant sensor validation by using fuzzy logic

    International Nuclear Information System (INIS)

    Holbert, K.E.; Heger, A.S.; Alang-Rashid, N.K.

    1994-01-01

    This research is motivated by the need to relax the strict boundary of numeric-based signal validation. To this end, the use of fuzzy logic for redundant sensor validation is introduced. Since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. With this transformation, linguistically expressed analysis principles can be coded into a classification rule-base for signal failure detection and identification
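
    A toy sketch of the idea, not the authors' rule base (the membership shape, channel names and tolerance are all hypothetical): fuzzy membership lets the qualitative statement "two redundant channels roughly agree" be scored on [0, 1] rather than as a hard pass/fail threshold.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def agreement(r1, r2, tol):
    """Fuzzy degree to which two redundant readings agree: membership of
    their difference in a triangle centred on zero with half-width tol."""
    return float(triangular(r1 - r2, -tol, 0.0, tol))

# Hypothetical redundant temperature channels (deg C), 5-degree tolerance.
print(agreement(301.2, 302.0, 5.0))   # 0.84 -> channels consistent
print(agreement(301.2, 309.5, 5.0))   # 0.0  -> one channel suspect
```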

  9. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  10. Self-regulation strategies of white young adult male students who grew up with emotionally absent fathers / Dirk Wouter Jacobus Ackermann

    OpenAIRE

    Ackermann, Dirk Wouter Jacobus

    2014-01-01

    Young men who grew up with emotionally absent fathers seem to find it difficult to attain equilibrium through dedication to both personal and relational concerns, probably because they tend to have low self-esteem, struggle to establish intimate relationships and may be at greater risk of engaging in antisocial or violent behaviour. The aim of this study was to explore the self-regulation strategies that white young adult male students employ to deal with the emotions and cognitions related t...

  11. Review: Dirk Tänzler, Hubert Knobloch & Hans-Georg Soeffner (Eds.) (2006). Neue Perspektiven der Wissenssoziologie [New Perspectives on the Sociology of Knowledge]

    Directory of Open Access Journals (Sweden)

    Torsten Junge

    2008-10-01

    The 14 articles in this collection present theories, questions and research areas in German sociology of knowledge. The contributions analyze the construction of reality with regard to knowledge. As a science of "the real," sociology of knowledge gains access to other disciplines such as ethnology, systems theory, communication studies or cognitive science. The collection emphasizes a critical approach to the possibilities and capacities of sociology of knowledge as such. URN: urn:nbn:de:0114-fqs0901278

  12. Internal Validity: A Must in Research Designs

    Science.gov (United States)

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to the extent to which researchers can conclude that changes in the dependent variable (i.e., outcome) are caused by manipulations of the independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  13. Validation of the Netherlands pacemaker patient registry

    NARCIS (Netherlands)

    Dijk, WA; Kingma, T; Hooijschuur, CAM; Dassen, WRM; Hoorntje, JCA; van Gelder, LM

    1997-01-01

    This paper deals with the validation of the information stored in the Netherlands central pacemaker patient database. At this moment the registry database contains information on more than 70500 patients, 85000 pacemakers and 90000 leads. The validation procedures consisted of an internal

  14. Ensuring validity in qualitative International Business Research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman

    2004-01-01

    The purpose of this paper is to provide an account of how the validity issue may be grasped within a qualitative approach to the IB field.

  15. Construct Validity of Neuropsychological Tests in Schizophrenia.

    Science.gov (United States)

    Allen, Daniel N.; Aldarondo, Felito; Goldstein, Gerald; Huegel, Stephen G.; Gilbertson, Mark; van Kammen, Daniel P.

    1998-01-01

    The construct validity of neuropsychological tests in patients with schizophrenia was studied with 39 patients who were evaluated with a battery of six tests assessing attention, memory, and abstract reasoning abilities. Results support the construct validity of the neuropsychological tests in patients with schizophrenia. (SLD)

  16. 77 FR 27135 - HACCP Systems Validation

    Science.gov (United States)

    2012-05-09

    ... validation, the journal article should identify E.coli O157:H7 and other pathogens as the hazard that the..., or otherwise processes ground beef may determine that E. coli O157:H7 is not a hazard reasonably... specifications that require that the establishment's suppliers apply validated interventions to address E. coli...

  17. Validity in assessment of prior learning

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne; Aarkrog, Vibe

    2015-01-01

    …the article discusses the need for specific criteria for assessment. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult education, vocational training, lifelong learning, validity

  18. Validation of the Classroom Behavior Inventory

    Science.gov (United States)

    Blunden, Dale; And Others

    1974-01-01

    Factor-analytic methods were used to assess the construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  19. DESIGN AND VALIDATION OF A CARDIORESPIRATORY ...

    African Journals Online (AJOL)

    UJA

    This study aimed to validate the 10x20m test for children aged 3 to 6 years in order ... obtained adequate parameters of reliability and validity in healthy children aged 3 ... and is a determinant of cardiovascular risk in preschool children (Bürgi et al., ... (Seca 222, Hamburg, Germany), and weight (kg) that was recorded with a ...

  20. DESCQA: Synthetic Sky Catalog Validation Framework

    Science.gov (United States)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  1. Structural Validation of the Holistic Wellness Assessment

    Science.gov (United States)

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  2. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss …; results on a time series prediction benchmark demonstrate the potential of the linear unlearning technique…
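
    For context, a sketch of the expensive baseline being approximated (not of the unlearning method itself; the model and data below are synthetic): exact leave-one-out requires one retraining per example, which is precisely what linear unlearning avoids.

```python
import numpy as np

# Exact leave-one-out for ridge regression via repeated retraining --
# the O(n) retrainings that unlearning-style approximations sidestep.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
lam = 1e-2

def ridge_fit(X, y, lam):
    """Closed-form ridge solution w = (X'X + lam I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

errors = []
for i in range(len(y)):                  # one retraining per left-out point
    mask = np.arange(len(y)) != i
    w = ridge_fit(X[mask], y[mask], lam)
    errors.append((y[i] - X[i] @ w) ** 2)

print(f"LOO mean squared error: {np.mean(errors):.4f}")
```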

  3. Validation of self-reported erythema

    DEFF Research Database (Denmark)

    Petersen, B; Thieden, E; Lerche, C M

    2013-01-01

    Most epidemiological data of sunburn related to skin cancer have come from self-reporting in diaries and questionnaires. We thought it important to validate the reliability of such data.

  4. Validity of a Measure of Assertiveness

    Science.gov (United States)

    Galassi, John P.; Galassi, Merna D.

    1974-01-01

    This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self-and judges' ratings of assertiveness. (Author)

  5. Empirical Validation of Listening Proficiency Guidelines

    Science.gov (United States)

    Cox, Troy L.; Clifford, Ray

    2014-01-01

    Because listening has received little attention and the validation of ability scales describing multidimensional skills is always challenging, this study applied a multistage, criterion-referenced approach that used a framework of aligned audio passages and listening tasks to explore the validity of the ACTFL and related listening proficiency…

  6. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the Collision Module for the computer program DAMAGE.

  7. Is intercessory prayer valid nursing intervention?

    Science.gov (United States)

    Stang, Cecily Weller

    2011-01-01

    Is the use of intercessory prayer (IP) in modern nursing a valid practice? As discussed in current healthcare literature, IP is controversial, with authors offering support for and against the efficacy of the practice. This article reviews IP literature and research, concluding IP is a valid intervention for Christian nurses.

  8. Promoting Rigorous Validation Practice: An Applied Perspective

    Science.gov (United States)

    Mattern, Krista D.; Kobrin, Jennifer L.; Camara, Wayne J.

    2012-01-01

    As researchers at a testing organization concerned with the appropriate uses and validity evidence for our assessments, we provide an applied perspective related to the issues raised in the focus article. Newton's proposal for elaborating the consensus definition of validity is offered with the intention to reduce the risks of inadequate…

  9. The Treatment Validity of Autism Screening Instruments

    Science.gov (United States)

    Livanis, Andrew; Mouzakitis, Angela

    2010-01-01

    Treatment validity is a frequently neglected topic of screening instruments used to identify autism spectrum disorders. Treatment validity, however, should represent an important aspect of these instruments to link the resulting data to the selection of interventions as well as make decisions about treatment length and intensity. Research…

  10. Terminology, Emphasis, and Utility in Validation

    Science.gov (United States)

    Kane, Michael T.

    2008-01-01

    Lissitz and Samuelsen (2007) have proposed an operational definition of "validity" that shifts many of the questions traditionally considered under validity to a separate category associated with the utility of test use. Operational definitions support inferences about how well people perform some kind of task or how they respond to some kind of…

  11. Validating Measures of Mathematical Knowledge for Teaching

    Science.gov (United States)

    Kane, Michael

    2007-01-01

    According to Schilling, Blunk, and Hill, the set of papers presented in this journal issue had two main purposes: (1) to use an argument-based approach to evaluate the validity of the tests of mathematical knowledge for teaching (MKT), and (2) to critically assess the author's version of an argument-based approach to validation (Kane, 2001, 2004).…

  12. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for local fragments, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the currently valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
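
    A minimal sketch of the fragment-matching ingredient (grayscale histograms and synthetic patches stand in for the paper's color histograms; all values are invented):

```python
import numpy as np

def intensity_histogram(patch, bins=8):
    """Normalized intensity histogram of an image fragment."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(h1, h2):
    """Histogram similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

rng = np.random.default_rng(2)
template = rng.integers(0, 256, (16, 16))           # fragment from template
candidate = np.clip(template + rng.integers(-10, 10, (16, 16)), 0, 255)

sim = bhattacharyya(intensity_histogram(template),
                    intensity_histogram(candidate))
print(f"fragment similarity: {sim:.3f}")  # near 1 -> likely a valid fragment
```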

  13. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  14. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
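
    For orientation (synthetic numbers, and only one of many possible aggregate metrics, not necessarily the paper's): a multivariate comparison can be collapsed into a single distance referred to a chi-square distribution, under a normality assumption:

```python
import numpy as np
from scipy import stats

# Mahalanobis distance between a model's multivariate prediction and the
# mean of replicated experiments (two correlated response quantities).
obs = np.array([[4.9, 10.2], [5.1, 9.8], [5.0, 10.1], [4.8, 9.9], [5.2, 10.0]])
pred = np.array([5.0, 10.05])          # hypothetical model prediction

mean = obs.mean(axis=0)
cov = np.cov(obs, rowvar=False)        # accounts for correlation between outputs
d2 = (pred - mean) @ np.linalg.inv(cov) @ (pred - mean)

p_value = 1.0 - stats.chi2.cdf(d2, df=len(pred))
print(f"d^2 = {d2:.3f}, p = {p_value:.3f}")  # large p -> no evidence of invalidity
```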

  15. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  16. Validation of Symptom Validity Tests Using a "Child-model" of Adult Cognitive Impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P. E. J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  17. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  18. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the…

  18. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  19. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD with 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multiple groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities.

  1. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation in regard to radionuclide transport. 5 refs

  2. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  3. The validation of Huffaz Intelligence Test (HIT)

    Science.gov (United States)

    Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain

    2017-08-01

    In general, a hafiz, one who has memorized the Quran, shows many distinctive qualities, especially with respect to academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a developed psychometric instrument, namely the Huffaz Intelligence Test (HIT). This paper presents the validation and reliability of the HIT for tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis methods used include partial least squares (PLS) assessment of reliability and of convergent and discriminant validity. The study validated nine intelligences. The findings also indicated that the composite reliabilities for the nine types of intelligences are greater than 0.8. Thus, the HIT is a valid and reliable instrument to measure the multiple intelligences of huffaz.

  4. Ensuring validity in qualitative international business research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Skaates, Maria Anne

    2002-01-01

    The purpose of this paper is to provide an account of how the validity issue related to qualitative research strategies within the IB field may be grasped from an at least partially subjectivist point of view. In section two, we will first assess via the aforementioned literature review the extent...... to which the validity issue has been treated in qualitative research contributions published in six leading English-language journals which publish IB research. Thereafter, in section three, we will discuss our findings and relate them to (a) various levels of a research project and (b) the existing...... literature on potential validity problems from a more subjectivist point of view. As a part of this step, we will demonstrate that the assumptions of objectivist and subjectivist ontologies and their corresponding epistemologies merit different canons for assessing research validity. In the subsequent...

  5. Convergent Validity of Four Innovativeness Scales.

    Science.gov (United States)

    Goldsmith, Ronald E.

    1986-01-01

    Four scales of innovativeness were administered to two samples of undergraduate students: the Open Processing Scale, Innovativeness Scale, innovation subscale of the Jackson Personality Inventory, and Kirton Adaption-Innovation Inventory. Intercorrelations indicated the scales generally exhibited convergent validity. (GDC)

  6. Validity of Sensory Systems as Distinct Constructs

    OpenAIRE

    Su, Chia-Ting; Parham, L. Diane

    2014-01-01

    Confirmatory factor analysis testing whether sensory questionnaire items represented distinct sensory system constructs found, using data from two age groups, that such constructs can be measured validly using questionnaire data.

  7. Regulatory perspectives on human factors validation

    International Nuclear Information System (INIS)

    Harrison, F.; Staples, L.

    2001-01-01

    Validation is an important avenue for controlling the genesis of human error, and thus managing loss, in a human-machine system. Since there are many ways in which error may intrude upon system operation, it is necessary to consider the performance-shaping factors that could introduce error and compromise system effectiveness. Validation works to this end by examining, through objective testing and measurement, the newly developed system, procedure or staffing level, in order to identify and eliminate those factors which may negatively influence human performance. It is essential that validation be done in a high-fidelity setting, in an objective and systematic manner, using appropriate measures, if meaningful results are to be obtained. In addition, inclusion of validation work in any design process can be seen as contributing to a good safety culture, since such activity allows licensees to eliminate elements which may negatively impact on human behaviour. (author)

  8. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed at the Technical Research Centre of Finland (VTT) for VVER-type reactors. This report describes the validation work of HEXTRAN. The work was financed by the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for the calculation of accidents in which radially asymmetric phenomena are involved and both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes. The models of these codes have been shown to function correctly also within the HEXTRAN code. The main new model of HEXTRAN, the spatial neutron kinetics model, has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Connected with SMABRE, HEXTRAN can be reliably used for the calculation of transients including effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  9. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different
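
    One staple verification calculation of the sort described, sketched under the assumption of three solutions on uniformly refined grids (all numbers below are invented stand-ins for a grid-refinement study):

      # Observed order of accuracy p from solutions f1, f2, f3 on three grids
      # refined by a constant factor r, plus a Richardson-extrapolated
      # estimate of the grid-converged value.
      import math

      def observed_order(f_coarse, f_medium, f_fine, r):
          """p = ln((f_coarse - f_medium)/(f_medium - f_fine)) / ln(r)."""
          return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

      def richardson_extrapolate(f_medium, f_fine, r, p):
          """Estimate of the exact (grid-converged) value."""
          return f_fine + (f_fine - f_medium) / (r**p - 1.0)

      f1, f2, f3 = 0.9700, 0.9925, 0.9981   # coarse, medium, fine (illustrative)
      p = observed_order(f1, f2, f3, r=2.0)
      print(f"observed order ~ {p:.2f}")
      print(f"extrapolated value ~ {richardson_extrapolate(f2, f3, 2.0, p):.4f}")

    An observed order close to the scheme's formal order is the usual evidence that the discretization error is behaving as designed.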

  10. An information architecture for courseware validation

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    A lack of pedagogy in courseware can lead to learner rejection. It is therefore vital that pedagogy is a central concern of courseware construction. Courseware validation allows the course creator to specify pedagogical rules and principles which courseware must conform to. In this paper we investigate the information needed for courseware validation and propose an information architecture to be used as a basis for validation.

  11. Italian Validation of Homophobia Scale (HS)

    OpenAIRE

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L.; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A.

    2015-01-01

    Introduction: The Homophobia Scale (HS) is a valid tool to assess homophobia. The test is self-report, composed of 25 items, and assesses a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. Aim: The aim of this study was to validate the HS in the Italian context. Methods: An Italian translation of the HS was carried out by two bilingual people, after which an English native translated the test back i...

  12. VAlidation STandard antennas: Past, present and future

    DEFF Research Database (Denmark)

    Drioli, Luca Salghetti; Ostergaard, A; Paquay, M

    2011-01-01

    designed for validation campaigns of antenna measurement ranges. The driving requirements of VAST antennas are their mechanical stability over a given operational temperature range and with respect to any orientation of the gravity field. The mechanical design shall ensure extremely stable electrical....../V-band of telecom satellites. The paper will address requirements for future VASTs and possible architecture for multi-frequency Validation Standard antennas....

  13. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  14. Empirical Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA), Task 34/43: Testing and validation of building energy simulation tools experts group.

  15. Verification and Validation in Systems Engineering

    CERN Document Server

    Debbabi, Mourad; Jarraya, Yosr; Soeanu, Andrei; Alawneh, Luay

    2010-01-01

    "Verification and validation" represents an important process used for the quality assessment of engineered systems and their compliance with the requirements established at the beginning of or during the development cycle. Debbabi and his coauthors investigate methodologies and techniques that can be employed for the automatic verification and validation of systems engineering design models expressed in standardized modeling languages. Their presentation includes a bird's eye view of the most prominent modeling languages for software and systems engineering, namely the Unified Model

  16. The Legality and Validity of Administrative Enforcement

    Directory of Open Access Journals (Sweden)

    Sergei V. Iarkovoi

    2018-01-01

    The article discusses the concept and content of the validity of legal acts adopted, and legal actions taken, by executive authorities and other bodies of public administration, as an important characteristic of law enforcement by these bodies. The author concludes that the validity of administrative law enforcement is not an independent requirement but an integral part of its legality requirements.

  17. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-bas...
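
    As a concrete reference point for the setting the abstract analyzes, here is a plain k-fold cross-validation loop for linear regression; this is a generic illustration on synthetic data, not the paper's derivation:

      # k-fold cross-validation error for ordinary least-squares regression.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))
      y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)

      def kfold_mse(X, y, k=5):
          """Mean squared prediction error averaged over k held-out folds."""
          idx = np.arange(len(y))
          rng.shuffle(idx)
          folds = np.array_split(idx, k)
          errors = []
          for test in folds:
              train = np.setdiff1d(idx, test)
              Xtr = np.column_stack([np.ones(len(train)), X[train]])
              Xte = np.column_stack([np.ones(len(test)), X[test]])
              beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
              errors.append(np.mean((y[test] - Xte @ beta) ** 2))
          return float(np.mean(errors))

      print(f"5-fold CV error: {kfold_mse(X, y):.4f}")

    The simplicity/accuracy trade-off the abstract mentions shows up directly here: adding regressors lowers training error but can raise this held-out error.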

  18. DTU PMU Laboratory Development - Testing and Validation

    OpenAIRE

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.; Nielsen, Arne Hejde; Østergaard, Jacob

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested from the authors' previous efforts, where the response can be expected to foll...

  19. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal models. A survey of current practices and techniques was undertaken and evaluated using these criteria, with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  20. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam University; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  1. Validating presupposed versus focused text information.

    Science.gov (United States)

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  2. Assessment of validity with polytrauma Veteran populations.

    Science.gov (United States)

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience is utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  3. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  4. [Validation of the IBS-SSS].

    Science.gov (United States)

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain, associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of severity of the disease. One of these is the irritable bowel syndrome severity scoring system (IBS-SSS). The IBS-SSS was validated in 1997 in its English version. In the present study, the IBS-SSS has been validated in German. To do this, a cohort of 60 patients with IBS according to the Rome III criteria was compared with a control group of healthy individuals (n = 38). We studied sensitivity and reproducibility of the score, as well as the sensitivity to detect changes of symptom severity. The results of the German validation largely reflect the results of the English validation. The German version of the IBS-SSS is also a valid, meaningful and reproducible questionnaire with a high sensitivity to assess changes in symptom severity, especially in IBS patients with moderate symptoms. It is unclear whether the IBS-SSS is also a valid questionnaire in IBS patients with severe symptoms, because this group of patients was not studied.

  5. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, permits booting any snapshot of the operating system in time: we show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)

  6. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decision makers seek validation, and from that basis redefine validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk-generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best-use-of-available-data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests. Does the model capture initiation? Does it capture the sequence of events by which attack scenarios unfold? Does it consider unanticipated scenarios? Does it consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  7. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
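
    A toy sketch of the calibration/validation distinction drawn above, with an invented model form and invented data: parameters are adjusted against one data set, and predictive capability is then quantified against a separate one.

      # Calibrate an exponential-decay model on one data set, then validate
      # predictions against an independent data set. Everything is synthetic.
      import numpy as np

      rng = np.random.default_rng(1)

      def model(t, a, b):
          return a * np.exp(-b * t)

      t_cal = np.linspace(0.0, 2.0, 20)            # calibration times
      t_val = np.linspace(2.0, 4.0, 20)            # held-out validation times
      y_cal = model(t_cal, 3.0, 1.2) + rng.normal(scale=0.05, size=t_cal.size)
      y_val = model(t_val, 3.0, 1.2) + rng.normal(scale=0.05, size=t_val.size)

      # Calibration: the model is linear in log space, so use least squares.
      design = np.column_stack([np.ones_like(t_cal), -t_cal])
      coef, *_ = np.linalg.lstsq(design, np.log(y_cal), rcond=None)
      a_hat, b_hat = float(np.exp(coef[0])), float(coef[1])

      # Validation: quantified disagreement on the independent data set.
      rmse = float(np.sqrt(np.mean((model(t_val, a_hat, b_hat) - y_val) ** 2)))
      print(f"calibrated a={a_hat:.2f}, b={b_hat:.2f}; validation RMSE={rmse:.3f}")

    Note that agreement on the calibration data alone says nothing about predictive capability; only the held-out comparison does, which is the distinction the abstract insists on.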

  8. Validation studies of nursing diagnoses in neonatology

    Directory of Open Access Journals (Sweden)

    Pavlína Rabasová

    2016-03-01

    Aim: The objective of the review was the analysis of Czech and foreign literature sources and professional periodicals to obtain a relevant comprehensive overview of validation studies of nursing diagnoses in neonatology. Design: Review. Methods: The selection criterion was studies concerning the validation of nursing diagnoses in neonatology. To obtain data from relevant sources, the licensed professional databases EBSCO, Web of Science and Scopus were utilized. The search criteria were: date of publication - unlimited; academic periodicals - full text; peer-reviewed periodicals; search language - English, Czech and Slovak. Results: A total of 788 studies were found. Only 5 studies were eligible for content analysis, dealing specifically with validation of nursing diagnoses in neonatology. The analysis of the retrieved studies suggests that authors are most often concerned with identifying the defining characteristics of nursing diagnoses applicable to both the mother (parents) and the newborn. The diagnoses were validated in the domains Role Relationship; Coping/Stress Tolerance; Activity/Rest; and Elimination and Exchange. The diagnoses represented dysfunctional physical needs as well as psychosocial and spiritual needs, and were as follows: Parental role conflict (00064); Impaired parenting (00056); Grieving (00136); Ineffective breathing pattern (00032); Impaired gas exchange (00030); and Impaired spontaneous ventilation (00033). Conclusion: Validation studies enable effective planning of interventions with measurable results and support clinical nursing practice.

  9. A validated battery of vocal emotional expressions

    Directory of Open Access Journals (Sweden)

    Pierre Maurage

    2007-11-01

    For a long time, the exploration of emotions focused on facial expression, and vocal expression of emotion has only recently received interest. However, no validated battery of emotional vocal expressions has been published and made available to the researchers’ community. This paper aims at validating and proposing such material. 20 actors (10 men) recorded sounds (words and interjections) expressing six basic emotions (anger, disgust, fear, happiness, neutral and sadness). These stimuli were then submitted to a double validation phase: (1) preselection by experts; (2) quantitative and qualitative validation by 70 participants. 195 stimuli were selected for the final battery, each one depicting a precise emotion. The ratings provide a complete measure of intensity and specificity for each stimulus. This paper provides, to our knowledge, the first validated, freely available and highly standardized battery of emotional vocal expressions (words and intonations). This battery could constitute an interesting tool for the exploration of prosody processing among normal and pathological populations, in neuropsychology as well as psychiatry. Further work is nevertheless needed to complement the present material.

  10. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  11. Italian version of Dyspnoea-12: cultural-linguistic validation, quantitative and qualitative content validity study.

    Science.gov (United States)

    Caruso, Rosario; Arrigoni, Cristina; Groppelli, Katia; Magon, Arianna; Dellafiore, Federica; Pittella, Francesco; Grugnetti, Anna Maria; Chessa, Massimo; Yorke, Janelle

    2018-01-16

    Dyspnoea-12 is a valid and reliable scale to assess dyspnoeic symptoms, considering their severity and physical and emotional components. However, it was not available in an Italian version because it had not yet been translated and validated. The aim of this study was therefore to develop an Italian version of Dyspnoea-12, providing a cultural and linguistic validation supported by quantitative and qualitative content validity. This was a methodological study divided into two phases: phase one covered the cultural and linguistic validation, and phase two tested the quantitative and qualitative content validity. Linguistic validation followed a standardized translation process. Quantitative content validity was assessed by computing the content validity ratio (CVR) and indexes (I-CVIs and S-CVI) from the expert panellists' responses. Qualitative content validity was assessed by narrative analysis of the answers to three open-ended questions posed to the expert panellists, aimed at investigating the clarity and pertinence of the Italian items. The translation process found good agreement on the clarity of the items both among the six bilingual expert translators involved and among the ten patients who volunteered. The CVR, I-CVIs and S-CVI were satisfactory for all the translated items. This study represents a pivotal step towards using Dyspnoea-12 with Italian patients. Future research is needed to investigate the construct validity and reliability of the Italian version of Dyspnoea-12 and to describe how dyspnoea components (i.e. physical and emotional) impact the lives of patients with cardiorespiratory diseases.
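
    The content-validity statistics named above have standard textbook forms (Lawshe's CVR; item- and scale-level CVIs); the sketch below uses those standard formulas with an invented expert panel, and is not the study's actual computation:

      # CVR = (n_e - N/2) / (N/2), where n_e experts rate an item "essential";
      # I-CVI = proportion of experts rating an item relevant (3 or 4 on a
      # 4-point scale); S-CVI/Ave = mean of the I-CVIs over all items.

      def cvr(n_essential, n_experts):
          return (n_essential - n_experts / 2) / (n_experts / 2)

      def i_cvi(ratings):
          """ratings: one item's 1-4 relevance scores from the expert panel."""
          return sum(r >= 3 for r in ratings) / len(ratings)

      panel = [            # rows = items, columns = 8 experts (hypothetical)
          [4, 4, 3, 4, 4, 3, 4, 4],
          [3, 4, 4, 4, 2, 4, 3, 4],
      ]
      i_cvis = [i_cvi(item) for item in panel]
      s_cvi_ave = sum(i_cvis) / len(i_cvis)
      print(f"I-CVIs = {i_cvis}, S-CVI/Ave = {s_cvi_ave:.2f}")
      print(f"CVR for 7/8 'essential' votes = {cvr(7, 8):.2f}")

    Common rules of thumb treat I-CVI >= 0.78 and S-CVI/Ave >= 0.90 as satisfactory, though the study's own acceptance thresholds are not stated in the abstract.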

  12. Validation of the prosthetic esthetic index

    DEFF Research Database (Denmark)

    Özhayat, Esben B; Dannemand, Katrine

    2014-01-01

    OBJECTIVES: In order to diagnose impaired esthetics and evaluate treatments for these, it is crucial to evaluate all aspects of oral and prosthetic esthetics. No professionally administered index currently exists that sufficiently encompasses comprehensive prosthetic esthetics. This study aimed...... to validate a new comprehensive index, the Prosthetic Esthetic Index (PEI), for professional evaluation of esthetics in prosthodontic patients. MATERIAL AND METHODS: The content, criterion, and construct validity; the test-retest, inter-rater, and internal consistency reliability; and the sensitivity...... furthermore distinguish between participants and controls, indicating sufficient sensitivity. CONCLUSION: The PEI is considered a valid and reliable instrument involving sufficient aspects for assessment of the professionally evaluated esthetics in prosthodontic patients. CLINICAL RELEVANCE...

  13. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFRs) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity property of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
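
    A minimal sketch of the game-theoretic core of such an approach, assuming a zero-sum attacker-defender game with an invented 2x2 payoff matrix (this is a generic construction, not the paper's example): the defender's optimal mixed strategy and the game value are obtained by linear programming.

      # Solve max_x min_j sum_i A[i,j]*x[i] over mixed strategies x via LP.
      import numpy as np
      from scipy.optimize import linprog

      A = np.array([[ 3.0, -1.0],     # rows: defender actions (e.g. patch, monitor)
                    [-2.0,  2.0]])    # cols: attacker actions (e.g. exploit, probe)

      n = A.shape[0]
      # Variables: [x_1..x_n, v]; maximize v  <=>  minimize -v.
      c = np.zeros(n + 1)
      c[-1] = -1.0
      # For every attacker column j: sum_i A[i,j]*x_i >= v, i.e. -A.T x + v <= 0.
      A_ub = np.hstack([-A.T, np.ones((A.shape[1], 1))])
      b_ub = np.zeros(A.shape[1])
      A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum x = 1
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, None)] * n + [(None, None)], method="highs")
      x, v = res.x[:n], res.x[-1]
      print(f"defender mixed strategy = {np.round(x, 3)}, game value = {v:.3f}")

    For this payoff matrix the optimal defense mixes both actions equally with game value 0.5; a validation exercise would check that the specified security posture achieves at least this guaranteed value.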

  14. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is supported. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high-energy ion beams. The codes have been applied in the design of the SAGE-III instrument, resulting in material changes to control injurious neutron production; in the study of Space Shuttle single-event upsets; and in validation with space measurements (particle telescopes, tissue-equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation.

  15. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs) is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine, based on findings from, and using competency models and assessment instruments developed during, the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  16. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept of and criteria for value validity as a means of determining whether an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that does comply fully with the value-validity requirements, and its statistical inference procedure is discussed.
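
    For reference, the two basic quantities the paper evaluates, sketched in a few lines (the paper's specific power-function corrections and Euclidean-distance measure are not reproduced here):

      # Shannon entropy S and its normalized relative form S* = S / ln(n).
      import math

      def shannon_entropy(p):
          return -sum(pi * math.log(pi) for pi in p if pi > 0)

      def normalized_entropy(p):
          return shannon_entropy(p) / math.log(len(p))

      uniform = [0.25] * 4                     # maximal uncertainty
      skewed = [0.85, 0.05, 0.05, 0.05]        # one dominant category
      for dist in (uniform, skewed):
          print(dist, f"S={shannon_entropy(dist):.3f}",
                f"S*={normalized_entropy(dist):.3f}")

    The value-validity question is whether such raw values support the intended comparisons between distributions; the paper argues they do not without transformation.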

  17. DTU PMU Laboratory Development - Testing and Validation

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE...... standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested from the authors' previous efforts, where the response can be expected to follow known patterns and provide confirmation about the test system to confirm the design and settings....... In a nutshell, having 2 PMUs that observe same signals provides validation of the operation and flags questionable results with more certainty. Moreover, the performance and accuracy of the DTU-PMU is tested acquiring good and precise results, when compared with a commercial phasor measurement device, PMU-1....

  18. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  19. Orthorexia nervosa: validation of a diagnosis questionnaire.

    Science.gov (United States)

    Donini, L M; Marsili, D; Graziani, M P; Imbriale, M; Cannella, C

    2005-06-01

    To validate a questionnaire for the diagnosis of orthorexia nervosa, an eating disorder defined as a "maniacal obsession for healthy food", 525 subjects were enrolled. They were then randomized into two samples (a sample of 404 subjects for the construction of the test for the diagnosis of orthorexia, ORTO-15, and a sample of 121 subjects for the validation of the test). The ORTO-15 questionnaire, validated for the diagnosis of orthorexia, is made up of 15 multiple-choice items. The test we propose for the diagnosis of orthorexia (ORTO-15) showed good predictive capability at a threshold value of 40 (efficacy 73.8%, sensitivity 55.6% and specificity 75.8%), also on verification with a control sample. However, it has a limit in identifying the obsessive disorder. For this reason we maintain that further investigation is necessary and that new questions useful for the evaluation of obsessive-compulsive behavior should be added to the ORTO-15 questionnaire.
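
    How threshold statistics of this kind are computed from a 2x2 diagnostic table; the counts below are invented, and only the formulas mirror the abstract:

      # Sensitivity, specificity, and overall efficacy (accuracy) from a
      # 2x2 table of test outcome vs. true diagnostic status.

      def diagnostic_metrics(tp, fn, fp, tn):
          sensitivity = tp / (tp + fn)                 # true cases flagged by the test
          specificity = tn / (tn + fp)                 # non-cases correctly cleared
          efficacy = (tp + tn) / (tp + fn + fp + tn)   # overall correct classification
          return sensitivity, specificity, efficacy

      sens, spec, eff = diagnostic_metrics(tp=50, fn=40, fp=75, tn=235)
      print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, efficacy={eff:.1%}")

    For ORTO-15 the test is positive when the score falls below the threshold of 40, since lower scores indicate more orthorexic responding.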

  20. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. 16 refs

  1. Validation of Visual Caries Activity Assessment

    DEFF Research Database (Denmark)

    Guedes, R S; Piovesan, C; Ardenghi, T M

    2014-01-01

    We evaluated the predictive and construct validity of a caries activity assessment system associated with the International Caries Detection and Assessment System (ICDAS) in primary teeth. A total of 469 children were reexamined: participants of a caries survey performed 2 yr before (follow-up rate...... of 73.4%). At baseline, children (12-59 mo old) were examined with the ICDAS and a caries activity assessment system. The predictive validity was assessed by evaluating the risk of active caries lesion progression to more severe conditions in the follow-up, compared with inactive lesions. We also...... assessed if children with a higher number of active caries lesions were more likely to develop new lesions (construct validity). Noncavitated active caries lesions at occlusal surfaces presented higher risk of progression than inactive ones. Children with a higher number of active lesions and with higher...

  2. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Objective: To validate a simple scoring system to classify dengue viral infection severity in patients in different settings. Methods: The scoring system, developed from 777 patients at three tertiary-care hospitals, was applied to 400 patients in validation data obtained from another three tertiary-care hospitals. The percentages of correct classification, underestimation, and overestimation were compared. The score's discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data differed from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performance between the validation and development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine
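
    The ROC-curve comparison described above reduces to areas under the curve, which can be computed directly from raw scores via the rank-sum (Mann-Whitney) identity; a sketch with invented severity scores:

      # AUC = P(score_severe > score_mild) + 0.5 * P(tie), by pairwise comparison.
      import numpy as np

      def auc(scores_pos, scores_neg):
          sp = np.asarray(scores_pos, dtype=float)[:, None]
          sn = np.asarray(scores_neg, dtype=float)[None, :]
          return float((sp > sn).mean() + 0.5 * (sp == sn).mean())

      severe = [7, 8, 6, 9, 7, 8]     # scores in severe outcomes (invented)
      mild = [3, 5, 4, 6, 2, 5]       # scores in mild outcomes (invented)
      print(f"AUC = {auc(severe, mild):.2f}")

    Comparing the AUC computed on the development data with that on the validation data is what quantifies the loss of discriminative performance the abstract reports.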

  3. Human Factors methods concerning integrated validation of nuclear power plant control rooms (Method development for integrated validation)

    Energy Technology Data Exchange (ETDEWEB)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation, and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group was conducted with representatives responsible for Human Factors issues from all Swedish NPPs. The questions discussed included, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected by changes in the control room. The report raises various questions concerning the validation process. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for the identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need for common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations on the responsibility of external participants in the validation process. The authors propose 12 measures for addressing the identified problems

  4. Further Validation of the IDAS: Evidence of Convergent, Discriminant, Criterion, and Incremental Validity

    Science.gov (United States)

    Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott

    2008-01-01

    The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…
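
    Convergent and discriminant validity claims of this kind come down to comparing correlations: a scale should correlate strongly with a parallel measure of the same construct and more weakly with measures of different constructs. A synthetic illustration of the expected pattern (not the IDAS data):

      # Simulate two measures of one construct plus an unrelated measure,
      # then compare convergent and discriminant correlations.
      import numpy as np

      rng = np.random.default_rng(2)
      latent = rng.normal(size=200)                      # simulated symptom severity
      self_report = latent + rng.normal(scale=0.5, size=200)
      clinician = latent + rng.normal(scale=0.5, size=200)
      unrelated = rng.normal(size=200)                   # a different construct

      r_convergent = np.corrcoef(self_report, clinician)[0, 1]
      r_discriminant = np.corrcoef(self_report, unrelated)[0, 1]
      print(f"convergent r={r_convergent:.2f}, discriminant r={r_discriminant:.2f}")

    Validity evidence of the kind the abstract reports amounts to the convergent correlations being consistently and substantially larger than the discriminant ones.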

  5. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    Science.gov (United States)

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  6. Marketing Plan for Demonstration and Validation Assets

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-05-30

    The National Security Preparedness Project (NSPP) is to be sustained by various programs, including technology demonstration and validation (DEMVAL). This project assists companies in developing technologies under the National Security Technology Incubator program (NSTI) through demonstration and validation of national security technologies created by incubators and other sources. The NSPP will also support the creation of an integrated demonstration and validation environment. This report documents the DEMVAL marketing and visibility plan, which will focus on collecting information about, and expanding the visibility of, DEMVAL assets serving businesses with national security technology applications in southern New Mexico.

  7. Italian Validation of Homophobia Scale (HS)

    Directory of Open Access Journals (Sweden)

    Giacomo Ciocca, PsyD, PhD

    2015-09-01

    Conclusions: The Italian validation of the HS revealed that this self-report test has good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior. Ciocca G, Capuano N, Tuziak B, Mollaioli D, Limoncin E, Valsecchi D, Carosa E, Gravina GL, Gianfrilli D, Lenzi A, and Jannini EA. Italian validation of the Homophobia Scale (HS). Sex Med 2015;3:213–218.

  8. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  9. Validating Acquisition IS Integration Readiness with Drills

    DEFF Research Database (Denmark)

    Wynne, Peter J.

    2017-01-01

    To companies, mergers and acquisitions are important strategic tools, yet they often fail to deliver their expected value. Studies have shown the integration of information systems is a significant roadblock to the realisation of acquisition benefits, and for an IT department to be ready......), to understand how an IT department can use them to validate their integration plans. The paper presents a case study of two drills used to validate an IT department’s readiness to carry out acquisition IS integration, and suggests seven acquisition IS integration drill characteristics others could utilise when...

  10. Reliability and validity of risk analysis

    International Nuclear Information System (INIS)

    Aven, Terje; Heide, Bjornar

    2009-01-01

    In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis: relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways, and consequently the meanings of the concepts of reliability and validity are not the same.

  11. Method Validation Procedure in Gamma Spectroscopy Laboratory

    International Nuclear Information System (INIS)

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, including the management and technical requirements. A set of documents, written procedures and records was prepared to achieve the management part. For the technical requirements, internal method validation was applied through the estimation of trueness, repeatability, minimum detectable activity and combined uncertainty; participation in IAEA proficiency tests assures the external method validation, especially as the gamma spectroscopy laboratory is a member of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). Some of these results are presented in this paper. (author)
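
    As a concrete illustration of the figures of merit this abstract names, the sketch below computes trueness (relative bias against a certified reference), repeatability (relative standard deviation of replicates) and a minimum detectable activity via the common Currie approximation. All numbers and counting parameters are illustrative, not taken from the paper.

```python
import math
import statistics

def relative_bias(measured, reference):
    """Trueness: relative bias (%) of replicate results vs. a certified value."""
    return 100.0 * (statistics.mean(measured) - reference) / reference

def repeatability_rsd(measured):
    """Repeatability: relative standard deviation (%) of replicate results."""
    return 100.0 * statistics.stdev(measured) / statistics.mean(measured)

def mda_currie(background_counts, efficiency, emission_prob, live_time_s):
    """Minimum detectable activity (Bq) using Currie's approximation:
    L_D = 2.71 + 4.65 * sqrt(B) counts, converted to activity."""
    detection_limit = 2.71 + 4.65 * math.sqrt(background_counts)
    return detection_limit / (efficiency * emission_prob * live_time_s)

# Illustrative replicate measurements of a 100 Bq/kg reference material:
replicates = [102.1, 98.7, 101.4, 99.9, 100.6]
print(f"relative bias    : {relative_bias(replicates, 100.0):+.2f} %")
print(f"repeatability RSD: {repeatability_rsd(replicates):.2f} %")
print(f"MDA              : {mda_currie(450, 0.025, 0.85, 3600):.4f} Bq")
```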

  12. Plant monitoring and signal validation at HFIR

    International Nuclear Information System (INIS)

    Mullens, J.A.

    1991-01-01

    This paper describes a monitoring system for the Oak Ridge National Laboratory's (ORNL's) High Flux Isotope Reactor (HFIR). HFIR is an 85 MW pressurized water reactor designed to produce isotopes and intense neutron beams. The monitoring system is described with respect to plant signals and the computer system; monitoring overview; data acquisition, logging and network distribution; signal validation; status displays; reactor condition monitoring; and reactor operator aids. Future work will include the addition of more plant signals, more signal validation and diagnostic capabilities, an improved status display, integration of the system with the RELAP plant simulation and graphical interface, improved operator aids, and an alarm filtering system. 8 refs., 7 figs. (MB)
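
    The abstract does not spell out its signal validation algorithm, but a minimal sketch of one standard idea, median voting among redundant channels, is shown below; the channel names and the tolerance are hypothetical.

```python
from statistics import median

def validate_redundant(readings, tol):
    """Compare redundant sensor channels against their median and flag
    any channel deviating by more than tol. Returns (consensus, suspects)."""
    consensus = median(readings.values())
    suspects = [ch for ch, value in readings.items()
                if abs(value - consensus) > tol]
    return consensus, suspects

# Hypothetical redundant coolant-temperature channels (degrees C):
channels = {"TE-1A": 56.1, "TE-1B": 55.9, "TE-1C": 61.3, "TE-1D": 56.0}
value, bad = validate_redundant(channels, tol=2.0)
print(f"validated value = {value:.1f} C, suspect channels = {bad}")
```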

  13. Static validation of licence conformance policies

    DEFF Research Database (Denmark)

    Hansen, Rene Rydhof; Nielson, Flemming; Nielson, Hanne Riis

    2008-01-01

    Policy conformance is a security property gaining importance due to commercial interests such as Digital Rights Management. It is well known that static analysis can be used to validate a number of more classical security policies, such as discretionary and mandatory access control policies, as well...... as communication protocols using symmetric and asymmetric cryptography. In this work we show how to develop a Flow Logic for validating the conformance of client software with respect to a licence conformance policy. Our approach is sufficiently flexible that it extends to fully open systems that can admit new...

  14. Validation of Housing Standards Addressing Accessibility

    DEFF Research Database (Denmark)

    Helle, Tina

    2013-01-01

    The aim was to explore the use of an activity-based approach to determine the validity of a set of housing standards addressing accessibility. This included examination of the frequency and the extent of accessibility problems among older people with physical functional limitations who used...... participant groups were examined. Performing well-known kitchen activities was associated with accessibility problems for all three participant groups, in particular those using a wheelchair. The overall validity of the housing standards examined was poor. Observing older people interacting with realistic...... environments while performing real everyday activities seems to be an appropriate method for assessing accessibility problems....

  15. Validity of Linder Hypothesis in Bric Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

    Full Text Available In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to test whether it conforms to this hypothesis. Using data for the period 1996–2010, the study applies panel data analysis in order to provide evidence regarding the empirical validity of the Linder hypothesis for the BRIC countries’ international trade. Empirical findings show that the trade between BRIC countries supports the Linder hypothesis.

  16. Preliminary Validation of Composite Material Constitutive Characterization

    Science.gov (United States)

    John G. Michopoulos; Athanasios Iliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson

    2012-01-01

    This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3, developed at the Naval Research Laboratory. The testing is followed by...

  17. Recent cold fusion claims: are they valid?

    International Nuclear Information System (INIS)

    Kowalski, Ludwik

    2006-01-01

    Cold fusion consists of nuclear reactions occurring in solid metals loaded with hydrogen. Considerable progress has been made in that area in the last ten years. This 2004 paper summarizes recent claims without attempting to evaluate their validity. The manuscript was submitted to seven physics journals. Unfortunately, the editors rejected it without the benefit of the usual peer-review process. (author)

  18. Static Validation of a Voting Protocol

    DEFF Research Database (Denmark)

    Nielsen, Christoffer Rosenkilde; Andersen, Esben Heltoft; Nielson, Hanne Riis

    2005-01-01

    is formalised in an extension of the LySa process calculus with blinding signatures. The analysis, which is fully automatic, pinpoints previously undiscovered flaws related to verifiability and accuracy and we suggest modifications of the protocol needed for validating these properties....

  19. Sampling for validation of digital soil maps

    NARCIS (Netherlands)

    Brus, D.J.; Kempen, B.; Heuvelink, G.B.M.

    2011-01-01

    The increase in digital soil mapping around the world means that appropriate and efficient sampling strategies are needed for validation. Data used for calibrating a digital soil mapping model typically are non-random samples. In such a case we recommend collection of additional independent data and
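
    For the independent validation sample the authors recommend, map quality can then be estimated design-based. A minimal sketch, assuming a simple random sample of paired map predictions and field observations (all values are made up):

```python
import math

def design_based_validation(predicted, observed):
    """Design-based estimates from a simple random validation sample:
    mean error (bias), RMSE, and the standard error of the mean error."""
    n = len(predicted)
    errors = [p - o for p, o in zip(predicted, observed)]
    me = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    se_me = math.sqrt(sum((e - me) ** 2 for e in errors) / (n - 1) / n)
    return me, rmse, se_me

# Illustrative pairs (e.g. predicted vs. observed topsoil organic carbon, %):
pred = [2.1, 1.8, 3.0, 2.6, 1.9, 2.4, 2.8, 2.2]
obs  = [2.3, 1.6, 3.4, 2.5, 2.0, 2.9, 2.6, 2.1]
me, rmse, se_me = design_based_validation(pred, obs)
print(f"ME = {me:+.3f}, RMSE = {rmse:.3f}, SE(ME) = {se_me:.3f}")
```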

  20. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  1. Validation of the Organizational Culture Assessment Instrument

    Science.gov (United States)

    Heritage, Brody; Pollock, Clare; Roberts, Lynne

    2014-01-01

    Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged. PMID:24667839

  2. The Validity of Subjective Performance Measures

    DEFF Research Database (Denmark)

    Meier, Kenneth J.; Winter, Søren C.; O'Toole, Laurence J.

    2015-01-01

    to provide, and are highly policy specific, rendering generalization difficult. But are perceptual performance measures valid, and do they generate unbiased findings? We examine these questions in a comparative study of middle managers in schools in Texas and Denmark. The findings are remarkably similar...

  3. Continued validation of the Multidimensional Perfectionism Scale.

    Science.gov (United States)

    Clavin, S L; Clavin, R H; Gayton, W F; Broida, J

    1996-06-01

    Scores on the Multidimensional Perfectionism Scale have been correlated with measures of obsessive-compulsive tendencies for women, so the validity of scores on this scale for 41 men was examined. Scores on the Perfectionism Scale were significantly correlated (.47-.03) with scores on the Maudsley Obsessive-Compulsive Inventory.

  4. Is Echinococcus intermedius a valid species?

    Science.gov (United States)

    Medical and veterinary sciences require scientific names to discriminate pathogenic organisms in our living environment. Various species concepts have been proposed for metazoan animals. There are, however, constant controversies over their validity because of lack of a common criterion to define ...

  5. Electronic unit for photovoltaic solar panelling validation

    International Nuclear Information System (INIS)

    Vazquez, J.; Valverde, J.; Garcia, J.M.

    1988-01-01

    A low-cost and easy-to-use electronic system for photovoltaic solar panelling validation is described. It measures, with a certain periodicity, the voltage and current given by the panel, determines the supplied power and, every so often, its maximum values. The unit is fitted with a data storage system which allows data recording for later analysis. (Author)

  6. Validation-driven protein-structure improvement

    NARCIS (Netherlands)

    Touw, W.G.

    2016-01-01

    High-quality protein structure models are essential for many Life Science applications, such as protein engineering, molecular dynamics, drug design, and homology modelling. The WHAT_CHECK model validation project and the PDB_REDO model optimisation project have shown that many structure models in

  7. Validation of the Drinking Motives Questionnaire

    DEFF Research Database (Denmark)

    Fernandes-Jesus, Maria; Beccaria, Franca; Demant, Jakob Johan

    2016-01-01

    • This paper assesses the validity of the DMQ-R (Cooper, 1994) among university students in six different European countries. • Results provide support for similar DMQ-R factor structures across countries. • Drinking motives have similar meanings among European university students....

  8. An Argument Approach to Observation Protocol Validity

    Science.gov (United States)

    Bell, Courtney A.; Gitomer, Drew H.; McCaffrey, Daniel F.; Hamre, Bridget K.; Pianta, Robert C.; Qi, Yi

    2012-01-01

    This article develops a validity argument approach for use on observation protocols currently used to assess teacher quality for high-stakes personnel and professional development decisions. After defining the teaching quality domain, we articulate an interpretive argument for observation protocols. To illustrate the types of evidence that might…

  9. Validating quantitative precipitation forecast for the Flood ...

    Indian Academy of Sciences (India)

    In order to issue an accurate warning for flood, a better or appropriate quantitative forecasting of precipitation is required. In view of this, the present study intends to validate the quantitative precipitation forecast (QPF) issued during southwest monsoon season for six river catchments (basin) under the flood meteorological ...

  10. Physics Validation of the LHC Software

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The LHC Software will be confronted with unprecedented challenges as soon as the LHC turns on. We summarize the main Software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of Software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC physics.

  11. Validating a Spanish Developmental Spelling Test.

    Science.gov (United States)

    Ferroli, Lou; Krajenta, Marilyn

    The creation and validation of a Spanish version of an English developmental spelling test (DST) is described. An introductory section reviews related literature on the rationale for and construction of DSTs, spelling development in the early grades, and Spanish-English bilingual education. Differences between the English and Spanish test versions…

  12. SEBS validation in a Spanish rotating crop

    NARCIS (Netherlands)

    Pardo, N.; Sanchez, M.L.; Timmermans, J.; Su, Zhongbo; Perez, I.A.; Garcia, M.A.

    2014-01-01

    This paper focuses on calculating Evaporative Fraction (EF) and energy balance components, applying the Surface Energy Balance System (SEBS) model combined with remote sensing products and meteorological data over an agricultural rotating cropland from 2008 to 2011. The model is validated by

  13. ASTM Validates Air Pollution Test Methods

    Science.gov (United States)

    Chemical and Engineering News, 1973

    1973-01-01

    The American Society for Testing and Materials (ASTM) has validated six basic methods for measuring pollutants in ambient air as the first part of its Project Threshold. Aim of the project is to establish nationwide consistency in measuring pollutants; determining precision, accuracy and reproducibility of 35 standard measuring methods. (BL)

  14. Validation of the Motivation to Teach Scale

    Science.gov (United States)

    Kauffman, Douglas F.; Yilmaz Soylu, Meryem; Duke, Bryan

    2011-01-01

    The purpose of this study was to develop and validate a self-report psychological instrument assessing pre-service teachers' relative intrinsic and extrinsic motivation to teach. One hundred forty-seven undergraduate students taking Educational Psychology courses at a large US university participated in this study and completed the 12-item MTS along…

  15. Validated modified Lycopodium spore method development for ...

    African Journals Online (AJOL)

    A validated modified Lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. The Lycopodium spore method was performed on ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. Estimation of ...

  16. Iranian Validation of the Identity Style Inventory

    Science.gov (United States)

    Crocetti, Elisabetta; Shokri, Omid

    2010-01-01

    The purpose of this study was to validate the Iranian version of the Identity Style Inventory (ISI). Participants were 376 (42% males) university students. Confirmatory factor analyses revealed a clear three-factor structure of identity style and a mono-factor structure of commitment in the overall sample as well as in gender subgroups. Convergent…

  17. The Validity of Two Education Requirement Measures

    Science.gov (United States)

    van der Meer, Peter H.

    2006-01-01

    In this paper we investigate the validity of two education requirement measures. This is important because a key part of the ongoing discussion concerning overeducation is about measurement. Thanks to the Dutch Institute for Labour Studies, we have been given a unique opportunity to compare two education requirement measures: first, Huijgen's…

  18. Moving beyond Traditional Methods of Survey Validation

    Science.gov (United States)

    Maul, Andrew

    2017-01-01

    In his focus article, "Rethinking Traditional Methods of Survey Validation," published in this issue of "Measurement: Interdisciplinary Research and Perspectives," Andrew Maul wrote that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as…

  19. Correcting Fallacies in Validity, Reliability, and Classification

    Science.gov (United States)

    Sijtsma, Klaas

    2009-01-01

    This article reviews three topics from test theory that continue to raise discussion and controversy and capture test theorists' and constructors' interest. The first topic concerns the discussion of the methodology of investigating and establishing construct validity; the second topic concerns reliability and its misuse, alternative definitions…

  20. Towards validation of the Internet Census 2012

    NARCIS (Netherlands)

    Maan, Dirk; Cardoso de Santanna, José Jair; Sperotto, Anna; de Boer, Pieter-Tjerk; Kermarrec, Yvon

    2014-01-01

    The reliability of the "Internet Census 2012" (IC), an anonymously published scan of the entire IPv4 address space, is not a priori clear. As a step towards validation of this dataset, we compare it to logged reference data on a /16 network, and present an approach to systematically handle
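
    The comparison the authors describe, census data against logged reference data on one network, can be pictured as simple set arithmetic over addresses. A toy sketch (the prefix and host lists are made up; the paper's actual methodology is more involved):

```python
import ipaddress

def compare_to_reference(census_hosts, reference_hosts, prefix):
    """Restrict both datasets to one network prefix and count hosts seen
    in both, only in the census, and only in the reference logs."""
    net = ipaddress.ip_network(prefix)
    census = {h for h in census_hosts if ipaddress.ip_address(h) in net}
    reference = {h for h in reference_hosts if ipaddress.ip_address(h) in net}
    return {"both": len(census & reference),
            "census_only": len(census - reference),
            "reference_only": len(reference - census)}

# Hypothetical host lists; a documentation /24 stands in for the /16:
census = ["192.0.2.1", "198.51.100.7", "198.51.100.9"]
logged = ["198.51.100.7", "198.51.100.23"]
print(compare_to_reference(census, logged, "198.51.100.0/24"))
# {'both': 1, 'census_only': 1, 'reference_only': 1}
```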

  1. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation.
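
    A minimal sketch of the general mechanism (not the CIMI shapes themselves), assuming the rdflib and pySHACL packages: a shape imposes cardinality constraints on a toy observation, and validation reports the record that is missing its unit.

```python
from rdflib import Graph
from pyshacl import validate

# Toy "clinical model" instance: a quantity observation missing its unit.
data_ttl = """
@prefix ex: <http://example.org/> .
ex:obs1 a ex:QuantityObservation ;
    ex:value 120 .
"""

# A shape in the spirit of an ontology design pattern: every quantity
# observation must carry exactly one value and exactly one unit.
shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:QuantityShape a sh:NodeShape ;
    sh:targetClass ex:QuantityObservation ;
    sh:property [ sh:path ex:value ; sh:minCount 1 ; sh:maxCount 1 ] ;
    sh:property [ sh:path ex:unit  ; sh:minCount 1 ; sh:maxCount 1 ] .
"""

data = Graph().parse(data=data_ttl, format="turtle")
shapes = Graph().parse(data=shapes_ttl, format="turtle")
conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: ex:obs1 violates the unit cardinality constraint
print(report)
```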

  2. The Predictive Validity of Projective Measures.

    Science.gov (United States)

    Suinn, Richard M.; Oskamp, Stuart

    Written for use by clinical practitioners as well as psychological researchers, this book surveys recent literature (1950-1965) on projective test validity by reviewing and critically evaluating studies which shed light on what may reliably be predicted from projective test results. Two major instruments are covered: the Rorschach and the Thematic…

  3. Validation of the regional authority index

    NARCIS (Netherlands)

    Schakel, A.H.

    2008-01-01

    This article validates the Regional Authority Index (RAI) with seven widely used decentralization indices in the literature. A principal axis analysis reveals a common structure. The major source of disagreement between the RAI and the other indices stems from the fact that the RAI does not include

  4. Validation of the organizational culture assessment instrument.

    Directory of Open Access Journals (Sweden)

    Brody Heritage

    Full Text Available Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged.

  5. Theory and Validity of Life Satisfaction Scales

    Science.gov (United States)

    Diener, Ed; Inglehart, Ronald; Tay, Louis

    2013-01-01

    National accounts of subjective well-being are being considered and adopted by nations. In order to be useful for policy deliberations, the measures of life satisfaction must be psychometrically sound. The reliability, validity, and sensitivity to change of life satisfaction measures are reviewed. The scales are stable under unchanging conditions,…

  6. Evidence of Construct Validity for Work Values

    Science.gov (United States)

    Leuty, Melanie E.; Hansen, Jo-Ida C.

    2011-01-01

    Despite the importance of work values in the process of career adjustment (Dawis, 2002), little empirical research has focused on articulating the domains represented within the construct of work values, and the examination of evidence of validity for the construct has been limited. Furthermore, the large number of work values measures has made it…

  7. Validation of in vitro probabilistic tractography

    DEFF Research Database (Denmark)

    Dyrby, Tim B.; Sogaard, L.V.; Parker, G.J.

    2007-01-01

    assessed the anatomical validity and reproducibility of in vitro multi-fiber probabilistic tractography against two invasive tracers: the histochemically detectable biotinylated dextran amine and manganese enhanced magnetic resonance imaging. Post mortem DWI was used to ensure that most of the sources...

  8. Validation of Metrics for Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper describes new concepts for the validation of collaborative systems metrics. It defines the quality characteristics of collaborative systems, proposes a metric to estimate the quality level of collaborative systems, and reports measurements of collaborative systems quality performed using specially designed software.

  10. Construct Validity of Adolescent Antisocial Personality Disorder

    Science.gov (United States)

    Taylor, Jeanette; Elkins, Irene J.; Legrand, Lisa; Peuschold, Dawn; Iacono, William G.

    2007-01-01

    This study examined the construct validity of antisocial personality disorder (ASPD) diagnosed in adolescence. Boys and girls were grouped by history of DSM-III-R conduct disorder (CD) and ASPD: Controls (n = 340) had neither diagnosis; CD Only (n = 77) had CD by age 17 but no ASPD through age 20; Adolescent ASPD (n = 64) had ASPD by age 17. The…

  11. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensuring stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code base and a significant number of developers and can function as a model for future projects.

  12. On the validity range of piston theory

    CSIR Research Space (South Africa)

    Meijer, M-C

    2015-06-01

    Full Text Available The basis of linear piston theory in unsteady potential flow is used in this work to develop a quantitative treatment of the validity range of piston theory. In the limit of steady flow, velocity perturbations from Donov’s series expansion...

  13. Description and Validation of a MATLAB

    DEFF Research Database (Denmark)

    Johra, Hicham; Heiselberg, Per

    This report aims to present the details and the validation tests of a single-family house building energy model. The building model includes furniture / additional indoor content, phase change materials, a ground source heat pump and a water-based underfloor heating system....

  14. Playing to win over: validating persuasive games

    NARCIS (Netherlands)

    R.S. Jacobs (Ruud)

    2017-01-01

    textabstractThis dissertation describes four years of scientific inquiry into persuasive games – digital games designed to persuade – as part of a multidisciplinary research project ‘Persuasive Gaming. From Theory-Based Design to Validation and Back’ funded by the Netherlands Organization for

  15. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
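
    One way to phrase that validation, assuming the Gaussian model the presentation argues for: if a covariance matrix P honestly describes 3-D position errors, their squared Mahalanobis distances follow a chi-square law with 3 degrees of freedom, which can be tested on a sample of errors. A sketch with synthetic data:

```python
import numpy as np
from scipy import stats

def covariance_consistency(errors, P):
    """KS-test the squared Mahalanobis distances of position errors
    against chi-square(3), as expected under a Gaussian error model."""
    d2 = np.einsum("ij,jk,ik->i", errors, np.linalg.inv(P), errors)
    return stats.kstest(d2, stats.chi2(df=3).cdf)

rng = np.random.default_rng(0)
P_true = np.diag([100.0, 400.0, 25.0])  # illustrative covariance, m^2
errs = rng.multivariate_normal(np.zeros(3), P_true, size=500)

print(covariance_consistency(errs, P_true))         # large p: consistent
print(covariance_consistency(errs, 0.25 * P_true))  # small p: too optimistic
```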

  16. Construct Validation of Content Standards for Teaching

    Science.gov (United States)

    van der Schaaf, Marieke F.; Stokking, Karel M.

    2011-01-01

    Current international demands to strengthen the teaching profession have led to an increased development and use of professional content standards. The study aims to provide insight into the construct validity of content standards by researching experts' underlying assumptions and preferences when participating in a Delphi method. In three rounds 21…

  17. Validating High-Stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    2002-01-01

    Makes the point that the interpretations and use of high-stakes test scores rely on policy assumptions about what should be taught and the content standards and performance standards that should be applied. The assumptions built into an assessment need to be subjected to scrutiny and criticism if a strong case is to be made for the validity of the…

  18. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  19. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach in software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  20. Gaia Data Release 1. Catalogue validation

    NARCIS (Netherlands)

    Arenou, F.; Luri, X.; Babusiaux, C.; Fabricius, C.; Helmi, A.; Robin, A. C.; Vallenari, A.; Blanco-Cuaresma, S.; Cantat-Gaudin, T.; Findeisen, K.; Reylé, C.; Ruiz-Dern, L.; Sordo, R.; Turon, C.; Walton, N. A.; Shih, I.-C.; Antiche, E.; Barache, C.; Barros, M.; Breddels, M.; Carrasco, J. M.; Costigan, G.; Diakité, S.; Eyer, L.; Figueras, F.; Galluccio, L.; Heu, J.; Jordi, C.; Krone-Martins, A.; Lallement, R.; Lambert, S.; Leclerc, N.; Marrese, P. M.; Moitinho, A.; Mor, R.; Romero-Gómez, M.; Sartoretti, P.; Soria, S.; Soubiran, C.; Souchay, J.; Veljanoski, J.; Ziaeepour, H.; Giuffrida, G.; Pancino, E.; Bragaglia, A.

    Context. Before the publication of the Gaia Catalogue, the contents of the first data release have undergone multiple dedicated validation tests. Aims: These tests aim to provide in-depth analysis of the Catalogue content in order to detect anomalies and individual problems in specific objects or in

  1. The validation of the turnover intention scale

    Directory of Open Access Journals (Sweden)

    Chris F.C. Bothma

    2013-04-01

    Full Text Available Orientation: Turnover intention as a construct has attracted increased research attention in the recent past, but there are seemingly not many valid and reliable scales around to measure turnover intention. Research purpose: This study focused on the validation of a shortened, six-item version of the turnover intention scale (TIS-6). Motivation for the study: The research question of whether the TIS-6 is a reliable and a valid scale for measuring turnover intention and for predicting actual turnover was addressed in this study. Research design, approach and method: The study was based on a census-based sample (n = 2429) of employees in an information, communication and technology (ICT) sector company (N = 23 134) where the TIS-6 was used as one of the criterion variables. The leavers (those who left the company) in this sample were compared with the stayers (those who remained in the employ of the company) in respect of different variables used in the study. Main findings: It was established that the TIS-6 could measure turnover intentions reliably (α = 0.80). The TIS-6 could significantly distinguish between leavers and stayers (actual turnover), thereby confirming its criterion-predictive validity. The scale also established statistically significant differences between leavers and stayers in respect of a number of the remaining theoretical variables used in the study, thereby also confirming its differential validity. These comparisons were conducted for both the 4-month and the 4-year period after the survey was conducted. Practical/managerial implications: Turnover intention is related to a number of variables in the study which necessitates a reappraisal and a reconceptualisation of existing turnover intention models. Contribution/value-add: The TIS-6 can be used as a reliable and valid scale to assess turnover intentions and can therefore be used in research to validly and reliably assess turnover intentions or to

  2. Fuel Cell and Hydrogen Technology Validation | Hydrogen and Fuel Cells |

    Science.gov (United States)

    The NREL technology validation team works on validating hydrogen fuel cell electric vehicles; hydrogen fueling infrastructure; hydrogen system components; and fuel cell use in early market applications such as

  3. Further Validation of the Coach Identity Prominence Scale

    Science.gov (United States)

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  4. Construct Validity of the Nepalese School Leaving English Reading Test

    Science.gov (United States)

    Dawadi, Saraswati; Shrestha, Prithvi N.

    2018-01-01

    There has been a steady interest in investigating the validity of language tests in the last decades. Despite numerous studies on construct validity in language testing, there are not many studies examining the construct validity of a reading test. This paper reports on a study that explored the construct validity of the English reading test in…

  5. Toward a Unified Validation Framework in Mixed Methods Research

    Science.gov (United States)

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  6. Certification Testing as an Illustration of Argument-Based Validation

    Science.gov (United States)

    Kane, Michael

    2004-01-01

    The theories of validity developed over the past 60 years are quite sophisticated, but the methodology of validity is not generally very effective. The validity evidence for major testing programs is typically much weaker than the evidence for more technical characteristics such as reliability. In addition, most validation efforts have a strong…

  7. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy is hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse quickly and accurately large amounts of sequence data. For end-users FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massive parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
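
    The Java API itself is not reproduced in the abstract, so the sketch below only illustrates, in Python, the kind of structural and alphabet checks a FASTA validator performs; the alphabet and messages are illustrative assumptions, not FastaValidator's interface.

```python
def validate_fasta(lines, alphabet=set("ACGTNacgtn")):
    """Minimal FASTA check: header/sequence structure plus alphabet.
    Yields (line_number, message) for each problem found."""
    seen_header = False
    for no, line in enumerate(lines, start=1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            if len(line) == 1:
                yield no, "empty header"
            seen_header = True
        elif not seen_header:
            yield no, "sequence data before first header"
        else:
            bad = set(line) - alphabet
            if bad:
                yield no, f"illegal characters {sorted(bad)}"

record = [">seq1 demo", "ACGTACGT", "ACGTXXGT", "TTTT"]
for lineno, msg in validate_fasta(record):
    print(f"line {lineno}: {msg}")  # line 3: illegal characters ['X']
```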

  8. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation
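
    The contrast being evaluated can be sketched in a few lines: fit a regression that predicts AUC from a couple of sampled concentrations, then compare a held-out validation group with leave-one-out cross-validation on the full data set. Everything below (data, sampling times, coefficients) is synthetic and assumes scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import (LeaveOneOut, cross_val_predict,
                                     train_test_split)

rng = np.random.default_rng(1)
n = 40
# Synthetic limited-sampling data: concentrations at two time points.
C = rng.lognormal(mean=1.0, sigma=0.3, size=(n, 2))
auc = 3.0 * C[:, 0] + 9.0 * C[:, 1] + rng.normal(0.0, 1.0, n)

model = LinearRegression()

# (a) Separate validation group, as the guidelines prescribe:
Xtr, Xval, ytr, yval = train_test_split(C, auc, test_size=0.33,
                                        random_state=0)
rmse_holdout = np.sqrt(np.mean((model.fit(Xtr, ytr).predict(Xval) - yval) ** 2))

# (b) Leave-one-out cross-validation on the whole data set:
pred_loo = cross_val_predict(model, C, auc, cv=LeaveOneOut())
rmse_loo = np.sqrt(np.mean((pred_loo - auc) ** 2))

print(f"holdout RMSE: {rmse_holdout:.2f}   LOO-CV RMSE: {rmse_loo:.2f}")
```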

  9. Shield verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    WSRC-RP-90-26, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification are an integral part of the certification process. This document identifies the work performed and documentation generated to satisfy these action items for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system; it is not a certification of the complete SHIELD system. Complete certification will follow at a later date. Each action item is discussed with the justification for its completion. Specific details of the work performed are not included in this document but can be found in the references. The validation and verification effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system computer code has been completed

  10. Validation of Dose Calculation Codes for Clearance

    International Nuclear Information System (INIS)

    Menon, S.; Wirendal, B.; Bjerler, J.; Studsvik; Teunckens, L.

    2003-01-01

    Various international and national bodies, such as the International Atomic Energy Agency, the European Commission and the US Nuclear Regulatory Commission, have put forward proposals or guidance documents to regulate the "clearance" from regulatory control of very low level radioactive material, in order to allow its recycling as a material management practice. All these proposals are based on predicted scenarios for subsequent utilization of the released materials. The calculation models used in these scenarios tend to utilize conservative data regarding exposure times and dose uptake, as well as other assumptions, as a safeguard against uncertainties. None of these models has ever been validated by comparison with actual real-life recycling practice. An international project was organized in order to validate some of the assumptions made in these calculation models and, thereby, better assess the radiological consequences of recycling on a practical large scale

  11. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    Full Text Available The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  12. Circulating tumor cells: clinical validity and utility.

    Science.gov (United States)

    Cabel, Luc; Proudhon, Charlotte; Gortais, Hugo; Loirat, Delphine; Coussy, Florence; Pierga, Jean-Yves; Bidard, François-Clément

    2017-06-01

    Circulating tumor cells (CTCs) are rare tumor cells that have been investigated as diagnostic, prognostic and predictive biomarkers in many types of cancer. Although CTCs are not currently used in clinical practice, CTC studies have accumulated a high level of clinical validity, especially in breast, lung, prostate and colorectal cancers. In this review, we present an overview of the current clinical validity of CTCs in metastatic and non-metastatic disease, and the main concepts and studies investigating the clinical utility of CTCs. In particular, this review will focus on breast, lung, colorectal and prostate cancer. Three major topics concerning the clinical utility of CTCs are discussed: (1) treatment based on CTCs used as liquid biopsy, (2) treatment based on CTC count or CTC variations, and (3) treatment based on CTC biomarker expression. A summary of published or ongoing phase II and III trials is also presented.

  13. Validity of Type D personality in Iceland

    DEFF Research Database (Denmark)

    Svansdottir, Erla; Karlsson, Hrobjartur D; Gudnason, Thorarinn

    2012-01-01

    Type D personality has been associated with poor prognosis in cardiac patients. This study investigated the validity of the Type D construct in Iceland and its association with disease severity and health-related risk markers in cardiac patients. A sample of 1,452 cardiac patients completed...... the Type D scale (DS14), and a subgroup of 161 patients completed measurements for the five-factor model of personality, emotional control, anxiety, depression, stress and lifestyle factors. The Icelandic DS14 had good psychometric properties and its construct validity was confirmed. Prevalence of Type D...... was 26-29%, and assessment of Type D personality was not confounded by severity of underlying coronary artery disease. Regarding risk markers, Type D patients reported more psychopharmacological medication use and smoking, but frequency of previous mental problems was similar across groups. Type D...

  14. Validation of the reactor dynamics code TRAB

    International Nuclear Information System (INIS)

    Raety, H.; Kyrki-Rajamaeki, R.; Rajamaeki, M.

    1991-05-01

    The one-dimensional reactor dynamics code TRAB (Transient Analysis code for BWRs) developed at VTT was originally designed for BWR analyses, but it can in its present version be used for various modelling purposes. The core model of TRAB can be used separately for LWR calculations. For PWR modelling the core model of TRAB has been coupled to the circuit model SMABRE to form the SMATRA code. The versatile modelling capabilities of TRAB have also been utilized in analyses of, e.g., the heating reactor SECURE and the RBMK-type reactor (Chernobyl). The report summarizes the extensive validation of TRAB. TRAB has been validated with benchmark problems, comparative calculations against independent analyses, analyses of start-up experiments of nuclear power plants and real plant transients. Comparative RBMK-type reactor calculations have been made against Soviet simulations, and the initial power excursion of the Chernobyl reactor accident has also been calculated with TRAB

  15. Ensuring the validity of calculated subcritical limits

    International Nuclear Information System (INIS)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, ''Validation of Calculational Methods for Nuclear Criticality Safety.'' The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin
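
    A toy sketch of the workflow described, correlate calculations with benchmark experiments to establish bias, then subtract an adequate margin; the numbers, the margin, and the convention of not crediting a favorable bias are all illustrative assumptions, not the Savannah River procedure.

```python
import statistics

def upper_subcritical_limit(k_calc, k_exp, admin_margin=0.05):
    """Estimate the code bias from benchmark correlations and derive an
    upper subcritical limit, conservatively ignoring a favorable bias."""
    diffs = [kc - ke for kc, ke in zip(k_calc, k_exp)]
    bias = statistics.mean(diffs)
    bias_unc = statistics.stdev(diffs)
    credited_bias = min(bias, 0.0)  # take no credit for over-prediction
    usl = 1.0 + credited_bias - bias_unc - admin_margin
    return usl, bias, bias_unc

# Illustrative benchmark results (calculated vs. experimental k_eff):
k_calc = [0.998, 1.002, 0.995, 1.001, 0.997]
k_exp  = [1.000, 1.000, 1.000, 1.000, 1.000]
usl, bias, unc = upper_subcritical_limit(k_calc, k_exp)
print(f"bias = {bias:+.4f} +/- {unc:.4f}, USL = {usl:.4f}")
```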

  16. Distributed Trust Management for Validating SLA Choreographies

    Science.gov (United States)

    Haq, Irfan Ul; Alnemr, Rehab; Paschke, Adrian; Schikuta, Erich; Boley, Harold; Meinel, Christoph

    For business workflow automation in a service-enriched environment such as a grid or a cloud, services scattered across heterogeneous Virtual Organizations (VOs) can be aggregated in a producer-consumer manner, building hierarchical structures of added value. In order to preserve the supply chain, the Service Level Agreements (SLAs) corresponding to the underlying choreography of services should also be incrementally aggregated. This cross-VO hierarchical SLA aggregation requires validation, for which a distributed trust system becomes a prerequisite. Elaborating our previous work on rule-based SLA validation, we propose a hybrid distributed trust model. This new model is based on Public Key Infrastructure (PKI) and reputation-based trust systems. It helps prevent SLA violations by identifying violation-prone services at the service selection stage and actively contributes to breach management at the time of penalty enforcement.

  17. Validation of the Early Functional Abilities scale

    DEFF Research Database (Denmark)

    Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W

    2018-01-01

    OBJECTIVE: The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch...... model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using......), facio-oral, sensorimotor and communicative/cognitive functions. Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which...

  18. Construct validity of the Iowa Gambling Task.

    Science.gov (United States)

    Buelow, Melissa T; Suhr, Julie A

    2009-03-01

    The Iowa Gambling Task (IGT) was created to assess real-world decision making in a laboratory setting and has been applied to various clinical populations (i.e., substance abuse, schizophrenia, pathological gamblers) outside those with orbitofrontal cortex damage, for whom it was originally developed. The current review provides a critical examination of lesion, functional neuroimaging, developmental, and clinical studies in order to examine the construct validity of the IGT. The preponderance of evidence provides support for the use of the IGT to detect decision making deficits in clinical populations, in the context of a more comprehensive evaluation. The review includes a discussion of three critical issues affecting the validity of the IGT, as it has recently become available as a clinical instrument: the lack of a concise definition as to what aspect of decision making the IGT measures, the lack of data regarding reliability of the IGT, and the influence of personality and state mood on IGT performance.

  19. Content validation of the nursing diagnosis Nausea

    Directory of Open Access Journals (Sweden)

    Daniele Alcalá Pompeo

    2014-02-01

    Full Text Available This study aimed to evaluate the content validity of the nursing diagnosis of nausea in the immediate post-operative period, considering Fehring’s model. Descriptive study with 52 expert nurses who responded to an instrument containing identification and validation of nausea diagnosis data. Most experts considered the domain 12 (Comfort), Class 1 (Physical Comfort) and the statement (Nausea) adequate to the diagnosis. Modifications were suggested in the current definition of this nursing diagnosis. Four defining characteristics were considered primary (reported nausea, increased salivation, aversion to food and vomiting sensation) and eight secondary (increased swallowing, sour taste in the mouth, pallor, tachycardia, diaphoresis, sensation of hot and cold, changes in blood pressure and pupil dilation). The total score for the diagnosis of nausea was 0.79. Reports of nausea, vomiting sensation, increased salivation and aversion to food are strong predictors of the nursing diagnosis of nausea.

  20. Validity of your safety awareness training

    CERN Multimedia

    DG Unit

    2010-01-01

    AIS is setting up an automatic e-mail reminder system for safety training. You are invited to forward this message to everyone concerned. Reminder: Please check the validity of your Safety courses. Since April 2009 the compulsory basic Safety awareness courses (levels 1, 2 and 3) have been accessible on a "self-service" basis on the web (see CERN Bulletin). Participants are required to pass a test at the end of each course. The test is valid for 3 years, so courses must be repeated on a regular basis. A system of automatic e-mail reminders already exists for level 4 courses on SIR and will be extended to the other levels shortly. The number of levels you are required to complete depends on your professional category. [Table: required Safety course levels (1-4) by activity and personnel concerned]

  1. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  2. Validation of Metrics as Error Predictors

    Science.gov (United States)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
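
    As a sketch of the kind of model fitted in Section 5.3, the snippet below trains a logistic regression that maps two hypothetical EPC metrics (model size and a connector-mismatch count) to an error probability on synthetic data; it assumes scikit-learn and is not the book's actual data or metric set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
size = rng.integers(5, 120, size=n)      # hypothetical: number of nodes
mismatch = rng.integers(0, 10, size=n)   # hypothetical: connector mismatch
X = np.column_stack([size, mismatch])

# Synthetic ground truth: larger, more mismatched models err more often.
logit = -4.0 + 0.03 * size + 0.4 * mismatch
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

clf = LogisticRegression()
# Cross-validation stands in for the book's independent textbook sample.
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
clf.fit(X, y)
print("P(error | size=110, mismatch=8):",
      clf.predict_proba([[110, 8]])[0, 1].round(2))
```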

  3. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  4. GRIMHX verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Trumble, E.F.

    1991-12-01

    WSRC-RP-90-026, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification of the code are an integral part of this process. This document identifies the work performed and documentation generated to satisfy these action items for the Reactor Physics computer code GRIMHX. Each action item is discussed with the justification for its completion. Specific details of the work performed are not included in this document but are found in the references. The publication of this document signals that the validation and verification effort for the GRIMHX code has been completed

  5. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  6. Italian Validation of Homophobia Scale (HS).

    Science.gov (United States)

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A

    2015-09-01

    The Homophobia Scale (HS) is a valid tool to assess homophobia. This self-report test is composed of 25 items and yields a total score plus three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. The aim of this study was to validate the HS in the Italian context. An Italian translation of the HS was carried out by two bilingual people, after which a native English speaker translated the test back into English. A psychologist and sexologist checked the translated items from a clinical point of view. We recruited 100 subjects aged 18-65 for the Italian validation of the HS. The Pearson coefficient and Cronbach's α coefficient were computed to assess test-retest reliability and internal consistency. A sociodemographic questionnaire covering age, geographic distribution, partnership status, education, religious orientation, and sexual orientation was administered together with the translated version of the HS. The analysis of internal consistency showed an overall Cronbach's α coefficient of 0.92. Across the domains, the Cronbach's α coefficient was 0.90 for behavior/negative affect, 0.94 for affect/behavioral aggression, and 0.92 for negative cognition, whereas for the total score it was 0.86. For test-retest reliability, the correlation was r = 0.93 for the HS total score and r = 0.75 for negative cognition (both significant). The Italian validation of the HS showed this self-report test to have good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior.
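    For readers unfamiliar with the two reliability statistics reported above, the following sketch computes Cronbach's α from an item-response matrix and a test-retest Pearson r from two administrations. The data are synthetic stand-ins, not the study's sample.

    ```python
    # Sketch: Cronbach's alpha (internal consistency) and test-retest r.
    import numpy as np
    from scipy.stats import pearsonr

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(100, 1))                 # shared trait
    items = latent + 0.5 * rng.normal(size=(100, 25))  # 25 correlated items

    print("alpha:", round(cronbach_alpha(items), 3))

    # Test-retest: correlate total scores from two administrations.
    retest = items + 0.3 * rng.normal(size=items.shape)
    r, p = pearsonr(items.sum(axis=1), retest.sum(axis=1))
    print("test-retest r:", round(r, 3))
    ```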

  7. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool to increase drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one shows the best results and therefore most increases safety for users of the databases and for patients. So far, there has been no validation of German drug interaction databases. The aim was to validate German drug interaction databases with regard to the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and the severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54, and the ABDA-database 49 points. Based on this validation, MediQ seems to be the most suitable database for the field of psychopharmacotherapy. MediQ achieved the best results in this comparison, but this database also needs improvement with respect to the hit rate so that users can rely on the results and thereby increase drug therapy safety.

  8. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  9. Further validation of the Indecisiveness Scale.

    Science.gov (United States)

    Gayton, W F; Clavin, R H; Clavin, S L; Broida, J

    1994-12-01

    Scores on the Indecisiveness Scale have been shown to be correlated with scores on measures of obsessive-compulsive tendencies and perfectionism for women. This study examined the validity of the Indecisiveness Scale with 41 men whose mean age was 21.1 yr. Indecisiveness scores were significantly correlated with scores on measures of obsessive-compulsive tendencies and perfectionism. Also, undeclared majors had a significantly higher mean on the Indecisiveness Scale than did declared majors.

  10. Benchmarking and validation activities within JEFF project

    OpenAIRE

    Cabellos O.; Alvarez-Velarde F.; Angelone M.; Diez C.J.; Dyrda J.; Fiorito L.; Fischer U.; Fleming M.; Haeck W.; Hill I.; Ichou R.; Kim D. H.; Klix A.; Kodeli I.; Leconte P.

    2017-01-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient be...

  11. Development and validation of sodium fire codes

    International Nuclear Information System (INIS)

    Morii, Tadashi; Himeno Yoshiaki; Miyake, Osamu

    1989-01-01

    Development, verification, and validation of the spray fire code, SPRAY-3M, the pool fire codes, SOFIRE-M2 and SPM, the aerosol behavior code, ABC-INTG, and the simultaneous spray and pool fires code, ASSCOPS, are presented. In addition, the state-of-the-art of development of the multi-dimensional natural convection code, SOLFAS, for the analysis of heat-mass transfer during a fire, is presented. (author)

  12. Validation Of Critical Knowledge-Based Systems

    Science.gov (United States)

    Duke, Eugene L.

    1992-01-01

    Report discusses approach to verification and validation of knowledge-based systems. Also known as "expert systems". Concerned mainly with development of methodologies for verification of knowledge-based systems critical to flight-research systems; e.g., fault-tolerant control systems for advanced aircraft. Subject matter also has relevance to knowledge-based systems controlling medical life-support equipment or commuter railroad systems.

  13. Predictive validity of the Slovene Matura

    Directory of Open Access Journals (Sweden)

    Valentin Bucik

    2001-09-01

    Full Text Available Passing the Matura is the last step of secondary school graduation, but it is also the entrance ticket to university. Besides, the summary score of the Matura exam takes part in the selection process for particular university studies in case of 'numerus clausus'. In discussing either aim of the Matura, important dilemmas arise, namely: is the Matura examination a sufficiently exact and rightful procedure, firstly, to use its results for setting starting study conditions and, secondly, to select validly, reliably and sensibly the best candidates for university studies. There are some questions concerning the predictive validity of the Matura that should be answered, e.g. (i) does the Matura as an enrollment procedure add to the quality of the study; (ii) is it a better selection tool than the entrance examinations formerly used in different faculties in the case of 'numerus clausus'; and (iii) is it reasonable to expect high predictive validity of Matura results for success at the university at all. Recent results show that in the last few years the dropout rate is lower than before, the pass rate between the first and the second year is higher, and the average duration of study per student is shorter. It is clear, however, that it is not possible to simply predict study success from the Matura results; there are too many factors influencing success in university studies. In most examined study programs the correlation between Matura results and study success is positive but moderate, therefore it cannot be said categorically that only candidates accepted according to the Matura results are (or will be) the best students. Yet it has been shown that the Matura is a standardized procedure, comparable across different candidates entering university, and that - when compared with entrance examinations - it is more objective, reliable, and hence more valid and fair a procedure. In addition, comparable procedures of university recruiting and selection can be...

  14. Streamlining Compliance Validation Through Automation Processes

    Science.gov (United States)

    2014-03-01

    [Only fragments of this abstract survive in the source; front-matter residue has been removed.] Recoverable statements: a common standard for DoD security personnel to write and share compliance validation content would prevent duplicate work; the proposed approach can process and consume much of the SCAP content available; and it is free and easy to install as part of the Apache/MySQL/PHP (AMP) stack.

  15. Experimental validation of the HARMONIE code

    International Nuclear Information System (INIS)

    Bernard, A.; Dorsselaere, J.P. van

    1984-01-01

    An experimental program of deformation, in air, of different groups of subassemblies (7 to 41 subassemblies), was performed on a scale 1 mock-up in the SPX1 geometry, in order to achieve a first experimental validation of the code HARMONIE. The agreement between tests and calculations was suitable, qualitatively for all the groups and quantitatively for regular groups of 19 subassemblies at most. The differences come mainly from friction between pads, and secondly from the foot gaps. (author)

  16. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated at 10 mg kg-1.

  17. Validation of Biomarkers for Prostate Cancer Prognosis

    Science.gov (United States)

    2016-11-01

    [Only fragments of this abstract survive in the source; report-form boilerplate has been removed.] Recoverable statements: to address this challenge, the authors formed the multi-institutional Canary Tissue Microarray Project, using a rigorous clinical trial case/cohort design; they concluded that the TACOMA algorithm, as it currently stands, is inadequate for automatic image reading.

  18. Validating PHITS for heavy ion fragmentation reactions

    International Nuclear Information System (INIS)

    Ronningen, Reginald M.

    2015-01-01

    The performance of the Monte Carlo code system PHITS is validated for heavy-ion transport capabilities by performing simulations and comparing results against experimental data from heavy-ion reactions of benchmark quality. These data are from measurements of isotope yields produced in the fragmentation of a 140 MeV/u 48Ca beam on a beryllium target and on a tantalum target. The results of this study show that PHITS performs reliably. (authors)

  19. Validation of the Regional Authority Index

    OpenAIRE

    SCHAKEL, ARJAN H.

    2008-01-01

    This article validates the Regional Authority Index with seven widely used decentralization indices in the literature. A principal axis analysis reveals a common structure. The major source of disagreement between the Regional Authority Index and the other indices stems from the fact that the Regional Authority Index does not include local governance whereas most other indices do. Two other sources of disagreement concern the treatment of federal versus non-federal countries, and countries wh...

  20. Validation of Biomarkers for Prostate Cancer Prognosis

    Science.gov (United States)

    2017-06-01

    [Only fragments of this abstract survive in the source; affiliation residue has been removed.] Recoverable statements: the instrument has been calibrated and already validated precisely for this purpose; multiparametric MRI shows good correlation with grade; follow-up involves repeat biopsies and, more recently, MRI examinations [9-11], and is necessitated by the inability to characterize biological potential.

  1. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.
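    A sketch of the correlation analysis described above, under the assumption of simple paired scores per student: Pearson coefficients between a custom-assessment score and an external-instrument sub-score. The cohort size and scores are invented placeholders.

    ```python
    # Sketch: correlate a custom assessment with an external instrument.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n = 90  # hypothetical cohort size

    act_total = rng.normal(70, 10, n)                    # custom assessment total
    external_sub = 0.22 * act_total + rng.normal(0, 10, n)  # weakly related sub-score

    r, p = pearsonr(act_total, external_sub)
    # Weak or absent correlations are what motivate external validation.
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```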

  2. IP validation in remote microelectronics testing

    Science.gov (United States)

    Osseiran, Adam; Eshraghian, Kamran; Lachowicz, Stefan; Zhao, Xiaoli; Jeffery, Roger; Robins, Michael

    2004-03-01

    This paper presents the test and validation of FPGA-based IP using the concept of remote testing. It demonstrates how a virtual tester environment based on a powerful, networked integrated circuit testing facility, aimed to complement emerging Australian microelectronics-based research and development, can be employed to perform tasks beyond the standard IC test. IC testing in production consists of verifying the tested products and eliminating defective parts. Defects can have a number of different causes, including process defects, process migration, and IP design and implementation errors. One of the challenges in semiconductor testing is that while current fault models are used to represent likely faults (stuck-at, delay, etc.) in a global context, they do not account for all possible defects. Research in this field keeps growing, but the high cost of ATE is preventing a large community from accessing test and verification equipment to validate innovative IP designs. For these reasons a world-class networked IC teletest facility has been established in Australia with the support of the Commonwealth government. The facility is based on a state-of-the-art semiconductor tester operating as a virtual centre spanning Australia and accessible internationally. Through a novel approach the teletest network provides virtual access to the tester on which the DUT has previously been placed. The tester software is then accessible as if the designer were sitting next to the tester. This paper presents the approach used to test and validate FPGA-based IPs using this remote test approach.

  3. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  4. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method and cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  5. Initial Verification and Validation Assessment for VERA

    Energy Technology Data Exchange (ETDEWEB)

    Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States); Athe, Paridhi [North Carolina State Univ., Raleigh, NC (United States); Jones, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hetzler, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sieger, Matt [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-04-01

    The Virtual Environment for Reactor Applications (VERA) code suite is assessed in terms of capability and credibility against the Consortium for Advanced Simulation of Light Water Reactors (CASL) Verification and Validation Plan (presented herein) in the context of three selected challenge problems: CRUD-Induced Power Shift (CIPS), Departure from Nucleate Boiling (DNB), and Pellet-Clad Interaction (PCI). Capability refers to evidence of the required functionality for capturing phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements against which the VERA software is assessed. This approach, in turn, enables the focused assessment of only those capabilities relevant to the challenge problem. The evaluation of VERA against the challenge problem requirements represents a capability assessment. The mechanism for assessment is the Sandia-developed Predictive Capability Maturity Model (PCMM) that, for this assessment, evaluates VERA on 8 major criteria: (1) Representation and Geometric Fidelity, (2) Physics and Material Model Fidelity, (3) Software Quality Assurance and Engineering, (4) Code Verification, (5) Solution Verification, (6) Separate Effects Model Validation, (7) Integral Effects Model Validation, and (8) Uncertainty Quantification. For each criterion, a maturity score from zero to three is assigned in the context of each challenge problem. The evaluation of these eight elements constitutes the credibility assessment for VERA.
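    An illustrative data structure for a PCMM-style assessment, assuming only what the abstract states (eight criteria, 0-3 maturity scores, per challenge problem). The scores below are invented placeholders, not the actual VERA results.

    ```python
    # Sketch: tabulating PCMM maturity scores per challenge problem.
    PCMM_CRITERIA = [
        "Representation and Geometric Fidelity",
        "Physics and Material Model Fidelity",
        "Software Quality Assurance and Engineering",
        "Code Verification",
        "Solution Verification",
        "Separate Effects Model Validation",
        "Integral Effects Model Validation",
        "Uncertainty Quantification",
    ]

    scores = {  # placeholder maturity levels (0-3), one table per problem
        "CIPS": dict.fromkeys(PCMM_CRITERIA, 2),
        "DNB":  dict.fromkeys(PCMM_CRITERIA, 1),
        "PCI":  dict.fromkeys(PCMM_CRITERIA, 2),
    }

    for problem, table in scores.items():
        assert all(0 <= v <= 3 for v in table.values())
        print(problem, "mean maturity:", sum(table.values()) / len(table))
    ```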

  6. Nuclear data to support computer code validation

    International Nuclear Information System (INIS)

    Fisher, S.E.; Broadhead, B.L.; DeHart, M.D.; Primm, R.T. III

    1997-04-01

    The rate of plutonium disposition will be a key parameter in determining the degree of success of the Fissile Materials Disposition Program. Estimates of the disposition rate are dependent on neutronics calculations. To ensure that these calculations are accurate, the codes and data should be validated against applicable experimental measurements. Further, before mixed-oxide (MOX) fuel can be fabricated and loaded into a reactor, the fuel vendors, fabricators, fuel transporters, reactor owners and operators, regulatory authorities, and the Department of Energy (DOE) must accept the validity of design calculations. This report presents sources of neutronics measurements that have potential application for validating reactor physics (predicting the power distribution in the reactor core), predicting the spent fuel isotopic content, predicting the decay heat generation rate, certifying criticality safety of fuel cycle facilities, and ensuring adequate radiation protection at the fuel cycle facilities and the reactor. The U.S. in-reactor experience with MOX fuel is presented first. Information on other aspects of MOX fuel performance is also valuable to this program, but that data base remains largely proprietary and is therefore not reported here. It is expected that the selected consortium will make the necessary arrangements to procure or have access to the requisite information.

  7. Verification and validation methodology of training simulators

    International Nuclear Information System (INIS)

    Hassan, M.W.; Khan, N.M.; Ali, S.; Jafri, M.N.

    1997-01-01

    A full scope training simulator comprising 109 plant systems of a 300 MWe PWR plant, contracted by the Pakistan Atomic Energy Commission (PAEC) from China, is near completion. The simulator has the distinction that it will be ready prior to fuel loading. The models for the full scope training simulator have been developed under the APROS (Advanced PROcess Simulator) environment developed by the Technical Research Centre (VTT) and Imatran Voima (IVO) of Finland. The replicated control room of the plant is contracted from the Shanghai Nuclear Engineering Research and Design Institute (SNERDI), China. The development of simulation models to represent all the systems of the target plant that contribute to plant dynamics and are essential for operator training has been indigenously carried out at PAEC. This multifunctional simulator is at present under extensive testing and will be interfaced with the control panels in March 1998 so as to realize a full scope training simulator. The validation of the simulator is a joint venture between PAEC and SNERDI. For individual components and individual plant systems, the results have been compared against design data and PSAR results to confirm the faithfulness of the simulator to the physical plant systems. The reactor physics parameters have been validated against experimental results and benchmarks generated using design codes. Verification and validation in the integrated state has been performed against benchmark transients conducted using RELAP5/MOD2 for the complete spectrum of anticipated transients, covering the five well-known categories. (author)

  8. Validation of the vaccine conspiracy beliefs scale.

    Science.gov (United States)

    Shapiro, Gilla K; Holding, Anne; Perez, Samara; Amsel, Rhonda; Rosberger, Zeev

    2016-12-01

    Parents' vaccine attitudes influence their decision regarding child vaccination. To date, no study has evaluated the impact of vaccine conspiracy beliefs on human papillomavirus vaccine acceptance. The authors assessed the validity of a Vaccine Conspiracy Beliefs Scale (VCBS) and determined whether this scale was associated with parents' willingness to vaccinate their son with the HPV vaccine. Canadian parents completed a 24-min online survey in 2014. Measures included socio-demographic variables, HPV knowledge, health care provider recommendation, the Conspiracy Mentality Questionnaire (CMQ), the seven-item VCBS, and parents' willingness to vaccinate their son at two price points. A total of 1427 Canadian parents completed the survey in English (61.2%) or French (38.8%). A factor analysis revealed the VCBS is one-dimensional and has high internal consistency (α=0.937). The construct validity of the VCBS was supported by a moderate relationship with the CMQ (r=0.44, p<0.001). Hierarchical regression analyses found the VCBS is negatively related to parents' willingness to vaccinate their son with the HPV vaccine at both price points ('free' or '$300') after controlling for gender, age, household income, education level, HPV knowledge, and health care provider recommendation. The VCBS is a brief, valid scale that will be useful in further elucidating the correlates of vaccine hesitancy. Future research could use the VCBS to evaluate the impact of vaccine conspiracy beliefs on vaccine uptake and how concerns about vaccination may be challenged and reversed. Copyright © 2016. Published by Elsevier B.V.

  9. Validation of Clinical Testing for Warfarin Sensitivity

    Science.gov (United States)

    Langley, Michael R.; Booker, Jessica K.; Evans, James P.; McLeod, Howard L.; Weck, Karen E.

    2009-01-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 −1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses. PMID:19324988
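    A hedged sketch of the retrospective comparison described above: rank dosing algorithms by how well their predicted doses correlate with actual stable doses. All numbers are synthetic; the real algorithms (Washington University, UCSF, Louisville, Newcastle) use CYP2C9/VKORC1 genotype and clinical covariates, which are not modeled here.

    ```python
    # Sketch: compare dose-prediction algorithms against actual stable doses.
    import numpy as np

    rng = np.random.default_rng(3)
    actual = rng.lognormal(mean=1.5, sigma=0.4, size=50)  # weekly dose (mg), synthetic

    predictions = {  # stand-ins for published algorithms' outputs
        "algorithm_A": actual * rng.normal(1.0, 0.15, 50),  # well calibrated
        "algorithm_B": actual * rng.normal(1.0, 0.40, 50),  # noisier
    }

    for name, pred in predictions.items():
        r = np.corrcoef(actual, pred)[0, 1]
        print(f"{name}: r = {r:.2f}")
    ```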

  10. User Validation of VIIRS Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Don Hillger

    2015-12-01

    Full Text Available Visible/Infrared Imaging Radiometer Suite (VIIRS Imagery from the Suomi National Polar-orbiting Partnership (S-NPP satellite is the finest spatial resolution (375 m multi-spectral imagery of any operational meteorological satellite to date. The Imagery environmental data record (EDR has been designated as a Key Performance Parameter (KPP for VIIRS, meaning that its performance is vital to the success of a series of Joint Polar Satellite System (JPSS satellites that will carry this instrument. Because VIIRS covers the high-latitude and Polar Regions especially well via overlapping swaths from adjacent orbits, the Alaska theatre in particular benefits from VIIRS more than lower-latitude regions. While there are no requirements that specifically address the quality of the EDR Imagery aside from the VIIRS SDR performance requirements, the value of VIIRS Imagery to operational users is an important consideration in the Cal/Val process. As such, engaging a wide diversity of users constitutes a vital part of the Imagery validation strategy. The best possible image quality is of utmost importance. This paper summarizes the Imagery Cal/Val Team’s quality assessment in this context. Since users are a vital component to the validation of VIIRS Imagery, specific examples of VIIRS imagery applied to operational needs are presented as an integral part of the post-checkout Imagery validation.

  11. Developing a validation for environmental sustainability

    Science.gov (United States)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nawi, Mohd Nasrun Mohd; Aziz, Zulkifli

    2016-08-01

    One of the agendas for addressing environmental protection in construction is to reduce impacts and make construction activities more sustainable. This important consideration has generated several research interests within the construction industry, especially considering construction's damaging effects on the ecosystem, such as various forms of environmental pollution, resource depletion and biodiversity loss on a global scale. Using the Partial Least Squares-Structural Equation Modeling (PLS-SEM) technique, this study validates the environmental sustainability (ES) construct in the context of large construction firms in Malaysia. A cross-sectional survey was carried out in which data were collected from Malaysian large construction firms using a structured questionnaire. The results of this study revealed that business innovativeness and new technology are important in determining the environmental sustainability of Malaysian construction firms. The study also established an adequate level of internal consistency reliability, convergent validity and discriminant validity for each of its constructs. Based on this result, it can be suggested that the indicators for the organisational innovativeness dimensions (business innovativeness and new technology) are useful for measuring these constructs in order to study construction firms' tendency to adopt environmental sustainability in their project execution.
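    A sketch of the convergent-validity checks named above, computed from standardized indicator loadings as is common in PLS-SEM reporting: composite reliability CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and average variance extracted AVE = Σλ² / n. The loadings below are invented, not the study's estimates.

    ```python
    # Sketch: composite reliability and AVE from standardized loadings.
    import numpy as np

    loadings = np.array([0.82, 0.78, 0.85, 0.74])  # hypothetical ES indicators

    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())
    ave = (loadings**2).mean()
    print(f"CR = {cr:.3f} (commonly want > 0.7), AVE = {ave:.3f} (want > 0.5)")
    ```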

  12. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

    James Ambrosek; James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.
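    The verification pattern described above — comparing a numerical solver against an analytical solution where one exists — can be illustrated generically. The sketch below is not a TMAP7 case: it checks an explicit finite-difference solution of 1-D diffusion into a semi-infinite slab with fixed surface concentration against the exact solution C/C0 = erfc(x / (2·sqrt(D·t))). All parameter values are arbitrary.

    ```python
    # Sketch: verify a diffusion solver against an analytical erfc solution.
    import numpy as np
    from scipy.special import erfc

    D, dx, dt = 1e-9, 1e-5, 0.02   # diffusivity (m^2/s), grid step (m), time step (s)
    r = D * dt / dx**2             # FTCS stability requires r <= 0.5
    assert r <= 0.5

    c = np.zeros(200)
    c[0] = 1.0                     # fixed surface concentration C0 = 1
    steps = 5000
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])

    x = np.arange(200) * dx
    exact = erfc(x / (2 * np.sqrt(D * steps * dt)))
    print("max abs error vs analytical solution:", np.abs(c - exact).max())
    ```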

  13. INTRA - Maintenance and Validation. Final Report

    International Nuclear Information System (INIS)

    Edlund, Ove; Jahn, Hermann; Yitbarek, Z.

    2002-05-01

    The INTRA code is specified by the ITER Joint Central Team and the European Community as a reference code for safety analyses of Tokamak-type fusion reactors. INTRA has been developed by GRS and Studsvik EcoSafe to analyse integrated behaviours such as pressurisation, chemical reactions and temperature transients inside the plasma chamber and adjacent rooms, following postulated accidents, e.g. ingress of coolant water or air. Important results of the ICE and EVITA experiments, which became available early in 2001, were used to validate and improve specific INTRA models. Large efforts were spent on the behaviour of water and steam injection into low-pressure volumes at high temperature, as well as on the modelling of boiling of water in contact with hot surfaces. As a result, a new version, INTRA/Mod4, was documented and issued. The work included implementation and validation of selected physical models in the code, maintaining code versions, preparation, review and distribution of code documents, and monitoring of the code-related activities being performed by GRS under a separate contract. The INTRA/Mod4 Manual and Code Description is documented in four volumes: Volume 1 - Physical Modelling, Volume 2 - User's Manual, Volume 3 - Code Structure and Volume 4 - Validation

  14. Content validation applied to job simulation and written examinations

    International Nuclear Information System (INIS)

    Saari, L.M.; McCutchen, M.A.; White, A.S.; Huenefeld, J.C.

    1984-08-01

    The application of content validation strategies in work settings has become increasingly popular over the last few years, perhaps spurred by an acknowledgment in the courts of content validation as a method for validating employee selection procedures (e.g., Bridgeport Guardians v. Bridgeport Police Dept., 1977). Since criterion-related validation is often difficult to conduct, content validation methods should be investigated as an alternative for determining job-related selection procedures. However, there is not yet consensus among scientists and professionals concerning how content validation should be conducted. This may be because there is a lack of clear-cut operations for conducting content validation for different types of selection procedures. The purpose of this paper is to discuss two content validation approaches being used for the development of a licensing examination that involves a job simulation exam and a written exam. These represent variations in methods for applying content validation. 12 references

  15. Overview of SCIAMACHY validation: 2002 2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2005-08-01

    SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, operational processors have been upgraded regularly and some data products - level-1b spectra, level-2 O3, NO2, BrO and clouds data - have improved significantly. Validation results summarised in this paper conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality.

  16. CosmoQuest:Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.
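    A minimal sketch of validating annotations against pre-solved "gold standard" data, in the spirit of the approach described above: a user's crater marks count as hits when they fall within a tolerance of a known answer, and the resulting scores can drive real-time feedback. Coordinates and the tolerance are invented for illustration.

    ```python
    # Sketch: score one user's marks against pre-solved gold-standard data.
    import numpy as np

    gold = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 20.0]])  # known craters (x, y)
    user = np.array([[10.5, 11.6], [74.0, 21.0], [90.0, 90.0]])  # one user's marks
    TOL = 2.0  # matching radius, e.g. in pixels

    # Pairwise distances: rows = gold answers, columns = user marks.
    dists = np.linalg.norm(gold[:, None, :] - user[None, :, :], axis=2)
    hits = (dists.min(axis=1) <= TOL).sum()          # gold answers recovered
    false_marks = (dists.min(axis=0) > TOL).sum()    # marks matching nothing
    print(f"recall: {hits / len(gold):.2f}, spurious marks: {false_marks}")
    ```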

  17. [French validation of the Frustration Discomfort Scale].

    Science.gov (United States)

    Chamayou, J-L; Tsenova, V; Gonthier, C; Blatier, C; Yahyaoui, A

    2016-08-01

    Rational emotive behavior therapy originally considered the concept of frustration intolerance in relation to different beliefs or cognitive patterns. Psychological disorders or, to some extent, certain affects such as frustration could result from irrational beliefs. Initially regarded as a unidimensional construct, recent literature considers those irrational beliefs as a multidimensional construct; such is the case for the phenomenon of frustration. In order to measure frustration intolerance, Harrington (2005) developed and validated the Frustration Discomfort Scale. The scale includes four dimensions of beliefs: emotional intolerance includes beliefs according to which emotional distress is intolerable and must be controlled or avoided as soon as possible. The intolerance of discomfort, or demand for comfort, is the second dimension, based on beliefs that life should be peaceful and comfortable and that any inconvenience, effort or hassle should be avoided. The third dimension is entitlement, which includes beliefs about personal goals, such as merit, fairness, respect and gratification, and that others must not frustrate those non-negotiable desires. The fourth dimension is achievement, which reflects demands for high expectations or standards. The aim of this study was to translate and validate in a French population the Frustration Discomfort Scale developed by Harrington (2005), assess its psychometric properties, highlight the four-factor structure of the scale, and examine the relationships between this concept and both emotion regulation and perceived stress. We translated the Frustration Discomfort Scale from English to French and back from French to English in order to ensure good quality of translation. We then submitted the scale to 289 students (239 females and 50 males) from the University of Savoy, in addition to the Cognitive Emotional Regulation Questionnaire and the Perceived Stress Scale. The results showed satisfactory psychometric properties.

  18. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    Science.gov (United States)

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.

  19. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the issue of application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through attribution of scores to every item in the research instrument. In today's globalized economic scenario, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. Regarding this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer. The Chief Security Officer, in doing so, strives to enhance the updating of technologies in order to meet all-inclusive business continuity planning policies. Therefore, for such a policy to be effective, it has to be entirely embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of Metropolitan São Paulo City. This sampling was based on the representativeness of such industries, particularly with regard to each one's implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of the security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items attests not only that the capacity of those industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  20. Validity of an Interactive Functional Reach Test.

    Science.gov (United States)

    Galen, Sujay S; Pardo, Vicky; Wyatt, Douglas; Diamond, Andrew; Brodith, Victor; Pavlov, Alex

    2015-08-01

    Videogaming platforms such as the Microsoft (Redmond, WA) Kinect(®) are increasingly being used in rehabilitation to improve balance performance and mobility. These gaming platforms do not have built-in clinical measures that offer clinically meaningful data. We have now developed software that enables the Kinect sensor to assess a patient's balance using an interactive functional reach test (I-FRT). The aim of the study was to test the concurrent validity of the I-FRT and to establish the feasibility of implementing the I-FRT in a clinical setting. The concurrent validity of the I-FRT was tested among 20 healthy adults (mean age, 25.8±3.4 years; 14 women). The Functional Reach Test (FRT) was measured simultaneously by both the Kinect sensor using the I-FRT software and the Optotrak Certus(®) 3D motion-capture system (Northern Digital Inc., Waterloo, ON, Canada). The feasibility of implementing the I-FRT in a clinical setting was assessed by performing the I-FRT in 10 participants with mild balance impairments recruited from the outpatient physical therapy clinic (mean age, 55.8±13.5 years; four women) and obtaining their feedback using a NASA Task Load Index (NASA-TLX) questionnaire. There was moderate to good agreement between FRT measures made by the two measurement systems. The greatest agreement between the two measurement systems was found with the Kinect sensor placed at a distance of 2.5 m [intraclass correlation coefficient (2,k) = 0.786]; feasibility in the clinical setting was supported by participants' ratings on the NASA-TLX questionnaire. FRT measures made using the Kinect sensor I-FRT software provide a valid clinical measure that can be used with gaming platforms.
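    The agreement statistic used above can be computed directly. The sketch below implements ICC(2,k) (two-way random effects, average measures, Shrout & Fleiss) from an ANOVA decomposition, treating the two measurement systems as the "raters"; the reach data are synthetic stand-ins.

    ```python
    # Sketch: ICC(2,k) between two measurement systems, synthetic data.
    import numpy as np

    def icc_2k(data: np.ndarray) -> float:
        """data: (n_subjects, k_raters) matrix of measurements."""
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
        resid = (data
                 - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True)
                 + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

    rng = np.random.default_rng(5)
    true_reach = rng.normal(30, 5, 20)            # cm, 20 hypothetical subjects
    kinect = true_reach + rng.normal(0, 2, 20)    # noisier system
    optotrak = true_reach + rng.normal(0, 1, 20)  # reference system
    print("ICC(2,k):", round(icc_2k(np.column_stack([kinect, optotrak])), 3))
    ```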

  1. Site characterization and validation - Final report

    International Nuclear Information System (INIS)

    Olsson, O.

    1992-04-01

    The central aims of the Site Characterization and Validation (SCV) project were to develop and apply: * an advanced site characterization methodology and * a methodology to validate the models used to describe groundwater flow and transport in fractured rock. The basic experiment within the SCV project was to predict the distribution of water flow and tracer transport through a volume of rock, before and after excavation of a sub-horizontal drift, and to compare these predictions with actual field measurements. A structured approach was developed to combine site characterization data into a geological and hydrogeological conceptual model of a site. The conceptual model was based on a binary description where the rock mass was divided into 'fracture zones' and 'averagely fractured rock'. This designation into categories was based on a Fracture Zone Index (FZI) derived from principal component analysis of single borehole data. The FZI was used to identify the location of fracture zones in the boreholes, and the extent of the zones between the boreholes was obtained from remote sensing data (radar and seismics). The consistency of the geometric model thus defined, and its significance to the flow system, was verified by cross-hole hydraulic testing. The conceptual model of the SCV site contained three major and four minor fracture zones which were the principal hydraulic conduits at the site. The location and extent of the fracture zones were included explicitly in the flow and transport models. Four different numerical modelling approaches were pursued within the project; one porous medium approach, two discrete fracture approaches, and an equivalent discontinuum approach. A series of tracer tests was also included in the prediction-validation exercise. (120 refs.) (au)

  2. Validation of the vaccine conspiracy beliefs scale

    Directory of Open Access Journals (Sweden)

    Gilla K. Shapiro

    2016-12-01

    Full Text Available Background: Parents' vaccine attitudes influence their decision regarding child vaccination. To date, no study has evaluated the impact of vaccine conspiracy beliefs on human papillomavirus vaccine acceptance. The authors assessed the validity of a Vaccine Conspiracy Beliefs Scale (VCBS) and determined whether this scale was associated with parents' willingness to vaccinate their son with the HPV vaccine. Methods: Canadian parents completed a 24-min online survey in 2014. Measures included socio-demographic variables, HPV knowledge, health care provider recommendation, the Conspiracy Mentality Questionnaire (CMQ), the seven-item VCBS, and parents' willingness to vaccinate their son at two price points. Results: A total of 1427 Canadian parents completed the survey in English (61.2%) or French (38.8%). A factor analysis revealed the VCBS is one-dimensional and has high internal consistency (α=0.937). The construct validity of the VCBS was supported by a moderate relationship with the CMQ (r=0.44, p<0.001). Hierarchical regression analyses found the VCBS is negatively related to parents' willingness to vaccinate their son with the HPV vaccine at both price points ('free' or '$300') after controlling for gender, age, household income, education level, HPV knowledge, and health care provider recommendation. Conclusions: The VCBS is a brief, valid scale that will be useful in further elucidating the correlates of vaccine hesitancy. Future research could use the VCBS to evaluate the impact of vaccine conspiracy beliefs on vaccine uptake and how concerns about vaccination may be challenged and reversed. Keywords: Cancer prevention, Conspiracy beliefs, Human papillomavirus, Vaccine hesitancy, Vaccines, Vaccine Conspiracy Belief Scale

  3. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm-3; total [citrate3-] = 10 mmol dm-3), in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  4. Assessing the construct validity of aberrant salience

    Directory of Open Access Journals (Sweden)

    Kristin Schmidt

    2009-12-01

    Full Text Available We sought to validate the psychometric properties of a recently developed paradigm that aims to measure salience attribution processes proposed to contribute to positive psychotic symptoms, the Salience Attribution Test (SAT). The "aberrant salience" measure from the SAT showed good face validity in previous results, with elevated scores both in high-schizotypy individuals and in patients with schizophrenia suffering from delusions. Exploring the construct validity of salience attribution variables derived from the SAT is important, since other factors, including latent inhibition/learned irrelevance, attention, probabilistic reward learning, sensitivity to probability, general cognitive ability and working memory, could influence these measures. Fifty healthy participants completed schizotypy scales, the SAT, a learned irrelevance task, and a number of other cognitive tasks tapping into potentially confounding processes. Behavioural measures of interest from each task were entered into a principal components analysis, which yielded a five-factor structure accounting for ~75% of the variance in behaviour. Implicit aberrant salience was found to load onto its own factor, which was associated with elevated "Introvertive Anhedonia" schizotypy, replicating our previous finding. Learned irrelevance loaded onto a separate factor, which also included implicit adaptive salience, but was not associated with schizotypy. Explicit adaptive and aberrant salience, along with a measure of probabilistic learning, loaded onto a further factor, though this also did not correlate with schizotypy. These results suggest that the measures of learned irrelevance and implicit adaptive salience might be based on similar underlying processes, which are dissociable both from implicit aberrant salience and explicit measures of salience.
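    A sketch of the dimension-reduction step described above: a principal components analysis over behavioural measures from several tasks, after which component scores can be correlated with schizotypy scales. The data matrix is a synthetic stand-in, not the study's 50-participant sample.

    ```python
    # Sketch: PCA over task measures, retaining a fixed number of components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    measures = rng.normal(size=(50, 10))  # participants x behavioural measures

    pca = PCA(n_components=5)
    scores = pca.fit_transform(measures)  # per-participant component scores
    print("variance explained:", pca.explained_variance_ratio_.sum())
    # scores[:, i] can then be correlated with schizotypy scale totals.
    ```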

  5. A PHYSICAL ACTIVITY QUESTIONNAIRE: REPRODUCIBILITY AND VALIDITY

    Directory of Open Access Journals (Sweden)

    Nicolas Barbosa

    2007-12-01

    Full Text Available This study evaluates the reproducibility and validity of the Quantification de L'Activite Physique en Altitude chez les Enfants (QAPACE) supervised self-administered questionnaire for estimating the mean daily energy expenditure (DEE) of Bogotá's schoolchildren. Comprehension was assessed in 324 students, whereas reproducibility was studied in a different random sample of 162 who were exposed to the questionnaire twice. Reproducibility was assessed using both the Bland-Altman plot and the intra-class correlation coefficient (ICC). Validity was studied in a randomly selected sample of 18 girls and 18 boys, who completed the test-retest study. The DEE derived from the questionnaire was compared with laboratory measurements of peak oxygen uptake (Peak VO2) from ergo-spirometry and the Leger Test. The reproducibility ICC was 0.96 (95% C.I. 0.95-0.97); by age categories: 8-10, 0.94 (0.89-0.97); 11-13, 0.98 (0.96-0.99); 14-16, 0.95 (0.91-0.98). The ICC between mean DEE as estimated by the questionnaire and the direct and indirect Peak VO2 was 0.76 (0.66) (p<0.01); by age categories 8-10, 11-13, and 14-16 it was 0.89 (0.87), 0.76 (0.78) and 0.88 (0.80), respectively. The QAPACE questionnaire is reproducible and valid for estimating PA and showed a high correlation with Peak VO2 uptake.

  6. Paleoclimate validation of a numerical climate model

    International Nuclear Information System (INIS)

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-01-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long term future climate change on performance of the potential high level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented

  7. Instrument validation system of general application

    International Nuclear Information System (INIS)

    Filshtein, E.L.

    1990-01-01

    This paper describes the Instrument Validation System (IVS) as a software system which has the capability of evaluating the performance of a set of functionally related instrument channels to identify failed instruments and to quantify instrument drift. Under funding from Combustion Engineering (C-E), the IVS has been developed to the extent that a computer program exists whose use has been demonstrated. The initial development work shows promise for success and for wide application, not only to power plants, but also to industrial manufacturing and process control. Applications in the aerospace and military sector are also likely

  8. Construct Validation of the Physics Metacognition Inventory

    Science.gov (United States)

    Taasoobshirazi, Gita; Farley, John

    2013-02-01

    The 24-item Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. Items were classified into eight subcomponents subsumed under two broader components: knowledge of cognition and regulation of cognition. The students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. An exploratory factor analysis provided evidence of construct validity, revealing six components of students' metacognition when solving physics problems including: knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. Although women and men differed on the components, they had equivalent overall metacognition for problem solving. The implications of these findings for future research are discussed.
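
    The reliability claim above is typically checked with an internal-consistency statistic; this is a minimal Cronbach's alpha over synthetic responses to a hypothetical 24-item inventory, not the published data.

        import numpy as np

        def cronbach_alpha(items):
            # items: n respondents x k items
            k = items.shape[1]
            return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        rng = np.random.default_rng(2)
        ability = rng.normal(size=(300, 1))                  # latent trait
        responses = ability + rng.normal(size=(300, 24))     # 24 noisy items
        print("alpha =", round(cronbach_alpha(responses), 2))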

  9. Overview of SCIAMACHY validation: 2002–2004

    Directory of Open Access Journals (Sweden)

    A. J. M. Piters

    2006-01-01

    Full Text Available SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. Since provisional releases of limited data sets in summer 2002, operational SCIAMACHY processors established at DLR on behalf of ESA were upgraded regularly and some data products – level-1b spectra, level-2 O3, NO2, BrO and clouds data – have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products – O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles

  10. Overview of SCIAMACHY validation: 2002-2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2006-01-01

    SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. Since provisional releases of limited data sets in summer 2002, operational SCIAMACHY processors established at DLR on behalf of ESA were upgraded regularly and some data products - level-1b spectra, level-2 O3, NO2, BrO and clouds data - have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products - O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles - already have acceptable

  11. Validation of a Cerebral Palsy Register

    DEFF Research Database (Denmark)

    Topp, Monica; Langhoff-Roos, Jens; Uldall, P.

    1997-01-01

    OBJECTIVES: To analyse completeness and validity of data in the Cerebral Palsy Register in Denmark, 1979-1982. METHODS: Completeness has been assessed by comparing data from The Danish National Patient Register (DNPR) with the cases included in the Cerebral Palsy Register (CPR). Agreement between......, but gestational age was subject to a systematic error, and urinary infections in pregnancy (kappa = 0.43) and placental abruption (kappa = 0.52) were seriously under-reported in the CPR. CONCLUSIONS: Completeness of the Cerebral Palsy Register in Denmark, 1979-1982, has been assessed to maximal 85%, emphasizing...

  12. Validity and efficacy of the labor contract

    Directory of Open Access Journals (Sweden)

    Jorge Toyama

    2012-12-01

    Full Text Available The validity and efficacy of the labor contract, as well as cases of nullity and defeasibility, call for an analysis of the scope of the supplementary application of the Civil Code, taking into account the peculiarities of Labor Law. The labor contract, as a legal transaction, has the Civil Code as its regulatory framework, but it is necessary to determine, in each case, whether to apply this normative body fully, to modulate its supplementary application, or simply to conclude that its regulation is not compatible with the special nature of labor relations. Specifically, this issue will be analyzed through cases of nullity and defeasibility of the labor contract.

  13. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature-calculating abilities of this model was performed by comparing temperature calculations under specified conditions to experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and to calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance.

  14. HTML Validation of Context-Free Languages

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2011-01-01

    We present an algorithm that generalizes HTML validation of individual documents to work on context-free sets of documents. Together with a program analysis that soundly approximates the output of Java Servlets and JSP web applications as context-free languages, we obtain a method for statically...... checking that such web applications never produce invalid HTML at runtime. Experiments with our prototype implementation demonstrate that the approach is useful: On 6 open source web applications consisting of a total of 104 pages, our tool finds 64 errors in less than a second per page, with 0 false...

  15. Validity and reliability of food security measures.

    Science.gov (United States)

    Cafiero, Carlo; Melgar-Quiñonez, Hugo R; Ballard, Terri J; Kepple, Anne W

    2014-12-01

    This paper reviews some of the existing food security indicators, discussing the validity of the underlying concept and the expected reliability of measures under reasonably feasible conditions. The main objective of the paper is to raise awareness on existing trade-offs between different qualities of possible food security measurement tools that must be taken into account when such tools are proposed for practical application, especially for use within an international monitoring framework. The hope is to provide a timely, useful contribution to the process leading to the definition of a food security goal and the associated monitoring framework within the post-2015 Development Agenda. © 2014 New York Academy of Sciences.

  16. The design, validation, and performance of Grace

    Directory of Open Access Journals (Sweden)

    Ru Zhu

    2016-05-01

    Full Text Available The design, validation and performance of Grace, a GPU-accelerated micromagnetic simulation software, are presented. The software adopts C++ Accelerated Massive Parallelism (C++ AMP) so that it runs on GPUs from various hardware vendors including NVidia, AMD and Intel. At large simulation scales, a speedup factor of up to two orders of magnitude is observed compared to the CPU-based micromagnetic simulation software OOMMF. The software can run on high-end professional GPUs as well as budget personal laptops, and is free to download.

  17. SCALE criticality safety verification and validation package

    International Nuclear Information System (INIS)

    Bowman, S.M.; Emmett, M.B.; Jordan, W.C.

    1998-01-01

    Verification and validation (V and V) are essential elements of software quality assurance (QA) for computer codes that are used for performing scientific calculations. V and V provides a means to ensure the reliability and accuracy of such software. As part of the SCALE QA and V and V plans, a general V and V package for the SCALE criticality safety codes has been assembled, tested and documented. The SCALE criticality safety V and V package is being made available to SCALE users through the Radiation Safety Information Computational Center (RSICC) to assist them in performing adequate V and V for their SCALE applications

  18. Transient Mixed Convection Validation for NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Barton [Utah State Univ., Logan, UT (United States); Schultz, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-10-19

    The results of this project are best described by the papers and dissertations that resulted from the work. They are included in their entirety in this document. They are: (1) Jeff Harris PhD dissertation (focused mainly on forced convection); (2) Blake Lance PhD dissertation (focused mainly on mixed and transient convection). This dissertation is in multi-paper format and includes the article currently submitted and one to be submitted shortly; and, (3) JFE paper on CFD Validation Benchmark for Forced Convection.

  19. Models for Validation of Prior Learning (VPL)

    DEFF Research Database (Denmark)

    Ehlers, Søren

    The national policies for the education/training of adults are in the 21st century highly influenced by proposals which are formulated and promoted by the European Union (EU) as well as other transnational players, and this shift in policy making has consequences. One is that ideas which in the past...... would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to award credits to students coming with experience from working life....

  20. The predictive validity of safety climate.

    Science.gov (United States)

    Johnson, Stephen E

    2007-01-01

    Safety professionals have increasingly turned their attention to social science for insight into the causation of industrial accidents. One social construct, safety climate, has been examined by several researchers [Cooper, M. D., & Phillips, R. A. (2004). Exploratory analysis of the safety climate and safety behavior relationship. Journal of Safety Research, 35(5), 497-512; Gillen, M., Baltz, D., Gassel, M., Kirsch, L., & Vacarro, D. (2002). Perceived safety climate, job Demands, and coworker support among union and nonunion injured construction workers. Journal of Safety Research, 33(1), 33-51; Neal, A., & Griffin, M. A. (2002). Safety climate and safety behaviour. Australian Journal of Management, 27, 66-76; Zohar, D. (2000). A group-level model of safety climate: Testing the effect of group climate on microaccidents in manufacturing jobs. Journal of Applied Psychology, 85(4), 587-596; Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90(4), 616-628] who have documented its importance as a factor explaining the variation of safety-related outcomes (e.g., behavior, accidents). Researchers have developed instruments for measuring safety climate and have established some degree of psychometric reliability and validity. The problem, however, is that predictive validity has not been firmly established, which reduces the credibility of safety climate as a meaningful social construct. The research described in this article addresses this problem and provides additional support for safety climate as a viable construct and as a predictive indicator of safety-related outcomes. This study used 292 employees at three locations of a heavy manufacturing organization to complete the 16 item Zohar Safety Climate Questionnaire (ZSCQ) [Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group

  1. Transient Mixed Convection Validation for NGNP

    International Nuclear Information System (INIS)

    Smith, Barton; Schultz, Richard

    2015-01-01

    The results of this project are best described by the papers and dissertations that resulted from the work. They are included in their entirety in this document. They are: (1) Jeff Harris PhD dissertation (focused mainly on forced convection); (2) Blake Lance PhD dissertation (focused mainly on mixed and transient convection). This dissertation is in multi-paper format and includes the article currently submitted and one to be submitted shortly; and, (3) JFE paper on CFD Validation Benchmark for Forced Convection.

  2. Validity - a matter of resonant experience

    DEFF Research Database (Denmark)

    Revsbæk, Line

    This paper is about doing interview analysis drawing on researcher’s own lived experience concerning the question of inquiry. The paper exemplifies analyzing case study participants’ experience from the resonant experience of researcher’s own life evoked while listening to recorded interview...... across researcher’s past experience from the case study and her own life. The autobiographic way of analyzing conventional interview material is exemplified with a case of a junior researcher researching newcomer innovation of others, drawing on her own experience of being newcomer in work community...... entry processes. The validity of doing interview analysis drawing on the resonant experience of researcher is argued from a pragmatist perspective....

  3. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
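
    A minimal sketch of the recipe named above, on synthetic data: the LASSO penalty is chosen by an inner cross-validation, the resulting model is scored by an outer cross-validation, and a label-permutation loop provides a null distribution for that score. Fold counts and the permutation budget are illustrative, not those of the study.

        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(120, 30))
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120)

        model = LassoCV(cv=5)                        # inner CV selects the penalty
        outer = cross_val_score(model, X, y, cv=5, scoring="r2")
        print("outer-CV R2:", outer.mean().round(3))

        # Permutation test: does the score beat chance on shuffled labels?
        null = [cross_val_score(model, X, rng.permutation(y),
                                cv=5, scoring="r2").mean() for _ in range(20)]
        p = (np.sum(np.array(null) >= outer.mean()) + 1) / (len(null) + 1)
        print("permutation p ~", round(p, 3))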

  4. Reliability and Concurrent Validity of the International Personality ...

    African Journals Online (AJOL)

    Reliability and Concurrent Validity of the International Personality item Pool (IPIP) Big-five Factor Markers in Nigeria. ... Nigerian Journal of Psychiatry ... Aims: The aim of this study was to assess the internal consistency and concurrent validity ...

  5. 78 FR 5866 - Pipeline Safety: Annual Reports and Validation

    Science.gov (United States)

    2013-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0319] Pipeline Safety: Annual Reports and Validation AGENCY: Pipeline and Hazardous Materials... 2012 gas transmission and gathering annual reports, remind pipeline owners and operators to validate...

  6. GPM GROUND VALIDATION CAMPAIGN REPORTS IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Campaign Reports IFloodS dataset consists of various reports filed by the scientists during the GPM Ground Validation Iowa Flood Studies...

  7. Guided exploration of physically valid shapes for furniture design

    KAUST Repository

    Umetani, Nobuyuki; Igarashi, Takeo; Mitra, Niloy J.

    2012-01-01

    Geometric modeling and the physical validity of shapes are traditionally considered independently. This makes creating aesthetically pleasing yet physically valid models challenging. We propose an interactive design framework for efficient

  8. Validation of gamma irradiator controls for quality and regulatory compliance

    International Nuclear Information System (INIS)

    Harding, R.B.; Pinteric, F.J.A.

    1995-01-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the current good manufacturing practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focussed on this component of the process validation. (author)

  9. Engineering Software Suite Validates System Design

    Science.gov (United States)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  10. Validation of evaluated neutron standard cross sections

    International Nuclear Information System (INIS)

    Badikov, S.; Golashvili, T.

    2008-01-01

    Some steps of the validation and verification of the new version of the evaluated neutron standard cross sections were carried out. In particular: -) the evaluated covariance data was checked for physical consistency, -) energy-dependent evaluated cross-sections were tested in the most important neutron benchmark field - the 252Cf spontaneous fission neutron field, -) a procedure of folding differential standard neutron data into group representation for the preparation of specialized libraries of the neutron standards was verified. The results of the validation and verification of the neutron standards can be summarized as follows: a) the covariance data of the evaluated neutron standards is physically consistent since all the covariance matrices of the evaluated cross sections are positive definite, b) the 252Cf spectrum-averaged standard cross-sections are in agreement with the evaluated integral data (except for the 197Au(n,γ) reaction), c) a procedure of folding differential standard neutron data into group representation was tested; as a result a specialized library of neutron standards in the ABBN 28-group structure was prepared for use in reactor applications. (authors)
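
    The folding procedure mentioned in point (c) amounts to spectrum-weighted averaging of a pointwise cross section over each group's energy bin; the sketch below uses a toy 1/v cross section and a 1/E weighting spectrum as placeholders, not evaluated standards data.

        import numpy as np

        energy = np.logspace(-3, 7, 2000)        # eV, pointwise grid
        sigma = 10.0 / np.sqrt(energy)           # toy 1/v cross section, barn
        phi = 1.0 / energy                       # toy 1/E weighting spectrum

        bounds = np.logspace(-3, 7, 29)          # 28 groups, ABBN-like structure
        group_sigma = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            m = (energy >= lo) & (energy < hi)
            # spectrum-weighted average of the cross section over the group
            group_sigma.append((sigma[m] * phi[m]).sum() / phi[m].sum())
        print(np.round(group_sigma, 3))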

  11. STAR-CCM+ Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  12. Experimental validation of prototype high voltage bushing

    Science.gov (United States)

    Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.

    2017-08-01

    Prototype High Voltage Bushing (PHVB) is a scaled-down configuration of the DNB High Voltage Bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, viz. ceramic and fiber-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary for 50 kV isolation between the grounded and high-voltage flanges. Stress shields are designed for smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and obtain precise stress calculations, quantitative analysis was performed using Scanning Electron Microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the Finite Element (FE) analysis. FE analysis of the PHVB was performed to find the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. An initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A voltage withstand test at 60 kV DC (20% above rated voltage) has also been performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB with experimental validation data is presented.

  13. Developing and validating a sham cupping device.

    Science.gov (United States)

    Lee, Myeong Soo; Kim, Jong-In; Kong, Jae Cheol; Lee, Dong-Hyo; Shin, Byung-Cheul

    2010-12-01

    The aims of this study were to develop a sham cupping device and to validate its use as a placebo control for healthy volunteers. A sham cupping device was developed by establishing a small hole to reduce the negative pressure after suction such that inner pressure could not be maintained in the cup. We enrolled 34 healthy participants to evaluate the validity of the sham cupping device as a placebo control. The participants were informed that they would receive either real or sham cupping and were asked which treatment they thought they had received. Other sensations and adverse events related to cupping therapy were investigated. 17 participants received real cupping therapy and 17 received sham cupping. The two groups felt similar sensations. There was a tendency for subjects to feel that real cupping created a stronger sensation than sham cupping (48.9±21.4 vs 33.3±20.3 on a 100 mm visual analogue scale). There were only mild to moderate adverse events observed in both groups. We developed a new sham cupping device that seems to provide a credible control for real cupping therapy by producing little or no negative pressure. This conclusion was supported by a pilot study, but more rigorous research is warranted regarding the use of this device.

  14. CFD Validation Studies for Hypersonic Flow Prediction

    Science.gov (United States)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  15. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  16. NDE reliability and advanced NDE technology validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Hutton, P.H.; Reid, L.D.; Simonen, F.A.; Spanner, J.C.; Vo, T.V.

    1989-01-01

    This paper reports on progress for three programs: (1) evaluation and improvement in nondestructive examination reliability for inservice inspection of light water reactors (LWR) (NDE Reliability Program), (2) field validation acceptance, and training for advanced NDE technology, and (3) evaluation of computer-based NDE techniques and regional support of inspection activities. The NDE Reliability Program objectives are to quantify the reliability of inservice inspection techniques for LWR primary system components through independent research and establish means for obtaining improvements in the reliability of inservice inspections. The areas of significant progress will be described concerning ASME Code activities, re-analysis of the PISC-II data, the equipment interaction matrix study, new inspection criteria, and PISC-III. The objectives of the second program are to develop field procedures for the AE and SAFT-UT techniques, perform field validation testing of these techniques, provide training in the techniques for NRC headquarters and regional staff, and work with the ASME Code for the use of these advanced technologies. The final program's objective is to evaluate the reliability and accuracy of interpretation of results from computer-based ultrasonic inservice inspection systems, and to develop guidelines for NRC staff to monitor and evaluate the effectiveness of inservice inspections conducted on nuclear power reactors. This program started in the last quarter of FY89, and the extent of the program was to prepare a work plan for presentation to and approval from a technical advisory group of NRC staff

  17. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and it is implemented as intended. The cavern BH101 long term nitrogen test was used to validate the model which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
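
    At its core such a model rests on the hydrostatic relation between cavern pressure and wellhead pressure; the sketch below subtracts the weight of the fluid columns above the gauge. Densities, heights, and the cavern pressure are illustrative placeholders rather than SPR well data.

        g = 9.81                                   # m/s^2

        def wellhead_pressure(p_cavern, columns):
            """columns: (density kg/m^3, height m) pairs from cavern to wellhead."""
            return p_cavern - g * sum(rho * h for rho, h in columns)

        p_cav = 12.0e6                             # Pa, at the cavern interface
        cols = [(850.0, 300.0),                    # crude oil column
                (180.0, 400.0)]                    # compressed nitrogen column
        print("wellhead: %.2f MPa" % (wellhead_pressure(p_cav, cols) / 1e6))

    Tracking how this balance shifts as the nitrogen/oil interface moves is what lets such a model distinguish tight wells from small leaks.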

  18. CASL Validation Data: An Initial Review

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2011-01-01

    The study aims to establish a comprehensive view of the “data” needed to support implementation of the Consortium for Advanced Simulation of LWRs (CASL). Insights from this review (and its continual refinement), together with other elements developed in CASL, should provide the foundation for developing the CASL Validation Data Plan (VDP). The VDP is instrumental to the development and assessment of CASL simulation tools as a predictive capability. Most importantly, to be useful for CASL, the VDP must be devised (and agreed upon by all participating stakeholders) with appropriate account for the nature of nuclear engineering applications, the availability, types and quality of CASL-related data, and the novelty of CASL goals and its approach to the selected challenge problems. The initial review (summarized in the January 2011 version of this report) discusses a broad range of methodological issues in data review and the Validation Data Plan. Such a top-down emphasis in data review is both needed to see the big picture on CASL data and appropriate when the actual data are not available for detailed scrutiny. As the data become available later in 2011, a revision of the data review (and regular updates) should be performed. It is expected that the basic framework for review laid out in this report will help streamline the CASL data review in the way most pertinent to the CASL VDP.

  19. Verification Validation and Uncertainty Quantification for CGS

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kamm, James R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    The overall conduct of verification, validation and uncertainty quantification (VVUQ) is discussed through the construction of a workflow relevant to computational modeling including the turbulence problem in the coarse grained simulation (CGS) approach. The workflow contained herein is defined at a high level and constitutes an overview of the activity. Nonetheless, the workflow represents an essential activity in predictive simulation and modeling. VVUQ is complex and necessarily hierarchical in nature. The particular characteristics of VVUQ elements depend upon where the VVUQ activity takes place in the overall hierarchy of physics and models. In this chapter, we focus on the differences between and interplay among validation, calibration and UQ, as well as the difference between UQ and sensitivity analysis. The discussion in this chapter is at a relatively high level and attempts to explain the key issues associated with the overall conduct of VVUQ. The intention is that computational physicists can refer to this chapter for guidance regarding how VVUQ analyses fit into their efforts toward conducting predictive calculations.

  20. Range of validity of transport equations

    International Nuclear Information System (INIS)

    Berges, Juergen; Borsanyi, Szabolcs

    2006-01-01

    Transport equations can be derived from quantum field theory assuming a loss of information about the details of the initial state and a gradient expansion. While the latter can be systematically improved, the assumption about a memory loss is not known to be controlled by a small expansion parameter. We determine the range of validity of transport equations for the example of a scalar g²Φ⁴ theory. We solve the nonequilibrium time evolution using the three-loop 2PI effective action. The approximation includes off-shell and memory effects and assumes no gradient expansion. This is compared to transport equations to lowest order (LO) and beyond (NLO). We find that the earliest time for the validity of transport equations is set by the characteristic relaxation time scale t_damp = -2ω/Σ_ρ^(eq), where -Σ_ρ^(eq)/2 denotes the on-shell imaginary part of the self-energy. This time scale agrees with the characteristic time for partial memory loss, but is much shorter than thermal equilibration times. For times larger than about t_damp the gradient expansion to NLO is found to describe the full results rather well for g² ≲ 1

  1. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.
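
    The 'success-oriented' transformation described above can be illustrated by taking the De Morgan dual of a fault tree: AND and OR gates are swapped and basic events negated. The toy tree below is hypothetical, not from the Darlington models, and the report's actual procedure may involve more than this dualisation.

        def dual(node):
            if isinstance(node, str):              # basic event -> its negation
                return ("NOT", node)
            gate, *children = node
            swapped = "OR" if gate == "AND" else "AND"
            return (swapped, *[dual(c) for c in children])

        # Toy fault tree: the top event occurs if both pump and valve fail,
        # or if power is lost.
        fault_tree = ("OR", ("AND", "pump_fails", "valve_fails"), "power_lost")
        print(dual(fault_tree))
        # -> success tree: power available AND (pump works OR valve works)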

  2. Construct Validation of Wenger's Support Network Typology.

    Science.gov (United States)

    Szabo, Agnes; Stephens, Christine; Allen, Joanne; Alpass, Fiona

    2016-10-07

    The study aimed to validate Wenger's empirically derived support network typology of responses to the Practitioner Assessment of Network Type (PANT) in an older New Zealander population. The configuration of network types was tested across ethnic groups and in the total sample. Data (N = 872, M_age = 67 years, SD_age = 1.56 years) from the 2006 wave of the New Zealand Health, Work and Retirement study were analyzed using latent profile analysis. In addition, demographic differences among the emerging profiles were tested. Competing models were evaluated based on a range of fit criteria, which supported a five-profile solution. The "locally integrated," "community-focused," "local self-contained," "private-restricted," and "friend- and family-dependent" network types were identified as latent profiles underlying the data. There were no differences between Māori and non-Māori in final profile configurations. However, Māori were more likely to report integrated network types. Findings confirm the validity of Wenger's network types. However, the level to which participants endorse accessibility of family, frequency of interactions, and community engagement can be influenced by sample and contextual characteristics. Future research using the PANT items should empirically verify and derive the social support network types, rather than use a predefined scoring system. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
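
    Latent profile analysis is closely related to Gaussian mixture modelling, so the model-comparison step can be sketched with scikit-learn: fit candidate profile counts and compare them by BIC. The item data below are synthetic, with two planted profiles, and BIC stands in for the fuller set of fit criteria used in the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0.0, 1.0, size=(500, 8)),   # 872 respondents x
                       rng.normal(1.5, 1.0, size=(372, 8))])  # 8 synthetic items

        for k in range(1, 7):
            gm = GaussianMixture(n_components=k, covariance_type="diag",
                                 random_state=0).fit(X)
            print(f"profiles={k}  BIC={gm.bic(X):.0f}")   # lower BIC is better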

  3. Validating Avionics Conceptual Architectures with Executable Specifications

    Directory of Open Access Journals (Sweden)

    Nils Fischer

    2012-08-01

    Full Text Available Current avionics systems specifications, developed after conceptual design, have a high degree of uncertainty. Since specifications are not sufficiently validated in the early development process and no executable specification exists at aircraft level, system designers cannot evaluate the impact of their design decisions at aircraft or aircraft application level. At the end of the development process of complex systems, e.g. aircraft, an average of about 65 per cent of all specifications have to be changed because they are incorrect, incomplete or too vaguely described. In this paper, a model-based design methodology together with a virtual test environment is described that makes complex high level system specifications executable and testable during the very early levels of system design. An aircraft communication system and its system context is developed to demonstrate the proposed early validation methodology. Executable specifications for early conceptual system architectures enable system designers to couple functions, architecture elements, resources and performance parameters, often called non-functional parameters. An integrated executable specification at Early Conceptual Architecture Level is developed and used to determine the impact of different system architecture decisions on system behavior and overall performance.

  4. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
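
    A minimal sketch of the idea on a synthetic stream: several online learners are updated batch by batch, each new batch first scores the candidates before they train on it (the online cross-validation step), and the selector keeps the candidate with the lowest accumulated loss. The candidate library and squared-error loss are placeholders for the richer ensembles described above.

        import numpy as np
        from sklearn.linear_model import SGDRegressor

        rng = np.random.default_rng(5)
        candidates = [SGDRegressor(alpha=a, random_state=0)
                      for a in (1e-4, 1e-2, 1.0)]
        cv_loss = np.zeros(len(candidates))

        for t in range(200):                       # stream of data batches
            X = rng.normal(size=(32, 5))
            y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0])
                 + rng.normal(scale=0.3, size=32))
            for i, m in enumerate(candidates):
                if t > 0:                          # score on data not yet seen
                    cv_loss[i] += np.mean((m.predict(X) - y) ** 2)
                m.partial_fit(X, y)                # then update the learner

        print("selected alpha:", candidates[int(np.argmin(cv_loss))].alpha)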

  5. RESEM-CA: Validation and testing

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Vineeta; Carroll, William L.; Bourassa, Norman

    2002-09-01

    This report documents the results of an extended comparison of RESEM-CA energy and economic performance predictions with the recognized benchmark tool DOE2.1E to determine the validity and effectiveness of this tool for retrofit design and analysis. The analysis was a two part comparison of patterns of (1) monthly and annual energy consumption of a simple base-case building and controlled variations in it to explore the predictions of load components of each program, and (2) a simplified life-cycle cost analysis of the predicted effects of selected Energy Conservation Measures (ECMs). The study tries to analyze and/or explain the differences that were observed. On the whole, this validation study indicates that RESEM is a promising tool for retrofit analysis. As a result of this study some factors (incident solar radiation, outside air film coefficient, IR radiation) have been identified where there is a possibility of algorithmic improvements. These would have to be made in a way that does not sacrifice the speed of the tool, necessary for extensive parametric search of optimum ECM measures.

  6. Validated assessment scales for the lower face.

    Science.gov (United States)

    Narins, Rhoda S; Carruthers, Jean; Flynn, Timothy C; Geister, Thorin L; Görtelmeyer, Roman; Hardas, Bhushan; Himmrich, Silvia; Jones, Derek; Kerscher, Martina; de Maio, Maurício; Mohrmann, Cornelia; Pooth, Rainer; Rzany, Berthold; Sattler, Gerhard; Buchner, Larry; Benter, Ursula; Breitscheidel, Lusine; Carruthers, Alastair

    2012-02-01

    Aging in the lower face leads to lines, wrinkles, depression of the corners of the mouth, and changes in lip volume and lip shape, with increased sagging of the skin of the jawline. Refined, easy-to-use, validated, objective standards assessing the severity of these changes are required in clinical research and practice. To establish the reliability of eight lower face scales assessing nasolabial folds, marionette lines, upper and lower lip fullness, lip wrinkles (at rest and dynamic), the oral commissure and jawline, aesthetic areas, and the lower face unit. Four 5-point rating scales were developed to objectively assess upper and lower lip wrinkles, oral commissures, and the jawline. Twelve experts rated identical lower face photographs of 50 subjects in two separate rating cycles using eight 5-point scales. Inter- and intrarater reliability of responses was assessed. Interrater reliability was substantial or almost perfect for all lower face scales, aesthetic areas, and the lower face unit. Intrarater reliability was high for all scales, areas and the lower face unit. Our rating scales are reliable tools for valid and reproducible assessment of the aging process in lower face areas. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.

  7. A project manager's primer on data validation

    International Nuclear Information System (INIS)

    Ramos, S.J.

    1991-01-01

    While carrying out their central responsibility of conducting an environmental investigation in a high-quality, timely, and cost-effective manner, project managers also face a significant challenge due to the many inherent uncertainties associated with characterizing and remediating sites. From all aspects and considerations (health and financial risks; and technical, professional, and legal defensibility/credibility), the project manager must minimize the uncertainty associated with making decisions based on environmental data. A key objective for every project manager is to produce sufficient data of known and acceptable quality. In simple terms, the level of confidence in the gathered data directly relates to: (1) the upfront determination of the types and uses of the data needed (which drives the required quality of the data); (2) the ongoing verification that the prescribed methods by which the data are to be obtained and analyzed are being followed; and (3) the validation of the verified data to determine whether the preestablished data quality objectives have been met, therefore making the data adequate for their intended use(s). This paper focuses on the third element of the equation for data quality, therefore implying that the first two elements (planning and verification) have been accomplished. The "Who," "What," "Why," "When," and "How" of data validation are discussed in general terms.

  8. PRA (Probabilistic Risk Assessments) Participation versus Validation

    Science.gov (United States)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems and failure data. This methodology basically validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process where the PRA model can be used to determine if the mitigation technique is effective in reducing risk. This can result in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. Providing risk information which can positively influence final system and equipment design, the PRA tool can also participate in design development, providing a safe and cost-effective product.

  9. Validation of the Impostor Phenomenon among Managers.

    Science.gov (United States)

    Rohrmann, Sonja; Bechtoldt, Myriam N; Leonhardt, Mona

    2016-01-01

    Following up on earlier investigations, the present research aims at validating the construct impostor phenomenon by taking other personality correlates into account and to examine whether the impostor phenomenon is a construct in its own right. In addition, gender effects as well as associations with dispositional working styles and strain are examined. In an online study we surveyed a sample of N = 242 individuals occupying leadership positions in different sectors. Confirmatory factor analyses provide empirical evidence for the discriminant validity of the impostor phenomenon. In accord with earlier studies we show that the impostor phenomenon is accompanied by higher levels of anxiety, dysphoric moods, emotional instability, a generally negative self-evaluation, and perfectionism. The study does not reveal any gender differences concerning the impostor phenomenon. With respect to working styles, persons with an impostor self-concept tend to show perfectionist as well as procrastinating behaviors. Moreover, they report being more stressed and strained by their work. In sum, the findings show that the impostor phenomenon constitutes a dysfunctional personality style. Practical implications are discussed.

  10. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  11. Addenbrooke's Cognitive Examination validation in Parkinson's disease.

    Science.gov (United States)

    Reyes, M A; Perez-Lloret, S; Lloret, S P; Roldan Gerschcovich, E; Gerscovich, E R; Martin, M E; Leiguarda, R; Merello, M

    2009-01-01

    There is a clear need for brief, sensitive and specific cognitive screening instruments in Parkinson's disease (PD). To study Addenbrooke's Cognitive Examination (ACE) validity for cognitive assessment of PD patient's using the Mattis Dementia Rating Scale (MDRS) as reference method. A specific scale for cognitive evaluation in PD, in this instance the Scales for Outcomes of Parkinson's disease-Cognition (SCOPA-COG), as well as a general use scale the Mini-mental state examination (MMSE) were also studied for further correlation. Forty-four PD patients were studied, of these 27 were males (61%), with a mean (SD) age of 69.5 (11.8) years, mean (SD) disease duration of 7.6 (6.4) years (range 1-25), mean (SD) total Unified Parkinson's Disease Rating Scale (UPDRS) score 37 (24) points, UPDRS III 16.5 (11.3) points. MDRS, ACE and SCOPA-COG scales were administered in random order. All patients remained in on-state during the study. Addenbrooke's Cognitive Examination correlated with SCOPA-COG (r = 0.93, P Addenbrooke's Cognitive Examination appears to be a valid tool for dementia evaluation in PD, with a cut-off point which should probably be set at 83 points, displaying good correlation with both the scale specifically designed for cognitive deficits in PD namely SCOPA-COG, as well as with less specific tests such as MMSE.

  12. Validation testing of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Hang Bae; Han, Jae Bok

    1995-01-01

    A software engineering process has been developed for the design of safety-critical software for the Wolsung 2/3/4 project to satisfy the requirements of the regulatory body. Within that process, this paper describes the detailed validation testing performed to ensure that the software, together with its hardware, developed by the design group satisfies the requirements of the functional specification prepared by the independent functional group. To perform the tests, a test facility and test software were developed and the actual safety system computer was connected. Three kinds of test cases, i.e., functional tests, performance tests and self-check tests, were programmed and run to verify each functional specification. Test failures were fed back to the design group to revise the software, and test results were analyzed and documented in the report submitted to the regulatory body. The test methodology and procedure were very efficient and satisfactory for performing systematic and automatic testing. The test results were also acceptable and successful in verifying that the software acts as specified in the program functional specification. This methodology can be applied to the validation of other safety-critical software. 2 figs., 2 tabs., 14 refs. (Author)

  13. Elder abuse telephone screen reliability and validity.

    Science.gov (United States)

    Buri, Hilary M; Daly, Jeanette M; Jogerst, Gerald J

    2009-01-01

    (a) To identify reliable and valid questions that identify elder abuse, (b) to assess the reliability and validity of extant self-reported elder abuse screens in a high-risk elderly population, and (c) to describe difficulties of completing and interpreting screens in a high-need elderly population. All elders referred to research-trained social workers in a community service agency were asked to participate. Of the 70 elders asked, 49 participated, 44 completed the first questionnaire, and 32 completed the duplicate second questionnaire. A research assistant administered the telephone questionnaires. Twenty-nine (42%) persons were judged abused, 12 (17%) had abuse reported, and 4 (6%) had abuse substantiated. The elder abuse screen instruments were not found to be predictive of assessed abuse or as predictors of reported abuse; the measures tended toward being inversely predictive. Two questions regarding harm and taking of belongings were significantly different for the assessed abused group. In this small group of high-need community-dwelling elders, the screens were not effective in discriminating between abused and nonabused groups. Better instruments are needed to assess for elder abuse.

  14. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Full Text Available Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions, and to assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo in human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users to generate customized epitope sets.

  15. Precision validation of MIPAS-Envisat products

    Directory of Open Access Journals (Sweden)

    C. Piccolo

    2007-01-01

    Full Text Available This paper discusses the variation and validation of the precision, or estimated random error, associated with the ESA Level 2 products from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS. This quantity represents the propagation of the radiometric noise from the spectra through the retrieval process into the Level 2 profile values. The noise itself varies with time, steadily rising between ice decontamination events, but the Level 2 precision has a greater variation due to the atmospheric temperature which controls the total radiance received. Hence, for all species, the precision varies latitudinally/seasonally with temperature, with a small superimposed temporal structure determined by the degree of ice contamination on the detectors. The precision validation involves comparing two MIPAS retrievals at the intersections of ascending/descending orbits. For 5 days per month of full resolution MIPAS operation, the standard deviation of the matching profile pairs is computed and compared with the precision given in the MIPAS Level 2 data, except for NO2 since it has a large diurnal variation between ascending/descending intersections. Even taking into account the propagation of the pressure-temperature retrieval errors into the VMR retrieval, the standard deviation of the matching pairs is usually a factor 1–2 larger than the precision. This is thought to be due to effects such as horizontal inhomogeneity of the atmosphere and instability of the retrieval.
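
    The pairwise check described here has a compact statistical core: if matched retrievals carry independent random errors equal to the quoted precisions, the variance of their difference should equal the sum of the squared precisions. A minimal sketch of that comparison, with synthetic arrays standing in for matched ascending/descending profile pairs (all values hypothetical):

        import numpy as np

        def precision_ratio(prof_a, prof_b, prec_a, prec_b):
            """Observed scatter of matched pairs divided by the scatter implied
            by the quoted precisions, per altitude level. Ratios well above 1
            suggest extra variability, e.g. horizontal inhomogeneity."""
            observed = (prof_a - prof_b).std(axis=0, ddof=1)
            expected = np.sqrt(prec_a**2 + prec_b**2).mean(axis=0)
            return observed / expected

        rng = np.random.default_rng(0)
        pairs, levels = 500, 20
        truth = np.linspace(200, 260, levels)              # hypothetical profile
        prec = np.full((pairs, levels), 1.5)               # quoted precision
        a = truth + rng.normal(0, 1.5, (pairs, levels))    # "ascending" retrievals
        b = truth + rng.normal(0, 1.5, (pairs, levels))    # "descending" retrievals
        print(precision_ratio(a, b, prec, prec).round(2))  # close to 1 everywhere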

  16. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

    Full Text Available Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without accounting for (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle and head-and-shoulders.
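
    The final step, testing whether a detected pattern carries informational content, can be framed as comparing post-pattern returns against returns over random windows of the same series. A hedged sketch of one such test; the pattern-detection indices here are invented stand-ins, and the Kolmogorov-Smirnov test is one of several reasonable choices:

        import numpy as np
        from scipy.stats import ks_2samp

        def pattern_information_test(returns, pattern_ends, horizon=5, n_baseline=1000):
            """Compare cumulative returns after detected patterns with cumulative
            returns over random windows; a small p-value suggests the pattern
            carries informational content."""
            post = np.array([returns[i:i + horizon].sum()
                             for i in pattern_ends if i + horizon <= len(returns)])
            rng = np.random.default_rng(1)
            starts = rng.integers(0, len(returns) - horizon, size=n_baseline)
            baseline = np.array([returns[s:s + horizon].sum() for s in starts])
            return ks_2samp(post, baseline)

        rng = np.random.default_rng(0)
        rets = rng.normal(0.0, 0.01, 5000)                 # homogenized return series
        fake_pattern_ends = rng.integers(0, 4990, size=60) # hypothetical detections
        print(pattern_information_test(rets, fake_pattern_ends))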

  17. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat à l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Sûreté Nucléaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% {sup 235}U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The applicability of the HTC data to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  18. Validation of the Rotation Ratios Method

    International Nuclear Information System (INIS)

    Foss, O.A.; Klaksvik, J.; Benum, P.; Anda, S.

    2007-01-01

    Background: The rotation ratios method describes rotations between pairs of sequential pelvic radiographs. The method seems promising but has not been validated. Purpose: To validate the accuracy of the rotation ratios method. Material and Methods: Known pelvic rotations between 165 radiographs obtained from five skeletal pelvises in an experimental material were compared with the corresponding calculated rotations to describe the accuracy of the method. The results from a clinical material of 262 pelvic radiographs from 46 patients defined the ranges of rotational differences compared. Repeated analyses, both on the experimental and the clinical material, were performed using the selected reference points to describe the robustness and the repeatability of the method. Results: The reference points were easy to identify and barely influenced by pelvic rotations. The mean differences between calculated and real pelvic rotations were 0.0 deg (SD 0.6) for vertical rotations and 0.1 deg (SD 0.7) for transversal rotations in the experimental material. The intra- and interobserver repeatability of the method was good. Conclusion: The accuracy of the method was reasonably high, and the method may prove to be clinically useful

  19. Validating and comparing GNSS antenna calibrations

    Science.gov (United States)

    Kallio, Ulla; Koivula, Hannu; Lahtinen, Sonja; Nikkonen, Ville; Poutanen, Markku

    2018-03-01

    GNSS antennas have no fixed electrical reference point. The variation of the phase centre is modelled and tabulated in antenna calibration tables, which include the offset vector (PCO) and phase centre variation (PCV) for each frequency according to the elevations and azimuths of the incoming signal. Used together, PCV and PCO reduce the phase observations to the antenna reference point. The remaining biases, called the residual offsets, can be revealed by circulating and rotating the antennas on pillars. The residual offsets are estimated as additional parameters when combining the daily GNSS network solutions with full covariance matrix. We present a procedure for validating the antenna calibration tables. The dedicated test field, called Revolver, was constructed at Metsähovi. We used the procedure to validate the calibration tables of 17 antennas. Tables from the IGS and three different calibration institutions were used. The tests show that we were able to separate the residual offsets at the millimetre level. We also investigated the influence of the calibration tables from the different institutions on site coordinates by performing kinematic double-difference baseline processing of the data from one site with different antenna tables. We found small but significant differences between the tables.
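
    The correction encoded by such a table is mechanically simple: project the PCO onto the line-of-sight unit vector and add the interpolated PCV for the signal's elevation and azimuth. A minimal sketch under assumed conventions (values in metres, north/east/up frame, 5-degree grid); the table values are hypothetical:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical calibration table for one frequency.
        pco = np.array([0.001, -0.0005, 0.085])     # PCO: north, east, up (m)
        elev = np.arange(0.0, 91.0, 5.0)
        azim = np.arange(0.0, 361.0, 5.0)
        pcv = 0.002 * np.cos(np.radians(elev))[:, None] * np.ones((elev.size, azim.size))
        pcv_at = RegularGridInterpolator((elev, azim), pcv)

        def phase_centre_correction(el_deg, az_deg):
            """Range correction (m) reducing a phase observation to the ARP."""
            el, az = np.radians(el_deg), np.radians(az_deg)
            los = np.array([np.cos(el) * np.cos(az),     # line-of-sight unit vector
                            np.cos(el) * np.sin(az),
                            np.sin(el)])
            return pco @ los + pcv_at([el_deg, az_deg])[0]

        print(phase_centre_correction(45.0, 120.0))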

  20. COVERS Neonatal Pain Scale: Development and Validation

    Directory of Open Access Journals (Sweden)

    Ivan L. Hand

    2010-01-01

    Full Text Available Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile, and the Neonatal Infant Pain Scale, as well as the COVERS Scale, depending upon gestational age. Premature infant testing resulted in similar pain assessments using the COVERS and PIPP scales with an r=0.84. For the full-term infants, the COVERS scale and NIPS scale resulted in similar pain assessments with an r=0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.

  1. Validation Tools for ATLAS Muon Spectrometer Commissioning

    International Nuclear Information System (INIS)

    Benekos, N.Chr.; Dedes, G.; Laporte, J.F.; Nicolaidou, R.; Ouraou, A.

    2008-01-01

    The ATLAS Muon Spectrometer (MS), currently being installed at CERN, is designed to measure final-state muons of 14 TeV proton-proton interactions at the Large Hadron Collider (LHC) with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, taking into account the high-level background environment, the inhomogeneous magnetic field, and the large size of the apparatus (24 m diameter by 44 m length). The MS layout of the ATLAS detector is made of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking and dedicated fast detectors for the first-level trigger, and is organized in eight Large and eight Small sectors. All the detectors of the barrel toroid have been installed and commissioning has started with cosmic rays. In order to validate the MS performance using cosmic events, a Muon Commissioning Validation package has been developed; its results are presented in this paper. Integration with the rest of the ATLAS sub-detectors is now being done in the ATLAS cavern

  2. Identification and Validation of ESP Teacher Competencies: A Research Design

    Science.gov (United States)

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  3. How Developments in Psychology and Technology Challenge Validity Argumentation

    Science.gov (United States)

    Mislevy, Robert J.

    2016-01-01

    Validity is the sine qua non of properties of educational assessment. While a theory of validity and a practical framework for validation has emerged over the past decades, most of the discussion has addressed familiar forms of assessment and psychological framings. Advances in digital technologies and in cognitive and social psychology have…

  4. Validation and Design Science Research in Information Systems

    NARCIS (Netherlands)

    Sol, H G; Gonzalez, Rafael A.; Mora, Manuel

    2012-01-01

    Validation within design science research in Information Systems (DSRIS) is much debated. The relationship of validation to artifact evaluation is still not clear. This chapter aims at elucidating several components of DSRIS in relation to validation. The roles of theory and theorizing are…

  5. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

    The problem of validating estimates of absorbed dose due to photon energy deposition is examined. The computational approaches used for estimating photon energy deposition are reviewed. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained

  6. The Value of Qualitative Methods in Social Validity Research

    Science.gov (United States)

    Leko, Melinda M.

    2014-01-01

    One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…

  7. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The main approaches used by banks to reduce credit risk are correct validation using the final status and validation of the model parameters. Large bank reserves and lost or outstanding bank facilities indicate the lack of appropriate validation models in the banking network.

  8. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    45 CFR Part 162 (Administrative Requirements, Code Sets), § 162.1011 Valid code sets (2010-10-01 edition): Each code set is valid within the dates specified by the organization responsible for maintaining that code set.
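
    In software terms the rule reduces to a date-window check against the validity period published by the maintaining organization. A trivial illustration, with a hypothetical code set and dates:

        from datetime import date

        # Hypothetical validity window published by the maintaining organization.
        CODE_SET_VALIDITY = {"EXAMPLE-CODE-SET-2010": (date(2009, 10, 1), date(2010, 9, 30))}

        def code_set_is_valid(name, on_date):
            """True if the code set is valid on the given date."""
            start, end = CODE_SET_VALIDITY[name]
            return start <= on_date <= end

        print(code_set_is_valid("EXAMPLE-CODE-SET-2010", date(2010, 3, 15)))  # True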

  9. How Mathematicians Determine if an Argument Is a Valid Proof

    Science.gov (United States)

    Weber, Keith

    2008-01-01

    The purpose of this article is to investigate the mathematical practice of proof validation--that is, the act of determining whether an argument constitutes a valid proof. The results of a study with 8 mathematicians are reported. The mathematicians were observed as they read purported mathematical proofs and made judgments about their validity;…

  10. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  11. Initial Reliability and Validity of the Perceived Social Competence Scale

    Science.gov (United States)

    Anderson-Butcher, Dawn; Iachini, Aidyn L.; Amorose, Anthony J.

    2008-01-01

    Objective: This study describes the development and validation of a perceived social competence scale that social workers can easily use to assess children's and youth's social competence. Method: Exploratory and confirmatory factor analyses were conducted on a calibration and a cross-validation sample of youth. Predictive validity was also…

  12. Validation of the Information/Communications Technology Literacy Test

    Science.gov (United States)

    2016-10-01

    Technical Report 1360: Validation of the Information/Communications Technology Literacy Test. D. Matthew Trippe, Human Resources Research… Contract/grant number: W91WAS-09-D-0013. …validate a measure of cyber aptitude, the Information/Communications Technology Literacy Test (ICTL), in predicting trainee performance in Information…

  13. Somatic Sensitivity and Reflexivity as Validity Tools in Qualitative Research

    Science.gov (United States)

    Green, Jill

    2015-01-01

    Validity is a key concept in qualitative educational research. Yet, it is often not addressed in methodological writing about dance. This essay explores validity in a postmodern world of diverse approaches to scholarship, by looking at the changing face of validity in educational qualitative research and at how new understandings of the concept…

  14. On the Need for Quality Control in Validation Research.

    Science.gov (United States)

    Maier, Milton H.

    1988-01-01

    Aptitude tests used to help make personnel decisions about military recruits were validated against hands-on tests of job performance for radio repairers and automotive mechanics. The data were filled with errors, reducing the accuracy of the validity coefficients. Discusses how validity coefficients can be made more accurate by exercising quality control during…

  15. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  16. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap, comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S, with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, the overall estimated genetic…
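
    The heart of the proposed check is a weighted linear regression of within-year genetic variance estimates on birth year, weighted by the inverse of their prediction error variances, followed by a confidence interval for the trend. A minimal numpy sketch of that step, with simulated estimates; the tolerance and outlier tests described above are omitted:

        import numpy as np

        def variance_trend(years, var_est, pev):
            """Weighted least-squares slope of variance estimates on year,
            with an approximate 95% confidence interval for the trend."""
            years, var_est = np.asarray(years, float), np.asarray(var_est, float)
            w = 1.0 / np.asarray(pev, float)          # weight = 1 / PEV
            X = np.column_stack([np.ones_like(years), years])
            XtW = X.T * w
            beta = np.linalg.solve(XtW @ X, XtW @ var_est)
            resid = var_est - X @ beta
            s2 = (w * resid**2).sum() / (len(years) - 2)
            cov = s2 * np.linalg.inv(XtW @ X)
            slope, se = beta[1], np.sqrt(cov[1, 1])
            return slope, (slope - 1.96 * se, slope + 1.96 * se)

        rng = np.random.default_rng(0)
        years = np.arange(1995, 2015)
        est = 100 + 0.5 * (years - 1995) + rng.normal(0, 1, years.size)  # 0.5/yr trend
        print(variance_trend(years, est, pev=np.ones(years.size)))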

  17. On the validation of risk analysis-A commentary

    International Nuclear Information System (INIS)

    Rosqvist, Tony

    2010-01-01

    Aven and Heide (2009) [1] provided interesting views on the reliability and validation of risk analysis. The four validation criteria presented are contrasted with modelling features related to the relative frequency-based and Bayesian approaches to risk analysis. In this commentary I would like to bring forth some issues on validation that partly confirm and partly suggest changes in the interpretation of the introduced validation criteria, especially in the context of low-probability/high-consequence systems. The mental model of an expert in assessing probabilities is argued to be a key notion in understanding the validation of a risk analysis.

  18. Computer system validation: an overview of official requirements and standards.

    Science.gov (United States)

    Hoffmann, A; Kähny-Simonius, J; Plattner, M; Schmidli-Vckovski, V; Kronseder, C

    1998-02-01

    A brief overview is presented of the documents that companies in the pharmaceutical industry must take into consideration to fulfil computer system validation requirements. We concentrate on official requirements and valid standards in the USA, the European Community and Switzerland. There are basically three GMP guidelines, their interpretations by associations of interest such as APV and PDA, and the GAMP Suppliers Guide. The three GMP guidelines imply the same philosophy of computer system validation: they describe more of a what-to-do approach to validation, whereas the GAMP Suppliers Guide describes how to do validation. Nevertheless, they do not contain major discrepancies.

  19. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
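
    At each scale the comparison reduces to symbolizing both signals into falling/steady/rising trends and scoring their agreement, with the scale set by the differencing window. A small sketch of that idea; the signals, scales and tolerance are hypothetical:

        import numpy as np

        def qualitative_trend(signal, window, tol=1e-6):
            """Symbolize a series as -1/0/+1 trends over a given scale."""
            d = signal[window:] - signal[:-window]
            return np.sign(np.where(np.abs(d) < tol, 0.0, d)).astype(int)

        def multiscale_agreement(model_out, reference, scales=(1, 5, 20)):
            """Fraction of matching trend symbols at each scale."""
            return {s: float(np.mean(qualitative_trend(model_out, s)
                                     == qualitative_trend(reference, s)))
                    for s in scales}

        t = np.linspace(0.0, 10.0, 200)
        ref = np.exp(-t) * np.cos(2 * t)       # stand-in for plant behaviour
        sim = ref + 0.01 * np.sin(9 * t)       # simulation with a small deviation
        print(multiscale_agreement(sim, ref))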

  20. Isotopic and criticality validation for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Fuentes, E.; Lancaster, D.; Rahimi, M.

    1997-01-01

    The techniques used for actinide-only burnup credit isotopic validation and criticality validation are presented and discussed. Trending analyses have been incorporated into both methodologies, requiring biases and uncertainties to be treated as a function of the trending parameters. The isotopic validation is demonstrated using the SAS2H module of SCALE 4.2, with the 27BURNUPLIB cross section library; correction factors are presented for each of the actinides in the burnup credit methodology. For the criticality validation, the demonstration is performed with the CSAS module of SCALE 4.2 and the 27BURNUPLIB, resulting in a validated upper safety limit
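
    The trending analyses mentioned here make bias a function of a parameter by regressing calculated k_eff for the benchmark set on that parameter. A hedged sketch of just that regression step (synthetic data; not the full upper-safety-limit methodology):

        import numpy as np

        def keff_bias_trend(param, keff_calc, keff_expected=1.0):
            """Fit bias = keff_calc - expected as a linear function of a trending
            parameter; return the fit coefficients and the residual scatter."""
            bias = np.asarray(keff_calc) - keff_expected
            coeffs = np.polyfit(param, bias, 1)               # slope, intercept
            resid = bias - np.polyval(coeffs, param)
            return coeffs, resid.std(ddof=2)

        rng = np.random.default_rng(2)
        burnup = np.linspace(0.0, 40.0, 25)                   # e.g. GWd/tonne U
        keff = 1.0 - 0.0002 * burnup + rng.normal(0, 0.001, burnup.size)
        coeffs, sigma = keff_bias_trend(burnup, keff)
        print(coeffs, sigma)   # bias trend vs. burnup and pooled uncertainty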

  1. Validation of the Social Appearance Anxiety Scale: Factor, Convergent, and Divergent Validity

    Science.gov (United States)

    Levinson, Cheri A.; Rodebaugh, Thomas L.

    2011-01-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor,…

  2. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    Science.gov (United States)

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, near or at chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator of malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why performance validity testing (PVT) may be a better term than SVT are reviewed. Advances in neuroimaging techniques may be key to better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigidity of interpretation of established cut-scores. A better understanding is needed of how certain types of neurological, neuropsychiatric and/or test conditions may affect SVT performance.

  3. Validating Future Force Performance Measures (Army Class): End of Training Longitudinal Validation

    Science.gov (United States)

    2009-09-01

    …promise for enhancing the classification of entry-level Soldiers (Ingerick, Diaz, & Putka, 2009). In Year 2 (2007), the emphasis of the Army… Contributors listed include Caramagno, John Fisher, Patricia Keenan, Julisara Mathew, Alicia Sawyer, Jim Takitch, Shonna Waters, and Elise Weaver (Drasgow Consulting Group). Cited: Ingerick, M., Diaz, T., & Putka, D. (2009). Investigations into Army enlisted classification systems: Concurrent validation report.

  4. Design and validation of a comprehensive fecal incontinence questionnaire.

    Science.gov (United States)

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and the dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability were assessed. Construct validity comprised factor analysis and internal consistency of the quality-of-life scale. Known-groups validity was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality-of-life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently (P …) in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.

  5. Validation of measured friction by process tests

    DEFF Research Database (Denmark)

    Eriksen, Morten; Henningsen, Poul; Tan, Xincai

    The objective of sub-task 3.3 is to evaluate, under actual process conditions, the friction formulations determined by simulative testing. For task 3.3 the following tests have been used according to the original project plan: 1. the standard ring test and 2. the double cup extrusion test. The task has, however, been extended to include a number of newly developed process tests: 3. a forward rod extrusion test, 4. a special ring test at low normal pressure, and 5. a spike test (especially developed for warm and hot forging). Validation of the friction values measured in cold forming in sub-task 3.1 has been made with forward rod extrusion, and very good agreement was obtained between the friction values measured in simulative testing and in process testing.

  6. RELAP-7 Software Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Choi, Yong-Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support

    2014-09-25

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  7. Presentation of valid correlations in some morphological

    Directory of Open Access Journals (Sweden)

    Florian Miftari

    2018-05-01

    Full Text Available This study deals with young students of both sexes, aged 13-14, who, besides attending physical education and sports classes, also train in basketball schools in the city of Pristina. The experiment comprises a total of 7 morphological variables, four tests of basic motor skills and seven variables of specific motor skills. The study verifies and analyses the correlations between morphological characteristics and basic and situational motor skills in groups of both sexes (boys and girls). Based on the results obtained, valid correlations with high coefficients are presented between several variables, whereas other variables show correlations with moderate values. The experiment includes 80 subjects of both sexes: a group of 40 boys and a group of 40 girls who underwent the tests for this study.

  8. Modelling and validation of electromechanical shock absorbers

    Science.gov (United States)

    Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico

    2013-08-01

    Electromechanical vehicle suspension systems represent a promising substitute to conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges meet in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.
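
    The dominant effect of such a device can be captured by an equivalent viscous damping coefficient: the transmission converts stroke velocity into motor speed, and the back-EMF drives current through the circuit resistance, producing a retarding force. A lumped-parameter sketch with entirely hypothetical values:

        # Lumped model of an electromechanical damper: a motor behind a
        # transmission, shunted by a load resistor. All values are hypothetical.
        KT = 0.12       # motor torque constant, N*m/A (= back-EMF constant, V*s/rad)
        R_MOTOR = 0.8   # winding resistance, ohm
        R_LOAD = 1.2    # external load resistance, ohm
        N = 150.0       # transmission ratio, rad of motor per metre of stroke

        def damping_force(stroke_velocity):
            """Retarding force (N) for a given suspension velocity (m/s)."""
            c_eq = N**2 * KT**2 / (R_MOTOR + R_LOAD)   # equivalent damping, N*s/m
            return -c_eq * stroke_velocity

        print(damping_force(0.5))   # force at 0.5 m/s compression

    Tuning R_LOAD changes c_eq, which is one way such a damper can realize adjustable damping without any hydraulic valving.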

  9. MAAP4 model and validation status

    International Nuclear Information System (INIS)

    Plys, M.G.; Paik, C.Y.; Henry, R.E.; Wu, Chunder; Suh, K.Y.; Sung Jin Lee; McCartney, M.A.; Wang, Zhe

    1993-01-01

    The MAAP 4 code for integrated severe accident analysis is intended to be used for Level 1 and Level 2 probabilistic safety assessment and severe accident management evaluations for current and advanced light water reactors. MAAP 4 can be used to determine which accidents lead to fuel damage and which are successfully terminated before or after fuel damage (a Level 1 application). It can also be used to determine which sequences result in fission product release to the environment and to provide the time history of such releases (a Level 2 application). The MAAP 4 thermal-hydraulic and fission product models and their validation are discussed here. This code is the newest version of MAAP offered by the Electric Power Research Institute (EPRI), and it contains substantial mechanistic improvements over its predecessor, MAAP 3.0B

  10. Natural analogues and radionuclide transport model validation

    International Nuclear Information System (INIS)

    Lever, D.A.

    1987-08-01

    In this paper, some possible roles for natural analogues are discussed from the point of view of those involved with the development of mathematical models for radionuclide transport and with the use of these models in repository safety assessments. The characteristic features of a safety assessment are outlined in order to address the questions of where natural analogues can be used to improve our understanding of the processes involved and where they can assist in validating the models that are used. Natural analogues have the potential to provide useful information about some critical processes, especially long-term chemical processes and migration rates. There is likely to be considerable uncertainty and ambiguity associated with the interpretation of natural analogues, and thus it is their general features which should be emphasized, and models with appropriate levels of sophistication should be used. Experience gained in modelling the Koongarra uranium deposit in northern Australia is drawn upon. (author)

  11. Towards Seamless Validation of Land Cover Data

    Science.gov (United States)

    Chuprikova, Ekaterina; Liebel, Lukas; Meng, Liqiu

    2018-05-01

    This article demonstrates the ability of the Bayesian Network analysis for the recognition of uncertainty patterns associated with the fusion of various land cover data sets including GlobeLand30, CORINE (CLC2006, Germany) and land cover data derived from Volunteered Geographic Information (VGI) such as Open Street Map (OSM). The results of recognition are expressed as probability and uncertainty maps which can be regarded as a by-product of the GlobeLand30 data. The uncertainty information may guide the quality improvement of GlobeLand30 by involving the ground truth data, information with superior quality, the know-how of experts and the crowd intelligence. Such an endeavor aims to pave a way towards a seamless validation of global land cover data on the one hand and a targeted knowledge discovery in areas with higher uncertainty values on the other hand.

  12. Developing and validating rapid assessment instruments

    CERN Document Server

    Abell, Neil; Kamata, Akihito

    2009-01-01

    This book provides an overview of scale and test development. From conceptualization through design, data collection, analysis, and interpretation, critical concerns are identified and grounded in the increasingly sophisticated psychometric literature. Measurement within the health, social, and behavioral sciences is addressed, and technical and practical guidance is provided. Acknowledging the increasingly sophisticated contributions in social work, psychology, education, nursing, and medicine, the book balances condensation of complex conceptual challenges with focused recommendations for conceiving, planning, and implementing psychometric study. Primary points are carefully referenced and consistently illustrated to illuminate complicated or abstract principles. Basics of construct conceptualization and establishing evidence of validity are complemented with introductions to concept mapping and cross-cultural translation. In-depth discussion of cutting-edge topics like bias and invariance in item responses...

  13. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg{sup -1} TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  14. Validation of the danish national diabetes register

    DEFF Research Database (Denmark)

    Green, Anders; Sortsø, Camilla; Jensen, Peter Bjødstrup

    2015-01-01

    The Danish National Diabetes Register (NDR) was established in 2006 and builds on data from Danish health registers. We validated the content of the NDR using full information from the Danish National Patient Register and data from the literature. Our study indicates that the completeness of the NDR is ≥95% concerning ascertainment from data sources specific for diabetes, i.e., prescriptions of antidiabetic drugs and diagnoses of diabetes in the National Patient Register. The NDR algorithm ignores diabetes-related hospital contacts terminated before 1990, so the establishment of the date … of encounter has been taken as the date of inclusion in the NDR. We also find that some 20% of the registrations in the NDR may represent false-positive inclusions of persons with frequent measurements of blood glucose who do not have diabetes. We conclude that the NDR is a novel initiative to support research…

  15. Validation of human factor engineering integrated system

    International Nuclear Information System (INIS)

    Fang Zhou

    2013-01-01

    Apart from hundreds of thousands of human-machine interface resources, the control room of a nuclear power plant is a complex system integrating many factors such as procedures, operators, environment, organization and management. In the design stage, these factors are considered separately by different organizations. Whether these factors cooperate well in operation, and whether their human factors engineering (HFE) design is good enough to avoid human error, should be answered in the validation of the HFE integrated system before delivery of the plant. This paper addresses the research and implementation of integrated system validation (ISV) technology based on a case study. After an introduction to the background, process and methodology of ISV, the results of the test are discussed. Finally, lessons learned from this research are summarized. (authors)

  16. Active Sensor Configuration Validation for Refrigeration Systems

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Blanke, Mogens; Niemann, Hans Henrik

    2010-01-01

    Major faults in the commissioning phase of refrigeration systems are caused by defects related to sensors. With a number of similar sensors available that do not differ by type but only by spatial location in the plant, interchange of sensors is a common defect. With sensors being used quite differently by the control system, fault-finding is difficult in practice, and defects regularly cause commissioning delays at considerable expense. Validation and handling of faults in the sensor configuration are therefore essential to cut costs during commissioning. With passive fault-diagnosis methods falling short on this problem, this paper suggests an active diagnosis procedure to isolate sensor faults at the commissioning stage, before normal operation has started. Using statistical methods, residuals are evaluated against multiple hypothesis models in a minimization process to uniquely…
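
    Isolating interchanged sensors can be cast as choosing, among hypothesis models (one per candidate sensor permutation), the one minimizing a residual norm. A toy sketch of that selection step, with hypothetical steady-state predictions:

        import numpy as np
        from itertools import permutations

        def isolate_sensor_swap(measured, predicted):
            """Return the sensor permutation whose hypothesis model minimizes the
            residual sum of squares; the identity permutation means no swap."""
            n = len(measured)
            return min(permutations(range(n)),
                       key=lambda p: np.sum((measured[list(p)] - predicted) ** 2))

        predicted = np.array([4.0, 12.0, 25.0, 60.0])    # model: four temperatures
        measured = np.array([12.1, 3.9, 25.2, 59.8])     # first two sensors swapped
        print(isolate_sensor_swap(measured, predicted))  # -> (1, 0, 2, 3)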

  17. Installation and validation of MCNP-4A

    International Nuclear Information System (INIS)

    Marks, N.A.

    1997-01-01

    MCNP-4A is a multi-purpose Monte Carlo program suitable for the modelling of neutron, photon, and electron transport problems. It is a particularly useful technique when studying systems containing irregular shapes. MCNP has been developed over the last 25 years by Los Alamos, and is distributed internationally via RSIC at Oak Ridge. This document describes the installation of MCNP-4A (henceforth referred to as MCNP) on the Silicon Graphics workstation (bluey.ansto.gov.au). A limited number of benchmarks pertaining to fast and thermal systems were performed to check the installation and validate the code. The results are compared to deterministic calculations performed using the AUS neutronics code system developed at ANSTO. (author)

  18. Validating the Rett Syndrome Gross Motor Scale

    DEFF Research Database (Denmark)

    Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley

    2016-01-01

    Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component, and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities, supplemented with parent-report data, was collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age… (0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice…

  19. Discriminant validity study of Achilles enthesis ultrasound.

    Science.gov (United States)

    Expósito Molinero, María Rosa; de Miguel Mendieta, Eugenio

    2016-01-01

    We wanted to know whether the ultrasound examination of the Achilles tendon in spondyloarthritis differs from that in other rheumatic diseases. We studied 97 patients divided into five groups: rheumatoid arthritis, spondyloarthritis, gout, chondrocalcinosis and osteoarthritis, exploring six elementary lesions in the 194 Achilles entheses examined. In our study the total Achilles ultrasonographic index was higher in spondyloarthritis, with significant differences. Calcification was the elementary lesion that discriminated worst between spondyloarthritis and the other pathologies. This study aims to demonstrate the discriminant validity of Achilles enthesitis observed by ultrasound in spondyloarthritis compared with other rheumatic diseases that may also show ultrasound abnormalities at the enthesis level. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  20. IV&V Project Assessment Process Validation

    Science.gov (United States)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and correlates the IV&V findings, for IV&V process validation, to and from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  1. Oral history: Validating contributions of elders.

    Science.gov (United States)

    Taft, Lois B; Stolder, Mary Ellen; Knutson, Alice Briolat; Tamke, Karolyn; Platt, Jennifer; Bowlds, Tara

    2004-01-01

    Recording memories of World War II is an intervention that can humanize geriatric care in addition to the historical significance provided. Participants in this oral history project described memories of World War II and expressed themes of patriotism, loss, tense moments, makeshift living, self-sufficiency, and uncertain journey. Their ethnic roots were primarily Scandinavian, Dutch, German, and English. The nursing home participants were slightly older than the community participants (mean ages: 85.5 and 82.4 years, respectively). More women (58%) than men (42%) participated, and 35% of the participants were veterans (eight men, one woman). Nursing home and community residents participated in this project, and reciprocal benefits were experienced by participants and listeners alike. Memories of World War II provide a meaningful topic for oral histories. Listening to and valuing oral history supports, involves, and validates elders. Oral history has reciprocal benefits that can create a culture to enhance a therapeutic environment.

  2. Validations and applications of the FEAST code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z.; Tayal, M.; Lau, J.H.; Evinou, D. [Atomic Energy of Canada Limited, Mississauga, Ontario (Canada); Jun, J.S. [Korea Atomic Energy Research Inst. (Korea, Republic of)

    1999-07-01

    The FEAST (Finite Element Analysis for STresses) code is part of a suite of computer codes that are used to assess the structural integrity of CANDU fuel elements and bundles. A detailed validation of the FEAST code was recently performed. The FEAST calculations are in good agreement with a variety of analytical solutions (18 cases) for stresses, strains and displacements. This consistency shows that the FEAST code correctly incorporates the fundamentals of stress analysis. Further, the calculations of the FEAST code match the variations in axial and hoop strain profiles, measured by strain gauges near the sheath-endcap weld during an out-reactor compression test. The code calculations are also consistent with photoelastic measurements in simulated endcaps. (author)

  3. Validations and applications of the FEAST code

    International Nuclear Information System (INIS)

    Xu, Z.; Tayal, M.; Lau, J.H.; Evinou, D.; Jun, J.S.

    1999-01-01

    The FEAST (Finite Element Analysis for STresses) code is part of a suite of computer codes that are used to assess the structural integrity of CANDU fuel elements and bundles. A detailed validation of the FEAST code was recently performed. The FEAST calculations are in good agreement with a variety of analytical solutions (18 cases) for stresses, strains and displacements. This consistency shows that the FEAST code correctly incorporates the fundamentals of stress analysis. Further, the calculations of the FEAST code match the variations in axial and hoop strain profiles, measured by strain gauges near the sheath-endcap weld during an out-reactor compression test. The code calculations are also consistent with photoelastic measurements in simulated endcaps. (author)

  4. Discriminant validity of well-being measures.

    Science.gov (United States)

    Lucas, R E; Diener, E; Suh, E

    1996-09-01

    The convergent and discriminant validities of well-being concepts were examined using multitrait-multimethod matrix analyses (D. T. Campbell & D. W. Fiske, 1959) on 3 sets of data. In Study 1, participants completed measures of life satisfaction, positive affect, negative affect, self-esteem, and optimism on 2 occasions 4 weeks apart and also obtained 3 informant ratings. In Study 2, participants completed each of the 5 measures on 2 occasions 2 years apart and collected informant reports at Time 2. In Study 3, participants completed 2 different scales for each of the 5 constructs. Analyses showed that (a) life satisfaction is discriminable from positive and negative affect, (b) positive affect is discriminable from negative affect, (c) life satisfaction is discriminable from optimism and self-esteem, and (d) optimism is separable from trait measures of negative affect.
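
    In a multitrait-multimethod matrix, discriminant validity shows up as same-trait/different-method correlations exceeding different-trait correlations. A small sketch of extracting those two blocks from simulated self-report and informant scores (the constructs and effect sizes are invented):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n = 300
        traits = {"ls": rng.normal(size=n),    # life satisfaction (hypothetical)
                  "pa": rng.normal(size=n)}    # positive affect (hypothetical)
        data = {}
        for t, score in traits.items():        # two "methods" per trait
            data[f"{t}_self"] = score + rng.normal(0, 0.5, n)
            data[f"{t}_informant"] = score + rng.normal(0, 0.8, n)
        corr = pd.DataFrame(data).corr()

        # Convergent block: same trait, different method (monotrait-heteromethod).
        convergent = [corr.loc[f"{t}_self", f"{t}_informant"] for t in traits]
        # Discriminant block: different traits, same or different method.
        discriminant = [corr.loc["ls_self", "pa_self"],
                        corr.loc["ls_self", "pa_informant"],
                        corr.loc["ls_informant", "pa_self"]]
        print(np.mean(convergent), np.mean(discriminant))  # convergent should dominate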

  5. The validity of the 4-Skills Scan: A double validation study.

    Science.gov (United States)

    van Kernebeek, W G; de Kroon, M L A; Savelsbergh, G J P; Toussaint, H M

    2018-06-01

    Adequate gross motor skills are an essential aspect of a child's healthy development. Where physical education (PE) is part of the primary school curriculum, a strong curriculum-based emphasis on evaluation and support of motor skill development in PE is apparent. Monitoring motor development is then a task for the PE teacher, and in order to fulfil this task, teachers need adequate tools. The 4-Skills Scan is a quick and easily manageable gross motor skill instrument; however, its validity has never been assessed. The purpose of this study is therefore to assess the construct and concurrent validity of both 4-Skills Scans (the 2007 and 2015 versions). A total of 212 primary school children (6-12 years old) were requested to participate in both versions of the 4-Skills Scan. For assessing construct validity, children covered an obstacle course, with video recordings for observation by an expert panel. For concurrent validity, a comparison was made with the MABC-2 by calculating Pearson correlations. Multivariable linear regression analyses were performed to determine the contribution of each subscale to the construct of gross motor skills, according to the MABC-2 and the expert panel. Correlations between the 4-Skills Scans and expert valuations were moderate, with coefficients of .47 (version 2007) and .46 (version 2015). Correlations between the 4-Skills Scans and the MABC-2 (gross) were moderate (.56) for version 2007 and high (.64) for version 2015. It is concluded that both versions of the 4-Skills Scan are satisfactorily valid instruments for assessing gross motor skills during PE lessons. This article is protected by copyright. All rights reserved.

  6. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment.
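
    The reliability statistics reported above have compact formulas: a two-way random-effects intraclass correlation computed from ANOVA mean squares, and SEM = SD * sqrt(1 - ICC). A minimal numpy sketch for two measurement occasions, with simulated COP path lengths (the data and effect sizes are invented):

        import numpy as np

        def icc_2_1(scores):
            """ICC(2,1), two-way random effects, absolute agreement.
            `scores` has shape (subjects, occasions)."""
            n, k = scores.shape
            grand = scores.mean()
            ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            sse = ((scores - scores.mean(axis=1, keepdims=True)
                    - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_err = sse / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        rng = np.random.default_rng(0)
        true_path = rng.normal(50, 10, 20)    # 20 participants' "true" COP path (cm)
        trials = np.column_stack([true_path + rng.normal(0, 3, 20) for _ in range(2)])
        icc = icc_2_1(trials)
        sem = trials.std(ddof=1) * np.sqrt(1 - icc)   # standard error of measurement
        print(round(float(icc), 2), round(float(sem), 2))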

  7. Irradiated food: validity of extrapolating wholesomeness data

    International Nuclear Information System (INIS)

    Taub, I.A.; Angelini, P.; Merritt, C. Jr.

    1976-01-01

    Criteria are considered for validly extrapolating conclusions reached on the wholesomeness of an irradiated food receiving high doses to the same food receiving a lower dose. Consideration is first given to the possible chemical mechanisms that could give rise to different functional dependences of radiolytic products on dose. It is shown that such products should increase linearly with dose and that the ratio of products should be constant throughout the dose range considered. The assumption, generally accepted in pharmacology, is then made that if any adverse effects related to the food are discerned in the test animals, the intensity of these effects would increase with the concentration of radiolytic products in the food. Lastly, the need to compare data from animal studies with foods irradiated to several doses against chemical evidence obtained over a comparable dose range is considered. It is concluded that if the products depend linearly on dose and if feeding studies indicate no adverse effects, then an extrapolation to lower doses is clearly valid. This approach is illustrated for irradiated codfish. The formation of selected volatile products in samples receiving between 0.1 and 3 Mrads was examined, and their concentrations were found to increase linearly at least up to 1 Mrad. These data were compared with results from animal feeding studies establishing the wholesomeness of codfish and haddock irradiated to 0.2, 0.6 and 2.8 Mrads. It is stated, therefore, that if ocean fish, currently under consideration for onboard processing, were irradiated to 0.1 Mrad, it would be correspondingly wholesome.
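
    The extrapolation criterion above (linear growth of radiolytic products with dose, hence dose-independent product ratios) amounts to a simple numerical check, sketched below with invented dose and concentration values rather than the paper's codfish data.

        import numpy as np

        dose = np.array([0.1, 0.3, 0.6, 1.0])         # Mrad
        prod_a = np.array([0.9, 2.8, 6.1, 10.2])      # hypothetical volatile product A
        prod_b = np.array([0.45, 1.35, 3.0, 5.1])     # hypothetical volatile product B

        # Linear fit through the origin: concentration = G * dose.
        g_a = (dose @ prod_a) / (dose @ dose)
        g_b = (dose @ prod_b) / (dose @ dose)
        print(f"slopes: A = {g_a:.2f}, B = {g_b:.2f} per Mrad")

        # If both products depend linearly on dose, their ratio is dose-independent,
        # which is the condition that justifies extrapolating to lower doses.
        print("A/B ratio at each dose:", np.round(prod_a / prod_b, 2))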

  8. ATHLET validation using accident management experiments

    Energy Technology Data Exchange (ETDEWEB)

    Teschendorff, V.; Glaeser, H.; Steinhoff, F. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)

    1995-09-01

    The computer code ATHLET is being developed as an advanced best-estimate code for the simulation of leaks and transients in PWRs and BWRs including beyond design basis accidents. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialisation by a steady-state calculation, full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The systematic validation of ATHLET is based on a well balanced set of integral and separate effect tests derived from the CSNI proposal emphasising, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities. PKL-III test B 2.1 simulates a cool-down procedure during an emergency power case with three steam generators isolated. Natural circulation under these conditions was investigated in detail in a pressure range of 4 to 2 MPa. The transient was calculated over 22000 s with complicated boundary conditions including manual control actions. The calculations demonstrated the capability to model the following processes successfully: (1) variation of the natural circulation caused by steam generator isolation, (2) vapour formation in the U-tubes of the isolated steam generators, (3) break-down of circulation in the loop containing the isolated steam generator following controlled cool-down of the secondary side, (4) accumulation of vapour in the pressure vessel dome. One conclusion with respect to the suitability of experiments simulating AM procedures for code validation purposes is that complete documentation of control actions during the experiment must be available. Special attention should be given to the documentation of operator actions in the course of the experiment.

  9. Verification and Validation Strategy for LWRS Tools

    Energy Technology Data Exchange (ETDEWEB)

    Carl M. Stoots; Richard R. Schultz; Hans D. Gougar; Thomas K Larson; Michael Corradini; Laura Swiler; David Pointer; Jess Gehin

    2012-09-01

    One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203 the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high fidelity models / codes, and scaling of aging effects.

  10. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
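
    The deviation/error estimation described above can be reduced to a few simple metrics comparing a simulated epidemic curve with an observed one, as in this sketch; the weekly case counts are invented for illustration and do not come from the study.

        import numpy as np

        observed = np.array([5, 12, 30, 80, 140, 110, 60, 25, 10])    # cases per week
        simulated = np.array([4, 15, 40, 95, 150, 100, 50, 20, 8])

        peak_err_weeks = int(np.argmax(simulated)) - int(np.argmax(observed))
        intensity_err = simulated.max() - observed.max()
        rmse = np.sqrt(np.mean((simulated - observed) ** 2))

        print(f"peak-week error: {peak_err_weeks} weeks")
        print(f"peak-intensity error: {intensity_err} cases")
        print(f"RMSE over the season: {rmse:.1f} cases/week")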

  11. Trait sexual motivation questionnaire: concept and validation.

    Science.gov (United States)

    Stark, Rudolf; Kagerer, Sabine; Walter, Bertram; Vaitl, Dieter; Klucken, Tim; Wehrum-Osinsky, Sina

    2015-04-01

    Trait sexual motivation defines a psychological construct that reflects the long-lasting degree of motivation for sexual activities, which is assumed to be the result of biological and sociocultural influences. With this definition, it shares commonalities with other sexuality-related constructs like sexual desire, sexual drive, sexual needs, and sexual compulsivity. The Trait Sexual Motivation Questionnaire (TSMQ) was developed in order to measure trait sexual motivation with its different facets. Several steps were conducted: First, items were composed assessing sexual desire, the effort made to gain sex, as well as specific sexual behaviors. Factor analysis of the data of a first sample (n = 256) was conducted. Second, the factor solution was verified by a confirmatory factor analysis in a second sample (n = 498) and construct validity was demonstrated. Third, the temporal stability of the TSMQ was tested in a third study (n = 59). Questionnaire data. The exploratory and confirmatory factor analyses revealed that trait sexual motivation is best characterized by four subscales: Solitary Sexuality, Importance of Sex, Seeking Sexual Encounters, and Comparison with Others. It could be shown that the test quality of the questionnaire is high. Most importantly for the trait concept, the retest reliability after 1 year was r = 0.87. Our results indicate that the TSMQ is indeed a suitable tool for measuring long-lasting sexual motivation with high test quality and high construct validity. A future differentiation between trait and state sexual motivation might be helpful for clinical as well as forensic research. © 2015 International Society for Sexual Medicine.

  12. Validation of dose calculation programmes for recycling

    International Nuclear Information System (INIS)

    Menon, Shankar; Brun-Yaba, Christine; Yu, Charley; Cheng, Jing-Jy; Williams, Alexander

    2002-12-01

    This report contains the results from an international project initiated by the SSI in 1999. The primary purpose of the project was to validate some of the computer codes that are used to estimate radiation doses due to the recycling of scrap metal. The secondary purpose of the validation project was to give a quantification of the level of conservatism in clearance levels based on these codes. Specifically, the computer codes RESRAD-RECYCLE and CERISE were used to calculate radiation doses to individuals during the processing of slightly contaminated material, mainly in Studsvik, Sweden. Calculated external doses were compared with measured data from different steps of the process. The comparison of calculations and measurements shows that the computer code calculations resulted in both overestimations and underestimations of the external doses for different recycling activities. The SSI draws the conclusion that the accuracy is within one order of magnitude when experienced modellers use their programmes to calculate external radiation doses for a recycling process involving material that is mainly contaminated with cobalt-60. No errors in the codes themselves were found. Instead, the inaccuracy seems to depend mainly on the choice of some modelling parameters related to the receptor (e.g., distance, time, etc.) and simplifications made to facilitate modelling with the codes (e.g., object geometry). Clearance levels are often based on studies on enveloping scenarios that are designed to cover all realistic exposure pathways. It is obvious that for most practical cases, this gives a margin to the individual dose constraint (in the order of 10 micro sievert per year within the EC). This may be accentuated by the use of conservative assumptions when modelling the enveloping scenarios. Since there can obviously be a fairly large inaccuracy in the calculations, it seems reasonable to consider some degree of conservatism when establishing clearance levels based on

  13. Validation of dose calculation programmes for recycling

    Energy Technology Data Exchange (ETDEWEB)

    Menon, Shankar [Menon Consulting, Nykoeping (Sweden); Brun-Yaba, Christine [Inst. de Radioprotection et Securite Nucleaire (France); Yu, Charley; Cheng, Jing-Jy [Argonne National Laboratory, IL (United States). Environmental Assessment Div.; Bjerler, Jan [Studsvik Stensand, Nykoeping (Sweden); Williams, Alexander [Dept. of Energy (United States). Office of Environmental Management

    2002-12-01

    This report contains the results from an international project initiated by the SSI in 1999. The primary purpose of the project was to validate some of the computer codes that are used to estimate radiation doses due to the recycling of scrap metal. The secondary purpose of the validation project was to give a quantification of the level of conservatism in clearance levels based on these codes. Specifically, the computer codes RESRAD-RECYCLE and CERISE were used to calculate radiation doses to individuals during the processing of slightly contaminated material, mainly in Studsvik, Sweden. Calculated external doses were compared with measured data from different steps of the process. The comparison of calculations and measurements shows that the computer code calculations resulted in both overestimations and underestimations of the external doses for different recycling activities. The SSI draws the conclusion that the accuracy is within one order of magnitude when experienced modellers use their programmes to calculate external radiation doses for a recycling process involving material that is mainly contaminated with cobalt-60. No errors in the codes themselves were found. Instead, the inaccuracy seems to depend mainly on the choice of some modelling parameters related to the receptor (e.g., distance, time, etc.) and simplifications made to facilitate modelling with the codes (e.g., object geometry). Clearance levels are often based on studies on enveloping scenarios that are designed to cover all realistic exposure pathways. It is obvious that for most practical cases, this gives a margin to the individual dose constraint (in the order of 10 micro sievert per year within the EC). This may be accentuated by the use of conservative assumptions when modelling the enveloping scenarios. Since there can obviously be a fairly large inaccuracy in the calculations, it seems reasonable to consider some degree of conservatism when establishing clearance levels based on

  14. SDQ: Discriminative validity and diagnostic potential

    Directory of Open Access Journals (Sweden)

    Thaysa Brinck Fernandes Silva

    2015-06-01

    Full Text Available The Strengths and Difficulties Questionnaire (SDQ) was designed to screen for behavioral problems in youths based on cutoff points that favor the instrument’s diagnostic sensitivity. The present study aimed to analyze the discriminative validity of the SDQ to identify behavioral difficulties and prosocial resources in school-age children compared with the diagnostic data collected by the corresponding sections of the Development and Well-being Assessment (DAWBA). In addition, new cutoff points that value specificity were defined for the SDQ scales, exploring its diagnostic potential. This study was conducted in Brazil and assessed a community convenience sample that consisted of 120 children aged 6 to 12 years who were not under psychological/psychiatric treatment. The mothers of the participants also completed a sociodemographic questionnaire. Descriptive statistics were used to clinically characterize the sample. A ROC curve was used to assess the discriminant validity of the SDQ, and new cutoff points were established to maximize the instrument’s specificity. The new cutoff points enabled a significant increase in specificity without a significant loss of sensitivity, which favors approaches based on measures of screening and diagnosis yet does not damage the instrument’s screening capacity. The following increases were observed: 100% for the depressive disorder scale (cutoff point=7), 95.1% for the generalized anxiety disorder scale (cutoff point=7), 46.6% for the conduct disorder scale (cutoff point=6), 19.2% for the hyperactive disorder scale (cutoff point=8), and 27.6% for the antisocial personality disorder scale (cutoff point=6). A cutoff point of 8 was applied to the prosocial behavior scale, which exhibited a 62.1% increase in specificity. The use of more specific cutoff points generated more accurate results and favored SDQ's use, particularly in contexts of care that require more precise and faster procedures for identification of
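
    Choosing a more specific cutoff from a ROC curve, as described above, can be sketched as follows with simulated scores; neither the data nor the sensitivity constraint comes from the study itself.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(2)
        y_true = np.r_[np.zeros(100, dtype=int), np.ones(20, dtype=int)]   # 120 children
        scores = np.r_[rng.normal(4, 2, 100), rng.normal(8, 2, 20)]        # SDQ-like scale scores

        fpr, tpr, thresholds = roc_curve(y_true, scores)
        specificity = 1 - fpr

        # Among cutoffs keeping sensitivity at or above 0.75, take the most specific one.
        ok = tpr >= 0.75
        best = np.argmax(specificity * ok)
        print(f"cutoff = {thresholds[best]:.1f}, "
              f"sensitivity = {tpr[best]:.2f}, specificity = {specificity[best]:.2f}")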

  15. Broadband IR Measurements for Modis Validation

    Science.gov (United States)

    Jessup, Andrew T.

    2003-01-01

    The primary objective of this research was the development and deployment of autonomous shipboard systems for infrared measurement of ocean surface skin temperature (SST). The focus was on demonstrating long-term, all-weather capability and supplying calibrated skin SST to the MODIS Ocean Science Team (MOCEAN). A secondary objective was to investigate and account for environmental factors that affect in situ measurements of SST for validation of satellite products. We developed and extensively deployed the Calibrated, InfraRed, In situ Measurement System, or CIRIMS, for at-sea validation of satellite-derived SST. The design goals included autonomous operation at sea for up to 6 months and an accuracy of +/- 0.1 C. We used commercially available infrared pyrometers and a precision blackbody housed in a temperature-controlled enclosure. The sensors are calibrated at regular intervals using a cylindro-cone target immersed in a temperature-controlled water bath, which allows the calibration points to follow the ocean surface temperature. An upward-looking pyrometer measures sky radiance in order to correct for the non-unity emissivity of water, which can introduce an error of up to 0.5 C. One of the most challenging aspects of the design was protection against the marine environment. A wide range of design strategies to provide accurate, all-weather measurements was investigated. The CIRIMS uses an infrared-transparent window to completely protect the sensor and calibration blackbody from the marine environment. In order to evaluate the performance of this approach, the design incorporates the ability to make measurements with and without the window in the optical path.
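
    The sky-radiance correction mentioned above follows from the fact that a downward-looking radiometer sees sea-surface emission mixed with reflected sky radiance. A minimal sketch of that correction is shown below; the emissivity and radiance values are illustrative assumptions, not CIRIMS calibration data.

        eps = 0.98                   # assumed sea-surface emissivity in the instrument band
        l_meas = 9.80                # measured radiance, arbitrary units
        l_sky = 3.10                 # sky radiance from the upward-looking pyrometer

        # Measured radiance = eps * emitted + (1 - eps) * reflected sky radiance.
        l_sea = (l_meas - (1 - eps) * l_sky) / eps
        print(f"corrected surface radiance: {l_sea:.3f} (vs {l_meas:.3f} uncorrected)")
        # l_sea is then converted to skin SST through the radiometer's calibration.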

  16. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  17. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy is illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k-eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be (1) repeatable, (2) demonstrated with defined confidence, and (3) valid over an identified range of neutronic conditions (the area of applicability). The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with U-236 as the principal second isotope). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
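
    Correlating calculated k-eff values with benchmark experiments, as in the broad validation approach above, typically reduces to a bias-and-uncertainty calculation; the sketch below uses invented benchmark results and a simple t-based confidence bound, not WSRC's actual statistical treatment.

        import numpy as np
        from scipy import stats

        k_calc = np.array([0.9981, 1.0012, 0.9965, 1.0030, 0.9994, 1.0008])
        k_exp = np.ones_like(k_calc)       # critical benchmarks: k-eff = 1 by design

        bias = k_calc - k_exp
        ci = stats.t.ppf(0.975, len(bias) - 1) * bias.std(ddof=1) / np.sqrt(len(bias))
        print(f"mean bias = {bias.mean():+.4f} +/- {ci:.4f} (95% CI)")
        # An upper subcritical limit would subtract this bias and an additional
        # safety margin from 1.0 over the demonstrated area of applicability.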

  18. Validation of psychoanalytic theories: towards a conceptualization of references.

    Science.gov (United States)

    Zachrisson, Anders; Zachrisson, Henrik Daae

    2005-10-01

    The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.

  19. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

    Full Text Available Both qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape the subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The validity and the norms of rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data are plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to salvage responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of the inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers’ judgments to the investigators themselves. There are different opinions on validity, with some suggesting that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  20. VERA-CS Verification & Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States)

    2017-02-01

    This report summarizes the current status of VERA-CS Verification and Validation for PWR Core Follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single physics codes before focusing on the V&V of the coupled physics solution. The report summarizes the V&V of each of the single physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR Core Follow (e.g. TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR Core Follow and the ongoing PWR Core Follow V&V activities for FY17. An automated procedure and output data format is proposed for standardizing the output for core follow calculations and automatically generating tables and figures for the VERA-CS Latex file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results that would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the Automation Scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes the Watts Bar cycle depletion cases should be performed with the new cross section library and be included in the first draft of the new VERA-CS manual for release at the end of PoR15. Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17

  1. Summary of KOMPSAT-5 Calibration and Validation

    Science.gov (United States)

    Yang, D.; Jeong, H.; Lee, S.; Kim, B.

    2013-12-01

    including pointing, relative and absolute calibration as well as geolocation accuracy determination. The absolute calibration will be accomplished by determining absolute radiometric accuracy using trihedral corner reflectors already deployed on calibration and validation sites located southeast of Ulaanbaatar, Mongolia. To establish a measure for assessing the final image products, geolocation accuracies of image products with different imaging modes will be determined by using deployed point targets and an available Digital Terrain Model (DTM), on different image processing levels. In summary, this paper will present calibration and validation activities performed during the LEOP and IOT of KOMPSAT-5. The methodology and procedure of calibration and validation will be explained as well as its results. Based on the results, the applications of SAR image products to geophysical processes will also be discussed.

  2. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and to show that the model is fit for its intended purpose. In other words: does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport is made to cover the associated uncertainties. Thus, flow and transport, including radioactive chain decay, are preferably calculated in the same model framework. A rather sophisticated concept is necessary to be able to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-based simplifications. For this reason, the far field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among the advantages are that it is a fast, simple and robust code, which enables handling of many realisations with wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual-porosity continuum approach, where it is assumed that the rock can be divided into two distinct domains with different types of porosity
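
    The averaged stream-tube transport idea above can be illustrated with a toy one-dimensional advection-dispersion solver with first-order radioactive decay; this single-porosity sketch uses invented parameters and omits the matrix diffusion and chain decay that FARF31's dual-porosity formulation includes.

        import numpy as np

        nx, L = 200, 100.0              # cells, stream-tube length (m)
        dx = L / nx
        v, D = 1.0, 0.5                 # water velocity (m/yr), dispersion (m2/yr)
        lam = np.log(2) / 30.0          # decay constant for a 30-yr half-life (1/yr)
        dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # explicit stability limit

        c = np.zeros(nx)
        for _ in range(int(120.0 / dt)):            # simulate 120 years
            adv = -v * (c - np.roll(c, 1)) / dx                          # upwind advection
            dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # dispersion
            c = c + dt * (adv + dif - lam * c)
            c[0] = 1.0                  # constant-concentration inlet
            c[-1] = c[-2]               # simple outflow boundary

        print(f"relative concentration at the tube outlet after 120 yr: {c[-1]:.2f}")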

  3. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  4. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  5. Test of Gross Motor Development : Expert Validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-12-01

    Full Text Available The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children’s level of motor development. The objective of this study was to translate and verify the clarity and pertinence of the TGMD-2 items by experts and the confirmatory factorial validity and the internal consistence by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children, from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92; Tucker and Lewis’s Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  6. Test of Gross Motor Development: expert validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-01-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children’s level of motor development. The objective of this study was to translate and verify the clarity and pertinence of the TGMD-2 items by experts and the confirmatory factorial validity and the internal consistence by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children, from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92; Tucker and Lewis’s Index of Fit = 0.83) and test-retest internal consistency (locomotion: r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  7. Validity of the Danish Prostate Symptom Score questionnaire in stroke

    DEFF Research Database (Denmark)

    Tibaek, S.; Dehlendorff, Christian

    2009-01-01

    Objective – To determine the content and face validity of the Danish Prostate Symptom Score (DAN-PSS-1) questionnaire in stroke patients. Materials and methods – Content validity was judged among an expert panel in neuro-urology. The judgement was measured by the content validity index (CVI). Face validity was indicated in a clinical sample of 482 stroke patients in a hospital-based, cross-sectional survey. Results – I-CVI was rated >0.78 (range 0.94–1.00) for 75% of symptom and bother items, corresponding to adequate content validity. The expert panel rated the entire DAN-PSS-1 questionnaire highly... The DAN-PSS-1 questionnaire appears to be content and face valid for measuring lower urinary tract symptoms after stroke.
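
    The item-level content validity index (I-CVI) reported above is simply the proportion of expert panellists rating an item relevant (conventionally 3 or 4 on a 4-point relevance scale), with 0.78 the usual adequacy threshold the abstract cites. A tiny sketch with invented ratings:

        ratings = [4, 4, 3, 4, 2, 4]     # one item, six hypothetical expert ratings
        i_cvi = sum(r >= 3 for r in ratings) / len(ratings)
        print(f"I-CVI = {i_cvi:.2f}  (>= 0.78 indicates adequate content validity)")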

  8. Experimental validation of wireless communication with chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian [Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xian University of Technology, Xian 710048 (China); Baptista, Murilo S.; Grebogi, Celso [Institute for Complex System and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen AB24 3UE (United Kingdom)

    2016-08-15

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.
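
    The matched-filter receiver principle invoked above can be sketched generically as below: correlate the noisy waveform with the time-reversed basis pulse and sample each symbol. The raised-cosine pulse here is an ordinary stand-in, not the paper's chaotic basis function.

        import numpy as np

        rng = np.random.default_rng(4)
        sps = 32                                                       # samples per symbol
        pulse = 0.5 * (1 - np.cos(2 * np.pi * np.arange(sps) / sps))   # template pulse

        bits = rng.integers(0, 2, 64)
        tx = np.concatenate([(2 * b - 1) * pulse for b in bits])       # antipodal signalling
        rx = tx + 0.8 * rng.normal(size=tx.size)                       # noisy channel

        # Matched filter: convolve with the time-reversed pulse, sample per symbol.
        mf = np.convolve(rx, pulse[::-1])
        decisions = (mf[sps - 1::sps][: bits.size] > 0).astype(int)
        print("bit errors:", int((decisions != bits).sum()))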

  9. ISS Logistics Hardware Disposition and Metrics Validation

    Science.gov (United States)

    Rogers, Toneka R.

    2010-01-01

    I was assigned to the Logistics Division of the International Space Station (ISS)/Spacecraft Processing Directorate. The Division consists of eight NASA engineers and specialists who oversee the logistics portion of the Checkout, Assembly, and Payload Processing Services (CAPPS) contract. Boeing, their sub-contractors and the Boeing Prime contract out of Johnson Space Center provide the Integrated Logistics Support for the ISS activities at Kennedy Space Center. Essentially, they ensure that spares are available to support flight hardware processing and the associated ground support equipment (GSE). Boeing maintains a Depot for electrical, mechanical and structural modifications and/or repair capability as required. My assigned task was to learn project management techniques utilized by NASA and its contractors to provide an efficient and effective logistics support infrastructure to the ISS program. Within the Space Station Processing Facility (SSPF) I was exposed to Logistics support components, such as the NASA Spacecraft Services Depot (NSSD) capabilities, Mission Processing tools, techniques and Warehouse support issues, required for integrating Space Station elements at the Kennedy Space Center. I also supported the identification of near-term ISS Hardware and Ground Support Equipment (GSE) candidates for excessing/disposition prior to October 2010, and the validation of several Logistics Metrics used by the contractor to measure logistics support effectiveness.

  10. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model

  11. Computerized Italian criticality guide, description and validation

    International Nuclear Information System (INIS)

    Carotenuto, M.; Landeyro, P.A.

    1988-10-01

    Our group is developing an 'expert system' for collecting engineering know-how on back-end nuclear plant design. An expert system is the most suitable software tool for our problem. During the analysis, the design process was divided into different branches, and the expert system associates a computerized design procedure with each branch. Each design procedure is composed of a set of design methods, together with their conditions of application and reliability limits. In the framework of this expert system, the nuclear criticality safety analysis procedure was developed in the form of a computerized criticality guide, attempting to reproduce the designer's normal 'reasoning' process. The criticality guide is composed of two parts: a computerized text, including theory, a description of accidents that occurred in the past and a description of the Italian design experience; and an interactive computer-aided calculation module, containing a graphical facility for critical parameter curves. This report presents the criticality guide (the computerized Italian Criticality Guide) and its validation tests. (author)

  12. Experimental validation of wireless communication with chaos

    International Nuclear Information System (INIS)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S.; Grebogi, Celso

    2016-01-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximise the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.

  13. Neutronics validation during conversion to LEU

    International Nuclear Information System (INIS)

    Hendriks, J. A.; Sciolla, C. M.; Van Der Marck, S. C.; Valko, J.

    2006-01-01

    From October 2005 to May 2006 the High Flux Reactor at Petten, the Netherlands, was progressively converted to low-enriched uranium. The core calculations were performed with two code systems, one being Rebus/MCNP, the other being Oscar-3. These systems were chosen because Rebus (for fuel burn-up) and MCNP (for flux, power, and activation reaction rates) have a long and good track record, whereas Oscar-3 is a newer code, with more user-friendly interfaces that facilitate day-to-day and cycle-to-cycle variable input generation. The following measurements have been used for validation of the neutronics calculations: control rod settings at beginning and end of cycle, reactivity of control rods, Cu-wire activation during low power runs of the reactor, activation monitor sets present during part of the full power cycle, and isotope production measurements. We report on a comparison of measurements and calculational results for the control rod settings, Cu-wire activation and monitor set data. The Cu-wire activation results are mostly within 10% of experimental values, the monitor set activation results are easily within 5%, based on absolute predictions from the calculations. (authors)

  14. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models.
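
    The distinction drawn above between a unit test (checking the code) and a model validation test (checking the science) can be illustrated with a toy example; the simplified membrane-potential model, parameter values, and tolerance below are invented for illustration and are not taken from OpenWorm.

        import math

        def resting_potential_mv(p_k, p_na, k_out=5.0, k_in=140.0, na_out=145.0, na_in=10.0):
            """Goldman-style resting potential in millivolts (simplified, ~37 C)."""
            rt_over_f = 26.7  # mV
            return rt_over_f * math.log(
                (p_k * k_out + p_na * na_out) / (p_k * k_in + p_na * na_in)
            )

        def test_unit_monotonic_in_sodium_permeability():
            # Unit test: a structural property of the code itself.
            assert resting_potential_mv(1.0, 0.05) < resting_potential_mv(1.0, 0.5)

        def test_validation_against_measured_value():
            # Model validation test: agreement with an (assumed) experimental value.
            measured_mv = -70.0
            assert abs(resting_potential_mv(1.0, 0.04) - measured_mv) < 5.0

        if __name__ == "__main__":   # also runnable without a test runner such as pytest
            test_unit_monotonic_in_sodium_permeability()
            test_validation_against_measured_value()
            print("both tests passed")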

  15. Nano-immunosafety: issues in assay validation

    International Nuclear Information System (INIS)

    Boraschi, Diana; Italiani, Paola; Oostingh, Gertie J; Duschl, Albert; Casals, Eudald; Puntes, Victor F; Nelissen, Inge

    2011-01-01

    Assessing the safety of engineered nanomaterials for human health must include a thorough evaluation of their effects on the immune system, which is responsible for defending the integrity of our body from damage and disease. An array of robust and representative assays should be set up and validated, which could be predictive of the effects of nanomaterials on immune responses. In a trans-European collaborative work, in vitro assays have been developed to this end. In vitro tests have been preferred for their suitability to standardisation and easier applicability. Adapting classical assays to testing the immunotoxicological effects of nanoparticulate materials has raised a series of issues that needed to be appropriately addressed in order to ensure reliability of results. Besides the exquisitely immunological problem of selecting representative endpoints predictive of the risk of developing disease, assay results turned out to be significantly biased by artefactual interference of the nanomaterials or contaminating agents with the assay protocol. Having addressed such problems, a series of robust and representative assays have been developed that describe the effects of engineered nanoparticles on professional and non-professional human defence cells. Two such assays are described here, one based on primary human monocytes and the other employing human lung epithelial cells transfected with a reporter gene.

  16. Space Suit Joint Torque Measurement Method Validation

    Science.gov (United States)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  17. Measurement of reactivity coefficients for code validation

    International Nuclear Information System (INIS)

    Nuding, Matthias; Loetsch, Thomas

    2005-01-01

    In the year 2003, measurements in the cold reactor state were performed at the NPP KKI 2 in order to validate the codes that are used for reactor core calculations and especially for the proof of the shutdown margin, which is produced by calculations only. For full power states code verification is quite easy because the calculations can be compared with different measured values, e.g. with the activation values determined by the aeroball system. For cold reactor states, however, the database is smaller, especially for reactor cores that are quite 'inhomogeneous' and have rather high contents of fissile Pu and U-235. At the same time the cold reactor state is important regarding the shutdown margin. For these reasons the measurements mentioned above have been performed in order to check the accuracy of the codes that have been used by the operator and by our organization for many years. Basically, boron concentrations and control rod worths for different configurations were measured. The results of the calculations show a very good agreement with the measured values. Therefore, it can be stated that the operator's as well as our code system is suitable for routine use, e.g. during licensing procedures (Authors)

  18. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data. One is based on the correlation coefficient and the other is the statistical lack-of-fit test. Both methods are used here to analyse the fit of the bilogarithmic model in order to predict corrosion for very-low-carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of average values, as is usual. Modelling is carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they were placed in the rack at the same time). Results for the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and finally it is possible to conclude, at least in the studied atmospheres, that the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs
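
    The bilogarithmic model C = A * t^n is fitted by ordinary linear regression after log-transforming both variables, as this sketch shows with fabricated exposure data; the paper's point is precisely that such a fit can fail a lack-of-fit test even when the correlation coefficient looks acceptable.

        import numpy as np
        from scipy import stats

        t = np.array([0.5, 1, 2, 3, 4])                # exposure time, years (invented)
        c = np.array([12.0, 19.5, 30.1, 37.8, 43.9])   # corrosion penetration, um (invented)

        slope, intercept, r, p, se = stats.linregress(np.log10(t), np.log10(c))
        A, n = 10 ** intercept, slope
        print(f"A = {A:.1f} um/yr^n, n = {n:.2f}, r = {r:.3f}")
        # With replicated observations at each exposure time, a lack-of-fit F-test
        # partitions the residual sum of squares into pure error and lack of fit;
        # a significant F statistic rejects the bilogarithmic form despite a high r.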

  19. Empirical validation of directed functional connectivity.

    Science.gov (United States)

    Mill, Ravi D; Bagic, Anto; Bostan, Andreea; Schneider, Walter; Cole, Michael W

    2017-02-01

    Mapping directions of influence in the human brain connectome represents the next phase in understanding its functional architecture. However, a host of methodological uncertainties have impeded the application of directed connectivity methods, which have primarily been validated via "ground truth" connectivity patterns embedded in simulated functional MRI (fMRI) and magneto-/electro-encephalography (MEG/EEG) datasets. Such simulations rely on many generative assumptions, and we hence utilized a different strategy involving empirical data in which a ground truth directed connectivity pattern could be anticipated with confidence. Specifically, we exploited the established "sensory reactivation" effect in episodic memory, in which retrieval of sensory information reactivates regions involved in perceiving that sensory modality. Subjects performed a paired associate task in separate fMRI and MEG sessions, in which a ground truth reversal in directed connectivity between auditory and visual sensory regions was instantiated across task conditions. This directed connectivity reversal was successfully recovered across different algorithms, including Granger causality and Bayes network (IMAGES) approaches, and across fMRI ("raw" and deconvolved) and source-modeled MEG. These results extend simulation studies of directed connectivity, and offer practical guidelines for the use of such methods in clarifying causal mechanisms of neural processing. Copyright © 2016 Elsevier Inc. All rights reserved.
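
    One of the directed-connectivity approaches named above, Granger causality, can be demonstrated in a few lines; the two region time series below are simulated so that x drives y with a one-sample lag, which is not data from the study.

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(3)
        n = 500
        x = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.5 * rng.normal()

        # Tests whether the second column Granger-causes the first, at lags 1..2.
        res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
        print("lag-1 F-test p-value:", res[1][0]["ssr_ftest"][1])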

  20. External validation of EPIWIN biodegradation models.

    Science.gov (United States)

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation, and an expert survey model for estimating primary and ultimate biodegradation. Experimental biodegradation data for 110 newly notified substances were compared with the estimates of the different models. The models were applied separately and in combination to determine which model(s) performed best. The results of this study were compared with the results of other validation studies and with other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and given the large number of not-readily biodegradable chemicals compared with readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-ready biodegradability. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) together with BIOWIN6.
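
    The combined rule lends itself to a one-line screen. A minimal sketch; the 2.75 pass level for BIOWIN3 comes from the record, while the 0.5 cut-off for the BIOWIN6 probability is the conventional threshold and is assumed here:

        def ready_biodegradable(biowin3: float, biowin6: float) -> bool:
            """Screening rule in the spirit of the study's best combination:
            readily biodegradable only if the ultimate-degradation survey
            score (BIOWIN3) reaches the 2.75 pass level AND the MITI
            non-linear probability (BIOWIN6) clears the assumed 0.5
            threshold."""
            return biowin3 >= 2.75 and biowin6 >= 0.5

        # Substances failing either test are flagged as potentially persistent.
        print(ready_biodegradable(2.9, 0.71))  # True
        print(ready_biodegradable(2.9, 0.32))  # False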

  1. Validation of pig operations through pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Tolmasquim, Sueli Tiomno [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil); Nieckele, Angela O. [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Mecanica

    2005-07-01

    In the oil industry, pigging operations in pipelines have been widely applied for different purposes: pipe cleaning, inspection, liquid removal and product separation, among others. An efficient and safe pigging operation requires that a number of operational parameters, such as maximum and minimum pressures in the pipeline and pig velocity, be well evaluated during the planning stage and maintained within stipulated limits while the operation is carried out. With the objective of providing an efficient tool to assist in the control and design of pig operations through pipelines, a numerical code was developed, based on a finite difference scheme, which allows the simulation of transient flow of two fluids (liquid-liquid, gas-gas or liquid-gas products) in the pipeline. Modules to automatically control process variables were included in order to employ different strategies for reaching an efficient operation. Different test cases were investigated to corroborate the robustness of the methodology. To validate the methodology, the results obtained with the code were compared with a real liquid displacement operation in a section of the OSPAR oil pipeline, belonging to PETROBRAS, with 30'' diameter and 60 km length, showing good agreement. (author)
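
    The building block behind such codes is a one-dimensional transient solver on a pipeline grid. A minimal method-of-characteristics sketch for a single fluid (water hammer after instantaneous valve closure); the record's code covers two fluids and pig dynamics, far beyond this, and every parameter value below is hypothetical:

        import numpy as np

        a, g, f = 1000.0, 9.81, 0.02     # wave speed (m/s), gravity, friction
        L, D, N = 60000.0, 0.76, 60      # length (m), diameter (m), intervals
        H0, V0 = 100.0, 1.0              # inlet head (m), initial velocity (m/s)
        dx = L / N
        dt = dx / a                      # Courant condition: dx = a * dt
        B = a / g
        R = f * dx / (2.0 * g * D)       # friction coefficient per reach

        x = np.arange(N + 1)
        H = H0 - x * R * V0 * abs(V0)    # steady state with friction gradient
        V = np.full(N + 1, V0)
        peak = H[-1]

        for _ in range(200):             # march in time
            Cp = H[:-2] + B * V[:-2] - R * V[:-2] * np.abs(V[:-2])  # C+ lines
            Cm = H[2:] - B * V[2:] + R * V[2:] * np.abs(V[2:])      # C- lines
            Hn, Vn = H.copy(), V.copy()
            Hn[1:-1] = 0.5 * (Cp + Cm)
            Vn[1:-1] = (Cp - Cm) / (2.0 * B)
            Hn[0] = H0                   # upstream reservoir fixes the head
            Vn[0] = V[1] + (H0 - H[1]) / B - R * V[1] * np.abs(V[1]) / B
            Vn[-1] = 0.0                 # downstream valve shut: no flow
            Hn[-1] = H[-2] + B * V[-2] - R * V[-2] * np.abs(V[-2])
            H, V = Hn, Vn
            peak = max(peak, Hn[-1])

        print(f"peak head at valve: {peak:.1f} m "
              f"(Joukowsky surge a*V0/g = {a * V0 / g:.1f} m)")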

  2. A validated physical model of greenhouse climate

    International Nuclear Information System (INIS)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the momentary environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the greenhouse and of the control system. The greenhouse model is based on the energy, water vapour and CO2 balances of the crop-greenhouse system. Since the emphasis is on the dynamic behaviour of the greenhouse for implementation in continuous optimization, the state variables temperature, water vapour pressure and carbon dioxide concentration in the relevant greenhouse parts (crop, air, soil and cover) are calculated from the balances over these parts. To do this properly, the physical exchange processes between the system parts have to be quantified first. The greenhouse model is therefore constructed from submodels describing these processes: (a) a radiation transmission model for the modification of the outside to the inside global radiation; (b) a ventilation model describing the ventilation exchange between greenhouse and outside air; (c) a description of the exchange of energy and mass between the crop and the greenhouse air; (d) a calculation of the thermal radiation exchange between the various greenhouse parts; (e) a quantification of the convective exchange processes between the greenhouse air and, respectively, the cover, the heating pipes and the soil surface, and between the cover and the outside air; and (f) a determination of the heat conduction in the soil. The various submodels are validated first and then the complete greenhouse model is verified.
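
    As a toy illustration of the balance approach, a lumped explicit energy balance for the greenhouse air alone can be written in a few lines. This is a sketch only: the validated model resolves crop, air, soil and cover states separately and includes all the submodels above, and every coefficient here is a hypothetical placeholder:

        import numpy as np

        def air_temperature(T_out, I_sun, dt=60.0, tau=0.7, U=25.0,
                            A_cover=1200.0, A_floor=1000.0, C=3.6e6, T0=15.0):
            """Explicit Euler integration of the lumped balance
            C * dT/dt = tau * I_sun * A_floor - U * A_cover * (T - T_out)."""
            T = np.empty(len(T_out))
            T[0] = T0
            for k in range(1, len(T_out)):
                gain = tau * I_sun[k - 1] * A_floor             # solar load (W)
                loss = U * A_cover * (T[k - 1] - T_out[k - 1])  # losses (W)
                T[k] = T[k - 1] + dt * (gain - loss) / C
            return T

        minutes = 24 * 60                # one day at one-minute resolution
        T_out = 10.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, minutes))
        I_sun = np.clip(800.0 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2,
                                                   minutes)), 0.0, None)
        print(f"max air temperature: {air_temperature(T_out, I_sun).max():.1f} C")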

  3. FDIR Strategy Validation with the B Method

    Science.gov (United States)

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible result given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance where possible. To achieve this goal, each satellite in the formation has an implemented FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy is experimenting with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, of the inter-satellite positioning and communication devices that will be used for the SIMBOL-X (two-satellite configuration) and PEGASE (three-satellite configuration) missions, and potentially for other missions afterwards. These radio-frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  4. Validating the Rett Syndrome Gross Motor Scale.

    Directory of Open Access Journals (Sweden)

    Jenny Downs

    Full Text Available Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component, and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities, supplemented with parent report data, were collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age, and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.
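
    The reliability arithmetic in this record follows directly from the ICC. A minimal sketch, assuming the usual definition SEM = SD·sqrt(1 − ICC) and the record's ±1.96·SEM convention for a 95% bound (the stricter smallest-detectable-change convention multiplies by a further sqrt(2)); the between-subject SD of 20 points is a hypothetical value chosen to reproduce the reported SEM of 2:

        import numpy as np

        def measurement_error_bounds(sd_total, icc, z=1.96):
            """SEM and change thresholds from an intraclass correlation.
            sd_total: between-subject SD of total scores (hypothetical)."""
            sem = sd_total * np.sqrt(1.0 - icc)
            return sem, z * sem, z * np.sqrt(2.0) * sem

        sem, bound95, sdc = measurement_error_bounds(sd_total=20.0, icc=0.99)
        print(f"SEM = {sem:.1f} points; 95% bound = {bound95:.1f}; "
              f"smallest detectable change = {sdc:.1f}")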

  5. Automated Liquibase Generator And ValidatorALGV

    Directory of Open Access Journals (Sweden)

    Manik Jain

    2015-08-01

    Full Text Available Abstract: This paper presents an automation tool, ALGV (Automated Liquibase Generator and Validator), for the automated generation and verification of Liquibase scripts. Liquibase is one of the most efficient ways of applying and persisting changes to a database schema. Since its invention by Nathan Voxland [1] it has become the de facto standard for database change management. The advantages of using Liquibase scripts over traditional SQL queries range from version control to reusing the same scripts over multiple database platforms. Irrespective of its advantages, manual creation of Liquibase scripts takes a lot of effort and is sometimes error-prone. ALGV helps to reduce the time-consuming script generation, the manual typing effort, the possibility of errors and the manual verification process and time by 75%. Automating the Liquibase generation process also removes the burden of recollecting the specific tags to be used for a particular change. Moreover, developers can concentrate on the business logic and business data rather than spending their effort on writing files.
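
    The generation step the paper automates amounts to emitting well-formed changeSet XML from a structured description. A minimal sketch of that idea (not ALGV's actual API; the function name and input format are invented for illustration, while the <changeSet>, <createTable> and <column> tags are standard Liquibase XML):

        import xml.etree.ElementTree as ET

        def changeset_xml(author: str, cid: str, table: str,
                          columns: dict) -> str:
            """Emit a Liquibase <createTable> changeSet from a
            {column name: SQL type} specification."""
            cs = ET.Element("changeSet", id=cid, author=author)
            ct = ET.SubElement(cs, "createTable", tableName=table)
            for name, sqltype in columns.items():
                ET.SubElement(ct, "column", name=name, type=sqltype)
            ET.indent(cs)                    # pretty-print (Python 3.9+)
            return ET.tostring(cs, encoding="unicode")

        print(changeset_xml("alice", "create-users-1", "users",
                            {"id": "BIGINT", "email": "VARCHAR(255)"}))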

  6. Softcopy quality ruler method: implementation and validation

    Science.gov (United States)

    Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying

    2009-01-01

    A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 in increments of one just noticeable difference (JND), by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side by side, with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology and Aptina Imaging, in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
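
    Lowering the system MTF to step an image down the quality scale can be mimicked crudely with progressive low-pass filtering. A minimal sketch, assuming scipy is available; a Gaussian blur stands in for the calibrated one-JND MTF adjustments of the actual ruler generation:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ruler_series(image, n_levels=30, sigma_step=0.25):
            """Return a series of images of monotonically decreasing
            sharpness, a crude stand-in for one-JND SQS steps."""
            return [gaussian_filter(image, sigma=i * sigma_step)
                    for i in range(n_levels)]

        scene = np.random.default_rng(0).random((512, 512))  # placeholder scene
        rulers = ruler_series(scene)
        print(len(rulers), rulers[-1].shape)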

  7. Verification and validation of control system software

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.; Kisner, R.A.; Bhadtt, S.C.

    1991-01-01

    The following guidelines are proposed for verification and validation (V&V) of nuclear power plant control system software: (a) use risk management to decide what and how much V&V is needed; (b) classify each software application using a scheme that reflects what type and how much V&V is needed; (c) maintain a set of reference documents with current information about each application; (d) use Program Inspection as the initial basic verification method; and (e) establish a deficiencies log for each software application. The following additional practices are strongly recommended: (a) use a computer-based configuration management system to track all aspects of development and maintenance; (b) establish reference baselines of the software, associated reference documents, and development tools at regular intervals during development; (c) use object-oriented design and programming to promote greater software reliability and reuse; (d) provide a copy of the software development environment as part of the package of deliverables; and (e) initiate an effort to use formal methods for preparation of Technical Specifications. The paper provides background information and reasons for the guidelines and recommendations. 3 figs., 3 tabs

  8. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  9. Experimental validation of wireless communication with chaos.

    Science.gov (United States)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso

    2016-08-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication: they maximize the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals, and an integration logic together with a matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit into an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver after passing through the matched filter.
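
    The matched-filter decoding step is easy to reproduce in simulation. A minimal sketch: a Hanning pulse stands in for the paper's chaotic basis waveform, and the channel is additive noise only (no multipath):

        import numpy as np

        rng = np.random.default_rng(1)
        bits = rng.integers(0, 2, 20)
        pulse = np.hanning(32)               # stand-in symbol waveform

        # BPSK-style encoding: one pulse per bit, the sign carries the bit
        tx = np.concatenate([(2 * b - 1) * pulse for b in bits])
        rx = tx + 0.8 * rng.standard_normal(tx.size)

        # Matched filter: correlate with the time-reversed pulse, then sample
        # the filter output once per symbol period
        mf = np.convolve(rx, pulse[::-1], mode="full")
        samples = mf[pulse.size - 1 :: pulse.size][: bits.size]
        decoded = (samples > 0).astype(int)
        print("bit errors:", int(np.sum(decoded != bits)))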

  10. Computerized Italian criticality guide, description and validation

    Energy Technology Data Exchange (ETDEWEB)

    Carotenuto, M; Landeyro, P A [ENEA - Dipartimento Ciclo del Combustibile, Centro Ricerche Energia, Casaccia (Italy)

    1988-10-15

    Our group is developing an 'expert system' for collecting engineering know-how on back-end nuclear plant design; an expert system is the most suitable software tool for this problem. During the analysis, the design process was divided into different branches, and to each branch of the design process the expert system relates a computerized design procedure. Each design procedure is composed of a set of design methods, together with their conditions of application and reliability limits. In the framework of this expert system, the nuclear criticality safety analysis procedure was developed in the form of a computerized criticality guide, attempting to reproduce the designer's normal reasoning process. The criticality guide is composed of two parts: a computerized text, including theory, a description of the accidents that have occurred in the past and a description of the Italian design experience; and an interactive computer-aided calculation module, containing a graphical facility for critical-parameter curves. The present report presents the computerized Italian Criticality Guide and its validation test. (author)

  11. Verification and software validation for nuclear instrumentation

    International Nuclear Information System (INIS)

    Gaytan G, E.; Salgado G, J. R.; De Andrade O, E.; Ramirez G, A.

    2014-10-01

    In this work, a methodology for software verification and validation is presented, to be applied to instruments of nuclear use with associated software. This methodology was developed under the auspices of the IAEA, through the regional projects RLA4022 (ARCAL XCIX) and RLA1011 (RLA CXXIII), led by Mexico. In the first project, three plans and three procedures were elaborated taking IEEE standards into consideration, and in the second project these documents were updated considering ISO and IEC standards. The developed methodology has been distributed to the participating countries of Latin America in the ARCAL projects, and two related courses have been given with the participation of several countries and of Mexican institutions such as the Instituto Nacional de Investigaciones Nucleares (ININ), the Comision Federal de Electricidad (CFE) and the Comision Nacional de Seguridad Nuclear y Salvaguardias (CNSNS). At the ININ, owing to the need to work with software quality assurance for systems of the CFE nuclear power plant, a Software Quality Assurance Plan and five procedures were developed in 2004, qualifying the ININ for software development for the CFE nuclear power plant. These first documents were developed taking IEEE standards and NRC regulatory guides as reference, and were the first step in the development of the methodology. (Author)

  12. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.

  13. Validating the Rett Syndrome Gross Motor Scale.

    Science.gov (United States)

    Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen

    2016-01-01

    Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component, and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities, supplemented with parent report data, were collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age, and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.

  14. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    Energy Technology Data Exchange (ETDEWEB)

    SEXTON, R.A.

    2000-03-13

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  15. Extending the validity of the Feeding Practices and Structure Questionnaire

    OpenAIRE

    Jansen, Elena; Mallan, Kimberley M.; Daniels, Lynne A.

    2015-01-01

    Background Feeding practices are commonly examined as potentially modifiable determinants of children's eating behaviours and weight status. Although a variety of questionnaires exist to assess different feeding aspects, many lack thorough reliability and validity testing. The Feeding Practices and Structure Questionnaire (FPSQ) is a tool designed to measure early feeding practices related to non-responsive feeding and structure of the meal environment. Face validity, factorial validity, inte...

  16. Validation of the Vanderbilt Holistic Face Processing Test

    OpenAIRE

    Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  17. Validation of the Vanderbilt Holistic Face Processing Test.

    OpenAIRE

    Chao-Chih Wang; Chao-Chih Wang; David Andrew Ross; Isabel Gauthier; Jennifer Joanna Richler

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  18. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account ...

  19. Construct validity of the Individual Work Performance Questionnaire.

    OpenAIRE

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; Vet, H.C.W. de; Beek, A.J. van der

    2014-01-01

    Objective: To examine the construct validity of the Individual Work Performance Questionnaire (IWPQ). Methods: A total of 1424 Dutch workers from three occupational sectors (blue, pink, and white collar) participated in the study. First, IWPQ scores were correlated with related constructs (convergent validity). Second, differences between known groups were tested (discriminative validity). Results: First, IWPQ scores correlated weakly to moderately with absolute and relative presenteeism, and...

  20. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    International Nuclear Information System (INIS)

    SEXTON, R.A.

    2000-01-01

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.