WorldWideScience

Sample records for validation dirk tasche

  1. Dirk Bakker 1947 – 2009

    CERN Multimedia

    2009-01-01

    Dirk Bakker, far left, with his colleagues in the former AB-CO group during a test of new prototype consoles for the CERN Control Centre in 2005. It was with great sorrow that we learnt of the death of our colleague Dirk, taken too quickly by an incurable illness against which he fought with courage and dignity. Dirk arrived at CERN on 16 April, 1972 and spent nearly 36 years in the accelerator sector. He was instrumental in the distribution of the audio and video signals between the various accelerators and the control rooms and for the deployment of TV transmissions on the many screens all over the site. He was also the key person organizing the installations in the control rooms and his recent contributions for the CERN Control Centre (CCC) were exemplary. All the users of his services knew Dirk as an indispensable expert whose knowledge and professionalism were always appreciated. His discretion and pride in his work made Dirk a ...

  2. Essays in theoretical physics in honour of Dirk Ter Haar

    CERN Document Server

    Parry, W E

    2013-01-01

    Essays in Theoretical Physics: In Honour of Dirk ter Haar is devoted to Dirk ter Haar, detailing the breadth of Dirk's interest in physics. The book contains 15 chapters, with some chapters elucidating stellar dynamics with non-classical integrals; a mean-field treatment of charge density waves in a strong magnetic field; electrodynamics of two-dimensional (surface) superconductors; and the Bethe Ansatz and exact solutions of the Kondo and related magnetic impurity models. Other chapters focus on probing the interiors of neutron stars; macroscopic quantum tunneling; unitary transformation meth

  3. Pädevuskeskne õpe / Dirk van Vierssen

    Index Scriptorium Estoniae

    Vierssen, Dirk van

    2002-01-01

    Police training in Estonia is being reorganised. The consultant is Dirk van Vierssen, a specialist from the Dutch Police Training Centre and doctor of educational sciences. The essence of the reform is a shift from qualification-centred training to competence-based training. Differences between qualification-centred and competence-based training / transcribed from tape by Raivo Juurak

  4. The design and implementation of the DIRK system for dosemeter issue and record keeping

    International Nuclear Information System (INIS)

    Kendall, G.M.; Kay, P.; Saw, G.M.A.; Salmon, L.; Carter, C.D.; Law, D.V.

    1983-05-01

    DIRK, the computerised system which the National Radiological Protection Board employs for its Personal Monitoring Service, is described. DIRK is also used to store the data for the National Registry for Radiation Workers and could support the Central Index of Dose Information should this be set up. The general principles of the design of DIRK, as well as a detailed description of the system, are included in the report. DIRK is based on a set of interlocked index sequential files manipulated by PL/1 programs. Data compaction techniques are used to reduce by a factor of ten the size of the files stored on magnetic disk. Security of the database is most important and two levels of security have been implemented. Table driven techniques are used for updating the database. A specially designed free-format language is used for specifying changes. Statistics, sorted listings of selected data and summaries are provided by a general purpose program for this type of operation. However, it has still been necessary to write a number of special purpose programs for some particular needs of DIRK users. The final section of the report describes the experiences gained during the planning, implementation and maintenance of DIRK. The importance of liaison with the eventual users of the system is emphasised. (author)

  5. The design and implementation of the DIRK system for dosemeter issue and record keeping

    CERN Document Server

    Kendall, G M; Kay, P; Law, D V; Salmon, L; Saw, G M A

    1983-01-01

    DIRK, the computerised system which the National Radiological Protection Board employs for its Personal Monitoring Service, is described. DIRK is also used to store the data for the National Registry for Radiation Workers and could support the Central Index of Dose Information should this be set up. The general principles of the design of DIRK, as well as a detailed description of the system, are included in the report. DIRK is based on a set of interlocked index sequential files manipulated by PL/1 programs. Data compaction techniques are used to reduce by a factor of ten the size of the files stored on magnetic disk. Security of the database is most important and two levels of security have been implemented. Table driven techniques are used for updating the database. A specially designed free-format language is used for specifying changes. Statistics, sorted listings of selected data and summaries are provided by a general purpose program for this type of operation. However, it has still been necessary to w...

  6. Jodi / Dirk Paesmans ; interv. Tilman Baumgärtel

    Index Scriptorium Estoniae

    Paesmans, Dirk

    2006-01-01

    On the artist duo that has worked under the name Jodi since 1994, formed by Dirk Paesmans and Joan Heemskerk, who live in Barcelona. In 1999 Jodi received the Webby Award in the art category. In a 2001 telephone interview, D. Paesmans speaks about the works "OSS****" and "SOD", their versions of the computer games "Quake" and "Wolfenstein", the "fake browsers", and his interest in creating artistic software and modifying existing programs

  7. Inzicht door onderdompeling Een reactie op Bart Van de Putte, Henk de Smaele en Dirk Jan Wolffram

    Directory of Open Access Journals (Sweden)

    Jan Hein Furnée

    2014-09-01

    Giving a detailed account of the social history of The Hague’s most prominent sites of civilised leisure – the gentlemen’s clubs, the zoo, the Royal Theatre and the seaside resort of Scheveningen – Plaatsen van beschaafd vertier demonstrates how the constant struggle for social in- and exclusion structured the daily lives of upper and middle class men and women in The Hague in the nineteenth century. In response to Bart Van de Putte, Jan Hein Furnée argues that extensive quantitative analyses of ‘class’ and ‘social class’ show that objective class stratifications based on wealth and/or occupation are important tools, but at most semi-finished products for historical research. Furnée fully agrees with Henk de Smaele’s objection that his study would have benefitted from a more in-depth reflection on the ways in which shifting patterns in women’s freedom of movement in urban spaces were related to their political and economic emancipation. In response to Dirk Jan Wolffram, Furnée repeats some examples given in his book that show how political practices in places of leisure impacted upon local and national politics, even though this did not directly contribute to a linear process of increasing political participation and representation. On the basis of a detailed analysis of the social history of the gentlemen’s and burghers’ clubs, the zoo, the Royal Theatre and the seaside resort of Scheveningen, Plaatsen van beschaafd vertier demonstrates how the constant struggle over social inclusion and exclusion dominated the daily lives of men and women of the upper and middle classes in nineteenth-century The Hague. In response to Bart Van de Putte, Jan Hein Furnée argues that thorough quantitative analyses of ‘class’ and ‘social class’ show that objective social stratifications based on wealth and/or occupation are certainly very useful and even necessary for historical research, but ultimately only a

  8. Review: Dirk Michel (2009). Politisierung und Biographie. Politische Einstellungen deutscher Zionisten und Holocaustüberlebender [Political Socialization and Biography: German Zionists and Holocaust Survivors and Their Political Attitudes]

    Directory of Open Access Journals (Sweden)

    Susanne Bressan

    2012-07-01

    How do extraordinary experiences, especially during childhood and adolescence, affect political attitudes? Most studies focusing on political movements only implicitly address the connection between biographical experiences and political attitudes. Moreover, a detailed understanding of these impacts often remains merely hypothetical. Biographical studies increasingly address the relationship between politics and biography through empirical and hermeneutic approaches. For his doctoral thesis, Dirk MICHEL conducts autobiographical narrative interviews with 20 Jewish Israelis. Based on their extraordinary biographical experiences, MICHEL categorizes the interviewees into two groups—the "German Zionists" and the "German Holocaust survivors." He then conducts semi-structured interviews with each of the participants with the aim of analyzing their political attitudes. However, the conceptual categorization of the interviewees, the empirical investigation of the research question and the subsequent analysis all challenge the underpinning theoretical and methodological concepts of the study. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1203165

  9. Preserve America News

    Science.gov (United States)

    ... Hillary Clinton, and Mrs. Laura Bush on stage at the Sewall-Belmont House. Mrs. Bush was joined by Secretary of the Interior Dirk Kempthorne and Senators Hillary Clinton (NY) and Pete Domenici (NM), who ... Save America's Treasures, established by the Clinton Administration. Secretary Dirk Kempthorne, Senator ...

  10. Book Review: Is Fair Value Fair? Financial Reporting from an International Perspective

    DEFF Research Database (Denmark)

    Thinggaard, Frank

    2005-01-01

    This is a review of Henk Langendijk, Dirk Swagerman and Willem Verhoog (Eds) "Is Fair Value Fair? Financial Reporting from an International Perspective," Chichester: John Wiley, 2003, ISBN 0 470 85028 0.

  11. Diagonally Implicit Runge-Kutta Methods for Ordinary Differential Equations. A Review

    Science.gov (United States)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2016-01-01

    A review of diagonally implicit Runge-Kutta (DIRK) methods applied to first-order ordinary differential equations (ODEs) is undertaken. The goal of this review is to summarize the characteristics, assess the potential, and then design several nearly optimal, general purpose, DIRK-type methods. Over 20 important aspects of DIRK-type methods are reviewed. A design study is then conducted on DIRK-type methods having from two to seven implicit stages. From this, 15 schemes are selected for general purpose application. Testing of the 15 chosen methods is done on three singular perturbation problems. Based on the review of method characteristics, these methods focus on having a stage order of two, stiff accuracy, L-stability, high quality embedded and dense-output methods, small magnitudes of the algebraic stability matrix eigenvalues, small values of aii, and small or vanishing values of the internal stability function for large eigenvalues of the Jacobian. Among the 15 new methods, ESDIRK4(3)6L[2]SA is recommended as a good default method for solving stiff problems at moderate error tolerances.
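
    As a rough, hedged illustration of the DIRK structure summarised above (not code from the review itself), the sketch below applies a two-stage, second-order, stiffly accurate, L-stable SDIRK scheme with gamma = 1 - 1/sqrt(2), a standard textbook tableau rather than the recommended ESDIRK4(3)6L[2]SA, to an invented stiff scalar test problem. The Runge-Kutta matrix is lower triangular, so the stages are solved one at a time, each by a small Newton iteration.

      import numpy as np

      # Stiff scalar test problem (invented for illustration): y' = lam*(y - cos t), y(0) = 0.
      lam = -50.0
      def f(t, y):
          return lam * (y - np.cos(t))
      def dfdy(t, y):
          return lam

      # Two-stage, second-order, stiffly accurate, L-stable SDIRK tableau,
      # gamma = 1 - 1/sqrt(2); every diagonal entry of A equals gamma.
      gamma = 1.0 - 1.0 / np.sqrt(2.0)
      A = np.array([[gamma, 0.0],
                    [1.0 - gamma, gamma]])
      b = A[-1]                      # stiffly accurate: b equals the last row of A
      c = A.sum(axis=1)

      def sdirk2_step(t, y, h):
          """One SDIRK2 step for a scalar ODE; each stage is solved by Newton's method."""
          k = np.zeros(2)
          for i in range(2):
              known = y + h * sum(A[i, j] * k[j] for j in range(i))
              ki = f(t + c[i] * h, known)                # explicit predictor
              for _ in range(20):                        # Newton iteration on the stage value
                  yi = known + h * A[i, i] * ki
                  res = ki - f(t + c[i] * h, yi)
                  ki -= res / (1.0 - h * A[i, i] * dfdy(t + c[i] * h, yi))
                  if abs(res) < 1e-12:
                      break
              k[i] = ki
          return y + h * (b @ k)

      t, y, h = 0.0, 0.0, 0.1        # h is well above the explicit stability limit 2/|lam|
      while t < 5.0 - 1e-12:
          y = sdirk2_step(t, y, h)
          t += h
      print(f"y({t:.1f}) = {y:.6f}") # tracks the slow solution, roughly cos(t), without blowing up

    Because every diagonal entry of A is the same gamma, each stage Newton solve reuses the same factor (1 - h*gamma*df/dy); that reuse is the practical attraction of singly diagonally implicit schemes, and stiff accuracy (b equal to the last row of A) is one of the properties the review's recommended methods also emphasise.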

  12. IN MEMORIAM DOCTOR DIRK FOK VAN SLOOTEN

    Directory of Open Access Journals (Sweden)

    DIRK FOK VAN SLOOTEN

    2015-11-01

    In the midst of his work Van Slooten has been suddenly called away at the relatively early age of 61. It was known that his heart was not too good, but it was expected that living a quiet life he would be able to finish his life's work, the monograph of the Malaysian Dipterocarpaceae, to which he had been able since 1951 to devote all his time and concentration undisturbed by other duties. The striving towards the completion of this work on the most important family of Malaysian forest trees always occupied his mind and had been to a large extent the main object of his life. Van Slooten's ambition was to produce careful work, meticulous in all details. This made him a slow worker, but at the same time one of the trustworthy kind. This trend towards perfectionism expressed itself equally in the preliminaries and routine work towards his objective. Through his method of working progress was steady but unfortunately relatively slow. Other factors beyond his control added to this result. Besides delays due to World War II, Van Slooten performed many other official duties in the same earnest way in which he carried out his research work. Any spontaneity and opportunism he had in his character was suppressed through his orderliness. Only in exceptional and very urgent circumstances would he make decisions à l'improviste. It is of course questionable whether one can deduce a man's character from his published writings. Whether this thesis be accepted as a generality or not, it is certain that it held for Van Slooten. His care for details, for straightforwardness, for trying to find the truth in his work found a remarkable parallel in his office work, and his private life. He wanted things to be clean and orderly. Even on excursions, which he made surprisingly seldom, his clothes were as speckless as they could possibly be in the circumstances.

  13. Postsovkhoz / Margus Kiis

    Index Scriptorium Estoniae

    Kiis, Margus

    2004-01-01

    On the conceptual and environmental art workshops held in Mooste in 2001-2003. Comments from participants in the events: Dirk Lange, Maja Linke, Natalie Waldbaum, Jaakko Himanen, Jere Ruotsalainen, Jorge Tarazona, Slobodanka Stevceska and Denis Saraginovski

  14. Analysis of growth, yield potential and horticultural performance of ...

    African Journals Online (AJOL)

    PTC Lab

    2013-04-03

    Apr 3, 2013 ... Lakadong. Key words: Turmeric, micropropagation, field performance, tissue culture. ... MATERIALS AND METHODS ... the changes in leaf morphology and colour. ... intervening callus phase (Dirk et al., 1996; Smith, 1988; ...).

  15. Hamburg kui hiiglaslik ehitustander

    Index Scriptorium Estoniae

    2003-01-01

    From 6 November, the exhibition "arcHH - Architektur made in Hamburg", presenting the work of Hamburg architects over the past five years, is on show at the Museum of Estonian Architecture in the Rotermann salt storage. The exhibition was put together by Dirk Meyhöfer, architect Michael Karassowitch

  16. 75 FR 8182 - Qualification of Drivers; Exemption Applications; Diabetes

    Science.gov (United States)

    2010-02-23

    ...., William J. Cobb, Jr., Wallace E. Conover, Daniel C. Druffel, Gregory J. Godley, Troy A. Gortmaker, Charles.... Schlieckau, Richard L. Sulzberger, Clayton F. Tapscott, Dirk VanStralen and Henry L. Waskow, from the ITDM...

  17. Monitoring White-backed Vultures Gyps africanus in the North West ...

    African Journals Online (AJOL)

    Campbell Murn

    Stellaland), South Africa. Dirk, Karen and Stefan van Stuyvenberg. Stellaland Raptor Project. e-mail: stuyvies@telkomsa.net. Introduction. We became involved with the ringing of birds in 2002 and started to ring raptors actively since 2003.

  18. Külalisesinejate sõnavõtud

    Index Scriptorium Estoniae

    1999-01-01

    Speakers: Elisabeth Arnold, Dirk van der Maelen, Francisco Torres, Jiri Mashtalka, Demetrios Syllouris, Karoly Lotz, Uwe Gehlen, Michael Sahlin, Tony Gregory, Roma Dovydeniene, Janusz Lewandowski, Ekkehard Pabsch, Jelko Kacir, Jean-Jacques Subrenat, Savas Tsitouridis, Alfred E. Kellermann

  19. Filmische Biographiearbeit

    Directory of Open Access Journals (Sweden)

    Caroline Baetge

    2013-04-01

    Review of: Medebach, Dirk H. 2011. Filmische Biographiearbeit im Bereich Demenz: eine soziologische Studie über Interaktion, Medien, Biographie und Identität in der stationären Pflege. Demenz, vol. 2. Berlin: Lit-Verl.

  20. Determination of iron and copper contents in certain indigenous varieties of wheat (Triticum aestivum, L.)

    International Nuclear Information System (INIS)

    Akhtar, M.S.; Abbas, N.; Shaheen, A.

    2004-01-01

    Forty-seven wheat varieties were tested for their iron and copper contents. The varieties were found to differ significantly (P < 0.05) with respect to iron and copper contents. The variety named Dirk was found to possess the highest iron contents, while the variety Pasban-90 showed the highest copper contents. The varieties Dirk, Sariab, Tandojam-83, Punjab-88, Sarsabz, Punjab-81, Sandal and Sind-81 contained significantly higher iron contents as compared to other wheat varieties. The varieties which contained the highest concentrations of copper were Pasban-90, Chenab-79, Faisalabad-85, Lyp-73, Sind-81, Anmol-91, C-271, Rohtas-90 and Chakwal-86. However, the differences in copper contents among all these wheat varieties were non-significant (P > 0.05). These varieties can, therefore, be recommended to be included for future breeding and commercial exploitation. (author)

  1. Een geschiedenis van vleesloos eten

    NARCIS (Netherlands)

    Dagevos, H.

    2009-01-01

    Review of the popular edition of Dirk-Jan Verdonk's doctoral thesis: Het dierloze gerecht: een vegetarische geschiedenis in Nederland. It describes the history of vegetarianism from the second half of the 19th century to the present

  2. M-A-G-I-C viis marsruuti, viis pilku meie Euroopale

    Index Scriptorium Estoniae

    2006-01-01

    On "Peidetud lood" ("Hidden Stories"), the joint production of the theatre network Magic Net (an organisation uniting 14 European theatres), performed at the National Library of Estonia on 10 June (project leader Dirk Neldner). Reviewed by theatre critics Rait Avestik, Madis Kolk, Ivar Põllu, Andres Keil and student Uku Uusberg

  3. Keskklassi meistrivõistlused / Wolfgang König, Dirk Branke

    Index Scriptorium Estoniae

    König, Wolfgang

    2012-01-01

    A big comparison test of 15 family cars: Audi A4 2.0 TDI, BMW 318d, Mercedes-Benz C 200 CDI, Škoda Superb 2.0 TDI, VW Passat 2.0 TDI, Citroën C5 HDi 140, Hyundai i40 1.7 CRDi, Ford Mondeo 2.0 TDCi, Kia Optima 1.7 CRDi, Mazda 6 2.2 MZR-CD, Opel Insignia 2.0 CDTI, Renault Laguna dCi 150, Peugeot 508 HDi 140, Volvo S60 D3 and Seat Exeo 2.0 TDI

  4. Democracy, globalization and ethnic violence

    NARCIS (Netherlands)

    Bezemer, D.J.; Jong-A-Pin, R.

    Bezemer, Dirk, and Jong-A-Pin, Richard. Democracy, globalization and ethnic violence. Do democratization and globalization processes combine to increase the incidence of violence in developing and emerging economies? The present paper examines this hypothesis by a study of internal violence in

  5. Regional Integration, Trade and Private Sector Development ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Dr. Dirk Hansohm. Institution: Namibian Economic Policy Research Unit. Country of institution: Namibia. Website: http://www.nepru.org.na. Outputs: Reports. Regional integration, trade and private sector development: final report. Related content: The Science Granting Councils Initiative ...

  6. Southern African Journal of HIV Medicine - Vol 17, No 1 (2016)

    African Journals Online (AJOL)

    Andrew Revell, Paul Khabo, Lotty Ledwaba, Sean Emery, Dechao Wang, Robin Wood, Carl Morrow, Hugo Tempelman, Raph L Hamers, Peter Reiss, Ard van Sighem, Anton Pozniak, Julio Montaner, H Clifford Lane, Brendan Larder ... Nnamdi O. Ndubuka, Hyun J. Lim, Dirk M. van der Wal, Valerie J. Ehlers.

  7. Collective labour law after Viking, Laval, Rüffert, and Commission v. Luxembourg

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    The judgments of the European Court of Justice (ECJ) in the International Transport Workers’ Federation and Finnish Seamen’s Union v Viking Line ABP and OÜ Viking Line Eesti (hereinafter ‘Viking’), Laval un Partneri Ltd v Svenska Byggnadsarbetareförbundet and Others (hereinafter ‘Laval’), Dirk

  8. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Heat stress response in plants: a complex game with chaperones and more than twenty heat stress transcription factors · Sanjeev Kumar Baniwal Kapil Bharti Kwan Yu Chan Markus Fauth Arnab Ganguli Sachin Kotak Shravan Kumar Mishra Lutz Nover Markus Port Klaus-Dieter Scharf Joanna Tripp Christian Weber Dirk ...

  9. Supporting emergency management through process-aware information systems

    NARCIS (Netherlands)

    Hagebölling, D.; Leoni, de M.; Ardagna, D.; Mecella, M.; Yang, J.

    2009-01-01

    This short paper aims at summarising the invited talk given by Dirk Hagebölling at the PM4HDPS workshop and the subsequent discussion with participants. The talk concerned the current situation of systems for emergency management and their drawbacks. Moreover, it also dealt with the expectation for

  10. Antarctica: Chile’s Claim,

    Science.gov (United States)

    1987-01-01

    India in this petition. More recently, Prime Minister Indira Gandhi asserted that in relation to Antarctica, her country is neither expansionist... West Germany, 68; Gandhi, Indira, 69; D'Urville, Dumont, 7; Gherritsz, Dirk, 6; Gold deposits, 28, 29, 88; Gonzalez, Ariel, 95; East Germany, 23, 72

  11. Tshuzhoi Tanel Padar / Dmitri Babitshenko

    Index Scriptorium Estoniae

    Babitshenko, Dmitri

    2007-01-01

    The feature film "Ring tee", directed by Dirk Hoyer, who comes from Germany and lectures on film theory and history at Estonian universities, premieres on 27 April at the cinema "Sõprus". The film has by now been renamed "Võõras"; the musician Tanel Padar appears in one of the leading roles

  12. 30. III tutvustab Mooste külalisstuudio Tallinna Kunstihoone galeriis...

    Index Scriptorium Estoniae

    2005-01-01

    The opportunities of the residency programme and the environmental art symposium "PostsovkhoZ" are introduced. Programme of the event (John Grzinich's short film about MoKS, the opening of Dirk Lange's exhibition "ööTöö", the presentation of the journal Palaster, Marcus Öhrn introducing his project, video and performance, a concert by Kurt Korthals and John Grzinich, etc.)

  13. Mõtlemine algab kujustamisest / Igor Garšnek

    Index Scriptorium Estoniae

    Garšnek, Igor, 1958-

    2010-01-01

    On the concert "The Music of the Image" held on 2 December, where the Brussels Philharmonic, Lewis Morison, Gaelle Mechaly, Juanjo Mosalini and Gabriel Yared performed under the direction of Dirk Brosse, and on the "Great Film Music Concert" held on 6 December in the Estonia concert hall, where the St Petersburg symphony orchestra Cinema and Žanna Dombrovskaja performed under the baton of Igor Ponomarenko

  14. On implementation of the EU – Ukraine / Dirk Hartman

    Index Scriptorium Estoniae

    Hartman, Dirk

    2014-01-01

    On the implementation of the association agreement between the European Union and its member states and Ukraine (political dialogue; justice, freedom and security; economic cooperation; trade; nuclear energy and renewable energy sources)

  15. Miscellaneous news

    NARCIS (Netherlands)

    NN,

    1995-01-01

    Dr. Ruurd (“Ru”) Dirk Hoogland, born 24 July 1924 in Leeuwarden (The Netherlands), died still rather unexpectedly on 18 November 1994 in a hospital in the neighbourhood of Paris, just 8 days after an operation. The later years of his life were overshadowed by a serious illness. Ru did not accept

  16. Say goodbye to coffee stains

    NARCIS (Netherlands)

    Eral, Burak; van den Ende, Henricus T.M.; Mugele, Friedrich Gunther

    2012-01-01

    Discussing ideas over a mug of coffee or tea is the lifeblood of science, but have you ever thought about the stains that can be inadvertently left behind? H Burak Eral, Dirk van den Ende and Frieder Mugele explain how these stains, which can be a major annoyance in some biology techniques, can be

  17. Niigugim Qalgadangis (Atkan Food).

    Science.gov (United States)

    Dirks, Moses; Dirks, Lydia

    A history of food gathering and food preparation techniques of Alaska Natives on Atka Island in the Aleutians is presented in Western Aleut and English, with illustrations by J. Leslie Boffa and Mike Dirks. Directions are given for preparing: various plants, including wild rice; salted, dried, or smoked fish; baked flour; fried dough; boiled…

  18. 77 FR 65690 - Change in Bank Control Notices; Formations of, Acquisitions by, and Mergers of Bank Holding...

    Science.gov (United States)

    2012-10-30

    ... FEDERAL RESERVE SYSTEM Change in Bank Control Notices; Formations of, Acquisitions by, and Mergers of Bank Holding Companies; Correction This notice corrects a notice (FR Doc. 2012-26297) published on page 65190 of the issue for Thursday, October 25, 2012. Under the Federal Reserve Bank of Dallas heading, the entry for Bryon Dirk Bagenstos,...

  19. Teaching with Purpose: An Interview with Thomas E. Ludwig

    Science.gov (United States)

    Ludwig, Timothy D.; Ludwig, David J.

    2010-01-01

    Thomas E. Ludwig is the John Dirk Werkman Professor of Psychology at Hope College, where he joined the faculty in 1977 after receiving his PhD in development and aging from Washington University in St. Louis. His research focuses on developmental issues in cognitive neuropsychology. He is also the author or coauthor of more than a dozen sets of…

  20. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  1. Kas koostöö on vajadus või kohustus?

    Index Scriptorium Estoniae

    2008-01-01

    The annual tourism conference "Kas koostöö on vajadus või kohustus?" ("Is cooperation a need or an obligation?") took place in Tallinn on 15 November 2007. An overview of the presentations by Dirk Glaesser, head of the risk and crisis management department of the World Tourism Organization, Jason Barry, head of Footprint Travel, Varri Väli, chairman of the board of the NGO Maaturism, Merike Hallik, managing director of Kaleva Travel Eesti, Urmas Klaas, member of the Riigikogu, and Merike Kompus van der Hoeven, services director of the Estonian Chamber of Commerce and Industry

  2. Transferts intergénérationnels, vieillissement de la population et ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    IDRC responsible officer: Rodriguez, Mr. Edgard. Total funding: CAD$ 531,100. Countries: Brazil, South America, Chile, Costa Rica, North and Central America, Mexico, Uruguay. Project leader: Jaspers, Dirk. Project leader: Saad, Paulo. Institution: Economic Commission for Latin America and the ...

  3. Genomewide association study to detect QTL for twinning rate in ...

    Indian Academy of Sciences (India)

    Mohsen Gholizadeh, Ghodrat Rahimi-Mianji, Ardeshir Nejati-Javaremi, Dirk Jan De Koning and Elisabeth Jonas. J. Genet. 93, 489–493. Figure 1: Manhattan plot (−log10(p) against chromosome) of the results from the genomewide association analysis for twinning during the ...

  4. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  5. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  6. Airborne Multi-Spectral Minefield Survey

    Science.gov (United States)

    2005-05-01

    Swedish Defence Research Agency), GEOSPACE (Austria), GTD (Ingenieria de Sistemas y Software Industrial, Spain), IMEC (Interuniversity MicroElectronic... Airborne Multi-Spectral Minefield Survey. Dirk-Jan de Lange, Eric den...actions is the severe lack of baseline information. To respond to this in a rapid way, cost-efficient data acquisition methods are a key issue. de

  7. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    The author first exposes a complement of a previous test about convergent validity, then a construct validity test and finally an external validity test of the David Liberman algorithm. The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison (to other methods) and (2) in an internal comparison (between two ways of using the same method, the DLA). The construct validity test exposes the concepts underlying the DLA, their operationalization and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation with the investigation of a more extended sample.

  8. Processed fly ash for workability: stretching to its limits

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, M. [Dirk India Pvt Ltd., Nashik (India)

    2003-07-01

    The paper describes the use of fly ash produced by the British multinational company Dirk, in a fine grade, Pozzocreta 63, to improve the workability of concrete used to reline tunnels for the disposal of sewage from Mumbai City, 4 km into the Arabian Sea. It mainly involved rehabilitation of 5.5 km of tunnels from Sion to Banda, 30 m below ground level. 5 figs., 3 tabs.

  9. Direct and indirect effects of paliperidone extended-release tablets on negative symptoms of schizophrenia

    OpenAIRE

    Bossie, Cynthia

    2008-01-01

    Ibrahim Turkoz, Cynthia A Bossie, Bryan Dirks, Carla M Canuso. Ortho-McNeil Janssen Scientific Affairs, LLC, Titusville, NJ, USA. Abstract: Direct and indirect effects of the new psychotropic paliperidone extended-release (paliperidone ER) tablets on negative symptom improvement in schizophrenia were investigated using path analysis. A post hoc analysis of pooled data from three 6-week, double-blind, placebo-controlled studies of paliperidone ER in patients experiencing acute exacerbation was con...

  10. The Impact of Anti-Immigration Parties on Mainstream Parties' Immigration Positions in the Netherlands, Flanders and the UK 1987-2010: Divided electorates, left-right politics and the pull towards restrictionism

    OpenAIRE

    DAVIS, Amber

    2012-01-01

    Defence date: 20 April 2012; Examining Board: Professor Rainer Bauböck, EUI, for Professor Peter Mair (†), EUI (supervisor); Professor Virginie Guiraudon, National Center for Scientific Research, Paris; Professor Meindert Fennema, Universiteit van Amsterdam; Professor Dirk Jacobs, Université Libre de Bruxelles The rise of anti-immigration parties across Western Europe has put enormous pressure on mainstream parties to adapt their competitive strategies. This thesis tests the hypothesis tha...

  11. Clinical Evaluation of an Adhesive Sealant for Controlling Dental Caries in Naval Personnel: One-Year Results

    Science.gov (United States)

    1974-08-20

    safeguards were employed in that a voltage monitoring strip chart recorder was employed during sealant placement to ensure continuous delivery of...adequate voltage to the ultraviolet light source (Nuva-Lite)* used for sealant polymerization. Also, a new air compressor** with suitable...ment effect in young Naval personnel. 2. The very low occlusal attack rate for untreated control teeth 3. Dirks, O. B., Houwink, B. and Kwant, G. W

  12. The Elusiveness of Welfare-State Specificity

    Directory of Open Access Journals (Sweden)

    Tahl Kaminer

    2015-12-01

    Review of Architecture and the Welfare State, edited by Mark Swenarton, Tom Avermaete and Dirk van den Heuvel (Oxon and New York: Routledge, 2015). Incomprehensibly, the relation of architecture to society is, on the one hand, a trivial fact, and, on the other, a perplexing assumption. Trivial, because the evidence of the tight relationship is ubiquitous, screaming its existence from the tops of skyscrapers, from the basements of gloomy panopticon prisons, and from the doorsteps of Levittown houses. Perplexing, because, despite such an abundance of evidence, the actual form of such a relationship remains contested and, mostly, obscure. This review article will interrogate the relation of architecture to society via the recently published anthology Architecture and the Welfare State, edited by Mark Swenarton, Tom Avermaete and Dirk van den Heuvel. The anthology postulates that a rigorous correlation can be established between architectural design and the welfare state. The review article, in turn, posits two questions to the anthology: what is specific about the welfare state which differentiates it from other societies of the era, and how is a rigorous correlation of a specific form of architecture to the welfare state established, beyond limited notions such as zeitgeist?

  13. Venemaa on meie enda loodud oht / Noam Chomsky ; interv. Dirk Hoyer

    Index Scriptorium Estoniae

    Chomsky, Noam, 1928-

    2008-01-01

    The well-known American opinion leader speaks about freedom of speech and of the press in the USA, attitudes towards the Iraq war, how the public relations industry and party leaderships sell their products during the presidential election campaign, the catastrophic state of the health care system in the USA, the choice between presidential candidates Barack Obama and John McCain, the militarisation of space, attitudes towards 11 September, Russia as a threat to Baltic security, NATO as a guarantor of security, and energy resources as instruments of intimidation

  14. Stemcell Information: SKIP001188 [SKIP Stemcell Database[Archive

    Lifescience Database Archive (English)

    Arthur and Sonia Labatt Brain Tumor Research Center and Developmental and Stem Cell Biology, The Hospital for Sick Children (SickKids). Peter B. Dirks

  15. Scalable Biomarker Discovery for Diverse High-Dimensional Phenotypes

    Science.gov (United States)

    2015-11-23

    William D. Shannon, Richard R. Sharp, Thomas J. Sharpton, Narmada Shenoy, Nihar U. Sheth, Gina A. Simone, Indresh Singh, Christopher S. Smillie, Jack D...Susanne J. Szabo, Jeff Porter, Harri Lähdesmäki, Curtis Huttenhower, Dirk Gevers, Thomas W. Cullen, Mikael Knip, on behalf of the DIABIMMUNE Study Group

  16. A Spatial and Temporal Characterization of the Background Neutron Environment at the Navy and Marine Corps Memorial Stadium

    Science.gov (United States)

    2017-04-01

    Naval Academy Annapolis, MD Abstract This project utilized neutron detection near the Naval Academy football stadium in order to map and quantify...Introduction The Navy and Marine Corps Memorial Stadium is the U.S. Naval Academy’s football venue in Annapolis, Maryland, with a seating capacity of...Ziegler and H. Puchner, SER - History , Trends and Challenges A Guide for Designing with Memory ICs, San Jose: Cypress, 2004. [7] J.D. Dirk et al

  17. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has been recently adopted industry wide. The focus of the discussion is on the validation plan for a code, FACTAR, for application in assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  18. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is > 4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of ≥ 64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
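
    As a schematic illustration of the class-specific regression approach attributed to ECOSAR above (a structural class plus log Kow feeding a linear equation), the Python sketch below predicts an acute LC50 from log Kow. The class names, coefficients and unit handling are invented placeholders, not ECOSAR's actual equations.

      # Class-specific QSAR of the generic form log10(LC50 [mmol/L]) = slope*log10(Kow) + intercept.
      # The classes and coefficients below are invented for illustration only.
      COEFFICIENTS = {
          "neutral_organics": (-0.85, 1.70),
          "esters": (-0.70, 1.20),
      }

      def predict_lc50_mg_per_l(chem_class, log_kow, mol_weight):
          """Predict an acute LC50 in mg/L from log Kow for a given structural class."""
          slope, intercept = COEFFICIENTS[chem_class]
          log_lc50_mmol = slope * log_kow + intercept
          return (10.0 ** log_lc50_mmol) * mol_weight   # mmol/L times g/mol gives mg/L

      # Hypothetical chemical: a neutral organic with log Kow = 2.5 and MW = 120 g/mol.
      print(round(predict_lc50_mg_per_l("neutral_organics", 2.5, 120.0), 1))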

  19. Oxygen recoil implant from SiO2 layers into single-crystalline silicon

    International Nuclear Information System (INIS)

    Wang, G.; Chen, Y.; Li, D.; Oak, S.; Srivastav, G.; Banerjee, S.; Tasch, A.; Merrill, P.; Bleiler, R.

    2001-01-01

    It is important to understand the distribution of recoil-implanted atoms and the impact on device performance when ion implantation is performed at a high dose through surface materials into single crystalline silicon. For example, in ultralarge scale integration impurity ions are often implanted through a thin layer of screen oxide and some of the oxygen atoms are inevitably recoil implanted into single-crystalline silicon. Theoretical and experimental studies have been performed to investigate this phenomenon. We have modified the Monte Carlo ion implant simulator, UT-Marlowe (B. Obradovic, G. Wang, Y. Chen, D. Li, C. Snell, and A. F. Tasch, UT-MARLOWE Manual, 1999), which is based on the binary collision approximation, to follow the full cascade and to dynamically modify the stoichiometry of the Si layer as oxygen atoms are knocked into it. CPU reduction techniques are used to relieve the demand on computational power when such a full cascade simulation is involved. Secondary ion mass spectrometry (SIMS) profiles of oxygen have been carefully obtained for high dose As and BF2 implants at different energies through oxide layers of various thicknesses, and the simulated oxygen profiles are found to agree very well with the SIMS data. © 2001 American Institute of Physics

  20. Slow Money for Soft Energy: Lessons for Energy Finance from the Slow Money Movement

    Energy Technology Data Exchange (ETDEWEB)

    Kock, Beaudry E. [Environmental Change Institute, University of Oxford, Oxford (United Kingdom)], e-mail: beaudry.kock@ouce.ox.ac.uk

    2012-12-15

    Energy infrastructure is decarbonizing, shifting from dirty coal to cleaner gas- and emissions-free renewables. This is an important and necessary change that unfortunately risks preserving many problematic technical and institutional properties of the old energy system: in particular, the large scales, high aggregation, and excessive centralization of renewable energy infrastructure and, importantly, its financing. Large-scale renewables carry environmental, social and political risks that cannot be ignored, and more importantly they may not alone accomplish the necessary decarbonization of the power sector. We need to revive a different approach to clean energy infrastructure: a 'softer' (Lovins 1978), more distributed, decentralized, local-scale strategy. To achieve this, we need a fundamentally different approach to the financing of clean energy infrastructure. I propose we learn from the 'Slow Money' approach being pioneered in sustainable agriculture (Tasch 2010), emphasizing a better connection to place, smaller scales, and a focus on quality over quantity. This 'slow money, soft energy' vision is not a repudiation of big-scale renewables, since there are some societal needs, which can only be met by big, centralized power. But we do not need the level of concentration in control and finance epitomized by the current trends in the global renewables sector: this can and must change.

  1. Review of Interorganizational Trust Models

    Science.gov (United States)

    2010-09-01

    but the research results proved to be relatively meagre. Although we found numerous models of trust...rooted in common values, including a common concept of moral obligation. This type of trust typically takes a long time to develop, and is the type of...Perspectives on relationship repair and implications (Dirks et al., 2009, p. 72). Attributional theories propose that one party uses information about a

  2. Proceedings of the Frontiers of Retrovirology Conference 2016

    OpenAIRE

    Zurnic, Irena; Hütter, Sylvia; Lehmann, Ute; Stanke, Nicole; Reh, Juliane; Kern, Tobias; Lindel, Fabian; Gerresheim, Gesche; Hamann, Martin; Müllers, Erik; Lesbats, Paul; Cherepanov, Peter; Serrao, Erik; Engelman, Alan; Lindemann, Dirk

    2016-01-01

    Table of contents Oral presentations Session 1: Entry & uncoating O1 Host cell polo-like kinases (PLKs) promote early prototype foamy virus (PFV) replication Irena Zurnic, Sylvia Hütter, Ute Lehmann, Nicole Stanke, Juliane Reh, Tobias Kern, Fabian Lindel, Gesche Gerresheim, Martin Hamann, Erik Müllers, Paul Lesbats, Peter Cherepanov, Erik Serrao, Alan Engelman, Dirk Lindemann O2 A novel entry/uncoating assay reveals the presence of at least two species of viral capsids during synchronized HIV...

  3. Proceedings of the Frontiers of Retrovirology Conference 2016

    OpenAIRE

    Zurnic, Irena; Hütter, Sylvia; Lehmann, Ute; Stanke, Nicole; Reh, Juliane; Kern, Tobias; Lindel, Fabian; Gerresheim, Gesche; Hamann, Martin; Müllers, Erik; Lesbats, Paul; Cherepanov, Peter; Serrao, Erik; Engelman, Alan; Lindemann, Dirk

    2016-01-01

    Table of contents Oral presentations Session 1: Entry & uncoating O1 Host cell polo-like kinases (PLKs) promote early prototype foamy virus (PFV) replication Irena Zurnic, Sylvia Hütter, Ute Lehmann, Nicole Stanke, Juliane Reh, Tobias Kern, Fabian Lindel, Gesche Gerresheim, Martin Hamann, Erik Müllers, Paul Lesbats, Peter Cherepanov, Erik Serrao, Alan Engelman, Dirk Lindemann O2 A novel entry/uncoating assay reveals the presence of at least two species of viral capsids during synchronized HIV...

  4. Electron scattering. Lectures given at Argonne National Laboratory

    International Nuclear Information System (INIS)

    Walecka, J.D.

    1984-01-01

    This report is an almost verbatim copy of lectures on Electron Scattering given at Argonne National Laboratory in the Fall of 1982 by John Dirk Walecka. Professor Walecka was an Argonne Fellow in the Physics Division from October 1982 to January 1983. Broad headings include general considerations, coincidence cross section (e,e'x), quantum electrodynamics and radiative corrections, unification of electroweak interactions, relativistic models of nuclear structure, electroproduction of pions and nucleon resonances, and deep inelastic (e,e')

  5. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    ...) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one data set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated ... to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction-, classification-, time series forecasting-, modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight ...

  6. Nonlinear Frequency Conversion in III-V Semiconductor Photonic Crystals

    Science.gov (United States)

    2012-03-01

    benefited early in my PhD from the assistance of Dirk Englund and Vanessa Sih. Subsequently, I was fortunate to collaborate with newer members of the...to defense practice talks. I would also like to acknowledge Seth Lloyd, Vicky Wen, Jason Pelc, Qiang Zhang, Peter McMahon, Liz Edwards, Stephanie...2006. [67] Z. Yang, P. Chak, A. D. Bristow, H. M. van Driel, R. Iyer, J. S. Aitchison, A. L. Smirl, and J. E. Sipe, “Enhanced second-harmonic

  7. Igal pool ja mitte kuskil / Piibe Piirma

    Index Scriptorium Estoniae

    Piirma, Piibe

    2006-01-01

    On the history and development of net art. On the work of the Korean artist Nam June Paik, the Russian artists Aleksei Shulgin and Olja Ljalina, the Dutch photographer Joan Heemskerk, the Belgian artist Dirk Paesmans, the Slovenian artists Vuk Cosic and Marko Pelijhan, the Canadian artist Robert Adrian X, and the German artists Wolfgang Staehle, Rena Tangens and Cornelia Sollfrank. On the performance "Toywar" by the Swiss art group "Etoy". Briefly on Estonian net art and the work of Mare Tralla and Tiia Johannson. Comments by Heie Treier and Raivo Kelomees

  8. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First a review of 2 volumes of the International Journal of Nursing Studies was conducted, with only 1 article out of 13 which documented content validity doing so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of 38 items, those with CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
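
    A minimal Python sketch of the item-level content validity index (CVI) computation described above: each expert rates each item on the 4-point scale, the CVI of an item is the proportion of experts giving it a rating of 3 or 4, and items with CVI over 0.75 are retained. The expert ratings below are invented for illustration; only the 4-point scale and the 0.75 cut-off come from the abstract.

      import numpy as np

      # Hypothetical relevance ratings from three experts for six items on the usual
      # 4-point scale (1 = not relevant ... 4 = highly relevant). All values invented.
      ratings = np.array([
          [4, 3, 4, 2, 4, 1],   # expert 1
          [4, 4, 3, 2, 4, 2],   # expert 2
          [3, 4, 4, 1, 3, 2],   # expert 3
      ])

      # Item-level CVI: proportion of experts rating the item 3 or 4.
      i_cvi = (ratings >= 3).mean(axis=0)

      # Retain items whose CVI exceeds 0.75, as in the abstract.
      kept = [i + 1 for i, cvi in enumerate(i_cvi) if cvi > 0.75]
      print("I-CVI per item:", i_cvi)
      print("Items retained (CVI > 0.75):", kept)
      print("Mean CVI of the item pool:", round(float(i_cvi.mean()), 2))

    In the study itself, 38 items were rated on relevance, clarity, simplicity and ambiguity and 25 items survived the 0.75 cut-off; the toy data above only show the mechanics for a single criterion.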

  9. Both Islam and Christianity Invite to Tolerance: A Commentary on Dirk Baier.

    Science.gov (United States)

    Salamati, Payman; Naji, Zohrehsadat; Koutlaki, Sofia A; Rahimi-Movaghar, Vafa

    2015-12-01

    Baier recently published an interesting original article in the Journal of Interpersonal Violence. He compared violent behavior (VB) between Christians and Muslims and concluded that religiosity was not a protecting factor against violence and that Muslim religiosity associated positively with increased VB. We appreciate the author's enormous efforts on researching such an issue of relevance to today's world. However, in our view, the article has methodological weaknesses in terms of participants, instruments, and statistical analyses, which we examine in detail. Therefore, Baier's results should be interpreted more cautiously. Although interpersonal violence may sometimes be observable among Muslims, we do not attribute these to Islam's teachings. In our opinion, both Islam and Christianity invite to tolerance, peace, and friendship. So, the comparison of such differences and the drawing of conclusions that may reflect negatively on specific religious groups need better defined research, taking into consideration other basic variables in different communities. © The Author(s) 2014.

  10. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  11. CosmoQuest:Using Data Validation for More Than Just Data Validation

    Science.gov (United States)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g. mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it's happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  12. The measurement of instrumental ADL: content validity and construct validity

    DEFF Research Database (Denmark)

    Avlund, K; Schultz-Larsen, K; Kreiner, S

    1993-01-01

    ... showed that 14 items could be combined into two qualitatively different additive scales. The IADL-measure complies with demands for content validity, distinguishes between what the elderly actually do, and what they are capable of doing, and is a good discriminator among the group of elderly persons who do not depend on help. It is also possible to add the items in a valid way. However, to obtain valid IADL-scales, we omitted items that were highly relevant to especially elderly women, such as house-work items. We conclude that the criteria employed for this IADL-measure are somewhat contradictory.

  13. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  14. Italian version of Dyspnoea-12: cultural-linguistic validation, quantitative and qualitative content validity study.

    Science.gov (United States)

    Caruso, Rosario; Arrigoni, Cristina; Groppelli, Katia; Magon, Arianna; Dellafiore, Federica; Pittella, Francesco; Grugnetti, Anna Maria; Chessa, Massimo; Yorke, Janelle

    2018-01-16

    Dyspnoea-12 is a valid and reliable scale for assessing the symptom of dyspnoea, considering its severity and its physical and emotional components. However, it is not available in an Italian version, as it had not yet been translated and validated. For this reason, the aim of this study was to develop an Italian version of Dyspnoea-12, providing a cultural and linguistic validation supported by quantitative and qualitative content validity. This was a methodological study divided into two phases: phase one concerned the cultural and linguistic validation, phase two the testing of quantitative and qualitative content validity. Linguistic validation followed a standardized translation process. Quantitative content validity was assessed by computing the content validity ratio (CVR) and index (I-CVIs and S-CVI) from the expert panellists' responses. Qualitative content validity was assessed by narrative analysis of the panellists' answers to three open-ended questions, aimed at investigating the clarity and pertinence of the Italian items. The translation process found good agreement on the clarity of the items, both among the six bilingual expert translators involved and among the ten voluntarily involved patients. CVR, I-CVIs and S-CVI were satisfactory for all the translated items. This study represents a pivotal step towards using Dyspnoea-12 among Italian patients. Future research is needed to investigate in depth the construct validity and reliability of the Italian version of Dyspnoea-12, and to describe how the components of dyspnoea (i.e. physical and emotional) affect the lives of patients with cardiorespiratory diseases.

  15. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  16. Validation of Symptom Validity Tests Using a "Child-model" of Adult Cognitive Impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P. E. J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  17. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  18. Nonlinear mechanics a supplement to theoretical mechanics of particles and continua

    CERN Document Server

    Fetter, Alexander L

    2006-01-01

    In their prior Dover book, Theoretical Mechanics of Particles and Continua, Alexander L. Fetter and John Dirk Walecka provided a lucid and self-contained account of classical mechanics, together with appropriate mathematical methods. This supplement, an update of that volume, offers a bridge to contemporary mechanics. The original book's focus on continuum mechanics, with chapters on sound waves in fluids, surface waves on fluids, heat conduction, and viscous fluids, forms the basis for this supplement's discussion of nonlinear continuous systems. Topics include linearized stability analysis; a det

  19. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  20. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  1. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
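
    The record does not describe FastaValidator's Java API, so the following is only a generic Python sketch of the kind of structural checks a FASTA validator performs (header lines beginning with '>', non-empty records, and sequence lines restricted to an allowed alphabet); the function name and the nucleotide alphabet are assumptions chosen for illustration.

        def validate_fasta(lines, alphabet=set("ACGTUNacgtun")):
            """Return a list of error messages for FASTA-formatted input lines."""
            errors = []
            current_header = None
            seq_len = 0
            for lineno, line in enumerate(lines, start=1):
                line = line.rstrip("\n")
                if not line:
                    continue
                if line.startswith(">"):
                    # A new record starts; the previous one must have had sequence data.
                    if current_header is not None and seq_len == 0:
                        errors.append(f"record '{current_header}' has no sequence")
                    current_header = line[1:].strip() or f"(unnamed, line {lineno})"
                    seq_len = 0
                else:
                    if current_header is None:
                        errors.append(f"line {lineno}: sequence data before any header")
                    bad = set(line) - alphabet
                    if bad:
                        errors.append(f"line {lineno}: invalid characters {sorted(bad)}")
                    seq_len += len(line)
            if current_header is not None and seq_len == 0:
                errors.append(f"record '{current_header}' has no sequence")
            return errors

        print(validate_fasta([">seq1", "ACGT", ">seq2", "ACGX"]))
        # -> ["line 4: invalid characters ['X']"]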

  2. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  3. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.
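
    A common way to summarize benchmark comparisons of this kind is the calculated-to-experimental (C/E) ratio, checked against the combined uncertainty of the two values. The sketch below (Python, with invented k-effective numbers, not results from the MCNP suites) illustrates that bookkeeping.

        import math

        def ce_ratio(calc, calc_unc, exp, exp_unc):
            """Return the C/E ratio and the number of combined standard
            deviations by which the calculation differs from the benchmark."""
            ratio = calc / exp
            combined = math.sqrt(calc_unc**2 + exp_unc**2)
            n_sigma = abs(calc - exp) / combined
            return ratio, n_sigma

        # Hypothetical criticality benchmark: calculated and measured k-eff.
        ratio, n_sigma = ce_ratio(calc=0.9982, calc_unc=0.0004, exp=1.0000, exp_unc=0.0010)
        print(f"C/E = {ratio:.4f}, discrepancy = {n_sigma:.1f} sigma")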

  4. The nuclear era. From the nuclear fission to the disposal; Das nukleare Zeitalter. Von der Kernspaltung bis zur Entsorgung

    Energy Technology Data Exchange (ETDEWEB)

    Eidemueller, Dirk

    2012-07-01

    Nuclear energy is a controversial topic, and the debate between proponents and opponents is often highly emotional. As a result, not only the sober scientific facts but also socially significant points can easily be lost from view. Dirk Eidemueller closes these information gaps with his book. He explains the foundations of nuclear power and the risks connected with it: in uranium mining, in the proliferation of nuclear weapons, but particularly in the operation of nuclear power plants and the disposal of nuclear waste, which will keep us busy for generations to come.

  5. Thermal-induced changes on the properties of spin-coated P3HT:C60 thin films for solar cell applications

    CSIR Research Space (South Africa)

    Motaung, DE

    2009-09-01

    Full Text Available on the properties of spin-coated P3HT:C60 thin films for solar cell applications David E. Motaung1, 2, Gerald F. Malgas1,*, Christopher J. Arendse1, Sipho E. Mavundla1, 3 Clive J. Oliphant 1, 2 and Dirk Knoesen2 1National Centre for Nano...-structured Materials, Council for Scientific Industrial Research, P. O. Box 395, Pretoria, 0001, South Africa 2Department of Physics, University of the Western Cape, Private Bag X17, Bellville, 7535, South Africa 3Department of Chemistry, University of the Western...

  6. Validity in Qualitative Evaluation

    OpenAIRE

    Vasco Lub

    2015-01-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of con...

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass, with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)

  9. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    Science.gov (United States)

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.
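
    Convergent and divergent validity of this kind is typically quantified with correlations between the scale and related or unrelated measures. The sketch below (Python, using small invented score vectors rather than the study's data) computes those Pearson correlations with numpy.

        import numpy as np

        # Hypothetical scores for 8 participants (illustration only).
        saas   = np.array([42, 55, 38, 60, 47, 52, 35, 58])          # target scale
        fne    = np.array([30, 41, 28, 45, 33, 40, 25, 44])          # related construct
        height = np.array([175, 168, 172, 170, 181, 166, 178, 174])  # unrelated measure

        def pearson(x, y):
            """Pearson correlation coefficient between two score vectors."""
            return float(np.corrcoef(x, y)[0, 1])

        # In a convergent/divergent validity check, correlations with related
        # constructs should be high and correlations with unrelated measures low.
        print(f"r(target, related construct) = {pearson(saas, fne):.2f}")
        print(f"r(target, unrelated measure) = {pearson(saas, height):.2f}")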

  10. Validity in Qualitative Evaluation

    Directory of Open Access Journals (Sweden)

    Vasco Lub

    2015-12-01

    Full Text Available This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of connecting them with aspects of evaluation in social policy. It argues that different purposes of qualitative evaluations can be linked with different scientific paradigms and perspectives, thus transcending unproductive paradigmatic divisions as well as providing a flexible yet rigorous validity framework for researchers and reviewers of qualitative evaluations.

  11. An information architecture for validating courseware

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with the courseware specif...

  12. Lesson 6: Signature Validation

    Science.gov (United States)

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  13. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    Science.gov (United States)

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  14. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  15. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  16. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation is featured prominently in the literature on radioactive high-level waste disposal and is generally understood to be related to model testing using experiments. In a first class, validation is linked to the goal of predicting the physical world as faithfully as possible but is unattainable and unsuitable for setting goals for the safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class of definitions, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause for misperceptions and endless speculation. The paper proposes either abandoning the use of this term or agreeing to a definition which would be common to all. (J.S.). 29 refs

  17. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  18. NVN 5694 intra laboratory validation. Feasibility study for interlaboratory- validation

    International Nuclear Information System (INIS)

    Voors, P.I.; Baard, J.H.

    1998-11-01

    Within the project NORMSTAR 2 a number of Dutch prenormative protocols have been defined for radioactivity measurements. Some of these protocols, e.g. the Dutch prenormative protocol NVN 5694, titled Methods for radiochemical determination of polonium-210 and lead-210, have not been validated, neither by intralaboratory nor interlaboratory studies. Validation studies are conducted within the framework of the programme 'Normalisatie and Validatie van Milieumethoden 1993-1997' (Standardization and Validation of test methods for environmental parameters) of the Dutch Ministry of Housing, Physical Planning and the Environment (VROM). The aims of this study were (a) a critical evaluation of the protocol, (b) investigation on the feasibility of an interlaboratory study, and (c) the interlaboratory validation of NVN 5694. The evaluation of the protocol resulted in a list of deficiencies varying from missing references to incorrect formulae. From the survey by interview it appeared that for each type of material, there are 4 to 7 laboratories willing to participate in a interlaboratory validation study. This reflects the situation in 1997. Consequently, if 4 or 6 (the minimal number) laboratories are participating and each laboratory analyses 3 subsamples, the uncertainty in the repeatability standard deviation is 49 or 40 %, respectively. If the ratio of reproducibility standard deviation to the repeatability standard deviation is equal to 1 or 2, then the uncertainty in the reproducibility standard deviation increases from 42 to 67 % and from 34 to 52 % for 4 or 6 laboratories, respectively. The intralaboratory validation was established on four different types of materials. Three types of materials (milk powder, condensate and filter) were prepared in the laboratory using the raw material and certified Pb-210 solutions, and one (sediment) was obtained from the IAEA. The ECN-prepared reference materials were used after testing for homogeneity. The pre-normative protocol can
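
    The dependence of these percentages on the number of participating laboratories follows from the limited degrees of freedom available for estimating a standard deviation. The sketch below (Python) uses the common approximation u(s)/s ~ 1/sqrt(2*nu) for the relative standard uncertainty of an estimated standard deviation; it is offered only to illustrate the trend with the number of laboratories and replicates, it is not the calculation used in the record, and it will not reproduce the quoted percentages.

        import math

        def rel_uncertainty_of_sd(n_labs, n_replicates):
            """Approximate relative standard uncertainty of the repeatability
            standard deviation, using u(s)/s ~ 1/sqrt(2*nu) with
            nu = n_labs * (n_replicates - 1) degrees of freedom."""
            nu = n_labs * (n_replicates - 1)
            return 1.0 / math.sqrt(2.0 * nu)

        for labs in (4, 6, 8):
            u = rel_uncertainty_of_sd(labs, n_replicates=3)
            print(f"{labs} labs, 3 subsamples each: ~{u:.0%} relative uncertainty")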

  19. SHIELD verification and validation report

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    This document outlines the verification and validation effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system code. Along with its predecessors, SHIELD has been in use at the Savannah River Site (SRS) for more than ten years. During this time the code has been extensively tested and a variety of validation documents have been issued. The primary function of this report is to specify the features and capabilities for which SHIELD is to be considered validated, and to reference the documents that establish the validation

  20. Regstellende aksie, aliënasie en die nie-aangewese groep / Dirk Johannes Hermann

    OpenAIRE

    Hermann, Dirk Johannes

    2006-01-01

    Affirmative action is a central concept in South African politics and the workplace. The Employment Equity Act divides society into a designated group (blacks, women and people with disabilities) and a non-designated group (white men and white women). In this study, the influence of affirmative action on alienation of the non-designated group was investigated. Guidelines were also developed for employers in order to lead the non-designated group from a state of alienation to th...

  1. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    Science.gov (United States)

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  2. Comparative Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The scope of this subtask is to perform a comparative validation of the building simulation software for the buildings with the double skin façade. The outline of the results in the comparative validation identifies the areas where no correspondence is achieved, i.e. calculation of the air flow r...... is that the comparative validation can be regarded as the main argument to continue the validation of the building simulation software for the buildings with the double skin façade with the empirical validation test cases.

  3. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  4. Validation of a scenario-based assessment of critical thinking using an externally validated tool.

    Science.gov (United States)

    Buur, Jennifer L; Schmidt, Peggy; Smylie, Dean; Irizarry, Kris; Crocker, Carlos; Tyler, John; Barr, Margaret

    2012-01-01

    With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking. However, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and the externally validated instrument, California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations between ACT Blooms 2 and deductive reasoning and total ACT score and deductive reasoning were demonstrated with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher quality medical professionals.

  5. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    Full Text Available The application of games for behavioral change has seen a surge in popularity but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value whereas most quantitative findings from randomized controlled trials (RCT) are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we will apply this model to develop guidelines for setting up validation methods for serious games. This way, we offer game designers and researchers handles on how to develop tailor-made validation methods.

  6. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. This way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  7. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description

  8. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, from which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Solely simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are

  9. The validity of the 4-Skills Scan: A double validation study.

    Science.gov (United States)

    van Kernebeek, W G; de Kroon, M L A; Savelsbergh, G J P; Toussaint, H M

    2018-06-01

    Adequate gross motor skills are an essential aspect of a child's healthy development. Where physical education (PE) is part of the primary school curriculum, a strong curriculum-based emphasis on evaluation and support of motor skill development in PE is apparent. Monitoring motor development is then a task for the PE teacher. In order to fulfil this task, teachers need adequate tools. The 4-Skills Scan is a quick and easily manageable gross motor skill instrument; however, its validity has never been assessed. Therefore, the purpose of this study is to assess the construct and concurrent validity of both 4-Skills Scans (version 2007 and version 2015). A total of 212 primary school children (6-12 years old) was requested to participate in both versions of the 4-Skills Scan. For assessing construct validity, children covered an obstacle course with video recordings for observation by an expert panel. For concurrent validity, a comparison was made with the MABC-2, by calculating Pearson correlations. Multivariable linear regression analyses were performed to determine the contribution of each subscale to the construct of gross motor skills, according to the MABC-2 and the expert panel. Correlations between the 4-Skills Scans and expert valuations were moderate, with coefficients of .47 (version 2007) and .46 (version 2015). Correlations between the 4-Skills Scans and the MABC-2 (gross) were moderate (.56) for version 2007 and high (.64) for version 2015. It is concluded that both versions of the 4-Skills Scans are satisfactorily valid instruments for assessing gross motor skills during PE lessons. This article is protected by copyright. All rights reserved.

  10. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    Science.gov (United States)

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). A score above a designated cut-score signifies a 'passing' SVT performance, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, at or near chance, signifies invalid test performance. Significantly below chance is the sine qua non neuropsychological indicator for malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why the term performance validity testing (PVT) may be better than the term SVT are reviewed. Advances in neuroimaging techniques may be key in better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigidity in interpretation with established cut-scores. A better understanding of how certain types of neurological, neuropsychiatric and/or even test conditions may affect SVT performance is needed.

  11. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  12. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)

  13. Contextual Validity in Hybrid Logic

    DEFF Research Database (Denmark)

    Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin

    2013-01-01

    interpretations. Moreover, such indexicals give rise to a special kind of validity—contextual validity—that interacts with ordinary logical validity in interesting and often unexpected ways. In this paper we model these interactions by combining standard techniques from hybrid logic with insights from the work...... of Hans Kamp and David Kaplan. We introduce a simple proof rule, which we call the Kamp Rule, and first we show that it is all we need to take us from logical validities involving now to contextual validities involving now too. We then go on to show that this deductive bridge is strong enough to carry us...... to contextual validities involving yesterday, today and tomorrow as well....

  14. Works of Game

    DEFF Research Database (Denmark)

    Sharp, John

    and games has clouded for both artists and gamemakers. Contemporary art has drawn on the tool set of videogames, but has not considered them a cultural form with its own conceptual, formal, and experiential affordances. For their part, game developers and players focus on the innate properties of games...... and offers case studies for each. “Game Art,” which includes such artists as Julian Oliver, Cory Arcangel, and JODI (Joan Heemskerk and Dirk Paesmans) treats videogames as a form of popular culture from which can be borrowed subject matter, tools, and processes. “Artgames,” created by gamemakers including...

  15. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  16. CFD validation experiments for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.

  17. Human Factors methods concerning integrated validation of nuclear power plant control rooms; Metodutveckling foer integrerad validering

    Energy Technology Data Exchange (ETDEWEB)

    Oskarsson, Per-Anders; Johansson, Bjoern J.E.; Gonzalez, Natalia (Swedish Defence Research Agency, Information Systems, Linkoeping (Sweden))

    2010-02-15

    The frame of reference for this work was existing recommendations and instructions from the NPP area, experiences from the review of the Turbic Validation and experiences from system validations performed at the Swedish Armed Forces, e.g. concerning military control rooms and fighter pilots. These enterprises are characterized by complex systems in extreme environments, often with high risks, where human error can lead to serious consequences. A focus group has been performed with representatives responsible for Human Factors issues from all Swedish NPP:s. The questions that were discussed were, among other things, for whom an integrated validation (IV) is performed and its purpose, what should be included in an IV, the comparison with baseline measures, the design process, the role of SSM, which methods of measurement should be used, and how the methods are affected of changes in the control room. The report brings different questions to discussion concerning the validation process. Supplementary methods of measurement for integrated validation are discussed, e.g. dynamic, psychophysiological, and qualitative methods for identification of problems. Supplementary methods for statistical analysis are presented. The study points out a number of deficiencies in the validation process, e.g. the need of common guidelines for validation and design, criteria for different types of measurements, clarification of the role of SSM, and recommendations for the responsibility of external participants in the validation process. The authors propose 12 measures for taking care of the identified problems

  18. MARS Validation Plan and Status

    International Nuclear Information System (INIS)

    Ahn, Seung-hoon; Cho, Yong-jin

    2008-01-01

    The KINS Reactor Thermal-hydraulic Analysis System (KINS-RETAS) under development is directed toward a realistic analysis approach of best-estimate (BE) codes and realistic assumptions. In this system, MARS is pivotal in providing the BE thermal-hydraulic (T-H) response of the core and reactor coolant system to various operational transients and accident conditions. As required for other BE codes, qualification is essential to ensure reliable and reasonable accuracy for a targeted MARS application. Validation is a key element of code qualification, and determines the capability of a computer code in predicting the major phenomena expected to occur. The MARS validation was made by its developer KAERI, on the basic premise that its backbone code RELAP5/MOD3.2 is well qualified against analytical solutions, test or operational data. A screening was made to select the test data for MARS validation; some models transplanted from RELAP5, if already validated and found to be acceptable, were screened out from assessment. This seems reasonable, but does not demonstrate whether code adequacy complies with the software QA guidelines. In particular, there may be much difficulty in validating life-cycle products such as code updates or modifications. This paper presents the plan for MARS validation, and the current implementation status.

  19. Reliability and validity in a nutshell.

    Science.gov (United States)

    Bannigan, Katrina; Watson, Roger

    2009-12-01

    To explore and explain the different concepts of reliability and validity as they relate to measurement instruments in social science and health care. The terms reliability and validity cover several distinct concepts; these are often explained poorly and are frequently confused with one another. To develop some clarity about reliability and validity, a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability covers the concepts of internal consistency, stability and equivalence. Validity covers the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant validity. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.

  20. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    Science.gov (United States)

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) National Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. The NSI, Mild Brain Injury Atypical Symptoms, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale with a T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
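
    As a rough illustration of how such an embedded validity scale and its cut score might be derived, the sketch below sums a hypothetical subset of symptom items and sweeps candidate cut scores against an external invalidity criterion. The item subset, sample size and data are invented and are not the study's materials; Python with NumPy is assumed.

        import numpy as np

        def validity_scale_score(item_scores, scale_items):
            """Sum the selected embedded-validity items for each respondent."""
            return item_scores[:, scale_items].sum(axis=1)

        def choose_cut_score(scale_scores, invalid_criterion):
            """Pick the cut score that maximises Youden's J (sensitivity + specificity - 1)."""
            best_cut, best_j = None, -1.0
            for cut in np.unique(scale_scores):
                flagged = scale_scores >= cut
                sens = np.mean(flagged[invalid_criterion])    # flagged among criterion-invalid
                spec = np.mean(~flagged[~invalid_criterion])  # not flagged among criterion-valid
                j = sens + spec - 1.0
                if j > best_j:
                    best_cut, best_j = cut, j
            return best_cut, best_j

        # Hypothetical data: 200 respondents, 22 symptom items scored 0-4, and a binary
        # invalidity criterion (e.g., an atypical-symptoms score above some threshold).
        rng = np.random.default_rng(0)
        items = rng.integers(0, 5, size=(200, 22))
        criterion = rng.random(200) < 0.15
        validity_10 = validity_scale_score(items, scale_items=list(range(10)))  # hypothetical item subset
        cut, j = choose_cut_score(validity_10, criterion)
        print(f"chosen cut score: {cut}, Youden's J: {j:.2f}")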

  1. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  2. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO 2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  3. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  4. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformation verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.
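
    To make the test-driven idea concrete, here is a minimal sketch (not the authors' framework): random input models are generated for a toy model-to-model transformation and a single preservation property is checked on each. The model structure, the transformation and the property are invented for illustration.

        import random

        def generate_state_machine(n_states=5, n_transitions=8, seed=None):
            """Randomly generate a toy input model: a labelled transition system."""
            rng = random.Random(seed)
            states = [f"s{i}" for i in range(n_states)]
            transitions = [(rng.choice(states), rng.choice("ab"), rng.choice(states))
                           for _ in range(n_transitions)]
            return {"states": states, "transitions": transitions}

        def to_adjacency(model):
            """The transformation under test: rewrite the model as an adjacency mapping."""
            adj = {s: [] for s in model["states"]}
            for src, label, dst in model["transitions"]:
                adj[src].append((label, dst))
            return adj

        def property_preserved(model, transformed):
            """Validation property: every transition of the source model survives the transformation."""
            flattened = [(src, lbl, dst) for src, outs in transformed.items() for lbl, dst in outs]
            return sorted(flattened) == sorted(model["transitions"])

        # Test-driven validation: many generated input models, one preserved property.
        for seed in range(100):
            m = generate_state_machine(seed=seed)
            assert property_preserved(m, to_adjacency(m)), f"property violated for seed {seed}"
        print("transformation preserved all transitions on 100 generated models")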

  5. [Validation of the IBS-SSS].

    Science.gov (United States)

    Betz, C; Mannsdörfer, K; Bischoff, S C

    2013-10-01

    Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterised by abdominal pain, associated with stool abnormalities and changes in stool consistency. Diagnosis of IBS is based on characteristic symptoms and exclusion of other gastrointestinal diseases. A number of questionnaires exist to assist diagnosis and assessment of severity of the disease. One of these is the irritable bowel syndrome - severity scoring system (IBS-SSS). The IBS-SSS was validated in 1997 in its English version. In the present study, the IBS-SSS has been validated in the German language. To do this, a cohort of 60 patients with IBS according to the Rome III criteria was compared with a control group of healthy individuals (n = 38). We studied the sensitivity and reproducibility of the score, as well as its sensitivity in detecting changes of symptom severity. The results of the German validation largely reflect the results of the English validation. The German version of the IBS-SSS is also a valid, meaningful and reproducible questionnaire with a high sensitivity to assess changes in symptom severity, especially in IBS patients with moderate symptoms. It is unclear if the IBS-SSS is also a valid questionnaire in IBS patients with severe symptoms because this group of patients was not studied. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Further Validation of the IDAS: Evidence of Convergent, Discriminant, Criterion, and Incremental Validity

    Science.gov (United States)

    Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott

    2008-01-01

    The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…

  7. The Role of Generalizability in Validity.

    Science.gov (United States)

    Kane, Michael

    The relationship between generalizability and validity is explained, making four important points. The first is that generalizability coefficients provide upper bounds on validity. The second point is that generalization is one step in most interpretive arguments, and therefore, generalizability is a necessary condition for the validity of these…

  8. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation
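
    A minimal sketch of the two strategies being compared, using a made-up linear limited sampling model that estimates AUC from two concentration time points. The coefficients, sample sizes and noise level are illustrative only, and scikit-learn is assumed for the regression and leave-one-out machinery.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(1)

        # Hypothetical data set: concentrations at t = 1 h and t = 4 h, and the "true" AUC.
        n = 60
        c1, c4 = rng.lognormal(1.0, 0.3, n), rng.lognormal(0.3, 0.3, n)
        auc_true = 2.1 * c1 + 5.4 * c4 + rng.normal(0.0, 0.5, n)
        X = np.column_stack([c1, c4])

        # Strategy 1: separate validation group (fit on 40 patients, validate on 20).
        model = LinearRegression().fit(X[:40], auc_true[:40])
        pred_indep = model.predict(X[40:])
        rmse_indep = np.sqrt(np.mean((pred_indep - auc_true[40:]) ** 2))

        # Strategy 2: leave-one-out cross-validation on the full data set.
        pred_cv = cross_val_predict(LinearRegression(), X, auc_true, cv=LeaveOneOut())
        rmse_cv = np.sqrt(np.mean((pred_cv - auc_true) ** 2))

        print(f"RMSE, independent group: {rmse_indep:.2f}")
        print(f"RMSE, leave-one-out CV:  {rmse_cv:.2f}")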

  9. Construct Validity: Advances in Theory and Methodology

    OpenAIRE

    Strauss, Milton E.; Smith, Gregory T.

    2009-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review f...

  10. Simulation Validation for Societal Systems

    National Research Council Canada - National Science Library

    Yahja, Alex

    2006-01-01

    .... There are, however, substantial obstacles to validation. The nature of modeling means that there are implicit model assumptions, a complex model space and interactions, emergent behaviors, and uncodified and inoperable simulation and validation knowledge...

  11. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  12. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
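
    As a loose illustration of an aggregate multivariate validation metric of the kind discussed, the sketch below compares a predicted response vector with repeated measurements via a Mahalanobis-type distance referred to a chi-square distribution. The normality assumption, data and covariance values are invented, and this is not the paper's specific formulation.

        import numpy as np
        from scipy import stats

        def multivariate_validation_metric(predicted, measured, error_cov):
            """Aggregate comparison of a multivariate prediction with repeated measurements.

            Returns the squared Mahalanobis distance between the predicted response vector
            and the sample mean of the measurements, plus the corresponding chi-square
            p-value (small p-values signal model/data disagreement).
            """
            mean_meas = measured.mean(axis=0)
            diff = mean_meas - predicted
            cov_mean = error_cov / measured.shape[0]   # covariance of the sample mean
            d2 = float(diff @ np.linalg.inv(cov_mean) @ diff)
            p_value = 1.0 - stats.chi2.cdf(d2, df=len(predicted))
            return d2, p_value

        # Hypothetical example: a model predicts three correlated response quantities.
        rng = np.random.default_rng(42)
        predicted = np.array([1.00, 2.50, 0.80])
        error_cov = np.array([[0.04, 0.01, 0.00],
                              [0.01, 0.09, 0.02],
                              [0.00, 0.02, 0.05]])
        measured = rng.multivariate_normal(mean=[1.05, 2.40, 0.85], cov=error_cov, size=10)

        d2, p = multivariate_validation_metric(predicted, measured, error_cov)
        print(f"Mahalanobis distance^2 = {d2:.2f}, p-value = {p:.3f}")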

  13. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  14. ValidatorDB: database of up-to-date validation results for ligands and non-standard residues from the Protein Data Bank.

    Science.gov (United States)

    Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav

    2015-01-01

    Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
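
    For models that are linear in their parameters, leave-one-out residuals can be obtained without retraining through the standard hat-matrix identity, which conveys the flavour of the linear unlearning idea. The sketch below is that classical ordinary-least-squares shortcut on synthetic data, not the LULOO algorithm itself.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy system-identification data: y depends linearly on two inputs plus noise.
        X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
        y = X @ np.array([0.5, 1.5, -2.0]) + rng.normal(0.0, 0.2, 50)

        # Fit once on all data.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
        h = np.diag(H)

        # Leave-one-out residuals without retraining: e_i / (1 - h_ii).
        loo_residuals = residuals / (1.0 - h)
        loo_mse = np.mean(loo_residuals ** 2)

        # Brute-force check: retrain with each example removed.
        brute = []
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            b_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
            brute.append(y[i] - X[i] @ b_i)
        print(f"shortcut LOO MSE = {loo_mse:.4f}, brute-force LOO MSE = {np.mean(np.square(brute)):.4f}")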

  16. Validation of the Danish PAROLE lexicon (upubliceret)

    DEFF Research Database (Denmark)

    Møller, Margrethe; Christoffersen, Ellen

    2000-01-01

    This validation is based on the Danish PAROLE lexicon dated June 20, 1998, downloaded on March 16, 1999. Subsequently, the developers of the lexicon have informed us that they have been revising the lexicon, in particular the morphological level. Morphological entries were originally generated automatically from a machine-readable version of the Official Danish Spelling Dictionary (Retskrivningsordbogen 1986, in the following RO86), and this resulted in some overgeneration, which the developers started eliminating after submitting the Danish PAROLE lexicon for validation. The present validation is, however, based on the January 1997 version of the lexicon. The validation as such complies with the specifications described in ELRA validation manuals for lexical data, i.e. Underwood and Navaretta: "A Draft Manual for the Validation of Lexica, Final Report" [Underwood & Navaretta 1997] and Braasch: "A...

  17. Validating presupposed versus focused text information.

    Science.gov (United States)

    Singer, Murray; Solar, Kevin G; Spear, Jackie

    2017-04-01

    There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.

  18. Validity evidence based on test content.

    Science.gov (United States)

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  19. The validation of an infrared simulation system

    CSIR Research Space (South Africa)

    De Waal, A

    2013-08-01

    ... theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques...

  20. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    ... to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions. The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included...

  1. Transient FDTD simulation validation

    OpenAIRE

    Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando

    2010-01-01

    In computational electromagnetic simulations, most validation methods developed to date are meant to be used in the frequency domain. However, EMC analysis of systems in the frequency domain is often not enough to evaluate the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing a rapid and objective quantification of the simulation results.

  2. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  3. Leptospira Species in Feral Cats and Black Rats from Western Australia and Christmas Island.

    Science.gov (United States)

    Dybing, Narelle A; Jacobson, Caroline; Irwin, Peter; Algar, David; Adams, Peter J

    2017-05-01

    Leptospirosis is a neglected, re-emerging bacterial disease with both zoonotic and conservation implications. Rats and livestock are considered the usual sources of human infection, but all mammalian species are capable of carrying Leptospira spp. and transmitting pathogenic leptospires in their urine, and uncertainty remains about the ecology and transmission dynamics of Leptospira in different regions. In light of a recent case of human leptospirosis on tropical Christmas Island, this study aimed to investigate the role of introduced animals (feral cats and black rats) as carriers of pathogenic Leptospira spp. on Christmas Island and to compare this with two different climatic regions of Western Australia (one island and one mainland). Kidney samples were collected from black rats (n = 68) and feral cats (n = 59) from Christmas Island, as well as feral cats from Dirk Hartog Island (n = 23) and southwest Western Australia (n = 59). Molecular (PCR) screening detected pathogenic leptospires in 42.4% (95% confidence interval 29.6-55.9) of cats and 2.9% (0.4-10.2) of rats from Christmas Island. Sequencing of cat- and rat-positive samples from Christmas Island showed 100% similarity for Leptospira interrogans. Pathogenic leptospires were not detected in cats from Dirk Hartog Island or southwest Western Australia. These findings were consistent with previous reports of higher Leptospira spp. prevalence in tropical regions compared with arid and temperate regions. Despite the abundance of black rats on Christmas Island, feral cats appear to be the more important reservoir species for the persistence of pathogenic L. interrogans on the island. This research highlights the importance of disease surveillance and feral animal management to effectively control potential disease transmission.

  4. Cleaning Validation of Fermentation Tanks

    DEFF Research Database (Denmark)

    Salo, Satu; Friis, Alan; Wirtanen, Gun

    2008-01-01

    Reliable test methods for checking cleanliness are needed to evaluate and validate the cleaning process of fermentation tanks. Pilot scale tanks were used to test the applicability of various methods for this purpose. The methods found to be suitable for validation of the cleanliness were visual...

  5. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  6. The development of a self-administered dementia checklist: the examination of concurrent validity and discriminant validity.

    Science.gov (United States)

    Miyamae, Fumiko; Ura, Chiaki; Sakuma, Naoko; Niikawa, Hirotoshi; Inagaki, Hiroki; Ijuin, Mutsuo; Okamura, Tsuyoshi; Sugiyama, Mika; Awata, Shuichi

    2016-01-01

    The present study aims to develop a self-administered dementia checklist to enable community-residing older adults to realize their declining functions and start using necessary services. A previous study confirmed the factorial validity and internal reliability of the checklist. The present study examined its concurrent validity and discriminant validity. The authors conducted a 3-step study (a self-administered survey including the checklist, interviews by nurses, and interviews by doctors and psychologists) of 7,682 community-residing individuals who were over 65 years of age. The authors calculated Spearman's correlation coefficients between the scores of the checklist and the results of a psychological test to examine the concurrent validity. They also compared the average total scores of the checklist between groups with different Clinical Dementia Rating (CDR) scores to examine discriminant validity and conducted a receiver operating characteristic analysis to examine the discriminative power for dementia. The authors analyzed the data of 131 respondents who completed all 3 steps. The checklist scores were significantly correlated with the respondents' Mini-Mental State Examination and Frontal Assessment Battery scores. The checklist also significantly discriminated the patients with dementia (CDR = 1+) from those without dementia (CDR = 0 or 0.5). The optimal cut-off point for the two groups was 17/18 (sensitivity, 72.0%; specificity, 69.2%; positive predictive value, 69.2%; negative predictive value, 72.0%). This study confirmed the concurrent validity and discriminant validity of the self-administered dementia checklist. However, due to its insufficient discriminative power as a screening tool for older people with declining cognitive functions, the checklist is only recommended as an educational and public awareness tool.
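
    The reported cut-off statistics follow directly from a 2 x 2 classification table; the sketch below shows the arithmetic on hypothetical counts chosen only so that the formulas reproduce the percentages quoted above (they are not the study's raw data).

        def screening_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity and predictive values for a dichotomised checklist score."""
            sensitivity = tp / (tp + fn)   # cases with dementia scoring above the cut-off
            specificity = tn / (tn + fp)   # non-cases scoring at or below the cut-off
            ppv = tp / (tp + fp)           # positive predictive value
            npv = tn / (tn + fn)           # negative predictive value
            return sensitivity, specificity, ppv, npv

        # Hypothetical counts for a 17/18 cut-off (illustrative only).
        sens, spec, ppv, npv = screening_metrics(tp=36, fp=16, fn=14, tn=36)
        print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")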

  7. Rapid Robot Design Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies will create a comprehensive software infrastructure for rapid validation of robot designs. The software will support push-button validation...

  8. Validation of EAF-2005 data

    International Nuclear Information System (INIS)

    Kopecky, J.

    2005-01-01

    Full text: Validation procedures applied to the EAF-2003 starter file, which led to the production of the EAF-2005 library, are described. The results in terms of reactions with assigned quality scores in EAF-2005 are given. Further, the extensive validation against recent integral data is discussed, together with the status of the final report 'Validation of EASY-2005 using integral measurements'. Finally, the novel 'cross section trend analysis' is presented with some examples of its use. This action will lead to the release of the improved library EAF-2005.1 at the end of 2005, which shall be used as the starter file for EAF-2007. (author)

  9. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  10. A CFD validation roadmap for hypersonic flows

    Science.gov (United States)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and are related to the important flow-path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given and gaps are identified where future experiments would provide the needed validation data.

  11. The Consequences of Consequential Validity.

    Science.gov (United States)

    Mehrens, William A.

    1997-01-01

    There is no agreement at present about the importance or meaning of the term "consequential validity." It is important that the authors of revisions to the "Standards for Educational and Psychological Testing" recognize the debate and relegate discussion of consequences to a context separate from the discussion of validity.…

  12. Validity in SSM: neglected areas

    NARCIS (Netherlands)

    Pala, O.; Vennix, J.A.M.; Mullekom, T.L. van

    2003-01-01

    Contrary to the prevailing notion in hard OR, in soft systems methodology (SSM) validity seems to play a minor role. The primary reason for this is that SSM models are of a different type: they are not would-be descriptions of real-world situations. Therefore, establishing their validity, that is…

  13. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  14. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  15. Validation for chromatographic and electrophoretic methods

    OpenAIRE

    Ribani, Marcelo; Bottoli, Carla Beatriz Grespan; Collins, Carol H.; Jardim, Isabel Cristina Sales Fontes; Melo, Lúcio Flávio Costa

    2004-01-01

    The validation of an analytical method is fundamental to implementing a quality control system in any analytical laboratory. As the separation techniques, GC, HPLC and CE, are often the principal tools used in such determinations, procedure validation is a necessity. The objective of this review is to describe the main aspects of validation in chromatographic and electrophoretic analysis, showing, in a general way, the similarities and differences between the guidelines established by the dif...

  16. Saving and gaining energy

    International Nuclear Information System (INIS)

    Lauritzen, T.

    2008-01-01

    In this interview with Dirk U. Hindrichs of the Schueco International KG company, differences between ecological and economic points of view in general are discussed, as are the world's energy consumption and the visions held by the Schueco company in this respect. The importance of building facades, windows and photovoltaics for his business is discussed, as are solar thermal systems for the production of heat and cold. Further, energy efficiency and examples of buildings realised internationally are discussed, and co-operation with important players in the climate protection area is noted, as is Hindrichs' opinion that pro-active action must be taken by entrepreneurs.

  17. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss ...; results on a time series prediction benchmark demonstrate the potential of the linear unlearning technique...

  18. Validating MEDIQUAL Constructs

    Science.gov (United States)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs across different media users in a help desk service. In previous research, only two end-user constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, based on SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, measures of the same construct obtained with different methods show relatively high correlations, and measures of constructs that are expected to differ show low correlations; and 2) the five MEDIQUAL constructs have statistically significant effects on media users' satisfaction with the help desk service, as shown by regression analysis.

  19. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  20. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    45 CFR Part 162 (Administrative Requirements, Code Sets), § 162.1011 Valid code sets: Each code set is valid within the dates specified by the organization responsible for maintaining that code set.

  1. Further Validation of the Coach Identity Prominence Scale

    Science.gov (United States)

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  2. Validation of non-formal and informal learning from a European perspective – linking validation arrangements with national qualifications frameworks

    Directory of Open Access Journals (Sweden)

    Borut Mikulec

    2015-12-01

    The paper analyses European policy on the validation of non-formal and informal learning, which is presented as a “salvation narrative” that can improve the functioning of the labour market, provide a way out of unemployment and strengthen the competitiveness of the economy. Taking as our starting point recent findings in adult education theory on the validation of non-formal and informal learning, we aim to prove the thesis that what European validation policy promotes is above all an economic purpose and that it establishes a “Credential/Credit-exchange” model of validation of non-formal and informal learning. We proceed to examine the effect of European VNIL policy in selected European countries where validation arrangements are linked to the qualifications framework. We find that the “Credential/Credit-exchange” validation model was first established in a few individual European countries and then transferred, as a “successful” model, to the level of common European VNIL policy.

  3. Validation of self-reported erythema

    DEFF Research Database (Denmark)

    Petersen, B; Thieden, E; Lerche, C M

    2013-01-01

    Most epidemiological data of sunburn related to skin cancer have come from self-reporting in diaries and questionnaires. We thought it important to validate the reliability of such data.

  4. Active Transportation Demand Management (ATDM) Trajectory Level Validation

    Data.gov (United States)

    Department of Transportation — The ATDM Trajectory Validation project developed a validation framework and a trajectory computational engine to compare and validate simulated and observed vehicle...

  5. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response, evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, and thus it can be used as a stopping criterion for sequential sampling of metamodels.
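
    A loose sketch of judging a sequentially sampled kriging (Gaussian process) metamodel by the predicted mean and variance of the response rather than by leave-k-out cross-validation. The test function, uncertainty tolerance, refinement rule and use of scikit-learn are assumptions for illustration and do not reproduce the authors' formulation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def true_response(x):
            """Stand-in for an expensive simulation (illustrative only)."""
            return np.sin(3.0 * x) + 0.5 * x

        rng = np.random.default_rng(3)
        X_train = rng.uniform(0.0, 3.0, size=(6, 1))
        y_train = true_response(X_train).ravel()
        X_grid = np.linspace(0.0, 3.0, 200).reshape(-1, 1)

        for iteration in range(10):
            gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
            gp.fit(X_train, y_train)
            mean, std = gp.predict(X_grid, return_std=True)

            # Mean/variance-based validation criterion: accept the metamodel when the
            # largest predictive standard deviation over the design space is small enough.
            if std.max() < 0.05:
                print(f"metamodel accepted after {iteration} refinements, {len(y_train)} samples")
                break

            # Sequential sampling: add the point with the largest predictive uncertainty
            # (a simple stand-in for maximum entropy sampling).
            x_new = X_grid[np.argmax(std)].reshape(1, -1)
            X_train = np.vstack([X_train, x_new])
            y_train = np.append(y_train, true_response(x_new).ravel())
        else:
            print(f"tolerance not reached within 10 refinements ({len(y_train)} samples)")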

  6. Validity Semantics in Educational and Psychological Assessment

    Science.gov (United States)

    Hathcoat, John D.

    2013-01-01

    The semantics, or meaning, of validity is a fluid concept in educational and psychological testing. Contemporary controversies surrounding this concept appear to stem from the proper location of validity. Under one view, validity is a property of score-based inferences and entailed uses of test scores. This view is challenged by the…

  7. Validation of the Child Sport Cohesion Questionnaire

    Science.gov (United States)

    Martin, Luc J.; Carron, Albert V.; Eys, Mark A.; Loughead, Todd

    2013-01-01

    The purpose of the present study was to test the validity evidence of the Child Sport Cohesion Questionnaire (CSCQ). To accomplish this task, convergent, discriminant, and known-group difference validity were examined, along with factorial validity via confirmatory factor analysis (CFA). Child athletes (N = 290, mean age = 10.73 plus or…

  8. Validation of the Social Inclusion Scale with Students

    Directory of Open Access Journals (Sweden)

    Ceri Wilson

    2015-07-01

    Interventions (such as participatory arts projects) aimed at increasing social inclusion are increasingly in operation, as social inclusion is proving to play a key role in recovery from mental ill health and the promotion of mental wellbeing. These interventions require evaluation with a systematically developed and validated measure of social inclusion; however, a “gold-standard” measure does not yet exist. The Social Inclusion Scale (SIS) has three subscales measuring social isolation, relations and acceptance. This scale has been partially validated with arts and mental health project users, demonstrating good internal consistency. However, test-retest reliability and construct validity require assessment, along with validation in the general population. The present study aimed to validate the SIS in a sample of university students. Test-retest reliability, internal consistency, and convergent validity (one aspect of construct validity) were assessed by comparing SIS scores with scores on other measures of social inclusion and related concepts. Participants completed the measures at two time-points seven-to-14 days apart. The SIS demonstrated high internal consistency and test-retest reliability, although convergent validity was less well-established and possible reasons for this are discussed. This systematic validation of the SIS represents a further step towards the establishment of a “gold-standard” measure of social inclusion.
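
    A small sketch of the two reliability checks mentioned, computed on made-up questionnaire responses: Cronbach's alpha for internal consistency and a Pearson correlation between total scores at the two administrations for test-retest reliability. The item count, scoring range and data are hypothetical.

        import numpy as np

        def cronbach_alpha(items):
            """Internal consistency of a scale; `items` is respondents x items."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        rng = np.random.default_rng(5)
        n_respondents, n_items = 120, 19

        # Hypothetical scale responses at time 1, and a noisy repeat 7-14 days later.
        latent = rng.normal(size=(n_respondents, 1))
        time1 = np.clip(np.rint(2.5 + latent + rng.normal(0, 0.7, (n_respondents, n_items))), 1, 4)
        time2 = np.clip(time1 + rng.normal(0, 0.5, time1.shape), 1, 4)

        alpha = cronbach_alpha(time1)
        test_retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
        print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {test_retest_r:.2f}")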

  9. All Validity Is Construct Validity. Or Is It?

    Science.gov (United States)

    Kane, Michael

    2012-01-01

    Paul E. Newton's article on the consensus definition of validity tackles a number of big issues and makes a number of strong claims. I agreed with much of what he said, and I disagreed with a number of his claims, but I found his article to be consistently interesting and thought provoking (whether I agreed or not). I will focus on three general…

  10. Validity and Reliability in Social Science Research

    Science.gov (United States)

    Drost, Ellen A.

    2011-01-01

    In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…

  11. Development and Initial Validation of the Need Satisfaction and Need Support at Work Scales: A Validity-Focused Approach

    Directory of Open Access Journals (Sweden)

    Susanne Tafvelin

    2018-01-01

    Although the relevance of employee need satisfaction and manager need support has been examined, the integration of self-determination theory (SDT) into work and organizational psychology has been hampered by the lack of validated measures. The purpose of the current study was to develop and validate measures of employees’ perception of need satisfaction (NSa-WS) and need support (NSu-WS) at work that were grounded in SDT. We used three Swedish samples (total N = 1,430) to develop and validate our scales. We used a confirmatory approach including expert panels to assess item content relevance, confirmatory factor analysis for factorial validity, and associations with theoretically warranted outcomes to assess criterion-related validity. Scale reliability was also assessed. We found evidence of content, factorial, and criterion-related validity of our two scales of need satisfaction and need support at work. Further, the scales demonstrated high internal consistency. Our newly developed scales may be used in research and practice to further our understanding regarding how satisfaction and support of employee basic needs influence employee motivation, performance, and well-being. Our study makes a contribution to the current literature by providing (1) scales that are specifically designed for the work context, (2) an example of how expert panels can be used to assess content validity, and (3) testing of theoretically derived hypotheses that, although SDT is built on them, have not been examined before.

  12. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)

  13. Content validation applied to job simulation and written examinations

    International Nuclear Information System (INIS)

    Saari, L.M.; McCutchen, M.A.; White, A.S.; Huenefeld, J.C.

    1984-08-01

    The application of content validation strategies in work settings has become increasingly popular over the last few years, perhaps spurred by an acknowledgment in the courts of content validation as a method for validating employee selection procedures (e.g., Bridgeport Guardians v. Bridgeport Police Dept., 1977). Since criterion-related validation is often difficult to conduct, content validation methods should be investigated as an alternative for determining job-related selection procedures. However, there is not yet consensus among scientists and professionals concerning how content validation should be conducted. This may be because there is a lack of clear-cut operations for conducting content validation for different types of selection procedures. The purpose of this paper is to discuss two content validation approaches being used for the development of a licensing examination that involves a job simulation exam and a written exam. These represent variations in methods for applying content validation. 12 references

  14. Validation of psychoanalytic theories: towards a conceptualization of references.

    Science.gov (United States)

    Zachrisson, Anders; Zachrisson, Henrik Daae

    2005-10-01

    The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.

  15. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  16. Reconceptualising the external validity of discrete choice experiments.

    Science.gov (United States)

    Lancsar, Emily; Swait, Joffre

    2014-10-01

    External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.

  17. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

    The article presents a review of the English-language publications concerning the history and the current state of differential psychological assessment of the validity of testimonies produced by child and adolescent victims of crimes. The topicality of the problem in Russia is high due to the tendency of Russian specialists to use methodical means and instruments developed abroad in this sphere for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA) by means of Criteria-Based Content Analysis (CBCA) and the Validity Checklist is described. The results of laboratory and field studies of the validity of the CBCA criteria on the basis of child and adult witnesses are discussed. The data display a good differentiating capacity of the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. New promising developments in the field of methods for differentiating witness statements based on real experience from fictional ones are noted. The conclusion is drawn that empirical studies and special work on adaptation and development of new approaches should precede their implementation into Russian criminal investigation and forensic assessment practice.

  18. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology will guide us in detecting the failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be made with DSA, differential sensitivity analysis, and with MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis, carried out in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT (Spain). (Author) 17 refs

  19. Validity of a Measure of Assertiveness

    Science.gov (United States)

    Galassi, John P.; Galassi, Merna D.

    1974-01-01

    This study was concerned with further validation of a measure of assertiveness. Concurrent validity was established for the College Self-Expression Scale using the method of contrasted groups and through correlations of self- and judges' ratings of assertiveness. (Author)

  20. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for items related to packing deep wounds and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
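
    The Content Validity Index figures quoted above follow the usual convention that an item's CVI is the proportion of experts rating it 3 or 4 on the 4-point scale, and the scale-level CVI averages the item values. A minimal sketch of that computation, with made-up ratings rather than the study's data:

```python
# Sketch of the conventional Content Validity Index: the proportion of expert
# raters who score an item 3 or 4 on a 4-point relevance/validity scale.
# Ratings below are made-up illustration data, not the study's data.
def item_cvi(ratings, threshold=3):
    relevant = sum(1 for r in ratings if r >= threshold)
    return relevant / len(ratings)

def scale_cvi(item_ratings):
    # Average of item-level CVIs (the S-CVI/Ave convention).
    cvis = [item_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

if __name__ == "__main__":
    experts = [4, 4, 3, 2, 4, 3, 4, 4]        # one item's ratings from 8 experts
    print(round(item_cvi(experts), 2))        # 0.88 for this toy example
```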

  1. Test of Gross Motor Development : Expert Validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-12-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children’s level of motor development. The objective of this study was to translate and verify the clarity and pertinence of the TGMD-2 items by experts and the confirmatory factorial validity and the internal consistency by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children, from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92 and Tucker and Lewis’s Index of Fit = 0.83) and test-retest internal consistency (locomotion r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  2. Test of Gross Motor Development: expert validity, confirmatory validity and internal consistence

    Directory of Open Access Journals (Sweden)

    Nadia Cristina Valentini

    2008-01-01

    The Test of Gross Motor Development (TGMD-2) is an instrument used to evaluate children’s level of motor development. The objective of this study was to translate and verify the clarity and pertinence of the TGMD-2 items by experts and the confirmatory factorial validity and the internal consistency by means of test-retest of the Portuguese TGMD-2. A cross-cultural translation was used to construct the Portuguese version. The participants of this study were 7 professionals and 587 children, from 27 schools (kindergarten and elementary), from 3 to 10 years old (51.1% boys and 48.9% girls). Each child was videotaped performing the test twice. The videotaped tests were then scored. The results indicated that the Portuguese version of the TGMD-2 contains clear and pertinent motor items; demonstrated satisfactory indices of confirmatory factorial validity (χ2/gl = 3.38; Goodness-of-fit Index = 0.95; Adjusted Goodness-of-fit Index = 0.92 and Tucker and Lewis’s Index of Fit = 0.83) and test-retest internal consistency (locomotion r = 0.82; control of object: r = 0.88). The Portuguese TGMD-2 demonstrated validity and reliability for the sample investigated.

  3. Worldwide Protein Data Bank validation information: usage and trends.

    Science.gov (United States)

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrends DB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  4. On the validation of risk analysis-A commentary

    International Nuclear Information System (INIS)

    Rosqvist, Tony

    2010-01-01

    Aven and Heide (2009) [1] provided interesting views on the reliability and validation of risk analysis. The four validation criteria presented are contrasted with modelling features related to the relative frequency-based and Bayesian approaches to risk analysis. In this commentary I would like to bring forth some issues on validation that partly confirm and partly suggest changes in the interpretation of the introduced validation criteria, especially in the context of low-probability, high-consequence systems. The mental model of an expert in assessing probabilities is argued to be a key notion in understanding the validation of a risk analysis.

  5. Design and validation of a comprehensive fecal incontinence questionnaire.

    Science.gov (United States)

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and a dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability were then assessed. Construct validity comprised factor analysis and internal consistency of the quality of life scale. The validity of known groups was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently, supporting known-groups validity. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.
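
    The internal-consistency figure reported above is Cronbach's alpha, computed from the item variances and the variance of the total score. A short sketch of the standard formula, on illustrative data only:

```python
# Standard Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
# the total score). The response matrix below is made-up illustration data.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)   # rows: respondents, cols: items
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = [[3, 4, 3, 5], [2, 2, 3, 3], [4, 5, 4, 5], [1, 2, 2, 2]]
print(round(cronbach_alpha(responses), 2))
```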

  6. Independent validation of the MMPI-2-RF Somatic/Cognitive and Validity scales in TBI Litigants tested for effort.

    Science.gov (United States)

    Youngjohn, James R; Wershba, Rebecca; Stevenson, Matthew; Sturgeon, John; Thomas, Michael L

    2011-04-01

    The MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) is replacing the MMPI-2 as the most widely used personality test in neuropsychological assessment, but additional validation studies are needed. Our study examines MMPI-2-RF Validity scales and the newly created Somatic/Cognitive scales in a recently reported sample of 82 traumatic brain injury (TBI) litigants who either passed or failed effort tests (Thomas & Youngjohn, 2009). The restructured Validity scales FBS-r (restructured symptom validity), F-r (restructured infrequent responses), and the newly created Fs (infrequent somatic responses) were not significant predictors of TBI severity. FBS-r was significantly related to passing or failing effort tests, and Fs and F-r showed non-significant trends in the same direction. Elevations on the Somatic/Cognitive scales profile (MLS-malaise, GIC-gastrointestinal complaints, HPC-head pain complaints, NUC-neurological complaints, and COG-cognitive complaints) were significant predictors of effort test failure. Additionally, HPC had the anticipated paradoxical inverse relationship with head injury severity. The Somatic/Cognitive scales as a group were better predictors of effort test failure than the RF Validity scales, which was an unexpected finding. MLS arose as the single best predictor of effort test failure of all RF Validity and Somatic/Cognitive scales. Item overlap analysis revealed that all MLS items are included in the original MMPI-2 Hy scale, making MLS essentially a subscale of Hy. This study validates the MMPI-2-RF as an effective tool for use in neuropsychological assessment of TBI litigants.

  7. Construct validity of adolescents' self-reported big five personality traits: importance of conceptual breadth and initial validation of a short measure.

    Science.gov (United States)

    Morizot, Julien

    2014-10-01

    While there are a number of short personality trait measures that have been validated for use with adults, few are specifically validated for use with adolescents. To trust such measures, it must be demonstrated that they have adequate construct validity. According to the view of construct validity as a unifying form of validity requiring the integration of different complementary sources of information, this article reports the evaluation of content, factor, convergent, and criterion validities as well as reliability of adolescents' self-reported personality traits. Moreover, this study sought to address an inherent potential limitation of short personality trait measures, namely their limited conceptual breadth. In this study, starting with items from a known measure, after the language-level was adjusted for use with adolescents, items tapping fundamental primary traits were added to determine the impact of added conceptual breadth on the psychometric properties of the scales. The resulting new measure was named the Big Five Personality Trait Short Questionnaire (BFPTSQ). A group of expert judges considered the items to have adequate content validity. Using data from a community sample of early adolescents, the results confirmed the factor validity of the Big Five structure in adolescence as well as its measurement invariance across genders. More important, the added items did improve the convergent and criterion validities of the scales, but did not negatively affect their reliability. This study supports the construct validity of adolescents' self-reported personality traits and points to the importance of conceptual breadth in short personality measures. © The Author(s) 2014.

  8. Assessment of validity with polytrauma Veteran populations.

    Science.gov (United States)

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  9. Regulatory perspectives on human factors validation

    International Nuclear Information System (INIS)

    Harrison, F.; Staples, L.

    2001-01-01

    Validation is an important avenue for controlling the genesis of human error, and thus managing loss, in a human-machine system. Since there are many ways in which error may intrude upon system operation, it is necessary to consider the performance-shaping factors that could introduce error and compromise system effectiveness. Validation works to this end by examining, through objective testing and measurement, the newly developed system, procedure or staffing level, in order to identify and eliminate those factors which may negatively influence human performance. It is essential that validation be done in a high-fidelity setting, in an objective and systematic manner, using appropriate measures, if meaningful results are to be obtained. In addition, inclusion of validation work in any design process can be seen as contributing to a good safety culture, since such activity allows licensees to eliminate elements which may negatively impact on human behaviour. (author)

  10. Validation of the Classroom Behavior Inventory

    Science.gov (United States)

    Blunden, Dale; And Others

    1974-01-01

    Factor-analytic methods were used to assess construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  11. Principles of validation of diagnostic assays for infectious diseases

    International Nuclear Information System (INIS)

    Jacobson, R.H.

    1998-01-01

    Assay validation requires a series of inter-related processes. Assay validation is an experimental process: reagents and protocols are optimized by experimentation to detect the analyte with accuracy and precision. Assay validation is a relative process: its diagnostic sensitivity and diagnostic specificity are calculated relative to test results obtained from reference animal populations of known infection/exposure status. Assay validation is a conditional process: classification of animals in the target population as infected or uninfected is conditional upon how well the reference animal population used to validate the assay represents the target population; accurate predictions of the infection status of animals from test results (PV+ and PV-) are conditional upon the estimated prevalence of disease/infection in the target population. Assay validation is an incremental process: confidence in the validity of an assay increases over time when use confirms that it is robust as demonstrated by accurate and precise results; the assay may also achieve increasing levels of validity as it is upgraded and extended by adding reference populations of known infection status. Assay validation is a continuous process: the assay remains valid only insofar as it continues to provide accurate and precise results as proven through statistical verification. Therefore, the work required for validation of diagnostic assays for infectious diseases does not end with a time-limited series of experiments based on a few reference samples; rather, assuring valid test results from an assay requires constant vigilance and maintenance of the assay, along with reassessment of its performance characteristics for each unique population of animals to which it is applied. (author)

  12. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for local fragments and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the currently valid fragments, excluding occluded or highly deformed ones. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid-fragment template that combines the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which makes it scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
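
    As a rough illustration of the fragment-based colour description only (the paper's Harris-SIFT fragment selection and occlusion handling are not reproduced), the sketch below splits an object window into a grid of fragments, keeps one intensity histogram per fragment, and scores a candidate window by the mean Bhattacharyya similarity over fragments:

```python
# Toy sketch of the fragment-based description only: split an object window into
# a grid of fragments, keep one intensity histogram per fragment, and score a
# candidate window by the average Bhattacharyya coefficient over fragments.
import numpy as np

def fragment_histograms(window, grid=(3, 3), bins=16):
    h, w = window.shape
    hs, ws = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            frag = window[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            hist, _ = np.histogram(frag, bins=bins, range=(0, 256))
            hists.append(hist / (hist.sum() + 1e-12))    # normalise to sum 1
    return hists

def similarity(hists_a, hists_b):
    # Mean Bhattacharyya coefficient over fragments (1.0 means identical).
    return float(np.mean([np.sum(np.sqrt(a * b)) for a, b in zip(hists_a, hists_b)]))

template = np.random.randint(0, 256, (60, 60))
candidate = np.random.randint(0, 256, (60, 60))
print(similarity(fragment_histograms(template), fragment_histograms(candidate)))
```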

  13. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different

  14. Validity in assessment of prior learning

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne; Aarkrog, Vibe

    2015-01-01

    The article discusses the need for specific criteria for assessment. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult education, vocational training, lifelong learning, validity

  15. Empirical Validation of Listening Proficiency Guidelines

    Science.gov (United States)

    Cox, Troy L.; Clifford, Ray

    2014-01-01

    Because listening has received little attention and the validation of ability scales describing multidimensional skills is always challenging, this study applied a multistage, criterion-referenced approach that used a framework of aligned audio passages and listening tasks to explore the validity of the ACTFL and related listening proficiency…

  16. Validation and Design Science Research in Information Systems

    NARCIS (Netherlands)

    Sol, H G; Gonzalez, Rafael A.; Mora, Manuel

    2012-01-01

    Validation within design science research in Information Systems (DSRIS) is much debated. The relationship of validation to artifact evaluation is still not clear. This chapter aims at elucidating several components of DSRIS in relation to validation. The role of theory and theorizing are an

  17. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  18. Validation in the Absence of Observed Events.

    Science.gov (United States)

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  19. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation in regard to radionuclide transport. 5 refs

  20. Is intercessory prayer valid nursing intervention?

    Science.gov (United States)

    Stang, Cecily Weller

    2011-01-01

    Is the use of intercessory prayer (IP) in modern nursing a valid practice? As discussed in current healthcare literature, IP is controversial, with authors offering support for and against the efficacy of the practice. This article reviews IP literature and research, concluding IP is a valid intervention for Christian nurses.

  1. Certification Testing as an Illustration of Argument-Based Validation

    Science.gov (United States)

    Kane, Michael

    2004-01-01

    The theories of validity developed over the past 60 years are quite sophisticated, but the methodology of validity is not generally very effective. The validity evidence for major testing programs is typically much weaker than the evidence for more technical characteristics such as reliability. In addition, most validation efforts have a strong…

  2. Toward a Unified Validation Framework in Mixed Methods Research

    Science.gov (United States)

    Dellinger, Amy B.; Leech, Nancy L.

    2007-01-01

    The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…

  3. A validated RP-HPLC method for the determination of Irinotecan hydrochloride residues for cleaning validation in production area

    Directory of Open Access Journals (Sweden)

    Sunil Reddy

    2013-03-01

    Introduction: Cleaning validation is an integral part of current good manufacturing practices in the pharmaceutical industry. The main purpose of cleaning validation is to prove the effectiveness and consistency of cleaning of a given piece of pharmaceutical production equipment, to prevent cross contamination and adulteration of the drug product with other active ingredients. Objective: A rapid, sensitive and specific reverse-phase HPLC method was developed and validated for the quantitative determination of irinotecan hydrochloride in cleaning validation swab samples. Method: The method was validated using a Waters Symmetry Shield RP-18 (250 mm x 4.6 mm, 5 µm) column with an isocratic mobile phase containing a mixture of 0.02 M potassium dihydrogen orthophosphate (pH adjusted to 3.5 with orthophosphoric acid), methanol and acetonitrile (60:20:20 v/v/v). The flow rate of the mobile phase was 1.0 mL/min with a column temperature of 25°C and detection wavelength at 220 nm. The sample injection volume was 100 µl. Results: The calibration curve was linear over a concentration range from 0.024 to 0.143 µg/mL with a correlation coefficient of 0.997. The intra-day and inter-day precision expressed as relative standard deviation were below 3.2%. The recoveries obtained from stainless steel, PCGI, epoxy, glass and dacron cloth surfaces were more than 85% and there was no interference from the cotton swab. The detection limit (DL) and quantitation limit (QL) were 0.008 and 0.023 µg/mL, respectively. Conclusion: The developed method was validated with respect to specificity, linearity, limit of detection and quantification, accuracy, precision and solution stability. The overall procedure can be used as part of a cleaning validation program in the pharmaceutical manufacture of irinotecan hydrochloride.
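
    The linearity and detection/quantitation limits reported above can be illustrated with a simple least-squares calibration fit; the DL and QL below follow the common ICH convention DL = 3.3·σ/S and QL = 10·σ/S, where σ is the residual standard deviation and S the slope. The peak areas are invented for illustration; only the concentration range is taken from the abstract:

```python
# Illustrative calibration-curve sketch (not the study's data): least-squares fit
# of detector response vs concentration, correlation coefficient, and DL/QL via
# the ICH convention DL = 3.3*sigma/S, QL = 10*sigma/S, where sigma is the
# residual standard deviation and S the slope.
import numpy as np

conc = np.array([0.024, 0.048, 0.072, 0.096, 0.120, 0.143])   # ug/mL (range from abstract)
area = np.array([1010, 2060, 2990, 4080, 5020, 6010])          # made-up peak areas

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))   # residual std dev

print(f"r = {r:.4f}, DL = {3.3 * sigma / slope:.4f} ug/mL, QL = {10 * sigma / slope:.4f} ug/mL")
```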

  4. On Line Validation Exercise (OLIVE): A Web Based Service for the Validation of Medium Resolution Land Products. Application to FAPAR Products

    Directory of Open Access Journals (Sweden)

    Marie Weiss

    2014-05-01

    The OLIVE (On Line Interactive Validation Exercise) platform is dedicated to the validation of global biophysical products such as LAI (Leaf Area Index) and FAPAR (Fraction of Absorbed Photosynthetically Active Radiation). It was developed under the framework of the CEOS (Committee on Earth Observation Satellites) Land Product Validation (LPV) sub-group. OLIVE has three main objectives: (i) to provide consistent and centralized information on the definition of the biophysical variables, as well as a description of the main available products and their performances; (ii) to provide transparency and traceability by an online validation procedure compliant with the CEOS LPV and QA4EO (Quality Assurance for Earth Observation) recommendations; (iii) and finally, to provide a tool to benchmark new products, update product validation results and host new ground measurement sites for accuracy assessment. The functionalities and algorithms of OLIVE are described to provide full transparency of its procedures to the community. The validation process and typical results are illustrated for three FAPAR products: GEOV1 (VEGETATION sensor), MGVIo (MERIS sensor) and MODIS collection 5 FPAR. OLIVE is available on the European Space Agency CAL/VAL portal, including full documentation, validation exercise results, and product extracts.

  5. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
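
    Recognition validity in this literature is usually defined as the proportion of pairs, among those in which exactly one object is recognized, where the recognized object also has the higher criterion value. A minimal sketch of that definition, on a toy object set:

```python
# Recognition validity in the usual sense: among all pairs in which exactly one
# object is recognized, the fraction where the recognized object also has the
# higher criterion value. The objects below are made-up illustration data.
from itertools import combinations

def recognition_validity(criterion, recognized):
    hits, eligible = 0, 0
    for a, b in combinations(criterion.keys(), 2):
        if recognized[a] == recognized[b]:
            continue                  # 0 or 2 recognized: recognition cannot discriminate
        eligible += 1
        rec, unrec = (a, b) if recognized[a] else (b, a)
        if criterion[rec] > criterion[unrec]:
            hits += 1
    return hits / eligible if eligible else float("nan")

criterion = {"A": 9.0, "B": 5.5, "C": 3.2, "D": 1.1}       # e.g. city populations
recognized = {"A": True, "B": True, "C": False, "D": False}
print(recognition_validity(criterion, recognized))          # 1.0 for this toy set
```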

  6. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
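
    A minimal sketch of the two procedures named in the abstract, double (nested) cross-validation and a permutation test, using an L1-penalized logistic regression as the LASSO-type model. The data are synthetic, not the xerostomia dataset, and the hyperparameter grid is arbitrary:

```python
# Minimal sketch of double (nested) cross-validation plus a permutation test for
# an L1-penalized (LASSO-type) logistic NTCP-style model, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, permutation_test_score

X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear")
inner = GridSearchCV(lasso_logit, {"C": [0.01, 0.1, 1.0, 10.0]}, scoring="roc_auc", cv=5)

# Outer loop of the double cross-validation: unbiased AUC estimate of the tuned model.
outer_auc = cross_val_score(inner, X, y, scoring="roc_auc", cv=5)
print("nested-CV AUC: %.2f +/- %.2f" % (outer_auc.mean(), outer_auc.std()))

# Permutation test: is the observed performance better than chance?
score, perm_scores, p_value = permutation_test_score(
    inner, X, y, scoring="roc_auc", cv=5, n_permutations=100, random_state=0)
print("AUC = %.2f, permutation p = %.3f" % (score, p_value))
```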

  7. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  8. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This document will be a living document that will track progress on CASL to do verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and for the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  9. Notice: PSPB articles by authors with retracted articles at PSPB or other journals: Stapel, Smeesters, and Sanna.

    Science.gov (United States)

    Funder, David

    2014-01-01

    Numerous articles by the social psychologists Diederik Stapel, Dirk Smeesters, and Lawrence Sanna have been retracted in several different journals. The present notice reports the results of an investigation into papers authored or coauthored by these individuals, and published in Personality and Social Psychology Bulletin, that have not been retracted. The status of these papers ranges from data confirmed as legitimate by coauthors to, in many cases, being unknown as to their legitimacy. Given the lack of information in the latter cases, there is insufficient basis to recommend retraction at this time. Researchers using the results of these papers in their own work are advised to take the information reported in this notice into account.

  10. Bureaucratization and rationalization in Max Weber in the light of current interpretations of his work

    Directory of Open Access Journals (Sweden)

    Alexis Emanuel Gros

    2015-03-01

    This paper aims to present, succinctly and systematically, Max Weber's theory of bureaucracy and to problematize the role it plays in his reflections on the process of Western rationalization (okzidentale Rationalisierung). To that end, central texts by the author such as Wirtschaft und Gesellschaft, Politik als Beruf and Die protestantische Ethik und der "Geist" des Kapitalismus, among others, are analysed. The exposition is guided by the most recent German-language interpretations of Weber's work, namely those of Dirk Kaesler, Hans-Peter Müller, Uwe Barrelmeyer and Volker Kruse, and of Hartmut Rosa, David Strecker and Andreas Kottmann.

  11. 77 FR 27135 - HACCP Systems Validation

    Science.gov (United States)

    2012-05-09

    ... validation, the journal article should identify E.coli O157:H7 and other pathogens as the hazard that the..., or otherwise processes ground beef may determine that E. coli O157:H7 is not a hazard reasonably... specifications that require that the establishment's suppliers apply validated interventions to address E. coli...

  12. Terminology, Emphasis, and Utility in Validation

    Science.gov (United States)

    Kane, Michael T.

    2008-01-01

    Lissitz and Samuelsen (2007) have proposed an operational definition of "validity" that shifts many of the questions traditionally considered under validity to a separate category associated with the utility of test use. Operational definitions support inferences about how well people perform some kind of task or how they respond to some kind of…

  13. A validated battery of vocal emotional expressions

    Directory of Open Access Journals (Sweden)

    Pierre Maurage

    2007-11-01

    For a long time, the exploration of emotions focused on facial expression, and vocal expression of emotion has only recently received interest. However, no validated battery of emotional vocal expressions has been published and made available to the researchers’ community. This paper aims at validating and proposing such material. 20 actors (10 men) recorded sounds (words and interjections) expressing six basic emotions (anger, disgust, fear, happiness, neutral and sadness). These stimuli were then submitted to a double validation phase: (1) preselection by experts; (2) quantitative and qualitative validation by 70 participants. 195 stimuli were selected for the final battery, each one depicting a precise emotion. The ratings provide a complete measure of intensity and specificity for each stimulus. This paper provides, to our knowledge, the first validated, freely available and highly standardized battery of emotional vocal expressions (words and intonations). This battery could constitute an interesting tool for the exploration of prosody processing among normal and pathological populations, in neuropsychology as well as psychiatry. Further works are nevertheless needed to complement the present material.

  14. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  15. Detecting Symptom Exaggeration in Combat Veterans Using the MMPI-2 Symptom Validity Scales: A Mixed Group Validation

    Science.gov (United States)

    Tolin, David F.; Steenkamp, Maria M.; Marx, Brian P.; Litz, Brett T.

    2010-01-01

    Although validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989) have proven useful in the detection of symptom exaggeration in criterion-group validation (CGV) studies, usually comparing instructed feigners with known patient groups, the…

  16. Student mathematical imagination instruments: construction, cultural adaptation and validity

    Science.gov (United States)

    Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.

    2018-03-01

    Imagination has an important role as the center of sensorimotor activity of the students. The purpose of this research is to construct an instrument of students’ mathematical imagination in understanding the concept of algebraic expressions. The researchers established validity using questionnaire and test techniques and analysed the data descriptively. The stages performed included: 1) constructing the embodiment of the imagination; 2) determining the learning style questionnaire; 3) constructing the instruments; 4) translating them into Indonesian and adapting the content of the learning style questionnaire to the students’ culture; 5) performing content validation. The results state that the constructed instrument is valid by content validation and empirical validation, so that it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics material experts. Empirical validation was done through a legibility test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire test (86 students) was analysed using a point-biserial correlation technique, resulting in 16 valid items, with a reliability test using KR-20 indicating medium reliability. The test instrument trial (32 students) found all items to be valid, and a reliability test using KR-21 gave a reliability of 0.62.
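
    The item analysis described above relies on two classical-test-theory quantities: the point-biserial item-total correlation and the Kuder-Richardson reliability coefficients KR-20 and KR-21 for dichotomous items. A small sketch of those formulas, on made-up responses:

```python
# Classical item-analysis sketch for dichotomous (0/1) items: point-biserial
# item-total correlation and KR-20 / KR-21 reliability. Responses are made up.
import numpy as np

def point_biserial(item, total_without_item):
    return float(np.corrcoef(item, total_without_item)[0, 1])

def kr20(X):
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                          # proportion correct per item
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / total_var)

def kr21(X):
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    total = X.sum(axis=1)
    m, var = total.mean(), total.var(ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

X = np.array([[1, 1, 0, 1, 1], [1, 0, 0, 1, 0], [1, 1, 1, 1, 1],
              [0, 0, 0, 1, 0], [1, 1, 1, 0, 1], [0, 1, 0, 0, 0]])
totals = X.sum(axis=1)
print([round(point_biserial(X[:, j], totals - X[:, j]), 2) for j in range(X.shape[1])])
print(round(kr20(X), 2), round(kr21(X), 2))
```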

  17. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Objective: To validate a simple scoring system to classify dengue viral infection severity in patients in different settings. Methods: The scoring system developed from 777 patients from three tertiary-care hospitals was applied to 400 patients in the validation data obtained from another three tertiary-care hospitals. The percentages of correct classification, underestimation, and overestimation were compared. The score's discriminative performance in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data differed from those in the development data in some aspects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performances between the validation and the development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine…
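
    The external-validation summaries used above (percent correctly classified, underestimation, overestimation, and comparison of areas under ROC curves) can be computed from a cross-tabulation of predicted versus observed severity levels. A brief sketch on invented data:

```python
# Sketch of the validation summaries reported: percent correct, underestimation
# and overestimation from predicted vs observed severity levels (0 = DF,
# 1 = DHF, 2 = DSS), plus an ROC AUC for a binary endpoint. Data are made up.
import numpy as np
from sklearn.metrics import roc_auc_score

observed  = np.array([0, 0, 1, 2, 1, 0, 2, 1, 0, 2])
predicted = np.array([0, 1, 1, 1, 1, 0, 2, 0, 0, 2])

correct = np.mean(predicted == observed)
under   = np.mean(predicted < observed)    # predicted milder than observed
over    = np.mean(predicted > observed)    # predicted more severe than observed
print(f"correct {correct:.1%}, underestimation {under:.1%}, overestimation {over:.1%}")

# Discrimination for a binary endpoint (e.g. DSS vs not) from a continuous score.
severity_score = np.array([2, 5, 6, 9, 7, 1, 8, 4, 3, 9])
is_dss = (observed == 2).astype(int)
print("AUC =", round(roc_auc_score(is_dss, severity_score), 2))
```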

  18. WSRC approach to validation of criticality safety computer codes

    International Nuclear Information System (INIS)

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K eff ) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236 U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed

  19. Reliability and Validity of Qualitative and Operational Research Paradigm

    Directory of Open Access Journals (Sweden)

    Muhammad Bashir

    2008-01-01

    Both the qualitative and quantitative paradigms try to find the same result: the truth. Qualitative studies are tools used in understanding and describing the world of human experience. Since we maintain our humanity throughout the research process, it is largely impossible to escape the subjective experience, even for the most experienced of researchers. Reliability and validity are issues that have been described in great detail by advocates of quantitative research. The validity and the norms of rigor that are applied to quantitative research are not entirely applicable to qualitative research. Validity in qualitative research means the extent to which the data are plausible, credible and trustworthy, and thus can be defended when challenged. Reliability and validity remain appropriate concepts for attaining rigor in qualitative research. Qualitative researchers have to salvage responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of the inquiry itself. This ensures the attainment of rigor using strategies inherent within each qualitative design, and moves the responsibility for incorporating and maintaining reliability and validity from external reviewers’ judgments to the investigators themselves. There are different opinions on validity, with some suggesting that the concept of validity is incompatible with qualitative research and should be abandoned, while others argue that efforts should be made to ensure validity so as to lend credibility to the results. This paper is an attempt to clarify the meaning and use of reliability and validity in the qualitative research paradigm.

  20. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors with a new version being released every week. CMS has set up a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensure stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code basis and significant number of developers and can function as a model for future projects.

  1. Development of a novel active muzzle brake for an artillery weapon system / Dirk Johannes Downing

    OpenAIRE

    Downing, Dirk Johannes

    2002-01-01

    A conventional muzzle brake is a baffle device located at some distance in front of the muzzle exit of a gun. The purpose of a muzzle brake is to alleviate the force on the weapon platform by diverting a portion of the muzzle gas resulting in a forward impulse being exerted on the recoiling parts of the weapon. A very efficient muzzle brake unfortunately gives rise to an excessive overpressure in the crew environment due to the deflection of the emerging shock waves. The novel ...

  2. Redundant sensor validation by using fuzzy logic

    International Nuclear Information System (INIS)

    Holbert, K.E.; Heger, A.S.; Alang-Rashid, N.K.

    1994-01-01

    This research is motivated by the need to relax the strict boundary of numeric-based signal validation. To this end, the use of fuzzy logic for redundant sensor validation is introduced. Since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. With this transformation, linguistically expressed analysis principles can be coded into a classification rule-base for signal failure detection and identification

  3. Validation of the prosthetic esthetic index

    DEFF Research Database (Denmark)

    Özhayat, Esben B; Dannemand, Katrine

    2014-01-01

    OBJECTIVES: In order to diagnose impaired esthetics and evaluate treatments for these, it is crucial to evaluate all aspects of oral and prosthetic esthetics. No professionally administered index currently exists that sufficiently encompasses comprehensive prosthetic esthetics. This study aimed...... to validate a new comprehensive index, the Prosthetic Esthetic Index (PEI), for professional evaluation of esthetics in prosthodontic patients. MATERIAL AND METHODS: The content, criterion, and construct validity; the test-retest, inter-rater, and internal consistency reliability; and the sensitivity...... furthermore distinguish between participants and controls, indicating sufficient sensitivity. CONCLUSION: The PEI is considered a valid and reliable instrument involving sufficient aspects for assessment of the professionally evaluated esthetics in prosthodontic patients. CLINICAL RELEVANCE...

  4. The dialogic validation

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2005-01-01

    This paper is inspired by dialogism and the title is a paraphrase on Bakhtin's (1981) "The Dialogic Imagination". The paper investigates how dialogism can inform the process of validating inquiry-based qualitative research. The paper stems from a case study on the role of recognition...

  5. The validation of Huffaz Intelligence Test (HIT)

    Science.gov (United States)

    Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain

    2017-08-01

    In general, a hafiz who has memorized the Quran has many strengths, especially with respect to academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a newly developed psychometric instrument, the Huffaz Intelligence Test (HIT). This paper presents the validation and reliability of the HIT among tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis methods used include Partial Least Squares (PLS) assessment of reliability, convergent validity and discriminant validity. The study validated nine intelligences. The findings also indicated that the composite reliabilities for the nine types of intelligences are greater than 0.8. Thus, the HIT is a valid and reliable instrument to measure the multiple intelligences of huffaz.
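
    The composite reliability the abstract reports (greater than 0.8 for each intelligence) is computed from standardized indicator loadings. The snippet below is a minimal sketch of that calculation; the loadings are invented placeholders, not values from the HIT study.

        def composite_reliability(loadings):
            """Construct reliability from standardized factor loadings."""
            s = sum(loadings)
            error = sum(1.0 - l ** 2 for l in loadings)   # indicator error variances
            return s ** 2 / (s ** 2 + error)

        musical_items = [0.78, 0.81, 0.74, 0.69]          # hypothetical loadings for one intelligence
        print(round(composite_reliability(musical_items), 3))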

  6. DESIGN AND VALIDATION OF A CARDIORESPIRATORY ...

    African Journals Online (AJOL)

    UJA

    This study aimed to validate the 10x20m test for children aged 3 to 6 years in order ... obtained adequate parameters of reliability and validity in healthy children aged 3 ... and is a determinant of cardiovascular risk in preschool children (Bürgi et al., ... (Seca 222, Hamburg, Germany), and weight (kg) that was recorded with a ...

  7. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    Full Text Available We present a novel two-stage probabilistic learning task that examines the participants’ ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency of the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.

  8. Global Land Product Validation Protocols: An Initiative of the CEOS Working Group on Calibration and Validation to Evaluate Satellite-derived Essential Climate Variables

    Science.gov (United States)

    Guillevic, P. C.; Nickeson, J. E.; Roman, M. O.; camacho De Coca, F.; Wang, Z.; Schaepman-Strub, G.

    2016-12-01

    The Global Climate Observing System (GCOS) has specified the need to systematically produce and validate Essential Climate Variables (ECVs). The Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) and in particular its subgroup on Land Product Validation (LPV) is playing a key coordination role leveraging the international expertise required to address actions related to the validation of global land ECVs. The primary objective of the LPV subgroup is to set standards for validation methods and reporting in order to provide traceable and reliable uncertainty estimates for scientists and stakeholders. The Subgroup is comprised of 9 focus areas that encompass 10 land surface variables. The activities of each focus area are coordinated by two international co-leads and currently include leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR), vegetation phenology, surface albedo, fire disturbance, snow cover, land cover and land use change, soil moisture, land surface temperature (LST) and emissivity. Recent additions to the focus areas include vegetation indices and biomass. The development of best practice validation protocols is a core activity of CEOS LPV with the objective to standardize the evaluation of land surface products. LPV has identified four validation levels corresponding to increasing spatial and temporal representativeness of reference samples used to perform validation. Best practice validation protocols (1) provide the definition of variables, ancillary information and uncertainty metrics, (2) describe available data sources and methods to establish reference validation datasets with SI traceability, and (3) describe evaluation methods and reporting. An overview on validation best practice components will be presented based on the LAI and LST protocol efforts to date.

  9. Validering av vattenkraftmodeller i ARISTO

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master's thesis was carried out to validate hydropower models of a turbine governor, a Kaplan turbine and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model to data from a real hydropower plant. The comparison was made to see how the models could generate simulation result ...

  10. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
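
    The first capability listed, removal of spurious vectors, is typically done with a local median test. The sketch below is an illustrative stand-in for that step (it is not the NASA package itself; the 3x3 neighbourhood, tolerance and robust-scale floor are assumptions), flagging vectors that disagree with their neighbourhood median and replacing them by it.

        import numpy as np

        def median_filter_vectors(u, tol=2.0):
            """Replace vectors that deviate from their 3x3 neighbourhood median."""
            cleaned = u.copy()
            rows, cols = u.shape
            for i in range(rows):
                for j in range(cols):
                    i0, i1 = max(i - 1, 0), min(i + 2, rows)
                    j0, j1 = max(j - 1, 0), min(j + 2, cols)
                    neigh = np.delete(u[i0:i1, j0:j1].ravel(),
                                      (i - i0) * (j1 - j0) + (j - j0))  # drop centre point
                    med = np.median(neigh)
                    spread = np.median(np.abs(neigh - med)) + 0.1       # robust scale plus a floor
                    if abs(u[i, j] - med) > tol * spread:               # spurious-vector test
                        cleaned[i, j] = med                             # simple interpolation
            return cleaned

        field = np.full((5, 5), 1.0)
        field[2, 2] = 8.0                                   # one spurious vector
        print(median_filter_vectors(field)[2, 2])           # restored to the local median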

  11. Isotopic and criticality validation for actinide-only burnup credit

    International Nuclear Information System (INIS)

    Fuentes, E.; Lancaster, D.; Rahimi, M.

    1997-01-01

    The techniques used for actinide-only burnup credit isotopic validation and criticality validation are presented and discussed. Trending analyses have been incorporated into both methodologies, requiring biases and uncertainties to be treated as a function of the trending parameters. The isotopic validation is demonstrated using the SAS2H module of SCALE 4.2, with the 27BURNUPLIB cross section library; correction factors are presented for each of the actinides in the burnup credit methodology. For the criticality validation, the demonstration is performed with the CSAS module of SCALE 4.2 and the 27BURNUPLIB, resulting in a validated upper safety limit
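
    A trending analysis of the kind mentioned treats the bias between calculated and measured quantities as a function of a trending parameter such as burnup. The sketch below shows the general idea with invented calculated-to-measured ratios; the linear trend, the two-sigma allowance and the sign convention of the correction factor are illustrative assumptions, not the report's methodology.

        import numpy as np

        burnup = np.array([10., 20., 30., 40., 50.])                 # trending parameter (GWd/MTU)
        calc_over_meas = np.array([0.98, 0.97, 0.95, 0.94, 0.92])    # placeholder C/M ratios

        slope, intercept = np.polyfit(burnup, calc_over_meas, 1)     # linear trend of the bias
        residuals = calc_over_meas - (slope * burnup + intercept)
        residual_sd = np.std(residuals, ddof=2)                      # scatter about the trend

        def correction_factor(b, k=2.0):
            """Burnup-dependent bias with a k-sigma uncertainty allowance."""
            return slope * b + intercept - k * residual_sd

        print(round(correction_factor(35.0), 4))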

  12. Congruent Validity of the Rathus Assertiveness Schedule.

    Science.gov (United States)

    Harris, Thomas L.; Brown, Nina W.

    1979-01-01

    The validity of the Rathus Assertiveness Schedule (RAS) was investigated by correlating it with the six Class I scales of the California Psychological Inventory on a sample of undergraduate students. Results supported the validity of the RAS. (JKS)

  13. Software for validating parameters retrieved from satellite

    Digital Repository Service at National Institute of Oceanography (India)

    Muraleedharan, P.M.; Sathe, P.V.; Pankajakshan, T.

    -channel Scanning Microwave Radiometer (MSMR) onboard the Indian satellite Oceansat-1 during 1999-2001 were validated using this software as a case study. The program has several added advantages over the conventional method of validation that involves strenuous...

  14. DESCQA: Synthetic Sky Catalog Validation Framework

    Science.gov (United States)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  15. Ensuring validity in qualitative international business research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Skaates, Maria Anne

    2002-01-01

    The purpose of this paper is to provide an account of how the validity issue related to qualitative research strategies within the IB field may be grasped from an at least partially subjectivist point of view. In section two, we will first assess via the aforementioned literature review the extent...... to which the validity issue has been treated in qualitative research contributions published in six leading English-language journals which publish IB research. Thereafter, in section three, we will discuss our findings and relate them to (a) various levels of a research project and (b) the existing...... literature on potential validity problems from a more subjectivist point of view. As a part of this step, we will demonstrate that the assumptions of objectivist and subjectivist ontologies and their corresponding epistemologies merit different canons for assessing research validity. In the subsequent...

  16. Excellent cross-cultural validity, intra-test reliability and construct validity of the dutch rivermead mobility index in patients after stroke undergoing rehabilitation

    NARCIS (Netherlands)

    Roorda, Leo D.; Green, John; De Kluis, Kiki R. A.; Molenaar, Ivo W.; Bagley, Pam; Smith, Jane; Geurts, Alexander C. H.

    2008-01-01

    Objective: To investigate the cross-cultural validity of international Dutch-English comparisons when using the Dutch Rivermead Mobility Index (RMI), and the intra-test reliability and construct validity of the Dutch RMI. Methods: Cross-cultural validity was studied in a combined data-set of Dutch

  17. Validation of the 4P's Plus screen for substance use in pregnancy validation of the 4P's Plus.

    Science.gov (United States)

    Chasnoff, I J; Wells, A M; McGourty, R F; Bailey, L K

    2007-12-01

    The purpose of this study is to validate the 4P's Plus screen for substance use in pregnancy. A total of 228 pregnant women enrolled in prenatal care underwent screening with the 4P's Plus and received a follow-up clinical assessment for substance use. Statistical analyses regarding reliability, sensitivity, specificity, and positive and negative predictive validity of the 4Ps Plus were conducted. The overall reliability for the five-item measure was 0.62. Seventy-four (32.5%) of the women had a positive screen. Sensitivity and specificity were very good, at 87 and 76%, respectively. Positive predictive validity was low (36%), but negative predictive validity was quite high (97%). Of the 31 women who had a positive clinical assessment, 45% were using less than 1 day per week. The 4P's Plus reliably and effectively screens pregnant women for risk of substance use, including those women typically missed by other perinatal screening methodologies.
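
    The four reported figures follow from a 2x2 table of screen result versus clinical assessment. The cell counts below are reconstructed so that they are consistent with the abstract (228 women, 74 positive screens, 31 positive assessments); the paper itself does not list them, so they are illustrative only.

        tp, fn, fp, tn = 27, 4, 47, 150    # reconstructed, illustrative cell counts

        sensitivity = tp / (tp + fn)       # screen-positive among women with positive assessments
        specificity = tn / (tn + fp)       # screen-negative among women with negative assessments
        ppv = tp / (tp + fp)               # positive assessments among screen positives
        npv = tn / (tn + fn)               # negative assessments among screen negatives

        print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")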

  18. Validation of the Information/Communications Technology Literacy Test

    Science.gov (United States)

    2016-10-01

    Technical Report 1360: Validation of the Information/Communications Technology Literacy Test. D. Matthew Trippe, Human Resources Research... (contract W91WAS-09-D-0013). The aim was to validate a measure of cyber aptitude, the Information/Communications Technology Literacy Test (ICTL), in predicting trainee performance in Information

  19. A valid licence

    NARCIS (Netherlands)

    Spoolder, H.A.M.; Ingenbleek, P.T.M.

    2010-01-01

    Dr Hans Spoolder and Dr Paul Ingenbleek, of Wageningen University and Research Centres, share their thoughts on improving farm animal welfare in Europe. At the presentation of the European Strategy 2020 on 3rd March, President Barroso emphasised the need for

  20. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  1. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach we will show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  2. Fuel Cell and Hydrogen Technology Validation

    Science.gov (United States)

    The NREL technology validation team works on validating hydrogen fuel cell electric vehicles; hydrogen fueling infrastructure; hydrogen system components; and fuel cell use in early market applications such as

  3. The Perceived Leadership Communication Questionnaire (PLCQ): Development and Validation.

    Science.gov (United States)

    Schneider, Frank M; Maier, Michaela; Lovrekovic, Sara; Retzbach, Andrea

    2015-01-01

    The Perceived Leadership Communication Questionnaire (PLCQ) is a short, reliable, and valid instrument for measuring leadership communication from both perspectives of the leader and the follower. Drawing on a communication-based approach to leadership and following a theoretical framework of interpersonal communication processes in organizations, this article describes the development and validation of a one-dimensional 6-item scale in four studies (total N = 604). Results from Study 1 and 2 provide evidence for the internal consistency and factorial validity of the PLCQ's self-rating version (PLCQ-SR), a version for measuring how leaders perceive their own communication with their followers. Results from Study 3 and 4 show internal consistency, construct validity, and criterion validity of the PLCQ's other-rating version (PLCQ-OR), a version for measuring how followers perceive the communication of their leaders. Cronbach's α had an average of .80 over the four studies. All confirmatory factor analyses yielded good to excellent model fit indices. Convergent validity was established by average positive correlations of .69 with subdimensions of transformational leadership and leader-member exchange scales. Furthermore, nonsignificant correlations with socially desirable responding indicated discriminant validity. Last, criterion validity was supported by a moderately positive correlation with job satisfaction (r = .31).
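
    The internal-consistency figure quoted (Cronbach's α averaging .80) can be reproduced from an item-by-respondent matrix with the standard formula, sketched below. The six-item response matrix is fabricated for illustration and is not data from the four studies.

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        responses = np.array([[4, 5, 4, 4, 5, 4],     # fabricated 6-item responses
                              [3, 3, 4, 3, 3, 3],
                              [5, 5, 5, 4, 5, 5],
                              [2, 3, 2, 3, 2, 2],
                              [4, 4, 5, 4, 4, 4]])
        print(round(cronbach_alpha(responses), 2))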

  4. A Validation Study of the Impression Replica Technique.

    Science.gov (United States)

    Segerström, Sofia; Wiking-Lima de Faria, Johanna; Braian, Michael; Ameri, Arman; Ahlgren, Camilla

    2018-04-17

    To validate the well-known and often-used impression replica technique for measuring fit between a preparation and a crown in vitro. The validation consisted of three steps. First, a measuring instrument was validated to elucidate its accuracy. Second, a specimen consisting of male and female counterparts was created and validated by the measuring instrument. Calculations were made for the exact values of three gaps between the male and female. Finally, impression replicas were produced of the specimen gaps and sectioned into four pieces. The replicas were then measured with the use of a light microscope. The values received from measuring the specimen were then compared with the values received from the impression replicas, and the technique was thereby validated. The impression replica technique overvalued all measured gaps. Depending on location of the three measuring sites, the difference between the specimen and the impression replicas varied from 47 to 130 μm. The impression replica technique overestimates gaps within the range of 2% to 11%. The validation of the replica technique enables the method to be used as a reference when testing other methods for evaluating fit in dentistry. © 2018 by the American College of Prosthodontists.

  5. Guided exploration of physically valid shapes for furniture design

    KAUST Repository

    Umetani, Nobuyuki

    2012-07-01

    Geometric modeling and the physical validity of shapes are traditionally considered independently. This makes creating aesthetically pleasing yet physically valid models challenging. We propose an interactive design framework for efficient and intuitive exploration of geometrically and physically valid shapes. During any geometric editing operation, the proposed system continuously visualizes the valid range of the parameter being edited. When one or more constraints are violated after an operation, the system generates multiple suggestions involving both discrete and continuous changes to restore validity. Each suggestion also comes with an editing mode that simultaneously adjusts multiple parameters in a coordinated way to maintain validity. Thus, while the user focuses on the aesthetic aspects of the design, our computational design framework helps to achieve physical realizability by providing active guidance to the user. We demonstrate our framework on plank-based furniture design with nail-joint and frictional constraints. We use our system to design a range of examples, conduct a user study, and also fabricate a physical prototype to test the validity and usefulness of the system. © 2012 ACM 0730-0301/2012/08- ART86.

  6. Solution Validation for a Double Façade Prototype

    Directory of Open Access Journals (Sweden)

    Pau Fonseca i Casas

    2017-12-01

    Full Text Available A Solution Validation involves comparing the data obtained from the system that are implemented following the model recommendations, as well as the model results. This paper presents a Solution Validation that has been performed with the aim of certifying that a set of computer-optimized designs, for a double façade, are consistent with reality. To validate the results obtained through simulation models, based on dynamic thermal calculation and using Computational Fluid Dynamic techniques, a comparison with the data obtained by monitoring a real implemented prototype has been carried out. The new validated model can be used to describe the system thermal behavior in different climatic zones without having to build a new prototype. The good performance of the proposed double façade solution is confirmed since the validation assures there is a considerable energy saving, preserving and even improving interior comfort. This work shows all the processes in the Solution Validation depicting some of the problems we faced and represents an example of this kind of validation that often is not considered in a simulation project.

  7. Development and validation of the Alcohol Myopia Scale.

    Science.gov (United States)

    Lac, Andrew; Berger, Dale E

    2013-09-01

    Alcohol myopia theory conceptualizes the ability of alcohol to narrow attention and how this demand on mental resources produces the impairments of self-inflation, relief, and excess. The current research was designed to develop and validate a scale based on this framework. People who were alcohol users rated items representing myopic experiences arising from drinking episodes in the past month. In Study 1 (N = 260), the preliminary 3-factor structure was supported by exploratory factor analysis. In Study 2 (N = 289), the 3-factor structure was substantiated with confirmatory factor analysis, and it was superior in fit to an empirically indefensible 1-factor structure. The final 14-item scale was evaluated with internal consistency reliability, discriminant validity, convergent validity, criterion validity, and incremental validity. The alcohol myopia scale (AMS) illuminates conceptual underpinnings of this theory and yields insights for understanding the tunnel vision that arises from intoxication.

  8. CIPS Validation Data Plan

    International Nuclear Information System (INIS)

    Dinh, Nam

    2012-01-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  9. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.

  10. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
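
    As a rough sketch of the mechanics described (not the paper's implementation), the snippet below extracts principal directions of parameter change with an SVD, projects a validation case onto them, and measures how far its coordinates fall outside the interval spanned by the identified systems; all numbers are invented.

        import numpy as np

        identified = np.array([[1.00, 0.52, 2.1],      # rows: identified parameter sets
                               [1.05, 0.49, 2.0],
                               [0.97, 0.55, 2.3],
                               [1.02, 0.50, 2.2]])
        validation_case = np.array([1.20, 0.40, 2.6])  # parameters of the validation system

        mean = identified.mean(axis=0)
        _, _, vt = np.linalg.svd(identified - mean)    # rows of vt: principal directions of change

        coords_id = (identified - mean) @ vt.T         # coordinates of the identified systems
        coords_val = (validation_case - mean) @ vt.T   # coordinates of the validation system

        lo, hi = coords_id.min(axis=0), coords_id.max(axis=0)          # interval model per direction
        outside = np.maximum(0.0, np.maximum(lo - coords_val, coords_val - hi))
        print("coordinate distance:", np.linalg.norm(outside))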

  11. Construct validity of the Individual Work Performance Questionnaire.

    OpenAIRE

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; Vet, H.C.W. de; Beek, A.J. van der

    2014-01-01

    Objective: To examine the construct validity of the Individual Work Performance Questionnaire (IWPQ). Methods: A total of 1424 Dutch workers from three occupational sectors (blue, pink, and white collar) participated in the study. First, IWPQ scores were correlated with related constructs (convergent validity). Second, differences between known groups were tested (discriminative validity). Results: First, IWPQ scores correlated weakly to moderately with absolute and relative presenteeism, and...

  12. Validation of the Netherlands pacemaker patient registry

    NARCIS (Netherlands)

    Dijk, WA; Kingma, T; Hooijschuur, CAM; Dassen, WRM; Hoorntje, JCA; van Gelder, LM

    1997-01-01

    This paper deals with the validation of the information stored in the Netherlands central pacemaker patient database. At this moment the registry database contains information on more than 70500 patients, 85000 pacemakers and 90000 leads. The validation procedures consisted of an internal

  13. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the Collision Module for the computer program DAMAGE.

  14. Development and Validation of Multi-Dimensional Personality ...

    African Journals Online (AJOL)

    This study was carried out to establish the scientific processes for the development and validation of Multi-dimensional Personality Inventory (MPI). The process of development and validation occurred in three phases with five components of Agreeableness, Conscientiousness, Emotional stability, Extroversion, and ...

  15. An information architecture for courseware validation

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    A lack of pedagogy in courseware can lead to learner rejection. It is therefore vital that pedagogy is a central concern of courseware construction. Courseware validation allows the course creator to specify pedagogical rules and principles which courseware must conform to. In this paper we investigate the information needed for courseware validation and propose an information architecture to be used as a basis for validation.

  16. Validation of gamma irradiator controls for quality and regulatory compliance

    International Nuclear Information System (INIS)

    Harding, R.B.; Pinteric, F.J.A.

    1995-01-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic - Validation of Irradiator Controls - is a significant regulatory compliance and operations issues within the irradiator suppliers' and users' community. (author)

  17. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    Full Text Available The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs) is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  18. Construct Validity of Neuropsychological Tests in Schizophrenia.

    Science.gov (United States)

    Allen, Daniel N.; Aldarondo, Felito; Goldstein, Gerald; Huegel, Stephen G.; Gilbertson, Mark; van Kammen, Daniel P.

    1998-01-01

    The construct validity of neuropsychological tests in patients with schizophrenia was studied with 39 patients who were evaluated with a battery of six tests assessing attention, memory, and abstract reasoning abilities. Results support the construct validity of the neuropsychological tests in patients with schizophrenia. (SLD)

  19. Validation of Structures in the Protein Data Bank.

    Science.gov (United States)

    Gore, Swanand; Sanz García, Eduardo; Hendrickx, Pieter M S; Gutmanas, Aleksandras; Westbrook, John D; Yang, Huanwang; Feng, Zukang; Baskaran, Kumaran; Berrisford, John M; Hudson, Brian P; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L; Mading, Steve; Mak, Lora; Mukhopadhyay, Abhik; Oldfield, Thomas J; Patwardhan, Ardan; Peisach, Ezra; Sahni, Gaurav; Sekharan, Monica R; Sen, Sanchayita; Shao, Chenghua; Smart, Oliver S; Ulrich, Eldon L; Yamashita, Reiko; Quesada, Martha; Young, Jasmine Y; Nakamura, Haruki; Markley, John L; Berman, Helen M; Burley, Stephen K; Velankar, Sameer; Kleywegt, Gerard J

    2017-12-05

    The Worldwide PDB recently launched a deposition, biocuration, and validation tool: OneDep. At various stages of OneDep data processing, validation reports for three-dimensional structures of biological macromolecules are produced. These reports are based on recommendations of expert task forces representing crystallography, nuclear magnetic resonance, and cryoelectron microscopy communities. The reports provide useful metrics with which depositors can evaluate the quality of the experimental data, the structural model, and the fit between them. The validation module is also available as a stand-alone web server and as a programmatically accessible web service. A growing number of journals require the official wwPDB validation reports (produced at biocuration) to accompany manuscripts describing macromolecular structures. Upon public release of the structure, the validation report becomes part of the public PDB archive. Geometric quality scores for proteins in the PDB archive have improved over the past decade. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  20. User's guide for signal validation software: Final report

    International Nuclear Information System (INIS)

    Swisher, V.I.

    1987-09-01

    Northeast Utilities has implemented a real-time signal validation program into the safety parameter display systems (SPDS) at Millstone Units 2 and 3. Signal validation has been incorporated to improve the reliability of the information being used in the SPDS. Signal validation uses Parity Space Vector Analysis to process SPDS sensor data. The Parity Space algorithm determines consistency among independent, redundant input measurements. This information is then used to calculate a validated estimate of that parameter. Additional logic is incorporated to compare partially redundant measurement data. In both plants the SPDS has been designed to monitor the status of critical safety functions (CSFs) and provide information that can be used with plant-specific emergency operating procedures (EOPs). However the CSF logic, EOPs, and complement of plant sensors vary for these plants due to their different design characteristics (MP2 - 870 MWe Combustion Engineering PWR, MP3 - 1150 MWe Westinghouse PWR). These differences in plant design and information requirements result in a variety of signal validation applications
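
    Parity Space Vector Analysis checks consistency among redundant measurements by projecting them onto the subspace orthogonal to the measurement directions. The sketch below illustrates the idea for redundant sensors of a single scalar quantity; the threshold, the isolation rule and the simple averaging of the surviving channels are assumptions made for illustration, not the SPDS implementation.

        import numpy as np

        def parity_check(y, threshold=1.0):
            y = np.asarray(y, dtype=float)
            l = len(y)
            h = np.ones((l, 1))                      # every sensor measures the same quantity
            q, _ = np.linalg.qr(h, mode="complete")
            v = q[:, 1:].T                           # parity projection: v @ h is (numerically) zero
            p = v @ y                                # parity vector, near zero when sensors agree
            if np.linalg.norm(p) <= threshold:
                good = np.ones(l, dtype=bool)
            else:
                # Isolate the sensor whose parity-space signature best matches p.
                scores = np.abs(v.T @ p) / np.linalg.norm(v, axis=0)
                good = np.arange(l) != np.argmax(scores)
            return y[good].mean(), good              # validated estimate, per-sensor status

        estimate, status = parity_check([530.0, 531.5, 529.8, 512.0])
        print(estimate, status)                      # fourth channel flagged, estimate from the rest

    The validated estimate here is a plain average of the consistent channels; a production system would typically use a weighted least-squares estimate and propagate data-quality flags with it.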

  1. Validity and Fairness

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    This paper presents the author's critique on Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…

  2. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  3. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed a balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants participated in the test for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was used in our study and was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for the balance assessment.
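
    The agreement statistics reported (ICC and SEM) can be illustrated with a two-way random, absolute-agreement, single-measure ICC(2,1), sketched below. The COP path-length values are invented, and deriving the SEM from the overall standard deviation is one common convention, not necessarily the one used in this study.

        import numpy as np

        def icc_2_1(data):
            """Two-way random, absolute-agreement, single-measure ICC; data: subjects x devices."""
            data = np.asarray(data, dtype=float)
            n, k = data.shape
            grand = data.mean()
            msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
            msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-devices mean square
            resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
            mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        cop_path = np.array([[52.1, 50.8],     # columns: WBB software vs. force platform (invented)
                             [64.3, 66.0],
                             [48.9, 47.5],
                             [71.2, 73.4],
                             [58.6, 59.1]])
        icc = icc_2_1(cop_path)
        sem = cop_path.std(ddof=1) * np.sqrt(1 - icc)
        print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f}")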

  4. Empirical Validation of Building Simulation Software

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    The work described in this report is the result of a collaborative effort of members of the International Energy Agency (IEA), Task 34/43: Testing and validation of building energy simulation tools experts group.

  5. Structural Validation of the Holistic Wellness Assessment

    Science.gov (United States)

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  6. Internal Validity: A Must in Research Designs

    Science.gov (United States)

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  7. French validation of the Foot Function Index (FFI).

    Science.gov (United States)

    Pourtier-Piotte, C; Pereira, B; Soubrier, M; Thomas, E; Gerbaud, L; Coudeyre, E

    2015-10-01

    French validation of the Foot Function Index (FFI), self-questionnaire designed to evaluate rheumatoid foot according to 3 domains: pain, disability and activity restriction. The first step consisted of translation/back translation and cultural adaptation according to the validated methodology. The second stage was a prospective validation on 53 patients with rheumatoid arthritis who filled out the FFI. The following data were collected: pain (Visual Analog Scale), disability (Health Assessment Questionnaire) and activity restrictions (McMaster Toronto Arthritis questionnaire). A test/retest procedure was performed 15 days later. The statistical analyses focused on acceptability, internal consistency (Cronbach's alpha and Principal Component Analysis), test-retest reproducibility (concordance coefficients), external validity (correlation coefficients) and responsiveness to change. The FFI-F is a culturally acceptable version for French patients with rheumatoid arthritis. The Cronbach's alpha ranged from 0.85 to 0.97. Reproducibility was correct (correlation coefficients>0.56). External validity and responsiveness to change were good. The use of a rigorous methodology allowed the validation of the FFI in the French language (FFI-F). This tool can be used in routine practice and clinical research for evaluating the rheumatoid foot. The FFI-F could be used in other pathologies with foot-related functional impairments. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  8. Validation of the reactor dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.

    1994-05-01

    HEXTRAN is a new three-dimensional, hexagonal reactor dynamics code developed in the Technical Research Centre of Finland (VTT) for VVER type reactors. This report describes the validation work of HEXTRAN. The work has been made with the financing of the Finnish Centre for Radiation and Nuclear Safety (STUK). HEXTRAN is particularly intended for calculation of such accidents, in which radially asymmetric phenomena are included and both good neutron dynamics and two-phase thermal hydraulics are important. HEXTRAN is based on already validated codes. The models of these codes have been shown to function correctly also within the HEXTRAN code. The main new model of HEXTRAN, the spatial neutron kinetics model has been successfully validated against LR-0 test reactor and Loviisa plant measurements. Connected with SMABRE, HEXTRAN can be reliably used for calculation of transients including effects of the whole cooling system of VVERs. Further validation plans are also introduced in the report. (orig.). (23 refs., 16 figs., 2 tabs.)

  9. ICP-MS Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  10. 20 CFR 404.727 - Evidence of a deemed valid marriage.

    Science.gov (United States)

    2010-04-01

    Title 20 (Employees' Benefits), Part 404, Disability Insurance (1950- ), Evidence of Age, Marriage, and Death. § 404.727 Evidence of a deemed valid marriage. (a) General. A deemed valid marriage is a ceremonial marriage we consider valid even...

  11. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominant qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  12. The validation of the turnover intention scale

    Directory of Open Access Journals (Sweden)

    Chris F.C. Bothma

    2013-04-01

    Full Text Available Orientation: Turnover intention as a construct has attracted increased research attention in the recent past, but there are seemingly not many valid and reliable scales around to measure turnover intention. Research purpose: This study focused on the validation of a shortened, six-item version of the turnover intention scale (TIS-6). Motivation for the study: The research question of whether the TIS-6 is a reliable and a valid scale for measuring turnover intention and for predicting actual turnover was addressed in this study. Research design, approach and method: The study was based on a census-based sample (n = 2429) of employees in an information, communication and technology (ICT) sector company (N = 23 134), where the TIS-6 was used as one of the criterion variables. The leavers (those who left the company) in this sample were compared with the stayers (those who remained in the employ of the company) in respect of different variables used in the study. Main findings: It was established that the TIS-6 could measure turnover intentions reliably (α = 0.80). The TIS-6 could significantly distinguish between leavers and stayers (actual turnover), thereby confirming its criterion-predictive validity. The scale also established statistically significant differences between leavers and stayers in respect of a number of the remaining theoretical variables used in the study, thereby also confirming its differential validity. These comparisons were conducted for both the 4-month and the 4-year period after the survey was conducted. Practical/managerial implications: Turnover intention is related to a number of variables in the study, which necessitates a reappraisal and a reconceptualisation of existing turnover intention models. Contribution/value-add: The TIS-6 can be used as a reliable and valid scale to assess turnover intentions and can therefore be used in research to validly and reliably assess turnover intentions or to

  13. CIPS Validation Data Plan

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2012-03-01

    This report documents the analysis, findings and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan (VDP) for the Crud-Induced Power Shift (CIPS) challenge problem and provide guidance for the CIPS VDP implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  14. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  15. Validation of the Vanderbilt Holistic Face Processing Test

    OpenAIRE

    Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  16. Validation of the Vanderbilt Holistic Face Processing Test.

    OpenAIRE

    Chao-Chih Wang; Chao-Chih Wang; David Andrew Ross; Isabel Gauthier; Jennifer Joanna Richler

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the ...

  17. The Treatment Validity of Autism Screening Instruments

    Science.gov (United States)

    Livanis, Andrew; Mouzakitis, Angela

    2010-01-01

    Treatment validity is a frequently neglected topic of screening instruments used to identify autism spectrum disorders. Treatment validity, however, should represent an important aspect of these instruments to link the resulting data to the selection of interventions as well as make decisions about treatment length and intensity. Research…

  18. On the Need for Quality Control in Validation Research.

    Science.gov (United States)

    Maier, Milton H.

    1988-01-01

    Aptitude tests used to help make personnel decisions about military recruits were validated against hands-on tests of job performance for radio repairers and automotive mechanics. The data were filled with errors, reducing the accuracy of the validity coefficients. The article discusses how validity coefficients can be made more accurate by exercising quality control during…

  19. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFRs) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements, given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
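
    As a minimal sketch of the kind of attacker-defender game such an approach builds on (the payoffs and component names are invented, not taken from the paper), the snippet below evaluates the defender's worst case for each hardening choice and picks the maximin strategy, i.e. the choice with the best guaranteed outcome.

        # defender_payoff[hardened][attacked]: outcome for the defender (higher is better).
        defender_payoff = {
            "harden A": {"attack A": -2, "attack B": -3},
            "harden B": {"attack A": -6, "attack B": -1},
        }

        def maximin(payoff):
            worst = {d: min(row.values()) for d, row in payoff.items()}   # attacker does maximal damage
            best = max(worst, key=worst.get)
            return best, worst[best]

        strategy, guarantee = maximin(defender_payoff)
        print(strategy, guarantee)   # hardening A guarantees at least -3 with these invented payoffs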

  20. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing step by step the modeling process and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in open literature to public scrutiny is also emphasized. 16 refs

  1. Validation of Visual Caries Activity Assessment

    DEFF Research Database (Denmark)

    Guedes, R S; Piovesan, C; Ardenghi, T M

    2014-01-01

    We evaluated the predictive and construct validity of a caries activity assessment system associated with the International Caries Detection and Assessment System (ICDAS) in primary teeth. A total of 469 children were reexamined: participants of a caries survey performed 2 yr before (follow-up rate...... of 73.4%). At baseline, children (12-59 mo old) were examined with the ICDAS and a caries activity assessment system. The predictive validity was assessed by evaluating the risk of active caries lesion progression to more severe conditions in the follow-up, compared with inactive lesions. We also...... assessed if children with a higher number of active caries lesions were more likely to develop new lesions (construct validity). Noncavitated active caries lesions at occlusal surfaces presented higher risk of progression than inactive ones. Children with a higher number of active lesions and with higher...

  2. CVThresh: R Package for Level-Dependent Cross-Validation Thresholding

    Directory of Open Access Journals (Sweden)

    Donghoh Kim

    2006-04-01

    Full Text Available The core of the wavelet approach to nonparametric regression is thresholding of wavelet coefficients. This paper reviews a cross-validation method for the selection of the thresholding value in wavelet shrinkage of Oh, Kim, and Lee (2006), and introduces the R package CVThresh implementing details of the calculations for the procedures. This procedure is implemented by coupling a conventional cross-validation with a fast imputation method, so that it overcomes a limitation of data length, a power of 2. It can be easily applied to the classical leave-one-out cross-validation and K-fold cross-validation. Since the procedure is computationally fast, a level-dependent cross-validation can be developed for wavelet shrinkage of data with various sparseness according to levels.

  4. Validity of the Danish Prostate Symptom Score questionnaire in stroke

    DEFF Research Database (Denmark)

    Tibaek, S.; Dehlendorff, Christian

    2009-01-01

    Objective – To determine the content and face validity of the Danish Prostate Symptom Score (DAN-PSS-1) questionnaire in stroke patients. Materials and methods – Content validity was judged among an expert panel in neuro-urology. The judgement was measured by the content validity index (CVI). Face...... validity was indicated in a clinical sample of 482 stroke patients in a hospital-based, cross-sectional survey. Results – I-CVI was rated >0.78 (range 0.94–1.00) for 75% of symptom and bother items corresponding to adequate content validity. The expert panel rated the entire DAN-PSS-1 questionnaire highly...... questionnaire appears to be content and face valid for measuring lower urinary tract symptoms after stroke....

  5. Construct Validity of the Nepalese School Leaving English Reading Test

    Science.gov (United States)

    Dawadi, Saraswati; Shrestha, Prithvi N.

    2018-01-01

    There has been a steady interest in investigating the validity of language tests in the last decades. Despite numerous studies on construct validity in language testing, there are not many studies examining the construct validity of a reading test. This paper reports on a study that explored the construct validity of the English reading test in…

  6. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-bas...

  7. Validity and Reliability of Turkish Male Breast Self-Examination Instrument.

    Science.gov (United States)

    Erkin, Özüm; Göl, İlknur

    2018-04-01

    This study aims to measure the validity and reliability of the Turkish male breast self-examination (MBSE) instrument. The methodological study was performed in 2016 at Ege University, Faculty of Nursing, İzmir, Turkey. The MBSE includes ten steps. For the validity studies, face validity, content validity, and construct validity (exploratory factor analysis) were assessed. For the reliability study, the Kuder-Richardson coefficient was calculated. The content validity index was found to be 0.94. The Kendall W coefficient was 0.80 (p=0.551). The total variance explained by the two factors was found to be 63.24%. Kuder-Richardson 21 was calculated for the reliability study and found to be 0.97 for the instrument. The final instrument included 10 steps and two stages. The Turkish version of the MBSE is a valid and reliable instrument for early diagnosis. The MBSE can be used in Turkish-speaking countries and cultures with its two stages and 10 steps.
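
    The coefficients quoted here (a content validity index and Kuder-Richardson 21) are standard quantities; purely as a reminder of how they are obtained, the snippet below computes both from fabricated data (a 10-step dichotomous checklist and a panel of eight expert relevance ratings), which are assumptions of this sketch and not the study's data:

      import numpy as np

      def kr21(total_scores, k):
          """Kuder-Richardson formula 21 from total scores on k dichotomously scored items."""
          m = np.mean(total_scores)
          v = np.var(total_scores, ddof=1)
          return (k / (k - 1)) * (1.0 - m * (k - m) / (k * v))

      def item_cvi(ratings, relevant_from=3):
          """Item-level content validity index: share of experts rating the item relevant
          (here, 3 or 4 on a 4-point relevance scale)."""
          ratings = np.asarray(ratings)
          return float(np.mean(ratings >= relevant_from))

      totals = np.array([9, 10, 8, 10, 7, 9, 10, 6, 9, 10, 8, 9])   # fabricated checklist totals
      experts = [4, 4, 3, 4, 3, 4, 4, 3]                            # fabricated ratings of one step
      print("KR-21:", round(kr21(totals, k=10), 2))
      print("I-CVI:", round(item_cvi(experts), 2))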

  8. Experimental validation of Monte Carlo calculations for organ dose

    International Nuclear Information System (INIS)

    Yalcintas, M.G.; Eckerman, K.F.; Warner, G.G.

    1980-01-01

    The problem of validating estimates of absorbed dose due to photon energy deposition is examined, together with the computational approaches used to estimate that energy deposition. The limited data available for validating these approaches are discussed, and suggestions are made as to how better validation information might be obtained

  9. Promoting Rigorous Validation Practice: An Applied Perspective

    Science.gov (United States)

    Mattern, Krista D.; Kobrin, Jennifer L.; Camara, Wayne J.

    2012-01-01

    As researchers at a testing organization concerned with the appropriate uses and validity evidence for our assessments, we provide an applied perspective related to the issues raised in the focus article. Newton's proposal for elaborating the consensus definition of validity is offered with the intention to reduce the risks of inadequate…

  10. GPM GROUND VALIDATION CAMPAIGN REPORTS IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Campaign Reports IFloodS dataset consists of various reports filed by the scientists during the GPM Ground Validation Iowa Flood Studies...

  11. Simulation codes and the impact of validation/uncertainty requirements

    International Nuclear Information System (INIS)

    Sills, H.E.

    1995-01-01

    Several of the OECD/CSNI members have adapted a proposed methodology for code validation and uncertainty assessment. Although the validation process adapted by members has a high degree of commonality, the uncertainty assessment processes selected are more variable, ranging from subjective to formal. This paper describes the validation and uncertainty assessment process, the sources of uncertainty, methods of reducing uncertainty, and methods of assessing uncertainty. Examples are presented from the Ontario Hydro application of the validation methodology and uncertainty assessment to the system thermal hydraulics discipline and the TUF (1) system thermal hydraulics code. (author)

  12. Validation studies of nursing diagnoses in neonatology

    Directory of Open Access Journals (Sweden)

    Pavlína Rabasová

    2016-03-01

    Full Text Available Aim: The objective of the review was the analysis of Czech and foreign literature sources and professional periodicals to obtain a relevant comprehensive overview of validation studies of nursing diagnoses in neonatology. Design: Review. Methods: The selection criterion was studies concerning the validation of nursing diagnoses in neonatology. To obtain data from relevant sources, the licensed professional databases EBSCO, Web of Science and Scopus were utilized. The search criteria were: date of publication - unlimited; academic periodicals - full text; peer-reviewed periodicals; search language - English, Czech and Slovak. Results: A total of 788 studies were found. Only 5 studies were eligible for content analysis, dealing specifically with validation of nursing diagnoses in neonatology. The analysis of the retrieved studies suggests that authors are most often concerned with identifying the defining characteristics of nursing diagnoses applicable to both the mother (parents and the newborn. The diagnoses were validated in the domains Role Relationship; Coping/Stress tolerance; Activity/Rest, and Elimination and Exchange. Diagnoses represented were from the field of dysfunctional physical needs as well as the field of psychosocial and spiritual needs. The diagnoses were as follows: Parental role conflict (00064; Impaired parenting (00056; Grieving (00136; Ineffective breathing pattern (00032; Impaired gas exchange (00030; and Impaired spontaneous ventilation (00033. Conclusion: Validation studies enable effective planning of interventions with measurable results and support clinical nursing practice.

  13. Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents: Methodology validation of total organic carbon

    International Nuclear Information System (INIS)

    Porto, Luciana Valeria Ferrari Machado

    2015-01-01

    Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition, mostly administered intravenously, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. Cleaning validation is a requirement of current GMP and consists of documented evidence demonstrating that cleaning procedures are able to remove residues to pre-determined acceptance levels, ensuring that no cross contamination occurs. A simplification of the validation of cleaning processes is accepted, consisting of choosing one product, called the 'worst case', to represent the cleaning processes of all equipment in the same production area. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents (LR) for labeling with 99mTc, evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleanliness of equipment used in LR manufacture, validate the method of Non-Purgeable Organic Carbon (NPOC), and perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of an index called the 'Worst Case Index' (WCI), using information about drug solubility, the difficulty of cleaning the equipment, and the occupancy rate of the products in the production line. The product indicated as the worst case was the LR MIBI-TEC. The method validation assays were performed using a carbon analyser model TOC-Vwp coupled to an autosampler model ASI-V, both from Shimadzu®, controlled by TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the validation method were: system suitability, robustness, linearity, detection limit (DL) and quantification limit (QL), precision

  14. Item validity vs. item discrimination index: a redundancy?

    Science.gov (United States)

    Panjaitan, R. L.; Irawati, R.; Sujana, A.; Hanifah, N.; Djuanda, D.

    2018-03-01

    In the literature on evaluation and test analysis, it is common to find calculations of item validity as well as the item discrimination index (D), with a different formula for each. Meanwhile, other resources state that the item discrimination index can be obtained by calculating the correlation between a testee's score on a particular item and the testee's score on the overall test, which is actually the same concept as item validity. Some research reports, especially undergraduate theses, tend to include both item validity and the item discrimination index in the instrument analysis. These concepts may overlap, since both reflect how well the test measures the examinees' ability. In this paper, examples of results of data processing on item validity and the item discrimination index are compared, and it is discussed whether the two can be represented by just one of them or whether it is better to present both calculations for simple test analysis, especially in undergraduate theses that include test analyses.
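
    The overlap the authors discuss can be made concrete with the two most common definitions: item validity as the item-total correlation, and the discrimination index D as the difference in proportion correct between the top and bottom 27% of examinees. The simulated responses below are invented solely to show the two calculations side by side:

      import numpy as np

      rng = np.random.default_rng(1)
      n_students, n_items = 200, 20
      ability = rng.normal(size=n_students)
      difficulty = rng.normal(size=n_items)
      # Simulated dichotomous responses: higher ability means a higher chance of success.
      p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
      responses = (rng.random((n_students, n_items)) < p_correct).astype(float)

      total = responses.sum(axis=1)
      item = responses[:, 0]

      # "Item validity" as usually computed: correlation of the item score with the total score.
      item_total_r = np.corrcoef(item, total)[0, 1]

      # Discrimination index D: proportion correct in the top 27% minus the bottom 27%.
      order = np.argsort(total)
      k = int(0.27 * n_students)
      d_index = item[order[-k:]].mean() - item[order[:k]].mean()

      print(f"item-total correlation: {item_total_r:.2f}   D index: {d_index:.2f}")

    The two numbers are computed differently but tend to rank items in a broadly similar way, which is exactly the redundancy question the paper raises.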

  15. A validation methodology for fault-tolerant clock synchronization

    Science.gov (United States)

    Johnson, S. C.; Butler, R. W.

    1984-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating an experimental implementation of the Software Implemented Fault Tolerance (SIFT) clock synchronization algorithm. The design proof of the algorithm defines the maximum skew between any two nonfaulty clocks in the system in terms of theoretical upper bounds on certain system parameters. The quantile to which each parameter must be estimated is determined by a combinatorial analysis of the system reliability. The parameters are measured by direct and indirect means, and upper bounds are estimated. A nonparametric method based on an asymptotic property of the tail of a distribution is used to estimate the upper bound of a critical system parameter. Although the proof process is very costly, it is extremely valuable when validating the crucial synchronization subsystem.

  16. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented; a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented including error band based, multivariate, sum of squares of residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made
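
    As a pocket-sized version of the Monte Carlo propagation exercise described for the damped spring-mass example, the following sketch samples uncertain mass, damping, and stiffness values and pushes them through the damped natural frequency; the nominal values and uncertainty ranges are invented and are not taken from the tutorial:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Invented nominal values with uniform uncertainty bands.
      m = rng.uniform(0.95, 1.05, n)     # mass [kg]
      c = rng.uniform(0.38, 0.42, n)     # damping coefficient [N s/m]
      k = rng.uniform(95.0, 105.0, n)    # stiffness [N/m]

      # Damped natural frequency of m*x'' + c*x' + k*x = 0 (underdamped case).
      omega_d = np.sqrt(k / m - (c / (2.0 * m)) ** 2)

      print(f"mean = {omega_d.mean():.3f} rad/s, std = {omega_d.std():.3f} rad/s")
      print("central 95% interval:", np.percentile(omega_d, [2.5, 97.5]).round(3))

    The sensitivity method mentioned in the abstract would instead linearize omega_d around the nominal parameters and combine the input variances analytically; comparing the two is a common sanity check.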

  17. How Developments in Psychology and Technology Challenge Validity Argumentation

    Science.gov (United States)

    Mislevy, Robert J.

    2016-01-01

    Validity is the sine qua non of properties of educational assessment. While a theory of validity and a practical framework for validation has emerged over the past decades, most of the discussion has addressed familiar forms of assessment and psychological framings. Advances in digital technologies and in cognitive and social psychology have…

  18. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident
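
    The construct-validity criticism concerns the risk priority number itself: severity, probability and detectability are ordinal scores, yet the RPN multiplies them as though they were ratio quantities. A small hypothetical comparison (scores invented) makes the problem visible:

      # Two hypothetical failure modes scored on the usual 1-10 ordinal scales.
      failures = {
          "rare but catastrophic": {"severity": 10, "probability": 1, "detectability": 5},
          "frequent but minor":    {"severity": 2,  "probability": 5, "detectability": 5},
      }

      for name, s in failures.items():
          rpn = s["severity"] * s["probability"] * s["detectability"]
          print(f"{name:22s} RPN = {rpn}")

      # Both modes come out at RPN = 50, although most teams would prioritise them very
      # differently; multiplying ordinal ranks hides exactly this distinction.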

  19. Ensuring validity in qualitative International Business Research

    DEFF Research Database (Denmark)

    Andersen, Poul Houman

    2004-01-01

    The purpose of this paper is to provide an account of how the validity issue may be grasped within a qualitative approach to the IB field...

  20. The Danish anal sphincter rupture questionnaire: Validity and reliability

    DEFF Research Database (Denmark)

    Due, Ulla; Ottesen, Marianne

    2008-01-01

    Objective. To revise, validate and test for reliability an anal sphincter rupture questionnaire in relation to construct, content and face validity. Setting and background. Since 1996 women with anal sphincter rupture (ASR) at one of the public university hospitals in Copenhagen, Denmark have been...... main questions but one. Two questions needed further explanation. Seven women made minor errors. Conclusion. The validated Danish questionnaire has a good construct, content and face validity. It is a well accepted, reliable, simple and clinically relevant screening tool. It reveals physical problems...... offered pelvic floor muscle examination and instruction by a specialist physiotherapist. In relation to that, a non-validated questionnaire about anal and urinary incontinence was to be answered six months after childbirth. Method. The original questionnaire was revised and a pilot test was performed...

  1. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philip M; Harris, Emma J

    2013-08-01

    Validation is required to ensure automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combine the information from multiple expert outlines, to give a single metric for validation, is unclear. None consider a metric that can be tailored to case-specific requirements in radiotherapy planning. Validation index (VI), a new validation metric which uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pairwise Dice similarity coefficient (DSC) and found to be more sensitive than the pairwise DSC to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
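
    The record does not give the VI formula itself, but the baseline it is compared against, the pairwise Dice similarity coefficient, is standard. The sketch below computes it for an automated mask against several expert masks; the toy circular contours and their perturbations are invented:

      import numpy as np

      def dice(a, b):
          """Dice similarity coefficient between two binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      rng = np.random.default_rng(3)
      yy, xx = np.ogrid[:64, :64]
      auto = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2          # automated segmentation

      # Three "expert" outlines: the same disc with small random radius perturbations.
      experts = [(xx - 32) ** 2 + (yy - 32) ** 2 < (15 + rng.integers(-2, 3)) ** 2
                 for _ in range(3)]

      pairwise = [dice(auto, e) for e in experts]
      print("Dice against each expert:", np.round(pairwise, 3))
      print("mean pairwise Dice:", round(float(np.mean(pairwise)), 3))

    As described in the abstract, the VI additionally uses the experts' level of agreement, which a plain pairwise average does not capture.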

  2. Validation of the Reflux Disease Questionnaire into Greek

    Directory of Open Access Journals (Sweden)

    Eirini Oikonomidou

    2012-09-01

    Full Text Available Primary care physicians face challenges in diagnosing and managing gastroesophageal reflux disease (GERD). The Reflux Disease Questionnaire (RDQ) meets the standards of validity, reliability, and practicability. This paper reports on the validation of the Greek translation of the RDQ. The RDQ is a condition-specific instrument. For the validation of the questionnaire, the internal consistency of its items was established using Cronbach's alpha coefficient. The reproducibility (test-retest reliability) was measured by the kappa correlation coefficient, and criterion validity was calculated against the diagnosis of another questionnaire already translated and validated into Greek (IDGP), using the kappa correlation coefficient. A factor analysis was also performed. The Greek RDQ showed a high overall internal consistency (alpha value: 0.91 for individual comparison). All 8 items regarding heartburn and regurgitation (GERD) had good reproducibility (Cohen's κ 0.60-0.79), while the remaining 4 items about dyspepsia had moderate reproducibility (Cohen's κ 0.40-0.59). The kappa coefficient for criterion validity for GERD was rather poor (0.20, 95% CI: 0.04-0.36), and the overall agreement between the results of the RDQ questionnaire and those based on the IDGP questionnaire was 70.5%. Factor analysis indicated 3 factors with eigenvalues over 1.0, responsible for 76.91% of the variance. Regurgitation items correlated more strongly with the third component, while pain behind the sternum and upper stomach pain correlated with the second component. The Greek version of the RDQ seems to be a reliable and valid instrument following the pattern of the original questionnaire, and could be used in primary care research in Greece.
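
    Two of the statistics this validation leans on, Cronbach's alpha for internal consistency and Cohen's kappa for test-retest agreement, are easy to compute from item-level data. The responses below are fabricated (they are not RDQ data) and serve only to show the calculations:

      import numpy as np

      def cronbach_alpha(items):
          """items: respondents x items matrix of scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_variance = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_variance / total_variance)

      def cohen_kappa(x, y):
          """Unweighted Cohen's kappa for two categorical ratings of the same subjects."""
          x, y = np.asarray(x), np.asarray(y)
          observed = np.mean(x == y)
          expected = sum(np.mean(x == c) * np.mean(y == c) for c in np.union1d(x, y))
          return (observed - expected) / (1.0 - expected)

      rng = np.random.default_rng(7)
      latent = rng.integers(0, 4, size=(50, 1))                             # symptom burden
      items = np.clip(latent + rng.integers(-1, 2, size=(50, 8)), 0, 3)     # 8 correlated items
      print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

      test = rng.integers(0, 2, size=80)
      retest = np.where(rng.random(80) < 0.85, test, 1 - test)              # ~85% identical answers
      print("Cohen's kappa:", round(cohen_kappa(test, retest), 2))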

  3. The Value of Qualitative Methods in Social Validity Research

    Science.gov (United States)

    Leko, Melinda M.

    2014-01-01

    One quality indicator of intervention research is the extent to which the intervention has a high degree of social validity, or practicality. In this study, I drew on Wolf's framework for social validity and used qualitative methods to ascertain five middle schoolteachers' perceptions of the social validity of System 44®--a phonics-based reading…

  4. Validation of Land Surface Temperature from Sentinel-3

    Science.gov (United States)

    Ghent, D.

    2017-12-01

    One of the main objectives of the Sentinel-3 mission is to measure sea- and land-surface temperature with high-end accuracy and reliability in support of environmental and climate monitoring in an operational context. Calibration and validation are thus key criteria for operationalization within the framework of the Sentinel-3 Mission Performance Centre (S3MPC). Land surface temperature (LST) has a long heritage of satellite observations which have facilitated our understanding of land surface and climate change processes, such as desertification, urbanization, deforestation and land/atmosphere coupling. These observations have been acquired from a variety of satellite instruments on platforms in both low-earth orbit and in geostationary orbit. Retrieval accuracy can be a challenge though; surface emissivities can be highly variable owing to the heterogeneity of the land, and atmospheric effects caused by the presence of aerosols and by water vapour absorption can give a bias to the underlying LST. As such, a rigorous validation is critical in order to assess the quality of the data and the associated uncertainties. Validation of the level-2 SL_2_LST product, which became freely available on an operational basis from 5th July 2017 builds on an established validation protocol for satellite-based LST. This set of guidelines provides a standardized framework for structuring LST validation activities. The protocol introduces a four-pronged approach which can be summarised thus: i) in situ validation where ground-based observations are available; ii) radiance-based validation over sites that are homogeneous in emissivity; iii) intercomparison with retrievals from other satellite sensors; iv) time-series analysis to identify artefacts on an interannual time-scale. This multi-dimensional approach is a necessary requirement for assessing the performance of the LST algorithm for the Sea and Land Surface Temperature Radiometer (SLSTR) which is designed around biome

  5. In-Flight Validation of Mid and Thermal Infrared Remotely Sensed Data Using the Lake Tahoe and Salton Sea Automated Validation Sites

    Science.gov (United States)

    Hook, Simon J.

    2008-01-01

    The presentation includes an introduction, Lake Tahoe site layout and measurements, Salton Sea site layout and measurements, field instrument calibration and cross-calculations, data reduction methodology and error budgets, and example results for MODIS. Summary and conclusions are: 1) Lake Tahoe CA/NV automated validation site was established in 1999 to assess radiometric accuracy of satellite and airborne mid and thermal infrared data and products. Water surface temperatures range from 4-25 °C. 2) Salton Sea CA automated validation site was established in 2008 to broaden range of available water surface temperatures and atmospheric water vapor test cases. Water surface temperatures range from 15-35 °C. 3) Sites provide all information necessary for validation every 2 mins (bulk temperature, skin temperature, air temperature, wind speed, wind direction, net radiation, relative humidity). 4) Sites have been used to validate mid and thermal infrared data and products from: ASTER, AATSR, ATSR2, MODIS-Terra, MODIS-Aqua, Landsat 5, Landsat 7, MTI, TES, MASTER, MAS. 5) Approximately 10 years of data available to help validate AVHRR.

  6. Verification and validation of RADMODL Version 1.0

    International Nuclear Information System (INIS)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident

  8. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
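
    The quantity at the centre of the comparison, the c-statistic, is the probability that a randomly chosen case receives a higher linear predictor than a randomly chosen non-case. The simulation below (invented coefficients and case-mix; it is not the paper's permutation test) shows how a narrower case-mix alone can depress the c-statistic at external validation even when the model's coefficients remain correct:

      import numpy as np

      def c_statistic(linear_predictor, outcome):
          """Probability that a case outranks a non-case; ties count one half."""
          cases = linear_predictor[outcome == 1]
          controls = linear_predictor[outcome == 0]
          diff = cases[:, None] - controls[None, :]
          return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

      rng = np.random.default_rng(11)
      beta = np.array([1.0, 0.5])                        # "true" model coefficients

      def simulate(n, predictor_sd):
          x = rng.normal(scale=predictor_sd, size=(n, 2))
          lp = x @ beta
          y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lp))).astype(int)
          return lp, y

      lp_dev, y_dev = simulate(2000, predictor_sd=1.0)   # development case-mix
      lp_val, y_val = simulate(2000, predictor_sd=0.6)   # less heterogeneous validation case-mix

      print("c-statistic, development:", round(c_statistic(lp_dev, y_dev), 3))
      print("c-statistic, validation :", round(c_statistic(lp_val, y_val), 3))
      # The drop reflects the narrower case-mix, not incorrect regression coefficients.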

  9. Brazilian Portuguese version of the Revised Fibromyalgia Impact Questionnaire (FIQR-Br): cross-cultural validation, reliability, and construct and structural validation.

    Science.gov (United States)

    Lupi, Jaqueline Basilio; Carvalho de Abreu, Daniela Cristina; Ferreira, Mariana Candido; Oliveira, Renê Donizeti Ribeiro de; Chaves, Thais Cristina

    2017-08-01

    This study aimed to culturally adapt and validate the Revised Fibromyalgia Impact Questionnaire (FIQR) to Brazilian Portuguese, by the use of analysis of internal consistency, reliability, and construct and structural validity. A total of 100 female patients with fibromyalgia participated in the validation process of the Brazilian Portuguese version of the FIQR (FIQR-Br). The intraclass correlation coefficient (ICC) was used for statistical analysis of reliability (test-retest), Cronbach's alpha for internal consistency, Pearson's rank correlation for construct validity, and confirmatory factor analysis (CFA) for structural validity. Excellent levels of reliability were verified, with ICC greater than 0.75 for all questions and domains of the FIQR-Br. For internal consistency, alpha values greater than 0.70 for the items and domains of the questionnaire were observed. Moderate (0.40-0.70) correlations were observed for the scores of domains and total score between the FIQR-Br and FIQ-Br. The structure of the three domains of the FIQR-Br was confirmed by CFA. The results of this study suggest that the FIQR-Br is a reliable and valid instrument for assessing fibromyalgia-related impact, and support its use in clinical settings and research. The structure of the three domains of the FIQR-Br was also confirmed. Implications for Rehabilitation Fibromyalgia is a chronic musculoskeletal disorder characterized by widespread and diffuse pain, fatigue, sleep disturbances, and depression. The disease significantly impairs patients' quality of life and can be highly disabling. To be used in multicenter research efforts, the Revised Fibromyalgia Impact Questionnaire (FIQR) must be cross-culturally validated and psychometrically tested. This paper will make available a new version of the FIQR-Br since another version already exists, but there are concerns about its measurement properties. The availability of an instrument adapted to and validated for Brazilian

  10. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  11. Educational testing validity and reliability in pharmacy and medical education literature.

    Science.gov (United States)

    Hoover, Matthew J; Jung, Rose; Jacobs, David M; Peeters, Michael J

    2013-12-16

    To evaluate and compare the reliability and validity of educational testing reported in pharmacy education journals with that in the medical education literature. Descriptions of validity evidence sources (content, construct, criterion, and reliability) were extracted from articles that reported educational testing of learners' knowledge, skills, and/or abilities. Using educational testing, the findings of 108 pharmacy education articles were compared to the findings of 198 medical education articles. For pharmacy educational testing, 14 articles (13%) reported more than 1 validity evidence source, while 83 articles (77%) reported 1 validity evidence source and 11 articles (10%) did not have evidence. Among validity evidence sources, content validity was reported most frequently. Compared with the pharmacy education literature, more medical education articles reported both validity and reliability (59%). Overall, validity and reliability reporting were limited in the pharmacy education literature compared to medical education.

  12. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  13. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is in the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  14. Validating Measures of Mathematical Knowledge for Teaching

    Science.gov (United States)

    Kane, Michael

    2007-01-01

    According to Schilling, Blunk, and Hill, the set of papers presented in this journal issue had two main purposes: (1) to use an argument-based approach to evaluate the validity of the tests of mathematical knowledge for teaching (MKT), and (2) to critically assess the author's version of an argument-based approach to validation (Kane, 2001, 2004).…

  15. Validation of the Social Appearance Anxiety Scale: Factor, Convergent, and Divergent Validity

    Science.gov (United States)

    Levinson, Cheri A.; Rodebaugh, Thomas L.

    2011-01-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor,…

  16. Italian Validation of Homophobia Scale (HS)

    OpenAIRE

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L.; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A.

    2015-01-01

    Introduction: The Homophobia Scale (HS) is a valid tool to assess homophobia. This test is self‐reporting, composed of 25 items, which assesses a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. Aim: The aim of this study was to validate the HS in the Italian context. Methods: An Italian translation of the HS was carried out by two bilingual people, after which an English native translated the test back i...

  17. Verification and Validation in Systems Engineering

    CERN Document Server

    Debbabi, Mourad; Jarraya, Yosr; Soeanu, Andrei; Alawneh, Luay

    2010-01-01

    "Verification and validation" represents an important process used for the quality assessment of engineered systems and their compliance with the requirements established at the beginning of or during the development cycle. Debbabi and his coauthors investigate methodologies and techniques that can be employed for the automatic verification and validation of systems engineering design models expressed in standardized modeling languages. Their presentation includes a bird's eye view of the most prominent modeling languages for software and systems engineering, namely the Unified Model

  18. Reliability and Concurrent Validity of the International Personality ...

    African Journals Online (AJOL)

    Reliability and Concurrent Validity of the International Personality item Pool (IPIP) Big-five Factor Markers in Nigeria. ... Nigerian Journal of Psychiatry ... Aims: The aim of this study was to assess the internal consistency and concurrent validity ...

  19. Measuring Nutrition Literacy in Spanish-Speaking Latinos: An Exploratory Validation Study.

    Science.gov (United States)

    Gibbs, Heather D; Camargo, Juliana M T B; Owens, Sarah; Gajewski, Byron; Cupertino, Ana Paula

    2017-11-21

    Nutrition is important for preventing and treating chronic diseases highly prevalent among Latinos, yet no tool exists for measuring nutrition literacy among Spanish speakers. This study aimed to adapt the validated Nutrition Literacy Assessment Instrument for Spanish-speaking Latinos. The study was developed in two phases: adaptation and validity testing. Adaptation included translation, expert item content review, and interviews with Spanish speakers. For validity testing, 51 participants completed the Short Assessment of Health Literacy-Spanish (SAHL-S), the Nutrition Literacy Assessment Instrument in Spanish (NLit-S), and a socio-demographic questionnaire. Validity and reliability statistics were analyzed. Content validity was confirmed with a Scale Content Validity Index of 0.96. Validity testing demonstrated that NLit-S scores were strongly correlated with SAHL-S scores (r = 0.52), and internal consistency was excellent (Cronbach's α = 0.92). The NLit-S demonstrates validity and reliability for measuring nutrition literacy among Spanish speakers.

  20. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  1. Entropy Evaluation Based on Value Validity

    Directory of Open Access Journals (Sweden)

    Tarald O. Kvålseth

    2014-09-01

    Full Text Available Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used and misused summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept and criteria for value validity as a means of determining if an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made for different probability distributions. While neither S nor its relative entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that does comply fully with the value-validity requirements, and its statistical inference procedure is discussed.
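
    A small numerical companion to the argument, with one caveat: the Euclidean-distance measure below (one minus the normalised distance to the uniform distribution) is a stand-in of our own choosing to illustrate the idea, not the specific measure proposed in the paper. S and its normalised form S* = S / ln(n) are standard:

      import numpy as np

      def shannon_entropy(p):
          p = np.asarray(p, dtype=float)
          p = p[p > 0]
          return float(-np.sum(p * np.log(p)))

      def euclidean_measure(p):
          """1 minus the Euclidean distance to the uniform distribution, scaled so that
          a degenerate distribution scores 0 and the uniform distribution scores 1.
          (Illustrative stand-in only, not the paper's exact measure.)"""
          p = np.asarray(p, dtype=float)
          n = len(p)
          uniform = np.full(n, 1.0 / n)
          worst = np.sqrt((1.0 - 1.0 / n) ** 2 + (n - 1) * (1.0 / n) ** 2)
          return 1.0 - np.linalg.norm(p - uniform) / worst

      for p in ([0.25, 0.25, 0.25, 0.25], [0.85, 0.05, 0.05, 0.05]):
          s = shannon_entropy(p)
          s_star = s / np.log(len(p))
          print(f"p = {p}: S = {s:.3f}, S* = {s_star:.3f}, Euclidean measure = {euclidean_measure(p):.3f}")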

  2. Validity and reliability of the NAB Naming Test.

    Science.gov (United States)

    Sachs, Bonnie C; Rush, Beth K; Pedraza, Otto

    2016-05-01

    Confrontation naming is commonly assessed in neuropsychological practice, but few standardized measures of naming exist and those that do are susceptible to the effects of education and culture. The Neuropsychological Assessment Battery (NAB) Naming Test is a 31-item measure used to assess confrontation naming. Despite adequate psychometric information provided by the test publisher, there has been limited independent validation of the test. In this study, we investigated the convergent and discriminant validity, internal consistency, and alternate forms reliability of the NAB Naming Test in a sample of adults (Form 1: n = 247, Form 2: n = 151) clinically referred for neuropsychological evaluation. Results indicate adequate-to-good internal consistency and alternate forms reliability. We also found strong convergent validity as demonstrated by relationships with other neurocognitive measures. We found preliminary evidence that the NAB Naming Test demonstrates a more pronounced ceiling effect than other commonly used measures of naming. To our knowledge, this represents the largest published independent validation study of the NAB Naming Test in a clinical sample. Our findings suggest that the NAB Naming Test demonstrates adequate validity and reliability and merits consideration in the test arsenal of clinical neuropsychologists.

  3. Guidelines for the verification and validation of expert system software and conventional software: Validation scenarios. Volume 6

    International Nuclear Information System (INIS)

    Mirsky, S.M.; Hayes, J.E.; Miller, L.A.

    1995-03-01

    This report is the sixth volume in a series of reports describing the results of the Expert System Verification and Validation (V&V) project which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity was concerned with the development of a methodology for selecting validation scenarios and subsequently applying it to two expert systems used for nuclear utility applications. Validation scenarios were defined and classified into five categories: PLANT, TEST, BASICS, CODE, and LICENSING. A sixth type, REGRESSION, is a composite of the others and refers to the practice of using trusted scenarios to ensure that modifications to software did not change unmodified functions. Rationale was developed for preferring scenarios selected from the categories in the order listed and for determining under what conditions to select scenarios from other types. A procedure incorporating all of the recommendations was developed as a generalized method for generating validation scenarios. The procedure was subsequently applied to two expert systems used in the nuclear industry and was found to be effective, given that an experienced nuclear engineer made the final scenario selections. A method for generating scenarios directly from the knowledge base component was suggested

  5. Level validity of self-report whole-family measures.

    Science.gov (United States)

    Manders, Willeke A; Cook, William L; Oud, Johan H L; Scholte, Ron H J; Janssens, Jan M A M; De Bruyn, Eric E J

    2007-12-01

    This article introduces an approach to testing the level validity of family assessment instruments (i.e., whether a family instrument measures family functioning at the level of the system it purports to assess). Two parents and 2 adolescents in 69 families rated the warmth in each of their family relationships and in the family as a whole. Family members' ratings of whole-family warmth assessed family functioning not only at the family level (i.e., characteristics of the family as a whole) but also at the individual level of analysis (i.e., characteristics of family members as raters), indicating a lack of level validity. Evidence was provided for the level validity of a latent variable based on family members' ratings of whole-family warmth. The findings underscore the importance of assessing the level validity of individual ratings of whole-family functioning.

  6. Computer system validation: an overview of official requirements and standards.

    Science.gov (United States)

    Hoffmann, A; Kähny-Simonius, J; Plattner, M; Schmidli-Vckovski, V; Kronseder, C

    1998-02-01

    A brief overview is presented of the relevant documents that companies in the pharmaceutical industry must take into consideration to fulfil computer system validation requirements. We concentrate on official requirements and valid standards in the USA, the European Community and Switzerland. There are basically three GMP guidelines, their interpretations by associations of interest such as APV and PDA, and the GAMP Suppliers Guide. However, the three GMP guidelines imply the same philosophy about computer system validation. They describe more of a what-to-do approach for validation, whereas the GAMP Suppliers Guide describes a how-to-do approach. Nevertheless, they do not contain major discrepancies.

  7. Flight code validation simulator

    Science.gov (United States)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  8. Failure mode and effects analysis outputs: are they valid?

    Directory of Open Access Journals (Sweden)

    Shebl Nada

    2012-06-01

    Full Text Available Abstract Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies

  9. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results are still valid with succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments also a comparison to experimental data is carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results show also a very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing
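
    The regression step described, running the same input deck through two code versions and comparing characteristic physical parameters against a tolerance, maps naturally onto a small comparison script. Everything below (parameter names, values, and the 2% tolerance) is invented for illustration and is not part of the ASTEC tool chain; only the version labels echo the abstract:

      # Characteristic parameters extracted from two runs of the same input deck.
      results_v2_0r1 = {"peak_pressure_pa": 2.41e5,
                        "peak_gas_temperature_k": 408.0,
                        "iodine_airborne_fraction": 1.3e-3}
      results_v2_0r2 = {"peak_pressure_pa": 2.43e5,
                        "peak_gas_temperature_k": 407.0,
                        "iodine_airborne_fraction": 2.9e-3}

      TOLERANCE = 0.02   # maximum allowed relative deviation per characteristic parameter

      def compare(reference, candidate, tol=TOLERANCE):
          """Return a report line for every parameter whose relative deviation exceeds tol."""
          report = []
          for name, ref in reference.items():
              rel = abs(candidate[name] - ref) / abs(ref)
              if rel > tol:
                  report.append(f"{name}: {ref:g} -> {candidate[name]:g} ({rel:.0%} deviation)")
          return report

      problems = compare(results_v2_0r1, results_v2_0r2)
      print("regression check:", "PASS" if not problems else "FAIL")
      for line in problems:
          print("  " + line)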

  10. Microsoft Visio 2013 business process diagramming and validation

    CERN Document Server

    Parker, David

    2013-01-01

    Microsoft Visio 2013 Business Process Diagramming and Validation provides a comprehensive and practical tutorial including example code and demonstrations for creating validation rules, writing ShapeSheet formulae, and much more.If you are a Microsoft Visio 2013 Professional Edition power user or developer who wants to get to grips with both the essential features of Visio 2013 and the validation rules in this edition, then this book is for you. A working knowledge of Microsoft Visio and optionally .NET for the add-on code is required, though previous knowledge of business process diagramming

  11. Validation of gamma irradiator controls for quality and regulatory compliance

    International Nuclear Information System (INIS)

    Harding, R.B.; Pinteric, F.J.A.

    1995-01-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the current good manufacturing practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focussed on this component of the process validation. (author)

  12. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
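
    For orientation, a generic illustration of cross-validated LASSO combined with a permutation test of the model score (standard scikit-learn usage on synthetic data, not the NTCP models or data of the cited study) could look as follows.

      # Generic sketch of cross-validated LASSO plus a permutation test of its score,
      # on synthetic data; this is not the NTCP model or the data set of the study.
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import cross_val_score, permutation_test_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 20))
      y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)

      model = Lasso(alpha=0.1)
      cv_scores = cross_val_score(model, X, y, cv=5)   # out-of-sample R^2 per fold
      score, perm_scores, p_value = permutation_test_score(
          model, X, y, cv=5, n_permutations=200, random_state=0)

      print(cv_scores.mean(), score, p_value)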

  13. Anxiety measures validated in perinatal populations: a systematic review.

    Science.gov (United States)

    Meades, Rose; Ayers, Susan

    2011-09-01

    Research and screening of anxiety in the perinatal period is hampered by a lack of psychometric data on self-report anxiety measures used in perinatal populations. This paper aimed to review self-report measures that have been validated with perinatal women. A systematic search was carried out of four electronic databases. Additional papers were obtained through searching identified articles. Thirty studies were identified that reported validation of an anxiety measure with perinatal women. Most commonly validated self-report measures were the General Health Questionnaire (GHQ), State-Trait Anxiety Inventory (STAI), and Hospital Anxiety and Depression Scales (HADS). Of the 30 studies included, 11 used a clinical interview to provide criterion validity. Remaining studies reported one or more other forms of validity (factorial, discriminant, concurrent and predictive) or reliability. The STAI shows criterion, discriminant and predictive validity and may be most useful for research purposes as a specific measure of anxiety. The Kessler 10 (K-10) may be the best short screening measure due to its ability to differentiate anxiety disorders. The Depression Anxiety Stress Scales 21 (DASS-21) measures multiple types of distress, shows appropriate content, and remains to be validated against clinical interview in perinatal populations. Nineteen studies did not report sensitivity or specificity data. The early stages of research into perinatal anxiety, the multitude of measures in use, and methodological differences restrict comparison of measures across studies. There is a need for further validation of self-report measures of anxiety in the perinatal period to enable accurate screening and detection of anxiety symptoms and disorders. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Results from the First Validation Phase of CAP code

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The second stage of the Safety Analysis Code Development for Nuclear Power Plants project was launched in April 2010 and is scheduled to run through 2012; its scope of work covers code validation through licensing preparation. As a part of this project, CAP (Containment Analysis Package) will follow the same procedures. CAP's validation work is organized hierarchically into four validation steps using: 1) fundamental phenomena; 2) principal phenomena (mixing and transport) and components in containment; 3) demonstration tests by small, medium and large facilities and International Standard Problems; and 4) comparison with other containment codes such as GOTHIC or CONTEMPT. In addition, collecting the experimental data related to containment phenomena and constructing the corresponding database is one of the major tasks of the second stage of this project. From the validation of fundamental phenomena, the current capability and the future improvements of the CAP code are expected to be revealed. For this purpose, simple but significant problems, which have exact analytical solutions, were selected and calculated for validation of fundamental phenomena. In this paper, some results of the validation problems for the selected fundamental phenomena are summarized and briefly discussed.

  15. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in ''American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors,'' ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard

  16. Test validation of nuclear and fossil fuel control operators

    International Nuclear Information System (INIS)

    Moffie, D.J.

    1976-01-01

    To establish job relatedness, one must go through a procedure of concurrent and predictive validation. For concurrent validity a group of employees is tested and the test scores are related to performance concurrently or during the same time period. For predictive validity, individuals are tested but the results of these tests are not used at the time of employment. The tests are sealed and scored at a later date, and then related to job performance. Job performance data include ratings by supervisors, actual job performance indices, turnover, absenteeism, progress in training, etc. The testing guidelines also stipulate that content and construct validity can be used

  17. Italian Validation of Homophobia Scale (HS)

    Directory of Open Access Journals (Sweden)

    Giacomo Ciocca, PsyD, PhD

    2015-09-01

    Conclusions: The Italian validation of the HS revealed the use of this self‐report test to have good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into the clinical praxis and into programs for the prevention of homophobic behavior. Ciocca G, Capuano N, Tuziak B, Mollaioli D, Limoncin E, Valsecchi D, Carosa E, Gravina GL, Gianfrilli D, Lenzi A, and Jannini EA. Italian validation of Homophobia Scale (HS). Sex Med 2015;3:213–218.

  18. Reliability and validity of risk analysis

    International Nuclear Information System (INIS)

    Aven, Terje; Heide, Bjornar

    2009-01-01

    In this paper we investigate to what extent risk analysis meets the scientific quality requirements of reliability and validity. We distinguish between two types of approaches within risk analysis, relative frequency-based approaches and Bayesian approaches. The former category includes both traditional statistical inference methods and the so-called probability of frequency approach. Depending on the risk analysis approach, the aim of the analysis is different, the results are presented in different ways and consequently the meaning of the concepts reliability and validity are not the same.

  19. Earth Observation for Citizen Science Validation, or Citizen Science for Earth Observation Validation? The Role of Quality Assurance of Volunteered Observations

    Directory of Open Access Journals (Sweden)

    Didier G. Leibovici

    2017-10-01

    Full Text Available Environmental policy involving citizen science (CS) is of growing interest. In support of this open data stream of information, validation or quality assessment of the CS geo-located data to their appropriate usage for evidence-based policy making needs a flexible and easily adaptable data curation process ensuring transparency. Addressing these needs, this paper describes an approach for automatic quality assurance as proposed by the Citizen OBservatory WEB (COBWEB) FP7 project. This approach is based upon a workflow composition that combines different quality controls, each belonging to seven categories or “pillars”. Each pillar focuses on a specific dimension in the types of reasoning algorithms for CS data qualification. These pillars attribute values to a range of quality elements belonging to three complementary quality models. Additional data from various sources, such as Earth Observation (EO) data, are often included as part of the inputs of quality controls within the pillars. However, qualified CS data can also contribute to the validation of EO data. Therefore, the question of validation can be considered as “two sides of the same coin”. Based on an invasive species CS study, concerning Fallopia japonica (Japanese knotweed), the paper discusses the flexibility and usefulness of qualifying CS data, either when using an EO data product for the validation within the quality assurance process, or validating an EO data product that describes the risk of occurrence of the plant. Both validation paths are found to be improved by quality assurance of the CS data. Addressing the reliability of CS open data, issues and limitations of the role of quality assurance for validation, due to the quality of secondary data used within the automatic workflow, are described, e.g., error propagation, paving the route to improvements in the approach.

  20. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    Energy Technology Data Exchange (ETDEWEB)

    SEXTON, R.A.

    2000-03-13

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation.

  1. Spent Nuclear Fuel (SNF) Process Validation Technical Support Plan

    International Nuclear Information System (INIS)

    SEXTON, R.A.

    2000-01-01

    The purpose of Process Validation is to confirm that nominal process operations are consistent with the expected process envelope. The Process Validation activities described in this document are not part of the safety basis, but are expected to demonstrate that the process operates well within the safety basis. Some adjustments to the process may be made as a result of information gathered in Process Validation

  2. The brief negative symptom scale: validation of the German translation and convergent validity with self-rated anhedonia and observer-rated apathy.

    Science.gov (United States)

    Bischof, Martin; Obermann, Caitriona; Hartmann, Matthias N; Hager, Oliver M; Kirschner, Matthias; Kluge, Agne; Strauss, Gregory P; Kaiser, Stefan

    2016-11-22

    Negative symptoms are considered core symptoms of schizophrenia. The Brief Negative Symptom Scale (BNSS) was developed to measure this symptomatic dimension according to a current consensus definition. The present study examined the psychometric properties of the German version of the BNSS. To expand former findings on convergent validity, we employed the Temporal Experience of Pleasure Scale (TEPS), a hedonic self-report that distinguishes between consummatory and anticipatory pleasure. Additionally, we addressed convergent validity with observer-rated assessment of apathy with the Apathy Evaluation Scale (AES), which was completed by the patient's primary nurse. Data were collected from 75 in- and outpatients from the Psychiatric Hospital, University of Zurich, diagnosed with either schizophrenia or schizoaffective disorder. We assessed convergent and discriminant validity, internal consistency and inter-rater reliability. We largely replicated the findings of the original version showing good psychometric properties of the BNSS. In addition, the primary nurse's evaluation correlated moderately with the interview-based clinician rating. BNSS anhedonia items showed good convergent validity with the TEPS. Overall, the German BNSS shows good psychometric properties comparable to the original English version. Convergent validity extends beyond interview-based assessments of negative symptoms to self-rated anhedonia and observer-rated apathy.

  3. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed, and of the result obtained, i.e., the converted system. This document provides an

  4. Validation and test report

    DEFF Research Database (Denmark)

    Pedersen, Jens Meldgaard; Andersen, T. Bull

    2012-01-01

    As a consequence of extensive movement artefacts seen during dynamic contractions, the following validation and test report investigates the physiological responses to a static contraction in a standing and a supine position. Eight subjects performed static contractions of the ankle...

  5. Geostatistical validation and cross-validation of magnetometric measurements of soil pollution with Potentially Toxic Elements in problematic areas

    Science.gov (United States)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2016-04-01

    Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices used to measure soil magnetic susceptibility at the soil surface is the MS2D Bartington. A single reading of soil magnetic susceptibility with the MS2D device takes little time but is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Consequently, measured values of soil magnetic susceptibility usually have to be validated using more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze validation methods for magnetometric measurements using chemical analyses of element concentrations in soil. Additionally, validation of surface measurements of soil magnetic susceptibility was performed using selected parameters of the distribution of magnetic susceptibility in a soil profile. Validation was performed using selected geostatistical measures of cross-correlation. The geostatistical approach was compared with validation performed using classic statistics. Measurements were performed at selected areas located in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas soil magnetic susceptibility was measured on the soil surface using an MS2D Bartington device and in the soil profile using an MS2C Bartington device. Additionally, soil samples were taken in order to perform chemical measurements. Acknowledgment: The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.

  6. Cross-Cultural Adaptation and Validation of SNOT-20 in Portuguese

    Science.gov (United States)

    Bezerra, Thiago Freire Pinto; Piccirillo, Jay F.; Fornazieri, Marco Aurélio; Pilan, Renata R. de M.; Abdo, Tatiana Regina Teles; Pinna, Fabio de Rezende; Padua, Francini Grecco de Melo; Voegels, Richard Louis

    2011-01-01

    Introduction. Chronic rhinosinusitis is a highly prevalent disease, so it is necessary to create valid instruments to assess the quality of life of these patients. The SNOT-20 questionnaire was developed for this purpose as a specific test to evaluate the quality of life related to chronic rhinosinusitis. It was validated in the English language, and it has been used in most studies on this subject. Currently, there is no validated instrument for assessing this disease in Portuguese. Objective. Cross-cultural adaptation and validation of SNOT-20 in Portuguese. Patients and Methods. The SNOT-20 questionnaire underwent a meticulous process of cross-cultural adaptation and was evaluated by assessing its sensitivity, reliability, and validity. Results. The process resulted in an intelligible version of the questionnaire, the SNOT-20p. Internal consistency (Cronbach's alpha = 0.91, P cross-cultural adaptation and validation of the SNOT-20 questionnaire into Portuguese. PMID:21799671

  7. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    Science.gov (United States)

    Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra

    2018-05-29

    Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Methodological and cross sectional study. A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance of chronic pain.
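
    The reliability statistics referred to here follow standard definitions; the sketch below computes Cronbach's alpha and an odd-even split-half correlation on synthetic item scores (not the study's data) purely to show the arithmetic.

      # Standard reliability computations on synthetic item-score data (not study data).
      import numpy as np

      def cronbach_alpha(items):
          """items: array of shape (n_respondents, n_items)."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      def split_half_correlation(items):
          """Pearson correlation between odd- and even-item half scores."""
          items = np.asarray(items, dtype=float)
          odd = items[:, 0::2].sum(axis=1)
          even = items[:, 1::2].sum(axis=1)
          return np.corrcoef(odd, even)[0, 1]

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(201, 1))
      scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(201, 20))), 0, 6)
      print(cronbach_alpha(scores), split_half_correlation(scores))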

  8. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  9. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  10. Validation of the Organizational Culture Assessment Instrument

    Science.gov (United States)

    Heritage, Brody; Pollock, Clare; Roberts, Lynne

    2014-01-01

    Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged. PMID:24667839

  11. Validation of the organizational culture assessment instrument.

    Directory of Open Access Journals (Sweden)

    Brody Heritage

    Full Text Available Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged.

  12. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the

  13. Bibliography for Verification and Validation in Computational Simulation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1998-01-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering

  14. Bibliography for Verification and Validation in Computational Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1998-10-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering.

  15. Content Validity of a Tool Measuring Medication Errors.

    Science.gov (United States)

    Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria

    2015-08-01

    The objective of this study was to determine content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data was collected from the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing literature and the expertise of the team members, expert in different areas. The developed tool was then sent to five experts from all over Karachi for ensuring the content validity of the tool, which was measured on relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors has an excellent content validity. This tool should be used for future studies on medication errors, with different study populations such as medical students, doctors, and nurses.
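
    For reference, a scale-level content validity index of the kind reported is commonly obtained by averaging item-level CVIs, each being the proportion of experts rating the item 3 or 4 on a 4-point relevance scale; the ratings below are hypothetical and only illustrate the calculation.

      # Hypothetical expert relevance ratings (rows: items, columns: 5 experts) on a 1-4 scale.
      # Illustrates the usual I-CVI / S-CVI(average) calculation; not the study's data.
      ratings = [
          [4, 4, 3, 4, 4],
          [3, 4, 4, 4, 3],
          [4, 3, 4, 2, 4],
      ]

      def i_cvi(item_ratings):
          """Proportion of experts giving a rating of 3 or 4."""
          return sum(r >= 3 for r in item_ratings) / len(item_ratings)

      item_cvis = [i_cvi(item) for item in ratings]
      s_cvi_avg = sum(item_cvis) / len(item_cvis)
      print(item_cvis, round(s_cvi_avg, 2))   # e.g. [1.0, 1.0, 0.8] -> 0.93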

  16. The validity and clinical utility of purging disorder.

    Science.gov (United States)

    Keel, Pamela K; Striegel-Moore, Ruth H

    2009-12-01

    To review evidence of the validity and clinical utility of Purging Disorder and examine options for the Diagnostic and Statistical Manual of Mental Disorders fifth edition (DSM-V). Articles were identified by computerized and manual searches and reviewed to address five questions about Purging Disorder: Is there "ample" literature? Is the syndrome clearly defined? Can it be measured and diagnosed reliably? Can it be differentiated from other eating disorders? Is there evidence of syndrome validity? Although empirical classification and concurrent validity studies provide emerging support for the distinctiveness of Purging Disorder, questions remain about definition, diagnostic reliability in clinical settings, and clinical utility (i.e., prognostic validity). We discuss strengths and weaknesses associated with various options for the status of Purging Disorder in the DSM-V ranging from making no changes from DSM-IV to designating Purging Disorder a diagnosis on equal footing with Anorexia Nervosa and Bulimia Nervosa.

  17. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost effective environmental data collection process should produce analytical data which meets regulatory and program specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested. The verification process determines whether all the requirements were met. Validation is more complicated than verification. It attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consists of a sample result with an associated error. Therefore, radiochemical validation is different and more quantitative than is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainty that can be statistically compared to identify significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish the differences between the two operations

  18. Prime Minister: the people will decide on joining the European Union / Mart Laar ; interviewed by Dirk Koch, translated by Margus Enno

    Index Scriptorium Estoniae

    Laar, Mart, 1960-

    2000-01-01

    Also published in: Järva Teataja, 17 Oct., p. 2. The Estonian Prime Minister answers Der Spiegel's questions concerning Estonia's accession negotiations with the EU, the role of the Estonian people in deciding on accession, and Estonia's wish to become a member of NATO. Published in abridged form. Author: Isamaaliit

  19. Competency-Based Training and Simulation: Making a "Valid" Argument.

    Science.gov (United States)

    Noureldin, Yasser A; Lee, Jason Y; McDougall, Elspeth M; Sweet, Robert M

    2018-02-01

    The use of simulation as an assessment tool is much more controversial than is its utility as an educational tool. However, without valid simulation-based assessment tools, the ability to objectively assess technical skill competencies in a competency-based medical education framework will remain challenging. The current literature in urologic simulation-based training and assessment uses a definition and framework of validity that is now outdated. This is probably due to the absence of awareness rather than an absence of comprehension. The following review article provides the urologic community an updated taxonomy on validity theory as it relates to simulation-based training and assessments and translates our simulation literature to date into this framework. While the old taxonomy considered validity as distinct subcategories and focused on the simulator itself, the modern taxonomy, for which we translate the literature evidence, considers validity as a unitary construct with a focus on interpretation of simulator data/scores.

  20. Roll-up of validation results to a target application.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy

    2013-09-01

    Suites of experiments are performed over a validation hierarchy to test computational simulation models for complex applications. Experiments within the hierarchy can be performed at different conditions and configurations than those for an intended application, with each experiment testing only part of the physics relevant for the application. The purpose of the present work is to develop methodology to roll up validation results to an application, and to assess the impact the validation hierarchy design has on the roll-up results. The roll-up is accomplished through the development of a meta-model that relates validation measurements throughout a hierarchy to the desired response quantities for the target application. The meta-model is developed using the computational simulation models for the experiments and the application. The meta-model approach is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments.

  1. Detailed validation in PCDDF analysis. ISO17025 data from Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Kernick Carvalhaes, G.; Azevedo, J.A.; Azevedo, G.; Machado, M.; Brooks, P. [Analytical Solutions, Rio de Janeiro (Brazil)

    2004-09-15

    When defining a validation method we can refer to ISO standard 8402, according to which 'validation' is the 'confirmation by the examination and supplying of objective evidences that the particular requirements for a specific intended use are fulfilled'. This concept is extremely important to guarantee the quality of results. A validation method is based on the combined use of different validation procedures, but in selecting them we have to analyze the cost-benefit conditions. We must focus on the critical elements, and these critical factors must be the essential elements for providing good properties and results. If we have a solid validation methodology and an investigation of the sources of uncertainty of our analytical method, we can generate results with confidence and veracity. When analyzing these two considerations, validation methods and uncertainty calculations, we found that there are very few articles and papers about these subjects, and it is even more difficult to find such materials on dioxins and furans. This short paper describes a validation and uncertainty calculation methodology using traditional studies with a few adaptations, and it presents a new idea of using the recovery study as a source of uncertainty.

  2. Static validation of licence conformance policies

    DEFF Research Database (Denmark)

    Hansen, Rene Rydhof; Nielson, Flemming; Nielson, Hanne Riis

    2008-01-01

    Policy conformance is a security property gaining importance due to commercial interests like Digital Rights Management. It is well known that static analysis can be used to validate a number of more classical security policies, such as discretionary and mandatory access control policies, as well as communication protocols using symmetric and asymmetric cryptography. In this work we show how to develop a Flow Logic for validating the conformance of client software with respect to a licence conformance policy. Our approach is sufficiently flexible that it extends to fully open systems that can admit new...

  3. VAlidation STandard antennas: Past, present and future

    DEFF Research Database (Denmark)

    Drioli, Luca Salghetti; Ostergaard, A; Paquay, M

    2011-01-01

    designed for validation campaigns of antenna measurement ranges. The driving requirements of VAST antennas are their mechanical stability over a given operational temperature range and with respect to any orientation of the gravity field. The mechanical design shall ensure extremely stable electrical....../V-band of telecom satellites. The paper will address requirements for future VASTs and possible architecture for multi-frequency Validation Standard antennas....

  4. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  5. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relative low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are

  6. Certification and Validation of Priori Learning and Competences

    DEFF Research Database (Denmark)

    Olesen, Henning Salling

    2004-01-01

    The article examines forms of recognition and validation of prior learning and competences in Europe and Australia, and discusses the need to create validation systems that connect formal education, non-formal training and informal learning as a tool for lifelong learning policies in Colombia...

  7. 15 CFR 995.27 - Format validation software testing.

    Science.gov (United States)

    2010-01-01

    ... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying... specification. These tests may be combined with testing of the conversion software.

  8. Development and preliminary validation of a screen for ...

    African Journals Online (AJOL)

    Development and preliminary validation of a screen for interpersonal childhood trauma experiences among school-going youth in Durban, South Africa. ... validity in the sense that all scales were significantly correlated with scores on clinical measures of post-traumatic stress disorder (PTSD) and/or complex PTSD.

  9. Assessing the validity of discourse analysis: transdisciplinary convergence

    Science.gov (United States)

    Jaipal-Jamani, Kamini

    2014-12-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.

  10. Physics validation of detector simulation tools for LHC

    International Nuclear Information System (INIS)

    Beringer, J.

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data is available. We give an overview of these validation activities with emphasis on the latest results

  11. Validation of NAA Method for Urban Particulate Matter

    International Nuclear Information System (INIS)

    Woro Yatu Niken Syahfitri; Muhayatun; Diah Dwiana Lestiani; Natalia Adventini

    2009-01-01

    Nuclear analytical techniques have been applied in many countries for the determination of environmental pollutants. NAA (neutron activation analysis) is a nuclear analytical technique that has low detection limits, high specificity, and high precision and accuracy for the large majority of naturally occurring elements, offers non-destructive and simultaneous multi-element determination, and can handle small sample sizes (< 1 mg). To ensure the quality and reliability of the method, validation needs to be done. A standard reference material, SRM NIST 1648 Urban Particulate Matter, has been used to validate the NAA method. Accuracy and precision tests were used as validation parameters. The particulate matter was validated for 18 elements: Ti, I, V, Br, Mn, Na, K, Cl, Cu, Al, As, Fe, Co, Zn, Ag, La, Cr, and Sm. The results showed that the percent relative standard deviation of the measured elemental concentrations ranged from 2 to 14.8% for most of the elements analyzed, whereas HorRat values were in the range 0.3-1.3. Accuracy test results showed that the relative bias ranged from -11.1 to 3.6%. Based on the validation results, it can be stated that the NAA method is reliable for the characterization of particulate matter and other samples of similar matrix to support air quality monitoring. (author)
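
    The precision (percent relative standard deviation) and accuracy (relative bias) figures quoted follow standard definitions; the numbers in the sketch below are made up and only show the arithmetic.

      # Illustrative accuracy/precision check against a certified value (numbers are made up).
      import statistics

      replicates = [41.2, 39.8, 40.5, 40.9, 39.6]   # measured concentrations, e.g. mg/kg
      certified = 40.0                               # certified reference value

      mean = statistics.mean(replicates)
      rsd_percent = 100 * statistics.stdev(replicates) / mean        # precision
      relative_bias_percent = 100 * (mean - certified) / certified   # accuracy

      print(round(rsd_percent, 1), round(relative_bias_percent, 1))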

  12. Validation of intermediate end points in cancer research.

    Science.gov (United States)

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
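
    As a rough illustration of the attributable proportion discussed here, assuming the standard epidemiological definition AP = (P(D) - P(D | marker absent)) / P(D) rather than any study-specific formulation, with hypothetical probabilities:

      # Attributable proportion under the standard epidemiological definition.
      # All numbers are hypothetical and chosen only to show the calculation.
      p_marker = 0.30          # prevalence of the intermediate end point
      risk_with = 0.08         # cancer risk when the marker is present
      risk_without = 0.01      # cancer risk when the marker is absent

      p_disease = p_marker * risk_with + (1 - p_marker) * risk_without
      attributable_proportion = (p_disease - risk_without) / p_disease
      print(round(attributable_proportion, 2))   # a value close to 1 would support valid surrogacy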

  13. Automatic, semi-automatic and manual validation of urban drainage data.

    Science.gov (United States)

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferable way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark the wrong data, remove them and replace them with interpolated data. In general, the first step of detecting the wrong, anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation scores generation and scores interpretation. This paper will present the overall framework for the data quality improvement system, suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, which is the scores interpretation, needs to be further investigated on the developed system.
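
    The validation-scores step described here can be pictured as a set of simple per-sample checks; the sketch below uses hypothetical thresholds and data, not the Belgrade system, to show how elementary range and spike scores might be generated.

      # Elementary validation scores for a measured time series: range check and
      # rate-of-change (spike) check. Thresholds and data are hypothetical.
      def validation_scores(values, lo=0.0, hi=5.0, max_step=0.8):
          scores = []
          for i, v in enumerate(values):
              in_range = lo <= v <= hi
              smooth = i == 0 or abs(v - values[i - 1]) <= max_step
              scores.append({"index": i, "range_ok": in_range, "spike_ok": smooth})
          return scores

      levels = [0.42, 0.45, 0.44, 2.90, 0.46, 0.47]   # e.g. water levels in metres
      flagged = [s for s in validation_scores(levels)
                 if not (s["range_ok"] and s["spike_ok"])]
      print(flagged)   # the jump to 2.90 (and back) is flagged by the spike check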

  14. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  15. Validation of the updated ArthroS simulator: face and construct validity of a passive haptic virtual reality simulator with novel performance metrics.

    Science.gov (United States)

    Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L

    2017-02-01

    To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.

  16. Validation of Autonomous Space Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — System validation addresses the question "Will the system do the right thing?" When system capability includes autonomy, the question becomes more pointed. As NASA...

  17. Initial Reliability and Validity of the Perceived Social Competence Scale

    Science.gov (United States)

    Anderson-Butcher, Dawn; Iachini, Aidyn L.; Amorose, Anthony J.

    2008-01-01

    Objective: This study describes the development and validation of a perceived social competence scale that social workers can easily use to assess children's and youth's social competence. Method: Exploratory and confirmatory factor analyses were conducted on a calibration and a cross-validation sample of youth. Predictive validity was also…

  18. Development, standardization and validation of social anxiety scale ...

    African Journals Online (AJOL)

    Little attention has been given to social anxiety in Nigeria despite its debilitating effects on the sufferers. The objective of this study was to develop, standardize and validate an instrument (Social Anxiety Scale) with high coefficients of Cronbach's alpha internal consistency, split-half reliability and construct validity.

  19. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of a validated model to practice independently. Validation was done to adapt the model and to assess whether it is understood and could be implemented by NQPNs and mentors employed in community health care services.

  20. The Validity of Attribute-Importance Measurement: A Review

    NARCIS (Netherlands)

    Ittersum, van K.; Pennings, J.M.E.; Wansink, B.; Trijp, van J.C.M.

    2007-01-01

    A critical review of the literature demonstrates a lack of validity among the ten most common methods for measuring the importance of attributes in behavioral sciences. The authors argue that one of the key determinants of this lack of validity is the multi-dimensionality of attribute importance.

  1. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next-generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into the scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  2. 9 CFR 78.43 - Validated brucellosis-free States.

    Science.gov (United States)

    2010-01-01

    Animals and Animal Products; ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT...; BRUCELLOSIS; Designation of Brucellosis Areas; § 78.43 Validated brucellosis-free States. Alabama, Alaska...

  3. Radiobrightness validation on different spatial scales during the SMOS validation campaign 2010 in the Rur catchment, germany

    DEFF Research Database (Denmark)

    Montzka, C.; Bogena, H.; Weihermueller, L.

    2011-01-01

    aircraft as well as ground-based mobile measurements with the JULBARA radiometer mounted on a truck are analyzed in a qualitative comparison for different crop stands. These data can be used for validation of the SMOS sensor by giving valuable information about parameters for the radiative transfer... ESA's Soil Moisture and Ocean Salinity (SMOS) mission was launched in November 2009 and now delivers brightness temperature and soil moisture products over terrestrial areas on a regular three-day basis. In 2010 several airborne campaigns were conducted to validate the SMOS products...

  4. Validation of the TEXSAN thermal-hydraulic analysis program

    International Nuclear Information System (INIS)

    Burns, S.P.; Klein, D.E.

    1992-01-01

    The TEXSAN thermal-hydraulic analysis program has been developed by the University of Texas at Austin (UT) to simulate buoyancy driven fluid flow and heat transfer in spent fuel and high level nuclear waste (HLW) shipping applications. As part of the TEXSAN software quality assurance program, the software has been subjected to a series of test cases intended to validate its capabilities. The validation tests include many physical phenomena which arise in spent fuel and HLW shipping applications. This paper describes some of the principal results of the TEXSAN validation tests and compares them to solutions available in the open literature. The TEXSAN validation effort has shown that the TEXSAN program is stable and consistent under a range of operating conditions and provides accuracy comparable with other heat transfer programs and evaluation techniques. The modeling capabilities and the interactive user interface employed by the TEXSAN program should make it a useful tool in HLW transportation analysis

  5. A cross-validation package driving Netica with python

    Science.gov (United States)

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications on prediction versus description are illustrated with: a data-driven oceanographic application; and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than allowed by supporting data and overfitting incurs computational costs as well as causing a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
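
    As a generic picture of the k-fold scheme that such cross-validation implements (this sketch does not use the Netica or CVNetica APIs; the model-building and skill functions are placeholders):

      # Generic k-fold cross-validation loop of the kind used to detect overfitting.
      # 'fit_bn' and 'skill' are placeholder callables standing in for building a
      # Bayesian network at a chosen complexity (e.g. discretization level) and
      # scoring its predictions; 'data' and 'targets' are numpy arrays.
      import numpy as np
      from sklearn.model_selection import KFold

      def cross_validated_skill(data, targets, fit_bn, skill, n_splits=5):
          calibration, validation = [], []
          folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
          for train_idx, test_idx in folds.split(data):
              model = fit_bn(data[train_idx], targets[train_idx])
              calibration.append(skill(model, data[train_idx], targets[train_idx]))
              validation.append(skill(model, data[test_idx], targets[test_idx]))
          # A large gap between calibration and validation skill signals overfitting.
          return np.mean(calibration), np.mean(validation)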

  6. Validating Future Force Performance Measures (Army Class): End of Training Longitudinal Validation

    Science.gov (United States)

    2009-09-01

    Caramagno, John Fisher, Patricia Keenan, Julisara Mathew, Alicia Sawyer, Jim Takitch, Shonna Waters, and Elise Weaver Drasgow Consulting Group... promise for enhancing the classification of entry-level Soldiers (Ingerick, Diaz, & Putka, 2009). In Year 2 (2007), the emphasis of the Army... Social Sciences. Ingerick, M., Diaz, T., & Putka, D. (2009). Investigations into Army enlisted classification systems: Concurrent validation report

  7. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

    James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  8. The Legality and Validity of Administrative Enforcement

    Directory of Open Access Journals (Sweden)

    Sergei V. Iarkovoi

    2018-01-01

    Full Text Available The article discusses the concept and content of the validity of legal acts adopted by executive authorities and other public administration bodies, and of the legal actions they take, as an important characteristic of law enforcement by these bodies. The author concludes that the validity of administrative law enforcement is not an independent requirement but forms an integral part of its legal requirements.

  9. Identification and Validation of ESP Teacher Competencies: A Research Design

    Science.gov (United States)

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  10. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  11. Guided exploration of physically valid shapes for furniture design

    KAUST Repository

    Umetani, Nobuyuki; Igarashi, Takeo; Mitra, Niloy J.

    2012-01-01

    Geometric modeling and the physical validity of shapes are traditionally considered independently. This makes creating aesthetically pleasing yet physically valid models challenging. We propose an interactive design framework for efficient

  12. Method validation in pharmaceutical analysis: from theory to practical optimization

    Directory of Open Access Journals (Sweden)

    Jaqueline Kaleian Eserian

    2015-01-01

    Full Text Available The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure the product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well-planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high performance liquid chromatography (HPLC) analysis. Type: Commentary

  13. Validation of the Physician Teaching Motivation Questionnaire (PTMQ).

    Science.gov (United States)

    Dybowski, Christoph; Harendza, Sigrid

    2015-10-02

    Physicians play a major role as teachers in undergraduate medical education. Studies indicate that different forms and degrees of motivation can influence work performance in general and that teachers' motivation to teach can influence students' academic achievements in particular. Therefore, the aim of this study was to develop and to validate an instrument measuring teaching motivations in hospital-based physicians. We chose self-determination theory as a theoretical framework for item and scale development. It distinguishes between different dimensions of motivation depending on the amount of self-regulation and autonomy involved and its empirical evidence has been demonstrated in other areas of research. To validate the new instrument (PTMQ = Physician Teaching Motivation Questionnaire), we used data from a sample of 247 physicians from internal medicine and surgery at six German medical faculties. Structural equation modelling was conducted to confirm the factorial structure, correlation analyses and linear regressions were performed to examine concurrent and incremental validity. Structural equation modelling confirmed a good global fit for the factorial structure of the final instrument (RMSEA = .050, TLI = .957, SRMR = .055, CFI = .966). Cronbach's alphas indicated good internal consistencies for all scales (α = .75 - .89) except for the identified teaching motivation subscale with an acceptable internal consistency (α = .65). Tests of concurrent validity with global work motivation, perceived teaching competence, perceived teaching involvement and voluntariness of lesson allocation delivered theory-consistent results with slight deviations for some scales. Incremental validity over global work motivation in predicting perceived teaching involvement was also confirmed. Our results indicate that the PTMQ is a reliable, valid and therefore suitable instrument for assessing physicians' teaching motivation.

  14. Serial album validation for promotion of infant body weight control

    Directory of Open Access Journals (Sweden)

    Nathalia Costa Gonzaga Saraiva

    2018-05-01

    Full Text Available ABSTRACT Objective: to validate the content and appearance of a serial album for children aged from 7 to 10 years addressing the topic of prevention and control of body weight. Method: methodological study of a descriptive nature. The validation process involved 33 specialists in educational technologies and/or childhood excess weight. An agreement index of 80% was the minimum considered to guarantee the validation of the material. Results: most of the specialists had a doctoral degree and a graduate degree in nursing. Regarding content, illustrations, layout and relevance, all items were validated, and 69.7% of the experts rated the album as excellent. The overall agreement validation index for the educational technology was 0.88. Only script-sheet 3 did not reach the cutoff point of the content validation index. Changes were made to the material, such as a title change, inclusion of the school context and insertion of a nutritionist and a physical educator into the story narrated in the album. Conclusion: the proposed serial album was considered valid by experts regarding content and appearance, suggesting that this technology has the potential to contribute to health education by promoting healthy weight in the age group of 7 to 10 years.

  15. Somatic Sensitivity and Reflexivity as Validity Tools in Qualitative Research

    Science.gov (United States)

    Green, Jill

    2015-01-01

    Validity is a key concept in qualitative educational research. Yet, it is often not addressed in methodological writing about dance. This essay explores validity in a postmodern world of diverse approaches to scholarship, by looking at the changing face of validity in educational qualitative research and at how new understandings of the concept…

  16. 20 CFR 219.33 - Evidence of a deemed valid marriage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Evidence of a deemed valid marriage. 219.33... EVIDENCE REQUIRED FOR PAYMENT Evidence of Relationship § 219.33 Evidence of a deemed valid marriage. (a) Preferred evidence. Preferred evidence of a deemed valid marriage is— (1) Evidence of a ceremonial marriage...

  17. 20 CFR 404.725 - Evidence of a valid ceremonial marriage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Evidence of a valid ceremonial marriage. 404... DISABILITY INSURANCE (1950- ) Evidence Evidence of Age, Marriage, and Death § 404.725 Evidence of a valid ceremonial marriage. (a) General. A valid ceremonial marriage is one that follows procedures set by law in...

  18. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool to increase drug safety in polypharmacy. There are several drug interaction databases available but it is unclear which one shows the best results and therefore increases safety for the user of the databases and the patients. So far, there has been no validation of German drug interaction databases. Validation of German drug interaction databases regarding the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions which were published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented if the mechanism was correct, clinical advice was given, primary literature was cited, and the severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54 and the ABDA-database 49 points. Based on this validation MediQ seems to be the most suitable databank for the field of psychopharmacotherapy. The best results in this comparison were achieved by MediQ but this database also needs improvement with respect to the hit rate so that the users can rely on the results and therefore increase drug therapy safety.

  19. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    Directory of Open Access Journals (Sweden)

    Hazel Ekin Akmaz

    2018-05-01

    Full Text Available Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross-sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct concept validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance
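
    As a small, self-contained illustration of the internal-consistency figure reported above (Cronbach's α), the sketch below computes α for a hypothetical respondent-by-item score matrix. The simulated responses, the 201 x 20 layout and the 0-6 item range are assumptions chosen only to mirror the study's dimensions; they are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 201 respondents x 20 items on a 0-6 scale.
rng = np.random.default_rng(1)
trait = rng.normal(3.0, 1.0, size=(201, 1))          # shared latent trait
noise = rng.normal(0.0, 0.8, size=(201, 20))         # item-specific noise
scores = np.clip(np.round(trait + noise), 0, 6)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```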

  20. The Chimera of Validity

    Science.gov (United States)

    Baker, Eva L.

    2013-01-01

    Background/Context: Education policy over the past 40 years has focused on the importance of accountability in school improvement. Although much of the scholarly discourse around testing and assessment is technical and statistical, understanding of validity by a non-specialist audience is essential as long as test results drive our educational…

  1. [Evaluation of Suicide Risk Levels in Hospitals: Validity and Reliability Tests].

    Science.gov (United States)

    Macagnino, Sandro; Steinert, Tilman; Uhlmann, Carmen

    2018-05-01

    Examination of in-hospital suicide risk levels with respect to their validity and reliability. The internal suicide risk levels were evaluated in a cross-sectional study of 163 inpatients. A reliability check was performed by determining the interrater reliability between the senior physician, the therapist and the responsible nurse. Within the scope of the validity check, we conducted analyses of criterion validity and construct validity. For the total sample, an "acceptable" to "good" interrater reliability (Kendall's W = .77) of the suicide risk levels was obtained. Schizophrenic disorders showed the lowest values; for personality disorders we found the highest level of interrater reliability. When examining criterion validity, Item 9 of the BDI-II correlated substantially with our suicide risk levels (ρm = .54). In the construct validity check, affective disorders showed the highest correlation (ρ = .77), compatible also with "convergent validity". They differed from schizophrenic disorders, which showed the least concordance (ρ = .43). In-hospital suicide risk levels may represent an important contribution to the assessment of suicidal behavior in inpatients undergoing psychiatric treatment, given their overall good validity and reliability. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not been standardized. To confirm that this method is valid, it must be validated against various standard reference materials. In this work, the validation is carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method passes the tests of accuracy and precision for nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a. It can be concluded that this method is able to give valid results in the determination of elements in food product samples. (author)

  3. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  4. Assessing students' communication skills: validation of a global rating.

    Science.gov (United States)

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences its importance should be emphasised by performance-based assessment. As detailed checklists have been shown to be not well suited for the assessment of communication skills for different reasons, this study aimed to validate a global rating scale. A Canadian instrument was translated to German and adapted to assess students' communication skills during an end-of-semester-OSCE. Subjects were second and third year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: Judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.

  5. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is performed with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in cases of both single and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration processes are applied at single and multiple sites. • Validation and calibration processes show superiority over existing methods
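
    The record measures agreement with an area metric between model output and physical observations. The sketch below shows one common form of such a metric, the area between the two empirical cumulative distribution functions; the sample values are invented, and the HDMR-based conditional expectations of the paper are not reproduced here.

```python
import numpy as np

def area_metric(model_samples, observations):
    """Area between the empirical CDFs of model output and observations.

    Both CDFs are step functions, so the area is the sum of |F_model - F_obs|
    times the width of each segment of the pooled support.
    """
    model_samples = np.sort(np.asarray(model_samples, dtype=float))
    observations = np.sort(np.asarray(observations, dtype=float))
    grid = np.union1d(model_samples, observations)
    f_model = np.searchsorted(model_samples, grid, side="right") / model_samples.size
    f_obs = np.searchsorted(observations, grid, side="right") / observations.size
    widths = np.diff(grid)
    return float(np.sum(np.abs(f_model - f_obs)[:-1] * widths))

# Hypothetical example: model output samples vs. sparse observations at one site.
rng = np.random.default_rng(0)
model = rng.normal(10.0, 1.0, 2000)   # computational model samples
obs = rng.normal(10.5, 1.2, 25)       # physical observations
print(f"area metric = {area_metric(model, obs):.3f}")
```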

  6. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, are included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument resulting in material changes to control injurious neutron production, in the study of the Space Shuttle single event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation

  7. Procedure for Validation of Aggregators Providing Demand Response

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; Gehrke, Oliver; Thavlov, Anders

    2016-01-01

    of small heterogeneous resources that are geographically distributed. Therefore, a new test procedure must be designed for aggregator validation. This work proposes such a procedure and exemplifies it with a case study. The validation of aggregators is essential if aggregators are to be integrated...... successfully into the power system....

  8. Validation Aspects of Water Treatment Systems for Pharmaceutical ...

    African Journals Online (AJOL)

    The goal of conducting validation is to demonstrate that a process, when operated within established limits, produces a product of consistent and specified quality with a high degree of assurance. Validation of water treatment systems is necessary to obtain water with all desired quality attributes. This also provides a ...

  9. Man-in-the-loop validation plan for the Millstone Unit 3 SPDS

    International Nuclear Information System (INIS)

    Blanch, P.M.; Wilkinson, C.D.

    1985-01-01

    This paper describes the man-in-the-loop validation plan for the Millstone Point Unit 3 (MP3) Safety Parameter Display System (SPDS). MP3 is a pressurized water reactor scheduled to load fuel November, 1985. The SPDS is being implemented as part of plant construction. This paper provides an overview of the validation process. Detailed validation procedures, scenarios, and evaluation forms will be incorporated into the validation plan to produce the detailed validation program. The program document will provide all of the new detailed instructions necessary to perform the man-in-the-loop validation

  10. Validation of Housing Standards Addressing Accessibility

    DEFF Research Database (Denmark)

    Helle, Tina

    2013-01-01

    The aim was to explore the use of an activity-based approach to determine the validity of a set of housing standards addressing accessibility. This included examination of the frequency and the extent of accessibility problems among older people with physical functional limitations who used...... participant groups were examined. Performing well-known kitchen activities was associated with accessibility problems for all three participant groups, in particular those using a wheelchair. The overall validity of the housing standards examined was poor. Observing older people interacting with realistic...... environments while performing real everyday activities seems to be an appropriate method for assessing accessibility problems....

  11. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The main approaches of a bank to reduce credit risk are correct validation using the final status and the validation model parameters. High levels of bank reserves and lost or outstanding facilities of banks indicate the lack of appropriate validation models in the banking network.

  12. How Mathematicians Determine if an Argument Is a Valid Proof

    Science.gov (United States)

    Weber, Keith

    2008-01-01

    The purpose of this article is to investigate the mathematical practice of proof validation--that is, the act of determining whether an argument constitutes a valid proof. The results of a study with 8 mathematicians are reported. The mathematicians were observed as they read purported mathematical proofs and made judgments about their validity;…

  13. Parent- and Self-Reported Dimensions of Oppositionality in Youth: Construct Validity, Concurrent Validity, and the Prediction of Criminal Outcomes in Adulthood

    Science.gov (United States)

    Aebi, Marcel; Plattner, Belinda; Metzke, Christa Winkler; Bessler, Cornelia; Steinhausen, Hans-Christoph

    2013-01-01

    Background: Different dimensions of oppositional defiant disorder (ODD) have been found as valid predictors of further mental health problems and antisocial behaviors in youth. The present study aimed at testing the construct, concurrent, and predictive validity of ODD dimensions derived from parent- and self-report measures. Method: Confirmatory…

  14. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
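
    A minimal sketch of a likelihood-ratio accept/reject decision in the spirit of the method above, under strong simplifying assumptions: Gaussian likelihoods for the "model valid" and "model invalid" hypotheses and an illustrative threshold built from prior odds and misclassification costs. The numbers and the threshold rule are placeholders, not the paper's exact formulation.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(observations, prediction, sigma_valid, sigma_invalid):
    """Ratio of data likelihoods under 'model valid' vs. 'model invalid'.

    H0 (valid): observations scatter around the model prediction with sigma_valid.
    H1 (invalid): observations scatter far more widely, with sigma_invalid.
    """
    l0 = math.prod(normal_pdf(y, prediction, sigma_valid) for y in observations)
    l1 = math.prod(normal_pdf(y, prediction, sigma_invalid) for y in observations)
    return l0 / l1

# Hypothetical pass/fail style validation test.
obs = [9.8, 10.3, 10.1, 9.9]   # experimental system responses
pred = 10.0                    # model prediction at the validation site
lam = likelihood_ratio(obs, pred, sigma_valid=0.3, sigma_invalid=1.5)

# Decision threshold from prior odds and relative decision costs (all assumed).
prior_odds = 1.0   # P(model invalid) / P(model valid)
cost_ratio = 2.0   # cost of wrongly accepting / cost of wrongly rejecting
threshold = prior_odds * cost_ratio
print("accept model" if lam > threshold else "reject model", f"(Lambda = {lam:.2f})")
```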

  15. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Full Text Available Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  16. Validation of asthma recording in electronic health records: a systematic review

    Directory of Open Access Journals (Sweden)

    Nissen F

    2017-12-01

    Full Text Available Francis Nissen,1 Jennifer K Quint,2 Samantha Wilkinson,1 Hana Mullerova,3 Liam Smeeth,1 Ian J Douglas1 1Department of Non-Communicable Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK; 2National Heart and Lung Institute, Imperial College, London, UK; 3RWD & Epidemiology, GSK R&D, Uxbridge, UK Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best-performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation

  17. Towards natural language question generation for the validation of ontologies and mappings.

    Science.gov (United States)

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.

  18. TEOLOGIA PÚBLICA NO BRASIL: UM PRIMEIRO BALANÇO [Public Theology in Brazil: A First Assessment]

    Directory of Open Access Journals (Sweden)

    Rudolf von Sinner

    2012-01-01

    Full Text Available In view of current challenges present in the Brazilian public space, the discussion on the presence of crucifixes in courthouses in the state of Rio Grande do Sul, as well as on the activities of evangelical Congressmen, this article ventures a first assessment of the reflection on a public theology in Brazil. It seeks to respond to the question "what is public theology?" not with a clear and uniform definition. Rather, it shows a variety of origins of the term and of the opportunities, as well as dangers, contained in the concept. In a first step, the article presents four lines of thought present in the emerging Brazilian discussion. Then, with reference to the South African theologian Dirk Smit, it shows the diversity of origins and uses of the concept in different parts of the world. Finally, it seeks to show the pertinence and the potential of a public theology in Brazil – with boldness and humility.

  19. Hanford Environmental Restoration data validation process for chemical and radiochemical analyses

    International Nuclear Information System (INIS)

    Adams, M.R.; Bechtold, R.A.; Clark, D.E.; Angelos, K.M.; Winter, S.M.

    1993-10-01

    Detailed procedures for validation of chemical and radiochemical data are used to assure consistent application of validation principles and support a uniform database of quality environmental data. During application of these procedures, it was determined that laboratory data packages were frequently missing certain types of documentation causing subsequent delays in meeting critical milestones in the completion of validation activities. A quality improvement team was assembled to address the problems caused by missing documentation and streamline the entire process. The result was the development of a separate data package verification procedure and revisions to the data validation procedures. This has resulted in a system whereby deficient data packages are immediately identified and corrected prior to validation and revised validation procedures which more closely match the common analytical reporting practices of laboratory service vendors

  20. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    Science.gov (United States)

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Development and validation of the Stirling Eating Disorder Scales.

    Science.gov (United States)

    Williams, G J; Power, K G; Miller, H R; Freeman, C P; Yellowlees, A; Dowds, T; Walker, M; Parry-Jones, W L

    1994-07-01

    The development and reliability/validity check of an 80-item, 8-scale measure for use with eating disorder patients is presented. The Stirling Eating Disorder Scales (SEDS) assess anorexic dietary behavior, anorexic dietary cognitions, bulimic dietary behavior, bulimic dietary cognitions, high perceived external control, low assertiveness, low self-esteem, and self-directed hostility. The SEDS were administered to 82 eating disorder patients and 85 controls. Results indicate that the SEDS are acceptable in terms of internal consistency, reliability, group validity, and concurrent validity.

  2. Cross Cultural Validation Of Perceived Workfamily Facilitation Scale ...

    African Journals Online (AJOL)

    The work-family interface contains four unique factors based on studies from western countries. However, some of these studies have questioned the cross-cultural adoption of psychological concepts, and called for a re-validation prior to adoption. The main purpose of this study is to re-validate the four-factor structure that ...

  3. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: the forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages
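
    FADBAD/TADIFF are C++ packages; to stay consistent with the other examples in this document, the forward mode of automatic differentiation mentioned above is sketched here in Python using dual numbers. The class and the single trigonometric primitive are illustrative assumptions, not the FADBAD API.

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; the b part carries the derivative."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def sin(x):
    """Forward-mode rule for sine: propagate cos(x) * x'."""
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

def f(x):
    return x * x + 3.0 * sin(x)   # f(x) = x^2 + 3 sin(x)

x0 = Dual(1.2, 1.0)               # seed derivative dx/dx = 1
y = f(x0)
print(f"f(1.2)  = {y.value:.6f}")   # function value
print(f"f'(1.2) = {y.deriv:.6f}")   # equals 2*1.2 + 3*cos(1.2)
```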

  4. Validation of New Cancer Biomarkers

    DEFF Research Database (Denmark)

    Duffy, Michael J; Sturgeon, Catherine M; Söletormos, Georg

    2015-01-01

    BACKGROUND: Biomarkers are playing increasingly important roles in the detection and management of patients with cancer. Despite an enormous number of publications on cancer biomarkers, few of these biomarkers are in widespread clinical use. CONTENT: In this review, we discuss the key steps...... in advancing a newly discovered cancer candidate biomarker from pilot studies to clinical application. Four main steps are necessary for a biomarker to reach the clinic: analytical validation of the biomarker assay, clinical validation of the biomarker test, demonstration of clinical value from performance...... of the biomarker test, and regulatory approval. In addition to these 4 steps, all biomarker studies should be reported in a detailed and transparent manner, using previously published checklists and guidelines. Finally, all biomarker studies relating to demonstration of clinical value should be registered before...

  5. Drive: Theory and Construct Validation.

    Science.gov (United States)

    Siegling, Alex B; Petrides, K V

    2016-01-01

    This article explicates the theory of drive and describes the development and validation of two measures. A representative set of drive facets was derived from an extensive corpus of human attributes (Study 1). Operationalised using an International Personality Item Pool version (the Drive:IPIP), a three-factor model was extracted from the facets in two samples and confirmed on a third sample (Study 2). The multi-item IPIP measure showed congruence with a short form, based on single-item ratings of the facets, and both demonstrated cross-informant reliability. Evidence also supported the measures' convergent, discriminant, concurrent, and incremental validity (Study 3). Based on very promising findings, the authors hope to initiate a stream of research in what is argued to be a rather neglected niche of individual differences and non-cognitive assessment.

  6. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.

  7. The Management Advisory Committee of the Inspection Validation Centre. 2nd report

    International Nuclear Information System (INIS)

    1985-06-01

    The document is the second report of the Management Advisory Committee of the Inspection Validation Centre (I.V.C.). The IVC is concerned with the ultrasonic inspection of the CEGB's proposed PWR reactor pressure vessel, and other components. The report deals with the technical progress since May 1984, and includes: interim validation, retrospective validation, examination of procedures, test assembly manufacture, interim validation of manual forging inspections, and validation facilities. (U.K.)

  8. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    Science.gov (United States)

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  9. Reliable and valid assessment of Lichtenstein hernia repair skills

    DEFF Research Database (Denmark)

    Carlsen, C G; Lindorff Larsen, Karen; Funch-Jensen, P

    2014-01-01

    PURPOSE: Lichtenstein hernia repair is a common surgical procedure and one of the first procedures performed by a surgical trainee. However, formal assessment tools developed for this procedure are few and sparsely validated. The aim of this study was to determine the reliability and validity...... of an assessment tool designed to measure surgical skills in Lichtenstein hernia repair. METHODS: Key issues were identified through a focus group interview. On this basis, an assessment tool with eight items was designed. Ten surgeons and surgical trainees were video recorded while performing Lichtenstein hernia...... a significant difference between the three groups, which indicates construct validity. Skills can be assessed blindly by a single rater in a reliable and valid fashion with the new procedure-specific assessment tool. We recommend this tool for future assessment

  10. Translation and validation of ICIQ-FLUTS for Tamil-speaking women.

    Science.gov (United States)

    Ekanayake, Chanil D; Pathmeswaran, Arunasalam; Nishad, A A Nilanga; Samaranayake, Kanishka U; Wijesinghe, Prasantha S

    2017-12-01

    Research into lower urinary tract symptoms (LUTS) in women in South Asia is hampered by a lack of validated tools. Our aim was to validate the International Consultation on Incontinence Modular Questionnaire on Female Lower Urinary Tract Symptoms (ICIQ-FLUTS) from English to Tamil. After translation to Tamil, a validation study was carried out among women attending the gynecology clinic at District General Hospital-Mannar. Content validity was assessed by the level of missing data. The Tamil translation of the ICIQ-FLUTS retained the psychometric properties of the original English questionnaire and will be an invaluable tool to detect LUTS among Tamil-speaking women.

  11. Mercury and Cyanide Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  12. Magnetic Signature Analysis & Validation System

    National Research Council Canada - National Science Library

    Vliet, Scott

    2001-01-01

    The Magnetic Signature Analysis and Validation (MAGSAV) System is a mobile platform that is used to measure, record, and analyze the perturbations to the earth's ambient magnetic field caused by objects such as armored vehicles...

  13. [Comparison of the Wechsler Memory Scale-III and the Spain-Complutense Verbal Learning Test in acquired brain injury: construct validity and ecological validity].

    Science.gov (United States)

    Luna-Lario, P; Pena, J; Ojeda, N

    2017-04-16

    To perform an in-depth examination of the construct validity and the ecological validity of the Wechsler Memory Scale-III (WMS-III) and the Spain-Complutense Verbal Learning Test (TAVEC). The sample consists of 106 adults with acquired brain injury who were treated in the Area of Neuropsychology and Neuropsychiatry of the Complejo Hospitalario de Navarra and displayed memory deficit as the main sequela, measured by means of specific memory tests. The construct validity is determined by examining the tasks required in each test over the basic theoretical models, comparing the performance according to the parameters offered by the tests, contrasting the severity indices of each test and analysing their convergence. The external validity is explored through the correlation between the tests and by using regression models. According to the results obtained, both the WMS-III and the TAVEC have construct validity. The TAVEC is more sensitive and captures not only the deficits in mnemonic consolidation, but also in the executive functions involved in memory. The working memory index of the WMS-III is useful for predicting the return to work at two years after the acquired brain injury, but none of the instruments anticipates the disability and dependence at least six months after the injury. We reflect upon the construct validity of the tests and their insufficient capacity to predict functionality when the sequelae become chronic.

  14. DDML Schema Validation

    Science.gov (United States)

    2016-02-08

    XML schemas govern DDML instance documents. For information about XML, refer to RCC 125-15, XML Style Guide. Figure 4 provides an XML snippet of a...we have documented three main types of information. User Stories: A user story describes a specific requirement of the schema in the terms of a...instance document is a schema-valid XML file that completely describes the information in the test case in a manner that satisfies the user story

  15. Automated ensemble assembly and validation of microbial genomes

    Science.gov (United States)

    2014-01-01

    Background The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often impossible or unfeasible. Results To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Conclusions Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to

  16. Simulation Based Studies in Software Engineering: A Matter of Validity

    Directory of Open Access Journals (Sweden)

    Breno Bernard Nicolau de França

    2015-04-01

    Full Text Available Despite the possible lack of validity when compared with other science areas, Simulation-Based Studies (SBS in Software Engineering (SE have supported the achievement of some results in the field. However, as it happens with any other sort of experimental study, it is important to identify and deal with threats to validity aiming at increasing their strength and reinforcing results confidence. OBJECTIVE: To identify potential threats to SBS validity in SE and suggest ways to mitigate them. METHOD: To apply qualitative analysis in a dataset resulted from the aggregation of data from a quasi-systematic literature review combined with ad-hoc surveyed information regarding other science areas. RESULTS: The analysis of data extracted from 15 technical papers allowed the identification and classification of 28 different threats to validity concerned with SBS in SE according Cook and Campbell’s categories. Besides, 12 verification and validation procedures applicable to SBS were also analyzed and organized due to their ability to detect these threats to validity. These results were used to make available an improved set of guidelines regarding the planning and reporting of SBS in SE. CONCLUSIONS: Simulation based studies add different threats to validity when compared with traditional studies. They are not well observed and therefore, it is not easy to identify and mitigate all of them without explicit guidance, as the one depicted in this paper.

  17. International Harmonization and Cooperation in the Validation of Alternative Methods.

    Science.gov (United States)

    Barroso, João; Ahn, Il Young; Caldeira, Cristiane; Carmichael, Paul L; Casey, Warren; Coecke, Sandra; Curren, Rodger; Desprez, Bertrand; Eskes, Chantra; Griesinger, Claudius; Guo, Jiabin; Hill, Erin; Roi, Annett Janusch; Kojima, Hajime; Li, Jin; Lim, Chae Hyung; Moura, Wlamir; Nishikawa, Akiyoshi; Park, HyeKyung; Peng, Shuangqing; Presgrave, Octavio; Singer, Tim; Sohn, Soo Jung; Westmoreland, Carl; Whelan, Maurice; Yang, Xingfen; Yang, Ying; Zuang, Valérie

    The development and validation of scientific alternatives to animal testing is important not only from an ethical perspective (implementation of the 3Rs), but also to improve safety assessment decision making through the use of mechanistic information of higher relevance to humans. To be effective in these efforts, it is imperative that validation centres, industry, regulatory bodies, academia and other interested parties ensure strong international cooperation, cross-sector collaboration and intense communication in the design, execution, and peer review of validation studies. Such an approach is critical to achieving harmonized and more transparent approaches to method validation, peer review and recommendation, which will ultimately expedite the international acceptance of valid alternative methods or strategies by regulatory authorities and their implementation and use by stakeholders. It also achieves greater efficiency and effectiveness by avoiding duplication of effort and leveraging limited resources. With a view to achieving these goals, the International Cooperation on Alternative Test Methods (ICATM) was established in 2009 by validation centres from Europe, the USA, Canada and Japan. ICATM was joined by Korea in 2011 and currently counts Brazil and China as observers. This chapter describes the existing differences across world regions and the major efforts carried out to achieve consistent international cooperation and harmonization in the validation and adoption of alternative approaches to animal testing.

  18. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
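
    A minimal sketch of the two procedures described above, using scikit-learn on a synthetic dataset (a stand-in, not the authors' implementation or their QSAR data): repeated grid-search V-fold cross-validation for parameter tuning, and repeated nested cross-validation for assessing prediction error.

```python
# Sketch of repeated grid-search V-fold CV (tuning) and repeated nested CV
# (assessment), in the spirit of the approach above; not the original code.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10, random_state=0)
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}          # placeholder grid

# Repeated grid-search V-fold CV: repeat the tuning with different splits
# to see how much the selected parameter and its CV score vary.
for repeat in range(5):
    inner_cv = KFold(n_splits=5, shuffle=True, random_state=repeat)
    search = GridSearchCV(Ridge(), param_grid, cv=inner_cv,
                          scoring="neg_mean_squared_error")
    search.fit(X, y)
    print(f"repeat {repeat}: best alpha={search.best_params_['alpha']}, "
          f"CV MSE={-search.best_score_:.1f}")

# Repeated nested CV: the outer loop estimates the error of the whole tuning
# procedure; repeating it quantifies the split-induced variance.
outer_scores = []
for repeat in range(5):
    outer_cv = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)
    inner_cv = KFold(n_splits=5, shuffle=True, random_state=200 + repeat)
    nested = GridSearchCV(Ridge(), param_grid, cv=inner_cv,
                          scoring="neg_mean_squared_error")
    scores = cross_val_score(nested, X, y, cv=outer_cv,
                             scoring="neg_mean_squared_error")
    outer_scores.append(-scores.mean())
print("nested CV MSE per repeat:", np.round(outer_scores, 1))
```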

  19. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
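
    The exact metric definitions are given in the paper; the sketch below only illustrates the general idea of a cumulative and a median relative-deviation metric over a sparse set of measured cross sections, with invented numbers and assumed formulas.

```python
# Illustrative sketch only: comparing model predictions to a sparse set of
# measured cross sections with a cumulative and a median relative-difference
# metric. The actual metric definitions used for NUCFRG2/QMSFRG may differ.
import numpy as np

measured = np.array([120.0, 85.0, 42.0, 230.0, 15.0])   # hypothetical cross sections (mb)
model_a  = np.array([110.0, 90.0, 50.0, 210.0, 18.0])   # hypothetical model predictions

rel_diff = np.abs(model_a - measured) / measured

# "Cumulative" view: aggregate relative deviation over the whole database.
cumulative = np.sum(np.abs(model_a - measured)) / np.sum(measured)
# "Median" view: robust per-point accuracy, usable on subsets of parameter space.
median = np.median(rel_diff)

print(f"cumulative uncertainty ~ {cumulative:.2%}, median uncertainty ~ {median:.2%}")
```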

  20. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
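
    The following sketch illustrates the general idea of such a test under simplified, assumed dynamics (a stochastic Ricker model standing in for an individual-based model, its deterministic counterpart as the approximation); it is not the authors' model, only a minimal demonstration of scaling the approximation error by the original model's stochastic variability.

```python
# Minimal illustration (assumed dynamics, not the paper's models): measure the
# error of a deterministic approximation in units of the stochastic model's
# own variability, across a small parameter sweep.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_run(r, K, n0=10, steps=200):
    """Individual-based Ricker model with Poisson demographic noise."""
    n = n0
    for _ in range(steps):
        n = rng.poisson(n * np.exp(r * (1 - n / K)))
        if n == 0:
            break
    return n

def deterministic_run(r, K, n0=10, steps=200):
    """Deterministic (population-level) approximation of the same dynamics."""
    n = float(n0)
    for _ in range(steps):
        n = n * np.exp(r * (1 - n / K))
    return n

for r, K in [(0.5, 100), (0.5, 1000), (1.0, 100)]:   # sweep over a parameter range
    reps = np.array([stochastic_run(r, K) for _ in range(500)])
    approx = deterministic_run(r, K)
    # Scaled error: deviation of the approximation from the stochastic mean,
    # expressed in units of the original model's stochastic variability.
    z = (approx - reps.mean()) / reps.std(ddof=1)
    print(f"r={r}, K={K}: approx={approx:.0f}, stochastic mean={reps.mean():.0f}, "
          f"scaled error={z:+.2f}")
```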

  1. Planet Candidate Validation in K2 Crowded Fields

    Science.gov (United States)

    Rampalli, Rayna; Vanderburg, Andrew; Latham, David; Quinn, Samuel

    2018-01-01

    In just three years, the K2 mission has yielded some remarkable outcomes with the discovery of over 100 confirmed planets and 500 reported planet candidates to be validated. One challenge with this mission is the search for planets located in star-crowded regions. Campaign 13 is one such example, located towards the galactic plane in the constellation of Taurus. We subject the potential planetary candidates to a validation process involving spectroscopy to derive certain stellar parameters. Seeing-limited on/off imaging follow-up is also utilized in order to rule out false positives due to nearby eclipsing binaries. Using Markov chain Monte Carlo analysis, the best-fit parameters for each candidate are generated. These will be suitable for finding a candidate’s false positive probability through methods including feeding such parameters into the Validation of Exoplanet Signals using a Probabilistic Algorithm (VESPA). These techniques and results serve as important tools for conducting candidate validation and follow-up observations for space-based missions such as the upcoming TESS mission since TESS’s large camera pixels resemble K2’s star-crowded fields.

  2. Verification and Validation of RADTRAN 5.5.

    Energy Technology Data Exchange (ETDEWEB)

    Osborn, Douglas.; Weiner, Ruth F.; Mills, George Scott; Hamp, Steve C.

    2005-02-01

    This document contains a description of the verification and validation process used for the RADTRAN 5.5 code. The verification and validation process ensured the proper calculational models and mathematical and numerical methods were used in the RADTRAN 5.5 code for the determination of risk and consequence assessments. The differences between RADTRAN 5 and RADTRAN 5.5 are the addition of tables, an expanded isotope library, and the additional User-Defined meteorological option for accident dispersion.

  3. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam University; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  4. Validity and reliability of eating disorder assessments used with athletes: A review

    Directory of Open Access Journals (Sweden)

    Zachary Pope

    2015-09-01

    Conclusion: Only seven studies calculated validity coefficients within the study whereas 47 cited the validity coefficient. Twenty-six calculated a reliability coefficient whereas 47 cited the reliability of the ED measures. Four studies found validity evidence for the EAT, EDI, BULIT-R, QEDD, and EDE-Q in an athlete population. Few studies reviewed calculated validity and reliability coefficients of ED measures. Cross-validation of these measures in athlete populations is clearly needed.

  5. CONCURRENT VALIDITY OF THE STUDENT TEACHER PROFESSIONAL IDENTITY SCALE

    Directory of Open Access Journals (Sweden)

    Predrag Živković

    2018-04-01

    The main purpose of this study was to examine the concurrent validity of the Student Teacher Professional Identity Scale (STPIS; Fisherman and Abbot, 1998), used for the first time in Serbia. Indicators of concurrent validity were established by correlations with student teachers' self-reported well-being, self-esteem, burnout stress and resilience. Based on the results we can conclude that the STPIS meets the criterion of concurrent validity. The implications of these results are important for researchers and decision makers in teacher education.

  6. Development and validation of sodium fire analysis code ASSCOPS

    International Nuclear Information System (INIS)

    Ohno, Shuji

    2001-01-01

    Version 2.1 of the ASSCOPS sodium fire analysis code was developed to evaluate the thermal consequences of a sodium leak and consequent fire in LMFBRs. This report describes the computational models and the validation studies performed with the code. ASSCOPS calculates sodium droplet and pool fires and the consequent heat and mass transfer behavior. Analyses of sodium pool and spray fire experiments confirmed that this code and the parameters used in the validation studies gave valid results on the thermal consequences of sodium leaks and fires. (author)

  7. DTU PMU Laboratory Development - Testing and Validation

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into the IEEE standard format, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested in the authors' previous efforts, where the response can be expected to follow known patterns and provide confirmation about the test system, confirming the design and settings. In a nutshell, having two PMUs that observe the same signals provides validation of the operation and flags questionable results with more certainty. Moreover, the performance and accuracy of the DTU-PMU are tested, acquiring good and precise results when compared with a commercial phasor measurement device (PMU-1).

  8. Audit Validation Using Ontologies

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2015-01-01

    Requirements for increasing the quality of audit processes in enterprises are defined, and the need to assess and manage audit processes using ontologies is substantiated. Sets of rules, together with ways to assess the consistency of rules and behaviour within the organization, are defined. Using ontologies, qualifications that assess the organization's audit are obtained. Elaboration of audit reports is an algorithm-based activity characterized by generality, determinism, reproducibility and accuracy; auditors assign effective levels, while the ontologies yield calculated levels for the audit. Because an audit report is a qualitative structure of information and knowledge, it is hard for different groups of users (shareholders, managers or other stakeholders) to analyze and interpret. Developing an ontology for audit report validation will therefore be a useful instrument for both auditors and report users. In this paper we propose an instrument for the validation of audit reports comprising a set of keywords, an indicator calculated for each keyword, qualitative levels, and an interpreter that builds a table of indicators with actual and calculated levels.

  9. Overview of SCIAMACHY validation: 2002-2004

    Science.gov (United States)

    Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.

    2005-08-01

    SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to address complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, operational processors have been upgraded regularly and some data products - level-1b spectra, level-2 O3, NO2, BrO and cloud data - have improved significantly. The validation results summarised in this paper conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific usability in other periods and domains. Untied to the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality

  10. Italian Validation of Homophobia Scale (HS).

    Science.gov (United States)

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A

    2015-09-01

    The Homophobia Scale (HS) is a valid tool to assess homophobia. This self-report test is composed of 25 items and yields a total score and three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. The aim of this study was to validate the HS in the Italian context. An Italian translation of the HS was carried out by two bilingual people, after which a native English speaker translated the test back into English. A psychologist and sexologist checked the translated items from a clinical point of view. We recruited 100 subjects aged 18-65 for the Italian validation of the HS. The Pearson coefficient and Cronbach's α coefficient were used to assess test-retest reliability and internal consistency. A sociodemographic questionnaire covering age, geographic distribution, partnership status, education, religious orientation, and sexual orientation was administered together with the translated version of the HS. The analysis of internal consistency showed an overall Cronbach's α coefficient of 0.92. Across the domains, Cronbach's α was 0.90 for behavior/negative affect, 0.94 for affect/behavioral aggression, and 0.92 for negative cognition, whereas it was 0.86 for the total score. For test-retest reliability, the HS total score gave r = 0.93 and negative cognition gave r = 0.75. The Italian validation of the HS showed this self-report test to have good psychometric properties. This study offers a new tool to assess homophobia; the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior.
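
    For illustration, the two statistics reported above can be computed as in the sketch below; the item responses are simulated placeholders, not the study's data.

```python
# Sketch: internal consistency (Cronbach's alpha) and test-retest reliability
# (Pearson r) on simulated questionnaire data; not the study's dataset.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_subjects, n_items = 100, 25
latent = rng.normal(size=(n_subjects, 1))
items_t1 = latent + rng.normal(scale=0.8, size=(n_subjects, n_items))  # time 1
items_t2 = latent + rng.normal(scale=0.8, size=(n_subjects, n_items))  # time 2 (retest)

def cronbach_alpha(items):
    """Classic alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

r, p = pearsonr(items_t1.sum(axis=1), items_t2.sum(axis=1))
print(f"Cronbach's alpha (time 1): {cronbach_alpha(items_t1):.2f}")
print(f"test-retest r: {r:.2f} (p = {p:.1e})")
```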

  11. Belgium customizes care

    International Nuclear Information System (INIS)

    Roberts, M.

    1992-01-01

    Since endorsing Responsible Care on May 6, 1991, the Federation des Industries Chimiques de Belgique (FIC; Brussels) has obtained commitments from 580 of its 730 member companies, and it expects to have the rest signed up by the end of this year. 'But there will probably always be some small companies that will not commit,' says Dirk Clotman, public relations advisor at FIC. FIC has no plans to make compliance mandatory for existing members - although since March 1992 all new members are required to adopt Responsible Care. Clotman notes, however, that FIC's members range from basic chemical producers, plastics processors, and rubber transformers to distributors and wholesalers. 'This widens our problems because Responsible Care is very different for a wholesaler and a basic chemical producer. It slows down the process,' he says.

  12. R & D PROJECTS ARTICULATING THE SCIENTIFIC SYSTEM WITH THE ENTREPRENEURIAL SECTOR: AN ANALYSIS FROM THE COGNITIVE DISTANCE APPROACH

    Directory of Open Access Journals (Sweden)

    Álvarez, Francisco José

    2016-12-01

    This paper studies the interaction between universities and nearby firms through the research and development projects executed by the School of Engineering of the National University of Mar del Plata, analyzing the characteristics of the contractually linked companies. The working hypothesis proposes to explain, through the concept of "cognitive distance" (Boschma, 2005; Dirk Fornahl et al., 2011), the competitive advantage that medium and large companies from mature sectors and small companies from technological sectors have in engaging with scientific and technological institutions. The results indicate that the companies contractually associated with the School of Engineering have mainly demanded technological innovation developments, with a reduced demand for services, and that the origin of the demand follows a pattern consistent with the proposed hypothesis.

  13. Validation of the Intestinal Part of the Prostate Cancer Questionnaire 'QUFW94': Psychometric Properties, Responsiveness, and Content Validity

    International Nuclear Information System (INIS)

    Reidunsdatter, Randi J.; Lund, Jo-Asmund; Fransson, Per; Widmark, Anders

    2010-01-01

    Purpose: Several treatment options are available for patients with prostate cancer. Applicable and valid self-assessment instruments for assessing health-related quality of life (HRQOL) are of paramount importance. The aim of this study was to explore the validity and responsiveness of the intestinal part of the prostate cancer-specific questionnaire QUFW94. Methods and Materials: The content of the intestinal part of QUFW94 was examined by evaluation of experienced clinicians and reviewing the literature. The psychometric properties and responsiveness were assessed by analyzing HRQOL data from the randomized study Scandinavian Prostate Cancer Group 7 (SPCG)/Swedish Association for Urological Oncology 3 (SFUO). Subscales were constructed by means of exploratory factor analyses. Internal consistency was assessed by Cronbach's alpha. Responsiveness was investigated by comparing baseline scores with the 4-year posttreatment follow-up. Results: The content validity was found acceptable, but some amendments were proposed. The factor analyses revealed two symptom scales. The first scale comprised five items regarding general stool problems, frequency, incontinence, need to plan toilet visits, and daily activity. Cronbach's alpha at 0.83 indicated acceptable homogeneity. The second scale was less consistent with a Cronbach's alpha at 0.55. The overall responsiveness was found to be very satisfactory. Conclusion: Two scales were identified in the bowel dimension of the QUFW94; the first one had good internal consistency. The responsiveness was excellent, and some modifications are suggested to strengthen the content validity.

  14. Validation of the Chinese Expanded Euthanasia Attitude Scale

    Science.gov (United States)

    Chong, Alice Ming-Lin; Fok, Shiu-Yeu

    2013-01-01

    This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be…

  15. Validating the Interpretations and Uses of Test Scores

    Science.gov (United States)

    Kane, Michael T.

    2013-01-01

    To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…

  16. GPM Ground Validation: Pre to Post-Launch Era

    Science.gov (United States)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre to post-launch era. Prior to launch direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S. direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch, MRMS products including precipitation rate, accumulation, types and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data are performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined MRMS and VN datasets enable more comprehensive interpretation of both ground and satellite-based estimation uncertainties. To support physical validation efforts eight (one) field campaigns have been conducted in the pre (post) launch era. The campaigns span regimes from northern latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation

  17. Developing a validation for environmental sustainability

    Science.gov (United States)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nawi, Mohd Nasrun Mohd; Aziz, Zulkifli

    2016-08-01

    One of the agendas for addressing environmental protection in construction is to reduce impacts and make construction activities more sustainable. This important consideration has generated several research interests within the construction industry, especially given the damaging effects of construction on the ecosystem, such as various forms of environmental pollution, resource depletion and biodiversity loss on a global scale. Using the Partial Least Squares-Structural Equation Modeling technique, this study validates an environmental sustainability (ES) construct in the context of large construction firms in Malaysia. A cross-sectional survey was carried out in which data were collected from Malaysian large construction firms using a structured questionnaire. The results of this study revealed that business innovativeness and new technology are important in determining the environmental sustainability of Malaysian construction firms. The study also established an adequate level of internal consistency reliability, convergent validity and discriminant validity for each of its constructs. Based on these results, the indicators for the organisational innovativeness dimensions (business innovativeness and new technology) appear useful for measuring these constructs when studying construction firms' tendency to adopt environmental sustainability in their project execution.
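
    Convergent and discriminant validity checks of the kind mentioned above are commonly summarized with composite reliability (CR), average variance extracted (AVE) and the Fornell-Larcker criterion; the sketch below applies these standard formulas to invented loadings and is not the study's analysis.

```python
# Standard composite reliability (CR) and average variance extracted (AVE)
# computations often used to judge convergent/discriminant validity in
# PLS-SEM; loadings and correlations below are invented for illustration.
import math

loadings = {
    "business_innovativeness": [0.82, 0.79, 0.75, 0.81],
    "new_technology":          [0.77, 0.84, 0.80],
}
inter_construct_r = 0.55   # hypothetical correlation between the two constructs

for construct, lam in loadings.items():
    s = sum(lam)
    cr = s**2 / (s**2 + sum(1 - l**2 for l in lam))     # composite reliability
    ave = sum(l**2 for l in lam) / len(lam)              # average variance extracted
    # Fornell-Larcker criterion: sqrt(AVE) should exceed correlations with
    # other constructs for discriminant validity.
    print(f"{construct}: CR={cr:.2f}, AVE={ave:.2f}, "
          f"sqrt(AVE)={math.sqrt(ave):.2f} vs r={inter_construct_r}")
```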

  18. SPECIFICITY OF MANUFACTURING PROCESS VALIDATION FOR DIAGNOSTIC SEROLOGICAL DEVICES

    Directory of Open Access Journals (Sweden)

    O. Yu. Galkin

    2018-02-01

    The aim of this research was to analyze recent scientific literature, as well as national and international legislation, on manufacturing process validation in biopharmaceutical production, in particular for serological diagnostic devices. Technology validation in the field of medical devices for serological diagnostics is most strongly shaped by the Technical Regulation for Medical Devices for in vitro Diagnostics, the State Standards of Ukraine SSU EN ISO 13485:2015 "Medical devices. Quality management system. Requirements for regulation" and SSU EN ISO 14971:2015 "Medical devices. Instructions for risk management", Instruction ST-N 42-4.0:2014 of the Ministry of Healthcare of Ukraine "Medications. Suitable industrial practice", the State Pharmacopoeia of Ukraine, and the ICH Q9 guideline on risk management. Current recommendations for the validation of drug manufacturing processes, including biotechnological manufacturing, cannot be directly applied to medical devices for in vitro diagnostics. It was shown that the specifics of application and raw materials require individual validation parameters and process validations for serological diagnostic devices. Critical parameters to consider in validation plans were provided for every typical stage of production of in vitro diagnostic devices, using immunoassay kits as an example: obtaining protein antigens (including recombinant ones), preparations of mono- and polyclonal antibodies, enzyme immunoconjugates and immunosorbents, chemical reagents, etc. The bottlenecks of the technologies for in vitro diagnostic devices were analyzed from the bioethical and biosafety points of view.

  19. Practical procedure for method validation in INAA- A tutorial

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2015-01-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for the validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainty budget) were outlined in an easy, fast and convenient way. The paper presents, step by step, how to calculate the required method performance characteristics in a process of method validation, and what the procedures, adopted strategies and acceptance criteria for the results are - that is, how to perform a method validation in INAA. To exemplify the methodology applied, the results obtained for the validation of the mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue, are presented. It was concluded that the methodology applied for the validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025 and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)
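
    To give a flavour of a few of the performance characteristics listed above, the sketch below computes trueness (relative bias), repeatability (relative standard deviation) and a zeta-type score against a certified reference value; the numbers are invented and this is not the laboratory's actual procedure.

```python
# Illustrative calculation of a few performance characteristics named above
# (trueness as relative bias, repeatability as RSD, a zeta-type score against
# the reference value); invented numbers, not LAN/IPEN's actual procedure.
import numpy as np

replicates = np.array([7.8, 8.1, 7.9, 8.3, 8.0, 7.7])   # mg/kg, hypothetical Zn results
ref_value, ref_unc = 8.0, 0.2                            # certified value and its uncertainty

mean = replicates.mean()
sd = replicates.std(ddof=1)
u_meas = sd / np.sqrt(len(replicates))                   # standard uncertainty of the mean

relative_bias = (mean - ref_value) / ref_value * 100     # trueness, %
rsd = sd / mean * 100                                    # repeatability, %
zeta = (mean - ref_value) / np.hypot(u_meas, ref_unc)    # |zeta| <= 2 commonly acceptable

print(f"mean={mean:.2f} mg/kg, bias={relative_bias:+.1f}%, RSD={rsd:.1f}%, zeta={zeta:+.2f}")
```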

  20. IMatter: validation of the NHS Scotland Employee Engagement Index.

    Science.gov (United States)

    Snowden, Austyn; MacArthur, Ewan

    2014-11-08

    Employee engagement is a fundamental component of quality healthcare. In order to provide empirical data on engagement in NHS Scotland, an Employee Engagement Index was co-constructed with staff. 'iMatter' consists of 25 Likert questions developed iteratively from the literature and a series of validation events with NHS Scotland staff. The aim of this study was to test the face, content and construct validity of iMatter. A cross-sectional survey of NHS Scotland staff was conducted: in January 2013 iMatter was sent to 2300 staff across all disciplines in NHS Scotland, and 1280 staff completed it. Demographic data were collected. Internal consistency of the scale was calculated. Construct validity consisted of concurrent application of factor analysis and Rasch analysis. Face and content validity were checked using three focus groups. The sample was representative of the NHS Scotland population. iMatter showed very strong reliability (α = 0.958). Factor analysis revealed a four-factor structure. Overall, iMatter showed evidence of high reliability and validity. It is a popular measure of staff engagement in NHS Scotland. Implications for practice focus on the importance of co-production in psychometric development.

  1. Practical procedure for method validation in INAA- A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: robsonpetroni@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for the validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainty budget) were outlined in an easy, fast and convenient way. The paper presents, step by step, how to calculate the required method performance characteristics in a process of method validation, and what the procedures, adopted strategies and acceptance criteria for the results are - that is, how to perform a method validation in INAA. To exemplify the methodology applied, the results obtained for the validation of the mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue, are presented. It was concluded that the methodology applied for the validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025 and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)

  2. Validation of Theory: Exploring and Reframing Popper’s Worlds

    Directory of Open Access Journals (Sweden)

    Steven E. Wallis

    2008-12-01

    Popper's well-known arguments describe the need for advancing social theory through a process of falsification. Despite Popper's call, there has been little change in the academic process of theory development and testing. This paper builds on Popper's lesser-known idea of "three worlds" (physical, emotional/conceptual, and theoretical) to investigate the relationship between knowledge, theory, and action. In this paper, I explore his three worlds to identify alternative routes to support the validation of theory. I suggest there are alternative methods for validation, both between and within the three worlds, and that a combination of validation and falsification methods may be superior to any one method. Integral thinking is also put forward to support the validation process. Rather than repeating the call for full Popperian falsification, this paper recognizes that the current level of social theorizing provides little opportunity for such falsification. Rather than sidestepping the goal of Popperian falsification, the paths suggested here may be seen as providing both validation and falsification as stepping-stones toward the goal of more effective social and organizational theory.

  3. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
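
    As a rough illustration of the mechanism described above (not the CIMI models or the authors' shapes), the following sketch validates a toy RDF instance against a SHACL shape using rdflib and the pySHACL library.

```python
# Minimal SHACL validation of an RDF graph with pySHACL; the shape and data
# are toy stand-ins, not CIMI clinical models.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:BloodPressureShape a sh:NodeShape ;
    sh:targetClass ex:BloodPressureObservation ;
    sh:property [ sh:path ex:systolic ;
                  sh:datatype xsd:decimal ;
                  sh:minCount 1 ; sh:maxCount 1 ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:obs1 a ex:BloodPressureObservation .   # missing ex:systolic -> violation
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes, inference="none")
print("conforms:", conforms)
print(report_text)
```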

  4. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools has been launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  5. Evaluation of the Gratitude Questionnaire in a Chinese Sample of Adults: Factorial Validity, Criterion-Related Validity, and Measurement Invariance Across Sex.

    Science.gov (United States)

    Kong, Feng; You, Xuqun; Zhao, Jingjing

    2017-01-01

    The Gratitude Questionnaire (GQ; McCullough et al., 2002) is one of the most widely used instruments to assess dispositional gratitude. The purpose of this study was to validate a Chinese version of the GQ by examining internal consistency, factor structure, convergent validity, and measurement invariance across sex. A total of 1151 Chinese adults were recruited to complete the GQ, Positive Affect and Negative Affect Scales, and Satisfaction with Life Scale. Confirmatory factor analysis indicated that the original unidimensional model fitted well, which is in accordance with the findings in Western populations. Furthermore, the GQ had satisfactory composite reliability and criterion-related validity with measures of life satisfaction and affective well-being. Evidence of configural, metric and scalar invariance across sex was obtained. Tests of the latent mean differences found females had higher latent mean scores than males. These findings suggest that the Chinese version of GQ is a reliable and valid tool for measuring dispositional gratitude and can generally be utilized across sex in the Chinese context.

  6. The validity of knock-for-knock clauses in comparative perspective

    DEFF Research Database (Denmark)

    Cavaleri, Sylvie Cécile

    This article discusses the validity of so-called knock-for-knock clauses, by which parties to offshore oil and gas or maritime contracts agree that each of them will cover its own losses regardless of who caused them. The issue of the validity of such clauses, and of the liability exclusions they contain, is examined in a comparative perspective. Based on the criteria used to promote or dismiss knock-for-knock clauses in case law and academic literature, the article reaches the conclusion that the question of whether knock-for-knock clauses should be held valid depends on whose interests are being considered, and that further research is warranted.

  7. The validity of the Michigan Alcoholism Screening Test (MAST)

    DEFF Research Database (Denmark)

    Storgaard, H; Nielsen, S D; Gluud, C

    1994-01-01

    This review examines the validity of the Michigan Alcoholism Screening Test (MAST) as a screening instrument for alcohol problems. Studies comparing the MAST questionnaire with other defined diagnostic criteria of alcohol problems were retrieved through MEDLINE and a cross-bibliographic check. A total of 20 validity studies were included. The studies varied considerably regarding the prevalence of alcohol problems, the diagnostic criteria, and the examined patient categories. Compared with other diagnostic criteria of alcohol problems, the MAST gave validity measures spanning a wide range; the sensitivities and the specificities show substantial variations. The variables that seem to have the largest influence on the positive predictive value (PVpos) are the prevalence of alcohol problems, the diagnostic method against which the MAST questionnaire is validated, and the populations to which the MAST is applied. The MAST should in the future...

  8. Development and validation of a premature ejaculation diagnostic tool.

    Science.gov (United States)

    Symonds, Tara; Perelman, Michael A; Althof, Stanley; Giuliano, François; Martin, Mona; May, Kathryn; Abraham, Lucy; Crossland, Anna; Morris, Mark

    2007-08-01

    Diagnosis of premature ejaculation (PE) for clinical trial purposes has typically relied on intravaginal ejaculation latency time (IELT) for entry, but this parameter does not capture the multidimensional nature of PE. Therefore, the aim was to develop a brief, multidimensional, psychometrically validated instrument for diagnosing PE status. The questionnaire development involved three stages: (1) five focus groups and six individual interviews were conducted to develop the content; (2) psychometric validation using three different groups of men; and (3) generation of a scoring system. For psychometric validation and scoring system development, data were collected from three groups of men, including men with PE based on clinician diagnosis using DSM-IV-TR who also met an IELT criterion; on the final instrument, a total score of 11 or greater indicated PE. The development and validation of this new PE diagnostic tool has resulted in a new, user-friendly, and brief self-report questionnaire for use in clinical trials to diagnose PE.

  9. The reliability and validity of a sexual functioning questionnaire.

    Science.gov (United States)

    Corty, E W; Althof, S E; Kurit, D M

    1996-01-01

    The present study assessed the reliability and validity of a measure of sexual functioning, the CMSH-SFQ, for male patients and their partners. The CMSH-SFQ measures erectile and orgasmic functioning, sexual drive, frequency of sexual behavior, and sexual satisfaction. Test-retest reliability was assessed with 19 males and 19 females for the baseline CMSH-SFQ. Criterion validity was measured by comparing the answers of 25 male patients to those of their partners at baseline and follow-up. The majority of items had acceptable levels of reliability and validity. The CMSH-SFQ provides a reliable and valid device that can be used to measure global sexual functioning in men and their partners and may be used to evaluate the efficacy of treatments for sexual dysfunctions. Limitations and suggestions for use of the CMSH-SFQ are addressed.

  10. Reliability and validity of the McDonald Play Inventory.

    Science.gov (United States)

    McDonald, Ann E; Vigen, Cheryl

    2012-01-01

    This study examined the ability of a two-part self-report instrument, the McDonald Play Inventory, to reliably and validly measure the play activities and play styles of 7- to 11-yr-old children and to discriminate between the play of neurotypical children and children with known learning and developmental disabilities. A total of 124 children ages 7-11 recruited from a sample of convenience and a subsample of 17 parents participated in this study. Reliability estimates yielded moderate correlations for internal consistency, total test intercorrelations, and test-retest reliability. Validity estimates were established for content and construct validity. The results suggest that a self-report instrument yields reliable and valid measures of a child's perceived play performance and discriminates between the play of children with and without disabilities. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  11. Validation of heat transfer models for gap cooling

    International Nuclear Information System (INIS)

    Okano, Yukimitsu; Nagae, Takashi; Murase, Michio

    2004-01-01

    For severe accident assessment of a light water reactor, models of heat transfer in a narrow annular gap between overheated core debris and a reactor pressure vessel are important for evaluating vessel integrity and accident management. The authors developed and improved the models of heat transfer. However, validation was not sufficient for applicability of the gap heat flux correlation to the debris cooling in the vessel lower head and applicability of the local boiling heat flux correlations to the high-pressure conditions. Therefore, in this paper, we evaluated the validity of the heat transfer models and correlations by analyses for ALPHA and LAVA experiments where molten aluminum oxide (Al₂O₃) at about 2700 K was poured into the high pressure water pool in a small-scale simulated vessel lower head. In the heating process of the vessel wall, the calculated heating rate and peak temperature agreed well with the measured values, and the validity of the heat transfer models and gap heat flux correlation was confirmed. In the cooling process of the vessel wall, the calculated cooling rate was compared with the measured value, and the validity of the nucleate boiling heat flux correlation was confirmed. The peak temperatures of the vessel wall in ALPHA and LAVA experiments were lower than the temperature at the minimum heat flux point between film boiling and transition boiling, so the minimum heat flux correlation could not be validated. (author)

  12. Validation of the Meta-cognitions Questionnaire - Adolescent Version

    Directory of Open Access Journals (Sweden)

    Kazem Khoramdel

    2012-03-01

    Background: The role and importance of meta-cognitive beliefs in creating and maintaining anxiety disorders were initially explained in meta-cognitive theory. The purpose of this study was to validate the Meta-cognitions Questionnaire - Adolescent version (MCQ-A) in normal Iranian adolescents and to compare meta-cognitive beliefs between adolescents with anxiety disorders and normal individuals. Materials and Methods: This was a standardization study. The original version was translated into Persian and then administered to 204 adolescents (101 boys and 103 girls) aged 13 through 17 years, selected by random cluster sampling from schools in Isfahan, together with the Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. Reliability was assessed through internal consistency (Cronbach's alpha) and split-half coefficients; validity was assessed through convergent validity, criterion validity and confirmatory factor analysis. Results: The convergent validity correlations showed a relationship between the MCQ-A total score and its components with anxiety and depression, except for cognitive self-consciousness. The data indicated appropriate Cronbach's alpha and split-half reliability coefficients for the MCQ-A and the extracted factors. Factor analysis by principal components analysis with varimax rotation showed five factors that account for 45% of the variance. Conclusion: The MCQ-A has satisfactory psychometric properties in the Iranian population.

  13. TWO CRITERIA FOR GOOD MEASUREMENTS IN RESEARCH: VALIDITY AND RELIABILITY

    Directory of Open Access Journals (Sweden)

    Haradhan Kumar Mohajan

    2017-12-01

    Reliability and validity are the two most important and fundamental features in the evaluation of any measurement instrument or tool used in good research. The purpose of this paper is to discuss the validity and reliability of the measurement instruments that are used in research. Validity concerns what an instrument measures, and how well it does so. Reliability concerns the faith that one can have in the data obtained from the use of an instrument, that is, the degree to which any measuring tool controls for random error. An attempt is made here to review reliability and validity, and the threats to them, in some detail.

  14. Method validation for strobilurin fungicides in cereals and fruit

    DEFF Research Database (Denmark)

    Christensen, Hanne Bjerre; Granby, Kit

    2001-01-01

    Strobilurins are a new class of fungicides that are active against a broad spectrum of fungi. In the present work a GC method for the analysis of strobilurin fungicides was validated. The method was based on extraction with ethyl acetate/cyclohexane, clean-up by gel permeation chromatography (GPC) and determination of the content by gas chromatography (GC) with electron capture (EC-), nitrogen/phosphorus (NP-) and mass spectrometric (MS-) detection. Three strobilurins (azoxystrobin, kresoxim-methyl and trifloxystrobin) were validated on three matrices: wheat, apple and grapes. The validation was based...

  15. 78 FR 5866 - Pipeline Safety: Annual Reports and Validation

    Science.gov (United States)

    2013-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0319] Pipeline Safety: Annual Reports and Validation AGENCY: Pipeline and Hazardous Materials... 2012 gas transmission and gathering annual reports, remind pipeline owners and operators to validate...

  16. Reliability and Validity of the Korean Version of the Cancer Stigma Scale.

    Science.gov (United States)

    So, Hyang Sook; Chae, Myeong Jeong; Kim, Hye Young

    2017-02-01

    In this study the reliability and validity of the Korean version of the Cancer Stigma Scale (KCSS) were evaluated. The KCSS was formed through translation and modification of the Cataldo Lung Cancer Stigma Scale. The KCSS, the Psychological Symptom Inventory (PSI), and the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire - Core 30 (EORTC QLQ-C30) were administered to 247 men and women diagnosed with one of the five major cancers. Construct validity, item convergent and discriminant validity, concurrent validity, known-group validity, and internal consistency reliability of the KCSS were evaluated. Exploratory factor analysis supported the construct validity with a six-factor solution that explained 65.7% of the total variance. The six-factor model was validated by confirmatory factor analysis (Q (χ²/df) = 2.28, GFI = .84, AGFI = .81, NFI = .80, TLI = .86, RMR = .03, and RMSEA = .07). Concurrent validity was demonstrated with the QLQ-C30 (global: r = -.44; functional: r = -.19; symptom: r = .42). The KCSS had known-group validity. Cronbach's alpha coefficient for the 24 items was .89. The results of this study suggest that the 24-item KCSS has relatively acceptable reliability and validity and can be used in clinical research to assess cancer stigma and its impacts on health-related quality of life in Korean cancer patients. © 2017 Korean Society of Nursing Science

  17. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    Draft paper for CIE 2015 (August 2-5, 2015, Boston, Massachusetts, USA; paper DETC2015-46982), "Development of a Conservative Model Validation Approach for Reliable Analysis". The goal is to obtain a conservative simulation model for reliable design even with limited experimental data, an issue that very little research has taken into account. In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach, and Section 4 describes how to account...

  18. The validated sun exposure questionnaire

    DEFF Research Database (Denmark)

    Køster, B; Søndergaard, J; Nielsen, J B

    2017-01-01

    Few questionnaires used in monitoring sun-related behavior have been tested for validity. We established the criterion validity of a questionnaire developed for monitoring population sun-related behavior. During May-August 2013, 664 Danes wore a personal electronic UV dosimeter for one week that measured the outdoor time and dose of erythemal UVR exposure. In the following week, they answered a questionnaire on their sun-related behavior in the measurement week. Outdoor time measured by dosimetry correlated strongly with both the outdoor time and the developed exposure scale measured in the questionnaire. Exposure measured in SED by dosimetry correlated strongly with the exposure scale. In a linear regression model of UVR (SED) received, 41 percent of the variation was explained by skin type, age, week of participation and the exposure scale, with the exposure scale as the main contributor...

  19. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  20. DTU PMU Laboratory Development - Testing and Validation

    OpenAIRE

    Garcia-Valle, Rodrigo; Yang, Guang-Ya; Martin, Kenneth E.; Nielsen, Arne Hejde; Østergaard, Jacob

    2010-01-01

    This is a report of the results of phasor measurement unit (PMU) laboratory development and testing done at the Centre for Electric Technology (CET), Technical University of Denmark (DTU). Analysis of the PMU performance first required the development of tools to convert the DTU PMU data into IEEE standard, and the validation is done for the DTU-PMU via a validated commercial PMU. The commercial PMU has been tested from the authors' previous efforts, where the response can be expected to foll...

  1. Extending the validity of the Feeding Practices and Structure Questionnaire.

    Science.gov (United States)

    Jansen, Elena; Mallan, Kimberley M; Daniels, Lynne A

    2015-06-30

    Feeding practices are commonly examined as potentially modifiable determinants of children's eating behaviours and weight status. Although a variety of questionnaires exist to assess different feeding aspects, many lack thorough reliability and validity testing. The Feeding Practices and Structure Questionnaire (FPSQ) is a tool designed to measure early feeding practices related to non-responsive feeding and the structure of the meal environment. Face validity, factorial validity, internal reliability and cross-sectional correlations with children's eating behaviours have been established in mothers of 2-year-old children. The aim of the present study was to further extend the validity of the FPSQ by examining factorial, construct and predictive validity, and stability. Participants were from the NOURISH randomised controlled trial, which evaluated an intervention with first-time mothers designed to promote protective feeding practices. Maternal feeding practices (FP) and child eating behaviours were assessed when children were aged 2 years and 3.7 years (n = 388). Confirmatory factor analysis, group differences, predictive relationships, and stability were tested. The original 9-factor structure was confirmed when children were aged 3.7 ± 0.3 years. Cronbach's alpha was above the recommended 0.70 cut-off for all factors except Structured Meal Timing, Over Restriction and Distrust in Appetite, which were 0.58, 0.67 and 0.66 respectively. Allocated group differences reflected behaviour consistent with intervention content and all feeding practices were stable across both time points (range of r = 0.45-0.70). There was some evidence for the predictive validity of the factors, with 2 FP showing expected relationships, 2 FP showing expected and unexpected relationships and 5 FP showing no relationship. Reliability and validity were demonstrated for most subscales of the FPSQ. Future validation is warranted with culturally diverse samples and with fathers and

  2. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    Science.gov (United States)

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates, and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.
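
    The logic of lowering a validity cutoff until specificity stays at or above 90% can be sketched as below. The Digit Span-style scores and the flagging rule are fabricated assumptions for illustration; they are not the study data or the published cutoffs.

```python
# Sketch of lowering an embedded validity cut-off until specificity (the rate at
# which valid-effort patients are correctly NOT flagged) stays >= 90%.
# The scores below are fabricated for illustration only.
import numpy as np

valid_effort_scores = np.array(
    [9, 10, 8, 11, 12, 9, 8, 10, 13, 11, 9, 10, 8, 12, 11, 10, 9, 11, 7, 10]
)

def specificity(cutoff, scores):
    # A score at or below the cutoff is flagged as "possibly invalid";
    # specificity is the share of genuinely valid protocols left unflagged.
    return np.mean(scores > cutoff)

# Search from the highest possible cutoff downward and keep the highest one
# that still preserves the desired specificity.
for cutoff in range(int(valid_effort_scores.max()), int(valid_effort_scores.min()) - 1, -1):
    if specificity(cutoff, valid_effort_scores) >= 0.90:
        print(f"highest cutoff keeping specificity >= 90%: {cutoff}")
        break
```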

  3. [Data validation methods and discussion on Chinese materia medica resource survey].

    Science.gov (United States)

    Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing

    2013-07-01

    Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense amount of data, which places very high demands on the construction of the database system. In order to ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method for ensuring the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database for the fourth national survey of Chinese materia medica resources and further improves the design ideas and programs for data validation. The purpose of this study is to help the survey work proceed smoothly.
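
    A record-level validation routine of the kind such a survey database might apply is sketched below. The field names, value ranges, and rules are invented assumptions for illustration; they are not taken from the census database described in the record.

```python
# Illustrative sketch of record-level validation rules for survey data.
# Field names and bounds are invented, not taken from the census database.
def validate_record(record):
    errors = []
    # Completeness: required fields must be present and non-empty.
    for field in ("species_name", "province", "latitude", "longitude", "survey_date"):
        if not record.get(field):
            errors.append(f"missing field: {field}")
    # Validity: coordinates must fall inside plausible bounds (here, roughly China).
    if record.get("latitude") is not None and not (18.0 <= record["latitude"] <= 54.0):
        errors.append("latitude out of range")
    if record.get("longitude") is not None and not (73.0 <= record["longitude"] <= 135.0):
        errors.append("longitude out of range")
    return errors

sample = {"species_name": "Glycyrrhiza uralensis", "province": "Gansu",
          "latitude": 36.1, "longitude": 103.8, "survey_date": "2013-05-12"}
print(validate_record(sample) or "record passes all checks")
```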

  4. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [Area under the Receiver Operating Curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC for validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
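
    The general idea of a "cohort differences model" — train a classifier to tell the two cohorts apart and read its AUC as a measure of how different they are — can be sketched as below. The simulated covariates, cohort sizes, and choice of logistic regression are assumptions for illustration, not the rectal-cancer cohorts or the exact method of the record.

```python
# Sketch of the cohort-differences idea: predict whether a patient comes from
# the training or the validation cohort; an AUC near 0.5 suggests similar
# cohorts (reproducibility), a high AUC suggests the validation mainly tests
# transferability. Data below is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
train_cohort = rng.normal(loc=0.0, scale=1.0, size=(100, 3))  # e.g. three clinical covariates
valid_cohort = rng.normal(loc=0.8, scale=1.0, size=(60, 3))   # deliberately shifted cohort

X = np.vstack([train_cohort, valid_cohort])
y = np.array([0] * len(train_cohort) + [1] * len(valid_cohort))  # 1 = validation cohort

# Cross-validated probabilities avoid judging the classifier on its own training data.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba")
print(f"cohort differences AUC = {roc_auc_score(y, proba[:, 1]):.2f}")
```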

  5. Orthorexia nervosa: validation of a diagnosis questionnaire.

    Science.gov (United States)

    Donini, L M; Marsili, D; Graziani, M P; Imbriale, M; Cannella, C

    2005-06-01

    The aim was to validate a questionnaire for the diagnosis of orthorexia nervosa, an eating disorder defined as "maniacal obsession for healthy food". A total of 525 subjects were enrolled. They were then randomized into two samples (a sample of 404 subjects for the construction of the test for the diagnosis of orthorexia, ORTO-15, and a sample of 121 subjects for the validation of the test). The ORTO-15 questionnaire, validated for the diagnosis of orthorexia, is made up of 15 multiple-choice items. The test proposed for the diagnosis of orthorexia (ORTO-15) showed a good predictive capability at a threshold value of 40 (efficacy 73.8%, sensitivity 55.6% and specificity 75.8%), also on verification with a control sample. However, it has a limit in identifying the obsessive disorder. For this reason we maintain that further investigation is necessary and that new questions useful for the evaluation of the obsessive-compulsive behavior should be added to the ORTO-15 questionnaire.
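
    How efficacy, sensitivity, and specificity of a threshold-based score relate to a confusion matrix is sketched below. The counts are invented round numbers, not the ORTO-15 validation data, and it is assumed (as the record implies) that scores below the threshold flag a case.

```python
# Sketch of accuracy ("efficacy"), sensitivity and specificity for a
# threshold-based questionnaire score. Counts below are invented.
tp, fn = 45, 15    # cases scoring below / at-or-above the threshold of 40
tn, fp = 80, 20    # non-cases scoring at-or-above / below the threshold

sensitivity = tp / (tp + fn)                 # cases correctly detected
specificity = tn / (tn + fp)                 # non-cases correctly passed
efficacy = (tp + tn) / (tp + fn + tn + fp)   # overall proportion classified correctly
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, efficacy={efficacy:.1%}")
```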

  6. Command Leadership DEOCS 4.1 Construct Validity Summary

    Science.gov (United States)

    2017-08-01

    Defense Equal Opportunity Management Institute, Report #15-18. Background: In 2014, DEOMI released DEOCS 4.0 for Department of Defense...individual items on the DEOCS. The following paper details the work conducted to modify the factor of Leadership Cohesion so that it focuses more

  7. Validation of a rubric to assess innovation competence

    Directory of Open Access Journals (Sweden)

    Frances Watts

    2012-06-01

    Full Text Available This paper addresses the development and validation of rubrics, materials and situations for the assessment of innovation competence. Research was carried out to verify the viability of the first draft of the assessment criteria, which led to refinement of the criteria and proposals to enhance the ensuing validation process that will include students and raters of different language backgrounds.

  8. Validity, Reliability, and the Questionable Role of Psychometrics in Plastic Surgery

    Science.gov (United States)

    2014-01-01

    Summary: This report examines the meaning of validity and reliability and the role of psychometrics in plastic surgery. Study titles increasingly include the word “valid” to support the authors’ claims. Studies by other investigators may be labeled “not validated.” Validity simply refers to the ability of a device to measure what it intends to measure. Validity is not an intrinsic test property. It is a relative term most credibly assigned by the independent user. Similarly, the word “reliable” is subject to interpretation. In psychometrics, its meaning is synonymous with “reproducible.” The definitions of valid and reliable are analogous to accuracy and precision. Reliability (both the reliability of the data and the consistency of measurements) is a prerequisite for validity. Outcome measures in plastic surgery are intended to be surveys, not tests. The role of psychometric modeling in plastic surgery is unclear, and this discipline introduces difficult jargon that can discourage investigators. Standard statistical tests suffice. The unambiguous term “reproducible” is preferred when discussing data consistency. Study design and methodology are essential considerations when assessing a study’s validity. PMID:25289354

  9. Validity, Reliability, and the Questionable Role of Psychometrics in Plastic Surgery

    Directory of Open Access Journals (Sweden)

    Eric Swanson, MD

    2014-06-01

    Full Text Available Summary: This report examines the meaning of validity and reliability and the role of psychometrics in plastic surgery. Study titles increasingly include the word “valid” to support the authors’ claims. Studies by other investigators may be labeled “not validated.” Validity simply refers to the ability of a device to measure what it intends to measure. Validity is not an intrinsic test property. It is a relative term most credibly assigned by the independent user. Similarly, the word “reliable” is subject to interpretation. In psychometrics, its meaning is synonymous with “reproducible.” The definitions of valid and reliable are analogous to accuracy and precision. Reliability (both the reliability of the data and the consistency of measurements) is a prerequisite for validity. Outcome measures in plastic surgery are intended to be surveys, not tests. The role of psychometric modeling in plastic surgery is unclear, and this discipline introduces difficult jargon that can discourage investigators. Standard statistical tests suffice. The unambiguous term “reproducible” is preferred when discussing data consistency. Study design and methodology are essential considerations when assessing a study’s validity.

  10. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    Science.gov (United States)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity is an issue when employing the HTMT criterion. This shows that the latent variables under study face the issue of multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discriminant validity among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
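
    The two checks named above can be sketched for a fabricated two-construct case: the Fornell-Larcker criterion compares the square root of each construct's average variance extracted (AVE) with the inter-construct correlation, while HTMT divides the mean across-construct item correlation by the geometric mean of the within-construct item correlations. All loadings and correlations below are invented for illustration.

```python
# Minimal sketch of the Fornell-Larcker and HTMT discriminant-validity checks
# for two constructs A and B with three items each. All numbers are fabricated.
import numpy as np

loadings_A = np.array([0.80, 0.75, 0.85])   # standardised loadings of A's items
loadings_B = np.array([0.78, 0.82, 0.74])
corr_AB = 0.65                              # latent correlation between A and B

# Fornell-Larcker: sqrt(AVE) of each construct should exceed the A-B correlation.
ave_A, ave_B = (loadings_A**2).mean(), (loadings_B**2).mean()
fl_ok = np.sqrt(ave_A) > corr_AB and np.sqrt(ave_B) > corr_AB
print(f"Fornell-Larcker satisfied: {fl_ok}")

# HTMT: mean heterotrait (A-item x B-item) correlation divided by the geometric
# mean of the average monotrait (within-construct) item correlations.
hetero = np.array([0.55, 0.60, 0.58, 0.62, 0.57, 0.59, 0.61, 0.56, 0.60])
mono_A = np.array([0.62, 0.66, 0.64])
mono_B = np.array([0.60, 0.63, 0.61])
htmt = hetero.mean() / np.sqrt(mono_A.mean() * mono_B.mean())
print(f"HTMT = {htmt:.2f} (problematic if above ~0.85-0.90)")
```

    With these fabricated numbers the Fornell-Larcker check passes while the HTMT ratio is high, mirroring the pattern reported in the record.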

  11. Site characterization and validation - Tracer migration experiment in the validation drift, report 2, Part 2: breakthrough curves in the validation drift appendices 5-9

    International Nuclear Information System (INIS)

    Birgersson, L.; Widen, H.; Aagren, T.; Neretnieks, I.; Moreno, L.

    1992-01-01

    Flowrate curves for the 53 sampling areas in the validation drift with measurable flowrates are given. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. The total flowrate for these three sampling areas is given in a separate plot. The flowrates are given in ml/h. The time is given in hours since April 27 00:00, 1990. Disturbances in flowrates are observed after 8500 hours due to opening of boreholes C1 and W1. Results from flowrate measurements after 8500 hours are therefore excluded. The tracer breakthrough curves for 38 sampling areas in the validation drift are given as concentration values versus time. The sampling area 267 is treated as three separate sampling areas: 267:1, 267:2 and 267:3. This gives a total of 40 breakthrough curves for each tracer. (au)

  12. Elaboration and Validation of the Medication Prescription Safety Checklist

    Science.gov (United States)

    Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena

    2017-01-01

    ABSTRACT Objective: to elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: the analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: the Medication Prescription Safety Checklist proved to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128

  13. Validating database constraints and updates using automated reasoning techniques

    NARCIS (Netherlands)

    Feenstra, Remco; Wieringa, Roelf J.

    1995-01-01

    In this paper, we propose a new approach to the validation of formal specifications of integrity constraints. The validation problem of formal specifications consists of ascertaining whether the formal specification corresponds with what the domain specialist intends. This is distinct from the

  14. Extending the validity of the Feeding Practices and Structure Questionnaire

    OpenAIRE

    Jansen, Elena; Mallan, Kimberley M.; Daniels, Lynne A.

    2015-01-01

    Background Feeding practices are commonly examined as potentially modifiable determinants of children's eating behaviours and weight status. Although a variety of questionnaires exist to assess different feeding aspects, many lack thorough reliability and validity testing. The Feeding Practices and Structure Questionnaire (FPSQ) is a tool designed to measure early feeding practices related to non-responsive feeding and structure of the meal environment. Face validity, factorial validity, inte...

  15. [Development and validity of workplace bullying in nursing-type inventory (WPBN-TI)].

    Science.gov (United States)

    Lee, Younju; Lee, Mihyoung

    2014-04-01

    The purpose of this study was to develop an instrument to assess bullying of nurses, and to test the validity and reliability of the instrument. The initial thirty items of WPBN-TI were identified through a review of the literature on types of bullying related to nursing and in-depth interviews with 14 nurses who experienced bullying at work. Sixteen items were developed through 2 content validity tests by 9 experts and 10 nurses. The final WPBN-TI instrument was evaluated by 458 nurses from five general hospitals in the Incheon metropolitan area. The SPSS 18.0 program was used to assess the instrument based on internal consistency reliability, construct validity, and criterion validity. WPBN-TI consisted of 16 items with three distinct factors (verbal and nonverbal bullying, work-related bullying, and external threats), which explained 60.3% of the total variance. The convergent validity and determinant validity for WPBN-TI were 100.0% and 89.7%, respectively. Known-groups validity of WPBN-TI was demonstrated through mean differences in the subjective perception of bullying. Criterion validity for WPBN-TI was satisfactory, exceeding .70. The reliability of WPBN-TI was high, with a Cronbach's α of .91. WPBN-TI, with high validity and reliability, is suitable for determining types of bullying in the nursing workplace.

  16. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    Science.gov (United States)

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
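
    The consensus rule described above — approving an attribute when its Content Validity Index (CVI) reaches at least 0.75 — can be sketched as below. The ratings, the 4-point relevance scale, and the attribute names are invented assumptions for illustration, not the study's data.

```python
# Sketch of an item-level Content Validity Index (CVI): the share of specialists
# rating an attribute 3 or 4 on a 4-point relevance scale. Ratings are invented;
# the record's consensus rule keeps attributes with CVI >= 0.75.
ratings = {
    "indicator_1_structure": [4, 4, 3, 4, 3, 4, 2, 4],
    "indicator_2_process":   [3, 2, 2, 4, 3, 2, 3, 2],
}
for attribute, scores in ratings.items():
    cvi = sum(score >= 3 for score in scores) / len(scores)
    verdict = "approved" if cvi >= 0.75 else "needs revision"
    print(f"{attribute}: CVI = {cvi:.2f} -> {verdict}")
```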

  17. Development and validation of the coping with terror scale.

    Science.gov (United States)

    Stein, Nathan R; Schorr, Yonit; Litz, Brett T; King, Lynda A; King, Daniel W; Solomon, Zahava; Horesh, Danny

    2013-10-01

    Terrorism creates lingering anxiety about future attacks. In prior terror research, the conceptualization and measurement of coping behaviors were constrained by the use of existing coping scales that index reactions to daily hassles and demands. The authors created and validated the Coping with Terror Scale to fill the measurement gap. The authors emphasized content validity, leveraging the knowledge of terror experts and groups of Israelis. A multistep approach involved construct definition and item generation, trimming and refining the measure, exploring the factor structure underlying item responses, and garnering evidence for reliability and validity. The final scale comprised six factors that were generally consistent with the authors' original construct specifications. Scores on items linked to these factors demonstrate good reliability and validity. Future studies using the Coping with Terror Scale with other populations facing terrorist threats are needed to test its ability to predict resilience, functional impairment, and psychological distress.

  18. Four tenets of modern validity theory for medical education assessment and evaluation.

    Science.gov (United States)

    Royal, Kenneth D

    2017-01-01

    Validity is considered by many to be the most important criterion for evaluating a set of scores, yet few agree on what exactly the term means. Since the mid-1800s, scholars have been concerned with the notion of validity, but over time, the term has developed a variety of meanings across academic disciplines and contexts. Accordingly, when scholars with different academic backgrounds, many of whom hold deeply entrenched perspectives about validity conceptualizations, converge in the field of medical education assessment, it is a recipe for confusion. It is therefore important to work toward a consensus about validity in the context of medical education assessment. To that end, the purpose of this work was to present four fundamental tenets of modern validity theory in an effort to establish a framework for scholars in the field of medical education assessment to follow when conceptualizing validity, interpreting validity evidence, and reporting research findings.

  19. Validity of Linder Hypothesis in BRIC Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

    Full Text Available In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to determine whether it is consistent with this hypothesis. Using data for the period 1996–2010, the study applies panel data analysis in order to provide evidence on the empirical validity of the Linder hypothesis for the BRIC countries' international trade. Empirical findings show that trade between the BRIC countries supports the Linder hypothesis.
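
    A common way to test the Linder hypothesis is to regress bilateral trade on the absolute difference in per-capita incomes; a negative, significant coefficient supports the hypothesis. The sketch below uses simulated data and ordinary least squares as a stand-in for the panel estimation in the record; the variable names and numbers are assumptions for illustration only.

```python
# Sketch of a Linder-style regression: log bilateral trade on the absolute
# per-capita income gap. Data is simulated, not the BRIC panel from the record.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                                        # country-pair-year observations
income_gap = rng.uniform(0.1, 5.0, n)          # |per-capita GDP difference|, thousands USD
log_trade = 8.0 - 0.4 * income_gap + rng.normal(0, 0.5, n)  # simulated with a negative effect

X = sm.add_constant(income_gap)
result = sm.OLS(log_trade, X).fit()
print(result.params)     # a negative slope on income_gap is consistent with Linder
```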

  20. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that operate at large scales and exhibit dynamic, complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible. Miniaturizing real-world phenomena within the framework of a model in order to simulate them is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. A key challenge, however, is the difficulty of validating and verifying ABMS. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, validation techniques and the challenges of ABM validation are discussed.
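
    One widely used empirical validation step for an agent-based model is to compare a simulated output distribution against observed reference data with a statistical test such as the two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic stand-ins for both the observed data and the ABM output; it illustrates the general technique rather than any specific method from the record.

```python
# Sketch of one common empirical validation step for an agent-based model:
# compare a simulated output distribution against observed reference data.
# Both samples below are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
observed_trip_lengths = rng.lognormal(mean=1.0, sigma=0.4, size=500)     # "real-world" data
simulated_trip_lengths = rng.lognormal(mean=1.05, sigma=0.45, size=500)  # ABM output

stat, p_value = stats.ks_2samp(observed_trip_lengths, simulated_trip_lengths)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A large p-value means the test cannot distinguish the distributions, which is
# (weak) evidence that the model reproduces the observed pattern.
```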