WorldWideScience

Sample records for validation gv program

  1. Rainfall Product Evaluation for the TRMM Ground Validation Program

    Science.gov (United States)

    Amitai, E.; Wolff, D. B.; Robinson, M.; Silberstein, D. S.; Marks, D. A.; Kulie, M. S.; Fisher, B.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Evaluation of the Tropical Rainfall Measuring Mission (TRMM) satellite observations is conducted through a comprehensive Ground Validation (GV) Program. Standardized instantaneous and monthly rainfall products are routinely generated using quality-controlled ground-based radar data from four primary GV sites. As part of the TRMM GV program, an effort is being made to evaluate these GV products and to determine the uncertainties of the rainfall estimates. The evaluation effort is based on comparison with rain gauge data. The variance between the gauge measurement and the true averaged rain amount within the radar pixel is a limiting factor in the evaluation process. While monthly estimates are relatively simple to evaluate, the evaluation of the instantaneous products is much more of a challenge. Scattergrams of point comparisons between radar and rain gauges are extremely noisy for several reasons (e.g., sample volume discrepancies, timing and navigation mismatches, variability of Ze-R relationships), and are therefore useless for evaluating the estimates. Several alternative methods will be presented, such as analysis of the distribution of rain volume by rain rate as derived from gauge intensities and from reflectivities above the gauge network. Alternative procedures to increase the accuracy of the estimates and to reduce their uncertainties will also be discussed.

  2. The GPM Ground Validation Program: Pre to Post-Launch

    Science.gov (United States)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes, and hydrology in the orographic and oceanic domains of western Washington State. 
Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi
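The "geo-matching" of ground-radar data to coincident satellite footprints described in this record can be sketched in a few lines. This is a hypothetical illustration, not the VN software's actual interface: the function name, argument layout, and the 5 km footprint radius are all assumptions made for the example.

```python
# Hypothetical sketch of a geo-matching step: for each satellite footprint,
# average the ground-radar (GR) reflectivity samples that fall inside an
# assumed footprint radius. Averaging is done in linear units, then converted
# back to dBZ, as is conventional for reflectivity.
import numpy as np

def geo_match(sat_lons, sat_lats, gr_lons, gr_lats, gr_refl_dbz, radius_km=5.0):
    """Average GR reflectivity (dBZ) within each satellite footprint."""
    km_per_deg = 111.0  # rough flat-earth degrees-to-km conversion
    matched = np.full(len(sat_lons), np.nan)
    for i, (slon, slat) in enumerate(zip(sat_lons, sat_lats)):
        d = km_per_deg * np.hypot(gr_lons - slon, gr_lats - slat)
        inside = d <= radius_km
        if inside.any():
            # average in linear units (mm^6 m^-3), convert back to dBZ
            lin = 10.0 ** (gr_refl_dbz[inside] / 10.0)
            matched[i] = 10.0 * np.log10(lin.mean())
    return matched
```

Real matching software must additionally handle beam geometry, gate volumes, and timing, which this sketch ignores.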

  3. 76 FR 73483 - Airworthiness Directives; Gulfstream Aerospace Corporation Model GV and GV-SP Airplanes

    Science.gov (United States)

    2011-11-29

    ... Management Branch, ACE-102A, FAA, Atlanta Aircraft Certification Office (ACO), 1701 Columbia Avenue, College... Airworthiness Directives; Gulfstream Aerospace Corporation Model GV and GV-SP Airplanes AGENCY: Federal Aviation... certain Gulfstream Aerospace Corporation Model GV and GV-SP airplanes. This AD was prompted by...

  4. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and to global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and the Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS), and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud, and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products, which is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include vicarious calibration experiments to validate instrument calibration, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks such as AERONET and intensive sites such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. 
The intensive validation activities planned for the first year of the Terra

  5. 76 FR 36392 - Airworthiness Directives; Gulfstream Aerospace Corporation Model GV and GV-SP Airplanes

    Science.gov (United States)

    2011-06-22

    ... Aircraft Certification Office (ACO) 1701 Columbia Avenue, College Park, Georgia 30337; phone: 404-474-5566..., Atlanta Aircraft Certification Office (ACO) 1701 Columbia Avenue, College Park, Georgia 30337; telephone... Aerospace Corporation Model GV and GV-SP Airplanes AGENCY: Federal Aviation Administration (FAA), DOT...

  6. Clustering gene expression data based on predicted differential effects of GV interaction.

    Science.gov (United States)

    Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu

    2005-02-01

    Microarrays have become a popular biotechnology in biological and medical research. However, systematic and stochastic variabilities in microarray data are expected and unavoidable, so the raw measurements carry inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed directly by various clustering methods, which may introduce biased interpretations in identifying groups of genes or samples. In this paper, a statistical method based on mixed-model approaches is proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs for cluster analysis. We illustrate the application of our method with a gene expression dataset and elucidate the utility of our approach using an external validation.
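The pipeline in this record (ANOVA partition of log-expression, then clustering on predicted GV interaction effects) can be sketched as follows. This is a simplified illustration assuming one observation per gene-variety cell; the paper's adjusted unbiased prediction (AUP) step is replaced here by plain double-centering (the ordinary two-way ANOVA interaction estimate), and the k-means routine is a minimal stand-in.

```python
# Sketch: partition a genes x varieties log-expression matrix into grand mean,
# gene main effects, variety main effects, and GV interaction, then cluster
# genes on the interaction effects. Double-centering stands in for AUP.
import numpy as np

def gv_interaction(log_expr):
    """Return the two-way ANOVA interaction term of a genes x varieties matrix."""
    grand = log_expr.mean()
    gene_eff = log_expr.mean(axis=1, keepdims=True) - grand  # gene main effects
    var_eff = log_expr.mean(axis=0, keepdims=True) - grand   # variety main effects
    return log_expr - grand - gene_eff - var_eff

def kmeans(x, k, iters=50, seed=0):
    """Minimal k-means on the rows of x (illustrative, no external deps)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels
```

Genes with the same main effects but opposite variety-specific responses end up in different clusters, which is exactly the signal the GV interaction isolates.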

  7. Giant vesicles (GV) in colloidal system under the optical polarization microscope (OPM).

    Science.gov (United States)

    Khalid, Khalisanni; Noh, Muhammad Azri Mohd; Khan, M Niyaz; Ishak, Ruzaina; Penney, Esther; Chowdhury, Zaira Zaman; Hamzah, Mohammad Hafiz; Othman, Maizatulnisa

    2017-09-01

    This paper discusses unprecedented microscopic findings of micellar growth in a colloidal system (CS) of catalyzed piperidinolysis of ionized phenyl salicylate (PS⁻). Giant vesicles (GV) were observed under the optical polarization microscope (OPM) at [NaX] = 0.1 M, where X = 3-isopropC₆H₄O⁻. The conditions were rationalized from the pseudo-first-order rate constant, k_obs, of PS⁻ in the micellar phase at 31.1×10⁻³ s⁻¹ reported in a previous publication. The GV (overall diameter 57.6 μm) in the CS (CTABr/NaX/H₂O)-catalyzed piperidinolysis of ionized phenyl salicylate were found to be giant unilamellar vesicles (GUV) and giant multilamellar vesicles (GMV). The findings were also validated by means of rheological analysis.

  8. A Multi-Faceted View of GPM GV in the Post-Launch Era

    Science.gov (United States)

    Petersen, W. A.

    2015-12-01

    NASA GPM Ground Validation (GV) activities in the early post-launch era have focused on: a) intercomparison of early-version satellite products with GV data originating from NOAA Q3, WSR-88D, and Tier-1 research ground radar (GR) and instrument networks; b) continued physical validation of GPM algorithms using recent field campaign and site-specific datasets (warm and cold season); and c) development and use of rainfall products for hydrologic validation and bridging-validation of Level II and Level III satellite products (IMERG). Intercomparisons of GPM products with Q3 rainfall and WSR-88D ground-radar (GR) data over CONUS exhibit reasonable agreement. For example, DPR radar reflectivities geo-matched to reflectivity profiles from ~60 GRs generally differ by 2 dB or less. Occasional low biases do appear in the rainwater portion of DPR Ku-band convective reflectivity profiles. In stratiform precipitation, DPR-diagnosed reflectivities and raindrop size distributions are frequently very similar to those retrieved from GR products. DPR and Combined algorithm rain-rate products compare reasonably well to each other and to Q3 at CONUS scales. GPROF2014 radiometer-based rain rates compare well to Q3 in a spatial sense (correlations of ~0.6), but GMI estimates appear to be slightly low-biased relative to Q3 and to the DPR and Combined algorithm products. The last NASA GPM GV-led field effort, OLYMPEX, will occur from Nov 2015 to Jan 2016. OLYMPEX is designed to study cold-season precipitation processes and hydrology in the orographic and oceanic domains of western Washington State. 
In addition to occasional field campaigns like OLYMPEX, continuous field measurements using multi-parameter radar and instrument networks targeted to direct validation and specific problems in physical validation (e.g., path-integrated attenuation and the impacts of drop size distribution, non-uniform beam filling and multiple scattering) are also being collected under coincident GPM core overpasses at

  9. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s, a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat à l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Sûreté Nucléaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% ²³⁵U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability to the validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The applicability of the HTC data to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)
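As context for what such benchmark suites feed into: a criticality validation compares calculated k-eff values for experiments that are critical by construction (true k-eff = 1) against unity to estimate code bias. A minimal, hypothetical sketch of that bookkeeping (not the CRISTAL or SCALE methodology, which uses sensitivity/uncertainty-weighted statistics):

```python
# Hypothetical illustration: estimate mean bias and spread of a criticality
# code from calculated k-eff values for experiments that are critical by
# construction (so the reference value is exactly 1.0).
import statistics

def keff_bias(calculated_keffs):
    """Return (mean bias, sample standard deviation) of calculated k-eff."""
    bias = statistics.mean(calculated_keffs) - 1.0
    spread = statistics.stdev(calculated_keffs)
    return bias, spread
```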

  10. G.V. Yusupov and problems of Tatar epigraphy

    Directory of Open Access Journals (Sweden)

    Mukhametshin Dzhamil G.

    2014-12-01

    The valuable contribution made by Garun Valeyevich Yusupov to the development of Tatar epigraphy, his basic achievements, and the main objectives of contemporary epigraphic science in Tatarstan are considered in this article. The expeditions conducted by G.V. Yusupov in 1946-1957 covered the majority of the Tatarstan, Chuvashia and Bashkortostan regions (ca. 150 settlements) and revealed a few hundred Tatar epigraphic monuments (mostly epitaphs) dating from the 13th-14th to the 19th - early 20th centuries. His publications were distinguished by their precision and highly informative character, becoming a model for further research. On the basis of the collected and accumulated materials, G.V. Yusupov developed a new classification of the 13th-19th-century epigraphic resources. Contemporary studies have considerably expanded the geography of the findings by including the Astrakhan, Penza and Irkutsk Regions and North Kazakhstan; they allowed tracing the development of the gravestone epitaph tradition. The main objective of Tatarstan epigraphists is to reveal and completely publish the gravestone epitaphs, and to preserve this kind of source, which provides unique information on the language, art and history of the Tatar people.

  11. Validation of the TEXSAN thermal-hydraulic analysis program

    International Nuclear Information System (INIS)

    Burns, S.P.; Klein, D.E.

    1992-01-01

    The TEXSAN thermal-hydraulic analysis program has been developed by the University of Texas at Austin (UT) to simulate buoyancy-driven fluid flow and heat transfer in spent fuel and high-level nuclear waste (HLW) shipping applications. As part of the TEXSAN software quality assurance program, the software has been subjected to a series of test cases intended to validate its capabilities. The validation tests include many physical phenomena which arise in spent fuel and HLW shipping applications. This paper describes some of the principal results of the TEXSAN validation tests and compares them to solutions available in the open literature. The TEXSAN validation effort has shown that the TEXSAN program is stable and consistent under a range of operating conditions and provides accuracy comparable to that of other heat transfer programs and evaluation techniques. The modeling capabilities and the interactive user interface employed by the TEXSAN program should make it a useful tool in HLW transportation analysis.

  12. Validation of a Poison Prevention Program.

    Science.gov (United States)

    Gill, Noel C.; Braden, Barbara T.

    Two-way analyses of variance and cross-group descriptive comparisons assessed the effectiveness of the Siop Poison Prevention Program, which included an educational program and the use of warning labels, on improving verbal and visual discrimination of poisonous and nonpoisonous products for preschool children. The study sample consisted of 156…

  13. Validating High-Stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    2002-01-01

    Makes the point that the interpretations and use of high-stakes test scores rely on policy assumptions about what should be taught and the content standards and performance standards that should be applied. The assumptions built into an assessment need to be subjected to scrutiny and criticism if a strong case is to be made for the validity of the…

  14. Electroacupuncture at Dazhui (GV14) and Mingmen (GV4) protects against spinal cord injury: the role of the Wnt/β-catenin signaling pathway

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2016-01-01

    Electroacupuncture at Dazhui (GV14) and Mingmen (GV4) on the Governor Vessel has been shown to exhibit curative effects on spinal cord injury; however, the underlying mechanism remains poorly understood. In this study, we established rat models of spinal cord injury using a modified Allen's weight-drop method. Ninety-nine male Sprague-Dawley rats were randomly divided into three equal groups: sham (laminectomy only), SCI (induction of spinal cord injury at T10), and EA (induction of spinal cord injury at T10 and electroacupuncture intervention at GV14 and GV4 for 20 minutes once a day). Rats in the SCI and EA groups were further randomly divided into the following subgroups: 1-day (n = 11), 7-day (n = 11), and 14-day (n = 11). At 1, 7, and 14 days after electroacupuncture treatment, the Basso, Beattie and Bresnahan locomotor rating scale showed obvious improvement in rat hind limb locomotor function, hematoxylin-eosin staining showed that histological damage to the injured spinal cord tissue was clearly alleviated, and immunohistochemistry and western blot analysis showed that Wnt1, Wnt3a, and β-catenin immunoreactivity and protein expression in the injured spinal cord tissue were greatly increased compared with the sham and SCI groups. These findings suggest that electroacupuncture at GV14 and GV4 upregulates Wnt1, Wnt3a, and β-catenin expression in the Wnt/β-catenin signaling pathway, exhibiting neuroprotective effects against spinal cord injury.

  15. WARM HCN IN THE PLANET FORMATION ZONE OF GV TAU N

    Energy Technology Data Exchange (ETDEWEB)

    Fuente, Asuncion [Observatorio Astronomico Nacional (OAN,IGN), Apdo 112, E-28803 Alcala de Henares (Spain); Cernicharo, Jose; Agundez, Marcelino, E-mail: a.fuente@oan.es [Centro de Astrobiologia (CSIC/INTA), Laboratory of Molecular Astrophysics, Ctra. Ajalvir km. 4, E-28850 Torrejon de Ardoz (Spain)

    2012-07-20

    The Plateau de Bure Interferometer has been used to map the continuum emission at 3.4 mm and 1.1 mm together with the J = 1→0 and J = 3→2 lines of HCN and HCO⁺ toward the binary star GV Tau. The 3.4 mm observations did not resolve the binary components, and the HCN J = 1→0 and HCO⁺ J = 1→0 line emissions trace the circumbinary disk and the flattened envelope. However, the 1.1 mm observations resolved the individual disks of GV Tau N and GV Tau S and allowed us to study their chemistry. We detected the HCN 3→2 line only toward the individual disk of GV Tau N, and the emission of the HCO⁺ 3→2 line toward GV Tau S. Simple calculations indicate that the 3→2 line of HCN is formed in the inner R < 12 AU of the disk around GV Tau N, where the HCN/HCO⁺ abundance ratio is >300. On the contrary, this ratio is <1.6 in the disk around GV Tau S. The high HCN abundance measured in GV Tau N is well explained by photochemical processes in the warm (>400 K) and dense (n > 10⁷ cm⁻³) disk surface.

  16. gb4gv: a genome browser for geminivirus

    Directory of Open Access Journals (Sweden)

    Eric S. Ho

    2017-04-01

    Background Geminiviruses (family Geminiviridae) are prevalent plant viruses that imperil agriculture globally, causing serious damage to the livelihood of farmers, particularly in developing countries. The virus evolves rapidly, owing to its single-stranded genome, resulting in worldwide circulation of diverse and viable genomes. Genomics is a prominent approach taken by researchers in elucidating the infectious mechanism of the virus. Currently, the NCBI Viral Genome website is a popular repository of viral genomes that conveniently provides researchers a centralized source of genomic information. However, unlike the genomes of living organisms, viral genomes often have peculiar characteristics that fit no single genome architecture, and imposing a unified annotation scheme on the myriad of viral genomes may downplay their hallmark features. For example, the virion of begomoviruses prevailing in America encapsulates two similar-sized circular DNA components, and both are required for systemic infection of plants. However, the bipartite components are kept separately in NCBI as individual genomes with no explicit association linking them. Thus, our goal is to build a comprehensive geminivirus genomics database, namely gb4gv, that not only preserves the genomic characteristics of the virus but also supplements biologically relevant annotations that help to interrogate this virus, for example, the targeted host, putative iterons, siRNA targets, etc. Methods We have employed manual and automatic methods to curate 508 genomes from four major genera of Geminiviridae, and 161 associated satellites obtained from the NCBI RefSeq and PubMed databases. Results These data are available for free access without registration from our website. Besides genomic content, our website provides visualization capability inherited from the UCSC Genome Browser. 
Discussion With the genomic information readily accessible, we hope that our database

  17. Building Technologies Program Multi-Year Program Plan Technology Validation and Market Introduction 2008

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-01-01

    Building Technologies Program Multi-Year Program Plan 2008 for technology validation and market introduction, including ENERGY STAR, building energy codes, technology transfer application centers, commercial lighting initiative, EnergySmart Schools, EnergySmar

  18. Forty Cases of Insomnia Treated by Suspended Moxibustion at Baihui (GV 20)

    Institute of Scientific and Technical Information of China (English)

    JU Yan-li; CHI Xu; LIU Jian-xin

    2009-01-01

    Objective: To observe the therapeutic effect of suspended moxibustion at Baihui (GV 20) for insomnia. Methods: 75 cases were randomly divided into two groups, with 40 cases in the treatment group treated by suspended moxibustion over Baihui (GV 20) and 35 cases in the control group treated by oral administration of Estazolam. Results: The difference in therapeutic effect between the two groups was not statistically significant (P>0.1). Conclusion: Suspended moxibustion at Baihui (GV 20) is as effective as Estazolam for insomnia.

  19. Anomalous increase of solar anisotropy above 150 GV in 1981-1983

    International Nuclear Information System (INIS)

    Ueno, H.; Fujii, Z.; Mori, S.; Morishita, I.; Nagashima, K.

    1985-01-01

    An analysis was carried out of the data observed with the Nagoya (surface), Misato (34 mwe) and Sakashita (80 mwe) multi-directional muon telescopes for the solar activity maximum period of 1978-1983. These data respond to primaries over the median rigidity range 60 GV to 600 GV. The amplitude observed at the Sakashita station increased in 1981-1983, especially in 1982, when it was twice as large as that in 1978-1980, while the amplitudes at the Nagoya and Misato stations were nearly the same as in 1978-1980. The uni-directional anisotropy is derived by a best-fit method assuming a flat rigidity spectrum with an upper cutoff rigidity Pu. The value of Pu obtained is 270 GV for 1981-1983 and 150 GV for 1978-1980.

  20. Validation of a proposal for evaluating hospital infection control programs.

    Science.gov (United States)

    Silva, Cristiane Pavanello Rodrigues; Lacerda, Rúbia Aparecida

    2011-02-01

    To validate the construct and discriminant properties of a hospital infection prevention and control program. The program consisted of four indicators: technical-operational structure; operational prevention and control guidelines; epidemiological surveillance system; and prevention and control activities. These indicators, with previously validated content, were applied to 50 healthcare institutions in the city of São Paulo, Southeastern Brazil, in 2009. Descriptive statistics were used to characterize the hospitals and indicator scores, and Cronbach's α coefficient was used to evaluate the internal consistency. The discriminant validity was analyzed by comparing indicator scores between groups of hospitals: with versus without quality certification. The construct validity analysis was based on exploratory factor analysis with a tetrachoric correlation matrix. The indicators for the technical-operational structure and epidemiological surveillance presented almost 100% conformity in the whole sample. The indicators for the operational prevention and control guidelines and the prevention and control activities presented internal consistency ranging from 0.67 to 0.80. The discriminant validity of these indicators indicated higher and statistically significant mean conformity scores among the group of institutions with healthcare certification or accreditation processes. In the construct validation, two dimensions were identified for the operational prevention and control guidelines: recommendations for preventing hospital infection and recommendations for standardizing prophylaxis procedures, with good correlation between the analysis units that formed the guidelines. The same was found for the prevention and control activities: interfaces with treatment units and support units were identified. 
Validation of the measurement properties of the hospital infection prevention and control program indicators made it possible to develop a tool for evaluating these programs.
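The internal-consistency figures quoted in this record (0.67 to 0.80) are Cronbach's α values. For reference, the statistic can be computed in a few lines; the data layout assumed here (respondents in rows, indicator items in columns) is an illustrative convention, not taken from the paper.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)),
# where k is the number of items. Input layout: respondents x items.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items
    item_var = scores.var(axis=0, ddof=1)    # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total score
    return k / (k - 1) * (1.0 - item_var.sum() / total_var)
```

Perfectly correlated items yield α = 1; uncorrelated items drive α toward 0.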

  1. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

    From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat à l'Energie Atomique Valduc research facility. The experiments were designed by the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution, either individually or as mixtures: ¹⁰³Rh, ¹³³Cs, natNd, ¹⁴⁹Sm, ¹⁵²Sm, and ¹⁵⁵Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of this paper is to show the potential of the experimental data to improve the ability to perform validation of full burnup credit calculations. The paper describes the three phases of the experimental program: the results of the preliminary evaluation, the calculations, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route in the CRISTAL criticality package for burnup credit applications. (authors)

  2. WARM HCN IN THE PLANET FORMATION ZONE OF GV TAU N

    International Nuclear Information System (INIS)

    Fuente, Asunción; Cernicharo, José; Agúndez, Marcelino

    2012-01-01

    The Plateau de Bure Interferometer has been used to map the continuum emission at 3.4 mm and 1.1 mm together with the J = 1→0 and J = 3→2 lines of HCN and HCO⁺ toward the binary star GV Tau. The 3.4 mm observations did not resolve the binary components, and the HCN J = 1→0 and HCO⁺ J = 1→0 line emissions trace the circumbinary disk and the flattened envelope. However, the 1.1 mm observations resolved the individual disks of GV Tau N and GV Tau S and allowed us to study their chemistry. We detected the HCN 3→2 line only toward the individual disk of GV Tau N, and the emission of the HCO⁺ 3→2 line toward GV Tau S. Simple calculations indicate that the 3→2 line of HCN is formed in the inner R < 12 AU of the disk around GV Tau N, where the HCN/HCO⁺ abundance ratio is >300. On the contrary, this ratio is <1.6 in the disk around GV Tau S. The high HCN abundance measured in GV Tau N is well explained by photochemical processes in the warm (>400 K) and dense (n > 10⁷ cm⁻³) disk surface.

  3. GPM Ground Validation: Pre to Post-Launch Era

    Science.gov (United States)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes, and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation

  4. Effect of acupuncture on regional cerebral blood flow at acupoints GV 20, GV 26, LI 4, ST 36 and SP 6 evaluated by Tc-99m ECD brain SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ho Chun; Bom, Hee Seung; Kang, Hwa Jeong; Kim, Seong Min; Jeong, Hwan Jeong; Kim, Ji Yeul [College of Medicine, Dongshin Univ., Naju (Korea, Republic of); Ahn, Soo Gi [College of Medicine, Wonkwang Univ., Iksan (Korea, Republic of)

    2000-12-01

    To evaluate the effect of acupuncture on regional cerebral blood flow (rCBF) at acupoints suggested by oriental medicine to be related to the treatment of cerebrovascular diseases, rest/acupuncture-stimulation Tc-99m ECD brain SPECT using a same-dose subtraction method was performed on 54 normal volunteers (34 males, 20 females, age range 18 to 62 years) using six paradigms: acupuncture at acupoints GV 20, GV 26, LI 4, ST 36, and SP 6, plus a control. In the control study, the needle location was a non-meridian focus 1 cm posterior to the right fibular head. All images were spatially normalized, and the differences between rest and acupuncture stimulation were statistically analyzed using SPM for Windows. Acupuncture at acupoint GV 20 increased rCBF in both anterior frontal lobes, the right frontotemporal lobes, the left anterior temporal lobe, and the left cerebellar hemisphere. Acupuncture at GV 26 increased rCBF in the left prefrontal cortex. Acupuncture at LI 4 increased rCBF in the left prefrontal and both inferior frontal lobes, the left anterior temporal lobe, and the left cerebellar hemisphere. Acupuncture at ST 36 increased rCBF in the left anterior temporal lobe, the right inferior frontal lobe, and the left cerebellum. Acupuncture at SP 6 increased rCBF in the left inferior frontal and anterior temporal lobes. In the control stimulation, no significant rCBF increase was observed. The results demonstrated a correlation between stimulation at each acupoint and an increase in rCBF in the corresponding brain areas.

  5. Effect of acupuncture on regional cerebral blood flow at acupoints GV 20, GV 26, LI 4, ST 36 and SP 6 evaluated by Tc-99m ECD brain SPECT

    International Nuclear Information System (INIS)

    Song, Ho Chun; Bom, Hee Seung; Kang, Hwa Jeong; Kim, Seong Min; Jeong, Hwan Jeong; Kim, Ji Yeul; Ahn, Soo Gi

    2000-01-01

    To evaluate the effect of acupuncture on regional cerebral blood flow (rCBF) at acupoints suggested by oriental medicine to be related to the treatment of cerebrovascular diseases. Rest/acupuncture-stimulation Tc-99m ECD brain SPECT using a same-dose subtraction method was performed on 54 normal volunteers (34 males, 20 females, age range 18 to 62 years) using six paradigms: acupuncture at acupoints GV 20, GV 26, LI 4, ST 36 and SP 6, plus a control stimulation. In the control study, needle location was chosen on a non-meridian focus 1 cm posterior to the right fibular head. All images were spatially normalized, and the differences between rest and acupuncture stimulation were statistically analyzed using SPM for Windows. Acupuncture applied at acupoint GV 20 increased rCBF in both anterior frontal lobes, the right frontotemporal lobes, the left anterior temporal lobe and the left cerebellar hemisphere. Acupuncture at GV 26 increased rCBF in the left prefrontal cortex. Acupuncture at LI 4 increased rCBF in the left prefrontal and both inferior frontal lobes, the left anterior temporal lobe and the left cerebellar hemisphere. Acupuncture at ST 36 increased rCBF in the left anterior temporal lobe, the right inferior frontal lobe, and the left cerebellum. Acupuncture at SP 6 increased rCBF in the left inferior frontal and anterior temporal lobes. In the control stimulation, no significant rCBF increase was observed. The results demonstrated a correlation between stimulation at each acupoint and an increase in rCBF in the corresponding brain areas.

  6. Assessment of Factors Associated with the Safety Depth of GV15 Yamen

    Directory of Open Access Journals (Sweden)

    Park Soo-Jung

    2014-03-01

    Objectives: Yamen is the fifteenth acupoint of the Governor Vessel Meridian (GV15). It is anatomically close to the medulla oblongata, so establishing the safety depth of the acupoint is very important. However, few studies on the safety depth of GV15 have been done. Methods: This study measured the safety depth of GV15 by using magnetic resonance imaging (MRI) scans and analyzed the factors affecting the safety depth through multiple regression analyses. The study was carried out for patients who had a brain MRI scan while visiting Jeonju Wonkwang Hospital, Korea. The shortest distance between the glabella and the occipital protuberance (DGO), the horizontal distance between the glabella and the back of the head (DGB) and the dangerous depth (DD) were measured from the sagittal views of the MR images. The DD is the horizontal distance from the skin's surface at GV15 to the spinal dura mater. Results: The model suggested that the safety depth (SD) was significantly associated with gender (β = 0.474, P < 0.0001), DGO (β = 0.272, P = 0.027), and BMI (β = 0.249, P = 0.005), and that the combination of the three variables can explain the SD, with R2 = 0.571 (Table 3). A longer SD was associated with male gender and with greater BMI and DGO. Conclusion: Based on a multiple regression analysis, this study suggests that gender, BMI and DGO may be important factors when the SD of GV15 is considered clinically.
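The multiple regression behind standardized β coefficients and R² figures of this kind can be sketched directly with least squares. The data below are simulated under assumed coefficients and say nothing about the actual anatomy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated predictors: gender (0 = female, 1 = male), DGO (mm), BMI
n = 200
gender = rng.integers(0, 2, n).astype(float)
dgo = rng.normal(180.0, 10.0, n)
bmi = rng.normal(24.0, 3.0, n)

# Simulated safety depth (mm) with illustrative, made-up coefficients
sd = 40.0 + 6.0 * gender + 0.15 * dgo + 0.5 * bmi + rng.normal(0.0, 3.0, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), gender, dgo, bmi])
beta, *_ = np.linalg.lstsq(X, sd, rcond=None)

# R^2 and standardized coefficients, comparable to the reported betas
resid = sd - X @ beta
r2 = 1.0 - resid.var() / sd.var()
std_beta = beta[1:] * np.array([gender.std(), dgo.std(), bmi.std()]) / sd.std()
print(round(float(r2), 3), np.round(std_beta, 3))
```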

  7. Use of gvSIG and SEXTANTE for the delineation of periurban areas

    Directory of Open Access Journals (Sweden)

    Gabriele Nolè

    2010-03-01

    Use of gvSIG and SEXTANTE for the delineation of periurban areas. The periurban fringe is the portion of land with characteristics of urbanization that can be considered neither urban nor rural. These areas are often characterized by building expectancy, whose detection requires careful consideration of several territorial and environmental variables. A spatial analysis model based on Kernel Density Estimation (KDE) was implemented for the detection of periurban areas. The model is tested in the province of Potenza using gvSIG and SEXTANTE on Ubuntu Linux.
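The KDE-based delineation this abstract describes amounts to estimating a building-density surface and banding it by thresholds. A toy version follows; the point pattern, the default bandwidth, and both quantile thresholds are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Toy building locations (km): a dense urban core plus scattered rural points
urban_pts = rng.normal(0.0, 0.5, (2, 300))
rural_pts = rng.uniform(-5.0, 5.0, (2, 100))
points = np.hstack([urban_pts, rural_pts])

# Kernel Density Estimation of building density over a regular grid
kde = gaussian_kde(points)
xs = np.linspace(-5.0, 5.0, 101)
xx, yy = np.meshgrid(xs, xs)
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

# Band the density with two illustrative quantile thresholds: the top band
# is urban, the middle band is the periurban fringe, the rest is rural.
hi, lo = np.quantile(density, [0.95, 0.75])
urban_mask = density >= hi
periurban_mask = (density >= lo) & (density < hi)
print(int(urban_mask.sum()), int(periurban_mask.sum()))
```

In a GIS such as gvSIG the same banding would be applied to a raster of building or population density produced by a SEXTANTE kernel-density algorithm.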


  9. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    Energy Technology Data Exchange (ETDEWEB)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related

  10. Validation experience with the core calculation program karate

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Makai, M.; Maraczy, Cs.

    1995-01-01

    A relatively fast and easy-to-handle modular code system named KARATE-440 has been elaborated for steady-state operational calculations of VVER-440 type reactors. It is built up from cell, assembly and global calculations. In the framework of the program, the neutron-physical and thermohydraulic processes of the core at normal startup, steady state and slow transients can be simulated. The verification and validation of the global code have been performed recently. The test cases include mathematical benchmarks and measurements on operating VVER-440 units. A summary of the results, such as startup parameters, boron letdown curves, and radial and axial power distributions of some cycles of the Paks NPP, is presented. (author)

  11. Correction: Graillot, B.; et al. Progressive Adaptation of a CpGV Isolate to Codling Moth Populations Resistant to CpGV-M. Viruses 2014, 6, 5135–5144

    Directory of Open Access Journals (Sweden)

    Benoît Graillot

    2015-12-01

    In our article “Progressive Adaptation of a CpGV Isolate to Codling Moth Populations Resistant to CpGV-M” (Viruses 2014, 6, 5135–5144; doi:10.3390/v6125135) [1] we obtained resistance values of the codling moth, Cydia pomonella, RGV laboratory colony [2], when challenged with Cydia pomonella granulovirus, Mexican isolate (CpGV-M), that were lower than those previously published [2]. Careful analysis of both the RGV colony and the CpGV-M virus stock used led to the realization that a low-level contamination of this virus stock with CpGV-R5 had occurred. We have made new tests with a verified stock, and the results are now in agreement with those previously published.

  12. Constraining the disk masses of the class I binary protostar GV Tau

    Energy Technology Data Exchange (ETDEWEB)

    Sheehan, Patrick D.; Eisner, Josh A., E-mail: psheehan@email.arizona.edu [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States)]

    2014-08-10

    We present new spatially resolved 1.3 mm imaging with CARMA of the GV Tau system. GV Tau is a Class I binary protostar system in the Taurus Molecular Cloud, the components of which are separated by 1.2 arcseconds. Each protostar is surrounded by a protoplanetary disk, and the pair may be surrounded by a circumbinary envelope. We analyze the data using detailed radiative transfer modeling of the system. We create synthetic protostar model spectra, images, and visibilities and compare them with CARMA 1.3 mm visibilities, a Hubble Space Telescope near-infrared scattered light image, and broadband spectral energy distributions from the literature to study the disk masses and geometries of the GV Tau disks. We show that the protoplanetary disks around GV Tau fall near the lower end of estimates of the Minimum Mass Solar Nebula, and may have just enough mass to form giant planets. When added to the sample of Class I protostars from Eisner, we confirm that Class I protostars are on average more massive than their Class II counterparts. This suggests that substantial dust grain processing occurs between the Class I and Class II stages, and may help to explain why the Class II protostars do not appear to have, on average, enough mass in their disks to form giant planets.

  13. Effect of Nanoparticles on the Survival and Development of Vitrified Porcine GV Oocytes.

    Science.gov (United States)

    Li, W J; Zhou, X L; Liu, B L; Dai, J J; Song, P; Teng, Y

    BACKGROUND: Some mammalian oocytes have been successfully cryopreserved by vitrification. However, the survival and developmental rates of vitrified oocytes are still low. The incorporation of nanoparticles into cryoprotectant (CPA) may improve the efficiency of vitrification by changing the properties of solutions. The toxicity of different concentrations of hydroxyapatite (HA), silicon dioxide (SiO2), aluminum oxide (Al2O3) and titanium dioxide (TiO2) nanoparticles (20 nm in diameter) to oocytes was tested and the toxicity threshold value of each nanoparticle was determined. Porcine GV oocytes were vitrified in optimized nano-CPA, and the effects of the diameter and concentration of nanoparticles on the survival and developmental rates of porcine GV oocytes were compared. HA nanoparticles demonstrated the least toxicity among the four nanoparticles, and the developmental rate of GV-stage porcine oocytes was 100% when its concentration was lower than 0.5%. By adding 0.1% HA into the vitrification solution (VS), the developmental rate of GV-stage porcine oocytes (22%) was significantly higher than in the other groups. The effect of vitrification in nano-CPA on oocytes was related to the concentration of HA nanoparticles rather than their size. By adding 0.05% HA nanoparticles (60 nm in diameter), the developmental rate increased dramatically from 14.7% to 30.4%. Nano-cryopreservation offers a new way to improve the survival and development of vitrified oocytes, but the limitations of this technology should not be ignored.

  14. Development and validation of the computer program TNHXY

    International Nuclear Information System (INIS)

    Xolocostli M, V.; Valle G, E. del; Alonso V, G.

    2003-01-01

    This work describes the development and validation of the computer program TNHXY (Neutron Transport with Nodal Hybrid schemes in X-Y geometry), which solves the discrete-ordinates neutron transport equations using a discontinuous bi-linear (DBiL) nodal hybrid method. One of the immediate applications of TNHXY is in the analysis of nuclear fuel assemblies, in particular those of BWRs. Its validation was carried out by reproducing results for test or benchmark problems that other authors have solved using other numerical techniques. This ensures that the program will provide results of similar accuracy for other problems of the same type. To accomplish this, two benchmark problems were solved. The first consists of a BWR fuel assembly in a 7x7 array, without and with a control rod. The results obtained with TNHXY are consistent with those reported for the TWOTRAN code. The second benchmark problem is a Mixed Oxide (MOX) fuel assembly in a 10x10 array. This problem is known as the WPPR benchmark problem of the NEA Data Bank, and the results are compared with those obtained with commercial codes such as HELIOS, MCNP-4B and CPM-3. (Author)
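TNHXY solves the discrete-ordinates transport equations with a nodal hybrid scheme in x-y geometry. As a much simpler illustration of the same family of methods, here is a one-group, one-dimensional S8 slab solver using diamond differencing and source iteration; all cross sections and the source are made-up values, and TNHXY's actual DBiL scheme is considerably more elaborate:

```python
import numpy as np

# Slab discretization and illustrative one-group data
nx, width = 100, 10.0                 # cells, slab width (cm)
dx = width / nx
sig_t, sig_s, q = 1.0, 0.5, 1.0       # total, scattering, external source

# S8 Gauss-Legendre quadrature over mu in [-1, 1] (weights sum to 2)
mu, w = np.polynomial.legendre.leggauss(8)

phi = np.zeros(nx)                    # scalar flux
for _ in range(500):                  # source (scattering) iteration
    src = 0.5 * (sig_s * phi + q)     # isotropic emission per unit mu
    phi_new = np.zeros(nx)
    for m, wm in zip(mu, w):
        psi_in = 0.0                  # vacuum boundary condition
        cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
        for i in cells:               # transport sweep along direction mu
            a = 2.0 * abs(m) / dx
            psi_c = (src[i] + a * psi_in) / (sig_t + a)  # diamond difference
            psi_in = 2.0 * psi_c - psi_in                # outgoing edge flux
            phi_new[i] += wm * psi_c
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

# Deep inside the slab the flux approaches q / (sig_t - sig_s) = 2.0
print(round(float(phi[nx // 2]), 4))
```

The sweep-plus-source-iteration structure is common to discrete-ordinates codes; nodal methods such as TNHXY's replace the per-cell diamond relation with higher-order spatial expansions in two dimensions.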

  15. hTERT peptide fragment GV1001 demonstrates radioprotective and antifibrotic effects through suppression of TGF‑β signaling.

    Science.gov (United States)

    Chen, Wei; Shin, Ki-Hyuk; Kim, Sangjae; Shon, Won-Jun; Kim, Reuben H; Park, No-Hee; Kang, Mo K

    2018-06-01

    GV1001 is a 16‑amino acid peptide derived from the human telomerase reverse transcriptase (hTERT) protein (616‑626; EARPALLTSRLRFIPK), which lies within the reverse transcriptase domain. Originally developed as an anticancer vaccine, GV1001 demonstrates diverse cellular effects, including anti‑inflammatory, tumor suppressive and antiviral effects. In the present study, the radioprotective and antifibrotic effects of GV1001 were demonstrated through suppressing transforming growth factor‑β (TGF‑β) signaling. Proliferating human keratinocytes underwent premature senescence upon exposure to ionizing radiation (IR), however, treatment of cells with GV1001 allowed the cells to proliferate and showed a reduction in senescent phenotype. GV1001 treatment notably increased the levels of Grainyhead‑like 2 and phosphorylated (p‑)Akt (Ser473), and reduced the activation of p53 and the level of p21/WAF1 in irradiated keratinocytes. It also markedly suppressed the level of TGF‑β signaling molecules, including p‑small mothers against decapentaplegic (Smad)2/3 and Smad4, and TGF‑β target genes, including zinc finger E‑box binding homeobox 1, fibronectin, N‑cadherin and Snail, in irradiated keratinocytes. Furthermore, GV1001 suppressed TGF‑β signaling in primary human fibroblasts and inhibited myofibroblast differentiation. Chromatin immunoprecipitation revealed that GV1001 suppressed the binding of Smad2 on the promoter regions of collagen type III α1 chain (Col3a1) and Col1a1. In a dermal fibrosis model in vivo, GV1001 treatment notably reduced the thickness of fibrotic lesions and the synthesis of Col3a1. These data indicated that GV1001 ameliorated the IR‑induced senescence phenotype and tissue fibrosis by inhibiting TGF‑β signaling and may have therapeutic effects on radiation‑induced tissue damage.

  16. Development of a GIS for mobile devices using gvSIG Mobile

    OpenAIRE

    Pérez Álvarez, Francisco

    2012-01-01

    This project begins with an introduction to the world of GIS based on open-source software, as is the case of gvSIG Mobile and gvSIG Desktop. Building on these two programs, we created a GIS application for mobile devices (PDAs and smartphones) with which it is possible to update cartography in real time, directly in the field. To get fully into the subject of the project, a detailed analysis was carried out of the needs that could exist…

  17. libLocation: access to location devices for gvSIG Desktop and Mobile

    OpenAIRE

    Jordán Aldasorro, Juan G.; Planells Jiménez, Manuel

    2009-01-01

    Initially integrated into the gvSIG Mobile pilot, the libLocation library aims to provide the gvSIG Desktop and gvSIG Mobile projects with transparent access to location sources. The library builds on the JSR-179 (Location API for J2ME) and JSR-293 (Location API for J2ME v2.0) specifications, providing a uniform interface to different location sources through high-level functions. It also extends the functionalit…

  18. [Acupuncture at Baihui(GV 20) and Shenting(GV 24) combined with basic treatment and regular rehabilitation for post-stroke cognitive impairment:a randomized controlled trial].

    Science.gov (United States)

    Zhan, Jie; Pan, Ruihuan; Guo, Youhua; Zhan, Lechang; He, Mingfeng; Wang, Qiuchun; Chen, Hongxia

    2016-08-12

    To observe the clinical effect of acupuncture at Baihui (GV 20) and Shenting (GV 24) combined with rehabilitation for post-stroke cognitive impairment (PSCI). Fifty patients with PSCI were randomly assigned to an observation group and a control group, 25 cases in each one. In the control group, basic treatment and regular rehabilitation were applied. In the observation group, acupuncture at Baihui (GV 20) and Shenting (GV 24) plus the same therapies as the control group was used for four continuous weeks, once a day and five times a week. Mini-mental state examination (MMSE) and Montreal cognitive assessment (MoCA) scores were observed before and after treatment in the two groups. After treatment, the scores of MMSE and MoCA were both significantly improved. Acupuncture at Baihui (GV 20) and Shenting (GV 24) combined with basic treatment and regular rehabilitation can obviously improve the cognitive function of PSCI, and the effect is superior to that of basic treatment and regular rehabilitation alone.

  19. The generalized algebraic modal combination (GAC) rule validation program

    International Nuclear Information System (INIS)

    Mertens, P.G.; Culot, M.V.; Sahgal, S.; Tinic, S.

    1991-01-01

    With R.G. 1.92 the NRC requires the use of the absolute values of the modal responses when performing response-spectrum modal combination with coupling factors derived from the current heuristic, stationary or pseudo-stationary random vibration models. This results in overly conservative calculations in the case of closely spaced modes of opposite signs, a case frequently encountered in dynamic analyses, in particular when systems with close modal frequencies have a small mass ratio. A new generalized algebraic combination (GAC) formula and its associated coupling factor have been theoretically derived by the first author. It is based on a non-stationary, non-white-noise random vibration model which fully accounts for all the time- and frequency-dependent aspects of the time histories. This should allow the conservative use of algebraic signs in the modal combination over the whole frequency range, and allow a derogation from the current NRC R.G. 1.92 practice of using absolute signs. The use of the industry-wide accepted RS method with the GAC rule will result in more economical and safer NPPs through the reduction of an excessive and unrealistic number of seismic restraints and the avoidance of prematurely fatigued plants. It is envisaged to use the GAC seismic response combination method for the evaluation of the seismic response of auxiliary class one lines attached to the primary coolant loop piping of the Beznau 1 and 2 nuclear power plants. Since the plant is in operation, it is imperative to use a methodology which is conservative but still as realistic as possible. The paper presents an introduction to the GAC rule and some aspects of the validation program, which will jointly be undertaken by WESI and NOK for obtaining acceptance by the Swiss Safety Authorities for a seismic qualification program. (author)
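The abstract does not reproduce the GAC formula itself, but the effect of signed versus absolute modal combination can be illustrated with the standard CQC rule (Der Kiureghian coupling factors for equal modal damping); the modal responses and frequencies below are invented:

```python
import numpy as np

def cqc(responses, freqs_hz, zeta=0.05):
    """CQC modal combination with Der Kiureghian coupling factors
    (equal modal damping ratio zeta)."""
    r = np.asarray(responses, dtype=float)
    w = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    beta = w[:, None] / w[None, :]                 # frequency ratios
    rho = (8.0 * zeta**2 * (1.0 + beta) * beta**1.5
           / ((1.0 - beta**2)**2 + 4.0 * zeta**2 * beta * (1.0 + beta)**2))
    return float(np.sqrt(r @ rho @ r))

# Two closely spaced modes with responses of opposite sign (invented numbers)
resp = [10.0, -9.0]
freqs = [5.0, 5.1]                                 # Hz
signed = cqc(resp, freqs)                          # algebraic signs kept
absolute = cqc(np.abs(resp), freqs)                # R.G. 1.92-style absolute
print(round(signed, 2), round(absolute, 2))
```

Keeping the algebraic signs lets the strongly coupled, opposite-signed modes cancel, which is exactly the conservatism the GAC work targets.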

  20. Generalizability of GMAT[R] Validity to Programs outside the U.S.

    Science.gov (United States)

    Talento-Miller, Eileen

    2008-01-01

    This study explores the predictive validity of GMAT[R] scores for predicting performance in graduate management programs outside the United States. Results suggest that the validity estimates based on the combination of GMAT[R] scores were about a third of a standard deviation higher for non-U.S. programs compared with existing data on U.S.…

  1. [Clinical efficacy observation of acupuncture at suliao (GV 25) on improving regain of consciousness from coma in severe craniocerebral injury].

    Science.gov (United States)

    Xu, Kai-Sheng; Song, Jian-Hua; Huang, Tiao-Hua; Huang, Zhi-Hua; Yu, Lu-Chang; Zheng, Wei-Ping; Chen, Xiao-Shan; Liu, Chuan

    2014-06-01

    To compare the differences in clinical therapeutic effects between acupuncture at Suliao (GV 25) and at Shuigou (GV 26) on promoting regain of consciousness from coma in severe craniocerebral injury. Based on regular emergency treatments of neurosurgery, eighty-two cases of craniocerebral injury who were under stable condition were randomly divided into an observation group (42 cases) and a control group (40 cases). Suliao (GV 25) was selected as the main acupoint, while Laogong (PC 8) and Yongquan (KI 1), etc. were selected as adjuvant acupoints, and Neiguan (PC 6), Sanyinjiao (SP 6), Yifeng (TE 17) and Wangu (GB 12), etc. were selected as matching acupoints in the observation group, where a strong needle manipulation was applied to improve the regain of consciousness. The main acupoint of Shuigou (GV 26), along with adjuvant and matching acupoints identical to those in the observation group, was selected in the control group, with identical strong needle manipulation. The treatment was given once a day in both groups, five times per week, and ten times were considered one session. The immediate clinical symptoms after acupuncture at Suliao (GV 25) and Shuigou (GV 26) were observed, as well as the Glasgow coma scale (GCS) before the treatment and after 45 days and 90 days of treatment, to assess the resuscitation time and rate. The clinical efficacy was also compared between the groups. The occurrence rate of the sneezing reflex was 85.7% (36/42) in the observation group, which was higher than the 25.0% (10/40) in the control group. Compared with before the treatment, GCS was improved after the treatment in both groups. The effect of acupuncture at Suliao (GV 25) on promoting regain of consciousness from coma in severe craniocerebral injury is positive; it could specifically induce the sneezing reflex and stimulate the respiratory center, and has a more obvious effect than acupuncture at Shuigou (GV 26).

  2. Sustained Implementation Support Scale: Validation of a Measure of Program Characteristics and Workplace Functioning for Sustained Program Implementation.

    Science.gov (United States)

    Hodge, Lauren M; Turner, Karen M T; Sanders, Matthew R; Filus, Ania

    2017-07-01

    An evaluation measure of enablers of and inhibitors to sustained evidence-based program (EBP) implementation may provide a useful tool to enhance organizations' capacity. This paper outlines preliminary validation of such a measure. An expert informant and consumer feedback approach was used to tailor constructs from two existing measures assessing key domains associated with sustained implementation. Validity and reliability were evaluated for an inventory composed of five subscales: Program benefits, Program burden, Workplace support, Workplace cohesion, and Leadership style. Exploratory and confirmatory factor analysis with a sample of 593 practitioners of the Triple P-Positive Parenting Program led to a 28-item scale with good reliability and good convergent, discriminant, and predictive validity. Practitioners sustaining implementation at least 3 years post-training were more likely to have supervision/peer support, reported higher levels of program benefit, workplace support, and positive leadership style, and lower program burden compared to practitioners who were non-sustainers.
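Subscale reliability claims of this kind usually rest on internal-consistency statistics such as Cronbach's alpha. A sketch on simulated Likert-style responses — the one-factor item model and all numbers are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated responses: 300 respondents x 5 items driven by one latent factor
n_resp, n_items = 300, 5
latent = rng.normal(0.0, 1.0, n_resp)
items = latent[:, None] + rng.normal(0.0, 0.8, (n_resp, n_items))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)
print(round(float(alpha), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a subscale.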

  3. A Novel Visual Data Mining Module for the Geographical Information System gvSIG

    Directory of Open Access Journals (Sweden)

    Romel Vázquez-Rodríguez

    2013-01-01

    The exploration of large GIS models containing spatio-temporal information is a challenge. In this paper we propose the integration of scientific visualization (ScVis) techniques into geographic information systems (GIS) as an alternative for the visual analysis of data. Providing GIS with such tools improves the analysis and understanding of datasets with very low spatial density and makes it possible to find correlations between variables in time and space. In this regard, we present a new visual data mining tool for the GIS gvSIG. This tool has been implemented as a gvSIG module and contains several ScVis techniques for multiparameter data with a wide range of possibilities to explore the data interactively. The developed module is a powerful visual data mining and data visualization tool for obtaining knowledge from multiple datasets in time and space. A real case study with meteorological data from Villa Clara province (Cuba) is presented, where the implemented visualization techniques were used to analyze the available datasets. Although it is tested with meteorological data, the developed module is of general application in the sense that it can be used in multiple application fields related to the Earth Sciences.

  4. Sector structure of the interplanetary magnetic field and anisotropy of 50-1000 GV cosmic radiation

    International Nuclear Information System (INIS)

    Erdoes, G.; Kota, J.

    1978-12-01

    It is demonstrated that the main features of high-rigidity solar originated anisotropy can be explained in terms of regular particle motion - without diffusion being involved - in the large scale interplanetary magnetic field (IMF). A simple model of the IMF is adopted with a corotating warped current sheet separating the two polarities. The warped shape of the current sheet is essential in producing anisotropy. By calculating energy loss along various computed trajectories, the resulting sidereal, solar and antisidereal variations are determined for both the pre- and post-1969 epochs. The predicted variations turn out fairly stable against changing the parameters of the IMF model. The sense and amplitude of the polarity dependent sidereal vectors are compatible with those established experimentally. Also reproduced is the prediction of corotation as well as the 3 hr phase of the semidiurnal wave. The corotation is found to be near perfect at 50 GV while it decreases at 100 GV. The model presented accounts for the change of solar daily variation taking place in 1969. (author)

  5. Long-term safety and efficacy of autologous platelet lysate drops for treatment of ocular GvHD.

    Science.gov (United States)

    Pezzotta, S; Del Fante, C; Scudeller, L; Rossi, G C; Perotti, C; Bianchi, P E; Antoniazzi, E

    2017-01-01

    Current ocular GvHD (oGvHD) treatments are suboptimal. We investigated the safety and efficacy of long-term continuous treatment with autologous platelet lysate (PL) drops in patients with oGvHD Dry Eye Syndrome (DES) score 2-3 refractory to topical conventional therapy. Ophthalmic evaluation was performed at 6-month intervals. Symptoms were assessed using the Glaucoma Symptom Scale (GSS). Patients were defined as 'responders' when showing a reduction of at least one grade on the National Institutes of Health Eye Score from baseline at the 6-month visit. Thirty-one patients were included, and 16 (51%) completed 36 months of follow-up (range 6.5-72.7). At 6 months all patients were classified as responders: median GSS symptom score decreased from 70 to 41 (33 at 36 months), and median GSS function score decreased from 68 to 46 (33 at 36 months) (all P<0.001). Median Tear Break-Up Time improved from 3 to 6 s after 6 months and was maintained over time. All signs improved at 6 and 36 months (clinical and statistical significance). No severe adverse events occurred. Long-term treatment with PL drops is safe and effective for oGvHD and can be an efficient therapy option from the initial stages of oGvHD, preventing permanent ocular impairment and improving quality of life.

  6. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    Science.gov (United States)

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  7. An introduction to use of the USACE HTRW program's data validation guidelines engineering manual

    International Nuclear Information System (INIS)

    Becker, L.D.; Coats, K.H.

    1994-01-01

    Data validation has been defined by regulatory agencies as a systematic process (consisting of data editing, screening, checking, auditing, verification, certification, and review) for comparing data to established criteria in order to provide assurance that data are adequate for their intended use. A problem for the USACE HTRW Program was that clearly defined data validation guidelines were available only for analytical data quality level IV. These functional data validation guidelines were designed for validation of data produced using protocols from the US EPA's Contract Laboratory Program (CLP). Unfortunately, USACE experience demonstrated that these level IV functional data validation guidelines were being used to validate data not produced under the CLP. The resulting data validation product was less than satisfactory for USACE HTRW needs. Therefore, the HTRW-MCX initiated an Engineering Manual (EM) for validation of analytical data quality levels other than IV. This EM is entitled "USACE HTRW Data Validation Guidelines." Use of the EM is required for validation of analytical data relating to projects under the jurisdiction of the Department of the Army, Corps of Engineers, Hazardous, Toxic, and Radioactive Waste Program. These data validation guidelines include procedures and checklists for technical review of analytical data at quality levels I, II, III, and V.

  8. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    Science.gov (United States)

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although content validation of a program is usually conducted with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post-marriage training program provided for newly married couples. Content validity is a preliminary step toward obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether, 24 expert panelists were involved in the three qualitative and quantitative panels: 6 in the first, item-development panel; 12 in the item-reduction panel, 4 of whom also served on the first panel; and 10 executive experts in the last panel, organized to evaluate the CVR, CVI, and face validity of 57 educational objectives. Results: The raw content of the post-marriage program had been written by professional experts of the Ministry of Health; through the qualitative expert panel, the content was further developed by generating 3 new topics and refining one topic and its respective content. In the second panel, six further objectives were deleted, three for falling below the agreement cut-off point and three by experts' consensus. In the quantitative assessment, the content validity ratios of all items were above 0.8, and their content validity indices (0.8-1) were completely appropriate. Conclusion: This study provides good evidence for validation and accreditation of the national post-marriage program planned for newly married couples in the country's health centers in the near future. PMID:26056672
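The abstract above reports CVR and CVI values without defining how they are computed. The sketch below shows the two standard indices usually meant by these abbreviations, Lawshe's content validity ratio (CVR) and the item-level content validity index (I-CVI); the panel sizes and ratings are hypothetical illustrations, not the study's data.

```python
# Standard content-validity indices: Lawshe's CVR and the item-level CVI.
# All panel ratings below are hypothetical, not the study's data.

def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_experts / 2
    return (n_essential - half) / half

def i_cvi(relevance_ratings: list[int]) -> float:
    """I-CVI: proportion of experts rating the item 3 or 4 on a
    4-point relevance scale."""
    agree = sum(1 for r in relevance_ratings if r >= 3)
    return agree / len(relevance_ratings)

# Hypothetical example: 10 executive experts, 9 judge an objective essential.
print(round(cvr(9, 10), 2))                              # 0.8
print(round(i_cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 2]), 2))   # 0.9
```

With 10 raters, a CVR of 0.8 and an I-CVI of 0.9 would both clear the commonly used acceptance thresholds, consistent with the "above 0.8" criterion the abstract reports.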

  9. AVal: an Extensible Attribute-Oriented Programming Validator for Java

    OpenAIRE

    Noguera , Carlos; Pawlak , Renaud

    2007-01-01

    International audience; Attribute Oriented Programming (@OP) permits programmers to extend the semantics of a base program by annotating it with attributes that are related to a set of concerns. Examples of this are applications that rely on XDoclet (such as Hibernate) or, with the release of Java 5's annotations, EJB3. The set of attributes that implements a concern defines a Domain Specific Language, and as such, imposes syntactic and semantic rules on the way that attributes are included i...

  10. Validation of Linguistic and Communicative Oral Language Tests for Spanish-English Bilingual Programs.

    Science.gov (United States)

    Politzer, Robert L.; And Others

    1983-01-01

    The development, administration, and scoring of a communicative test and its validation with tests of linguistic and sociolinguistic competence in English and Spanish are reported. Correlation with measures of home language use and school achievement are also presented, and issues of test validation for bilingual programs are discussed. (MSE)

  11. Intelligent Testing of Traffic Light Programs: Validation in Smart Mobility Scenarios

    OpenAIRE

    Javier Ferrer; José García-Nieto; Enrique Alba; Francisco Chicano

    2016-01-01

    In smart cities, the use of intelligent automatic techniques to find efficient cycle programs of traffic lights is becoming an innovative front for traffic flow management. However, this automatic programming of traffic lights requires a validation process of the generated solutions, since they can affect the mobility (and security) of millions of citizens. In this paper, we propose a validation strategy based on genetic algorithms and feature models for the automatic generation of different ...

  12. The period analysis of V418 AQL, SU BOO, RV CVn, CR CAS, GV CYG, V432 PER, and BD+42 2782

    International Nuclear Information System (INIS)

    Zasche, P.; Wolf, M.; Kučáková, H.; Uhlař, R.

    2014-01-01

    The minimum timings of eclipsing binaries V418 Aql, SU Boo, RV CVn, CR Cas, GV Cyg, V432 Per, and BD+42 2782 were collected and analyzed. Their long-term behavior was studied via period analysis, revealing a periodic term in eclipse times. We derived 576 new times of minimum. Hence, to describe the periodic variation, a third-body hypothesis was proposed and the resulting orbital periods are as follows: 70, 7.4, 53, 37, 27, 53, and 18 yr, respectively. For the system V432 Per an additional 9.5 yr variation was also found. The predicted minimum masses of these distant bodies were calculated and their detectability discussed. The light curves of SU Boo and RV CVn were analyzed using the PHOEBE program, resulting in physical parameters of the components. New variable stars in the field of V418 Aql were discovered.
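The period analysis summarized above rests on the classic O-C ("observed minus calculated") technique: each observed minimum timing is compared against a linear ephemeris, and a periodic trend in the residuals suggests a light-time effect from a third body. A minimal sketch follows, with a purely hypothetical ephemeris and timings (not the paper's measurements).

```python
# Minimal sketch of the O-C ("observed minus calculated") technique that
# underlies eclipse-timing period analysis. The ephemeris (t0, period) and
# the timings are hypothetical illustrations.

def o_minus_c(t_obs: float, t0: float, period: float) -> float:
    """Residual of an observed minimum against the linear ephemeris
    T(E) = t0 + period * E, with E the nearest integer cycle count."""
    epoch = round((t_obs - t0) / period)
    return t_obs - (t0 + period * epoch)

t0, period = 2450000.0, 1.25   # hypothetical HJD zero-point and period (days)
timings = [2450001.249, 2450012.502, 2450101.256]
residuals = [o_minus_c(t, t0, period) for t in timings]
# A sinusoidal trend in these residuals would hint at a light-time effect
# from a third body; here we only print them.
print([round(r, 3) for r in residuals])
```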

  13. Immunological Effect of aGV Rabies Vaccine Administered Using the Essen and Zagreb Regimens: A Double-Blind, Randomized Clinical Trial.

    Science.gov (United States)

    Miao, Li; Shi, Liwei; Yang, Yi; Yan, Kunming; Sun, Hongliang; Mo, Zhaojun; Li, Li

    2018-04-01

    This study evaluated the immunological effect of an aGV rabies virus strain using the Essen and Zagreb immunization programs. A total of 1,944 subjects were enrolled and divided into three groups: the Essen test group, Essen control group, and Zagreb test group. Neutralizing antibody levels and antibody seroconversion rates were determined at 7 and 14 days after the initial inoculations and then 14 days after the final inoculation in all of the subjects. The seroconversion rates for the Essen test group, Essen control group, and Zagreb test group, which were assessed 7 days after the first dosing in a susceptible population, were 35.74%, 26.92%, and 45.49%, respectively, and at 14 days, the seroconversion rates in this population were 100%, 100%, and 99.63%, respectively. At 14 days after the final dosing, the seroconversion rates were 100% in all three of the groups. The neutralizing serum antibody levels of the Essen test group, Essen control group, and Zagreb test group at 7 days after the first dosing in the susceptible population were 0.37, 0.26, and 0.56 IU/mL, respectively, and at 14 days after the initial dosing, these levels were 16.71, 13.85, and 16.80 IU/mL. At 14 days after the final dosing, the neutralizing antibody levels were 22.9, 16.3, and 18.62 IU/mL, respectively. The results of this study suggested that the aGV rabies vaccine using the Essen program resulted in a good serum immune response, and the seroconversion rates and the neutralizing antibody levels generated with the Zagreb regimen were higher than those with the Essen regimen when measured 7 days after the first dose.

  14. Refinement and Validation of a Military Emotional Intelligence Training Program

    Science.gov (United States)

    2017-05-01

    program development is the Bodily Reaction Tool. This is an activity that helps learners to understand that experiencing an emotion can induce differing... emotions throughout our bodies. Figure 3 shows the evidence-based body map figure and a preliminary beta version of the tool that is being developed...and an emotion assessment grid (see Figure 5). The Behavior Mind Map is an activity that helps learners to see that there are many different

  15. Validation studies of the DOE-2 Building Energy Simulation Program. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, R.; Winkelmann, F.

    1998-06-01

    This report documents many of the validation studies (Table 1) of the DOE-2 building energy analysis simulation program that have taken place since 1981. Results for several versions of the program are presented, with the most recent study conducted in 1996 on version DOE-2.1E and the most distant study conducted in 1981 on version DOE-1.3. This work is part of an effort related to continued development of DOE-2, particularly in its use as a simulation engine for new specialized versions of the program, such as the recently released RESFEN 3.1. RESFEN 3.1 is a program specifically dealing with analyzing the energy performance of windows in residential buildings. The intent in providing the results of these validation studies is to give potential users of the program a high degree of confidence in the calculated results. Validation studies in which calculated simulation data are compared to measured data have been conducted throughout the development of the DOE-2 program. Discrepancies discovered during the course of such work have resulted in improvements in the simulation algorithms. Table 2 provides a listing of additions and modifications that have been made to various versions of the program since version DOE-2.1A. One of the most significant recent changes in the program occurred with version DOE-2.1E: an improved algorithm for calculating the outside surface film coefficient was implemented. In addition, integration of the WINDOW 4 program was accomplished, resulting in improved ability to analyze window energy performance. Validation and verification of a program as sophisticated as DOE-2 must necessarily be limited because of the approximations inherent in the program. For example, the most accurate model of the heat transfer processes in a building would include a three-dimensional analysis. To justify such detailed algorithmic procedures would correspondingly require detailed information describing the building and/or HVAC system and energy plant parameters.

  16. EPINEPHRINE OR GV-26 ELECTRICAL STIMULATION REDUCES INHALANT ANESTHETIC RECOVERY TIME IN COMMON SNAPPING TURTLES (CHELYDRA SERPENTINA).

    Science.gov (United States)

    Goe, Alexandra; Shmalberg, Justin; Gatson, Bonnie; Bartolini, Pia; Curtiss, Jeff; Wellehan, James F X

    2016-06-01

    Prolonged anesthetic recovery times are a common clinical problem in reptiles following inhalant anesthesia. Diving reptiles have numerous adaptations that allow them to submerge and remain apneic for extended periods. An ability to shunt blood away from pulmonary circulation, possibly due to changes in adrenergic tone, may contribute to their unpredictable inhalant anesthetic recovery times. Therefore, the use of epinephrine could antagonize this response and reduce recovery time. GV-26, an acupuncture point with reported β-adrenergic and respiratory effects, has reduced anesthetic recovery times in other species. In this prospective randomized crossover study, six common snapping turtles (Chelydra serpentina) were anesthetized with inhalant isoflurane for 90 min. Turtles were assigned one of three treatments, given immediately following discontinuation of isoflurane: a control treatment (0.9% saline at 0.1 ml/kg i.m.), epinephrine (0.1 mg/kg i.m.), or acupuncture with electrical stimulation at GV-26. Each turtle received all treatments, and treatments were separated by 48 hr. Return of spontaneous ventilation was 55% faster in turtles given epinephrine and 58% faster in the GV-26 group versus saline (P < 0.001). The times to movement and to complete recovery were also significantly faster for both treatments than for saline (P < 0.02). Treated turtles displayed increases in temperature not documented in the control (P < 0.001). Turtles administered epinephrine showed significantly increased heart rates and end-tidal CO2 (P < 0.001). No adverse effects were noted in the study animals. The mechanisms of action were not elucidated in the present investigation. Nevertheless, the use of parenteral epinephrine or GV-26 stimulation in the immediate postanesthetic period produces clinically relevant reductions in anesthetic recovery time in common snapping turtles. Further research is necessary to evaluate the effects of concurrent GV-26 and epinephrine administration.

  17. The OECD validation program of the H295R steroidogenesis assay: Phase 3. Final inter-laboratory validation study

    DEFF Research Database (Denmark)

    Hecker, Markus; Hollert, Henner; Cooper, Ralph

    2011-01-01

    In response to increasing concerns regarding the potential of chemicals to interact with the endocrine system of humans and wildlife, various national and international programs have been initiated with the aim to develop new guidelines for the screening and testing of these chemicals in vertebrates. Here, we report on the validation of an in vitro assay, the H295R steroidogenesis assay, to detect chemicals with the potential to inhibit or induce the production of the sex steroid hormones testosterone (T) and 17β-estradiol (E2), in preparation for the development of an Organization for Economic Cooperation and Development (OECD) test guideline. A previously optimized and pre-validated protocol was used to assess the potential of 28 chemicals of diverse structures and properties to validate the H295R steroidogenesis assay. These chemicals comprise known endocrine-active chemicals...
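The H295R assay described above flags chemicals that change T or E2 production relative to solvent controls. The following is an illustrative fold-change classifier only; the 1.5-fold threshold and all concentrations are assumptions made for illustration, not the validated protocol's decision criteria.

```python
# Illustrative fold-change screen for an H295R-style steroidogenesis assay:
# mean hormone production in treated wells relative to solvent controls.
# The 1.5-fold threshold and all concentrations are invented for this sketch.

def classify(treated: list[float], control: list[float],
             threshold: float = 1.5) -> str:
    """Label a chemical by the fold change it produces versus controls."""
    fold = (sum(treated) / len(treated)) / (sum(control) / len(control))
    if fold >= threshold:
        return "induction"
    if fold <= 1 / threshold:
        return "inhibition"
    return "no effect"

control_t = [10.2, 9.8, 10.0]                  # hypothetical testosterone, ng/mL
print(classify([4.1, 3.9, 4.3], control_t))    # strong decrease -> inhibition
print(classify([11.0, 10.5, 10.8], control_t))
```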

  18. Validation of FORTRAN emulators for the G2 varian control programs

    International Nuclear Information System (INIS)

    Delorme, G.

    1996-01-01

    The extensive use of the Gentilly full-scope simulator for training and verification of plant procedures forced the development of a reliable desktop simulator for software maintenance purposes. For that, we needed emulators for the control programs which run on the DCC Varian computers in the full-scope simulator. This paper presents the validation results for the Reactor Regulating System (RRS) program. This emulator was programmed in a modular fashion, providing ease of maintenance and of transportation to another environment. The results obtained with specific tests, or with integrated testing involving complex control-rule interactions, compared favorably with the ones obtained using the actual plant control programs running on the full-scope simulator, which constitutes an irrefutable validation procedure. The RRS package, along with the other emulators being validated in this manner, could be used in safety codes with confidence. (author)

  19. Brazilian Irradiation Project: CAFE-MOD1 validation experimental program

    International Nuclear Information System (INIS)

    Mattos, Joao Roberto Loureiro de; Costa, Antonio Carlos L. da; Esteves, Fernando Avelar; Dias, Marcio Soares

    1999-01-01

    The Brazilian Irradiation Project, whose purpose is to provide Brazil with a minimal structure to qualify the design, fabrication, and quality procedures of nuclear fuels, consists of three main facilities: the IEA-R1 reactor of IPEN-CNEN/SP, the CAFE-MOD1 irradiation device, and a unit of hot cells. The CAFE-MOD1 is based on concepts successfully used for more than 20 years in the main nuclear institutes around the world. Although these concepts are already proven, they must be adapted to each reactor's conditions. For this purpose, there is an ongoing experimental program aiming at the certification of the criteria and operational limits of the CAFE-MOD1 in order to obtain the allowance for its installation at the IEA-R1 reactor. (author)

  20. Design and validation of general biology learning program based on scientific inquiry skills

    Science.gov (United States)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. The study aims to 1) analyze students' difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary schools/Islamic elementary schools. The workflow of the program design includes identifying learning difficulties in General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  1. NPOESS Preparatory Project Validation Program for Atmosphere Data Products from VIIRS

    Science.gov (United States)

    Starr, D.; Wong, E.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP), a risk reduction and data continuity mission, prior to the first operational NPOESS launch. The NPOESS Program, in partnership with Northrop Grumman Aerospace Systems (NGAS), will execute the NPP Validation program to ensure the data products comply with the requirements of the sponsoring agencies. Data from the NPP Visible/Infrared Imager/Radiometer Suite (VIIRS) will be used to produce Environmental Data Records (EDRs) for aerosols and clouds, specifically Aerosol Optical Thickness (AOT), Aerosol Particle Size Parameter (APSP), and Suspended Matter (SM); and Cloud Optical Thickness (COT), Cloud Effective Particle Size (CEPS), Cloud Top Temperature (CTT), Height (CTH) and Pressure (CTP), and Cloud Base Height (CBH). The Aerosol and Cloud EDR Validation Program is a multifaceted effort to characterize and validate these data products. The program involves systematic comparison to heritage data products (e.g., MODIS) and to ground-based correlative data, such as AERONET and ARM data products, and potentially airborne field measurements. To the extent possible, the domain is global. The program leverages various investments that have been, and continue to be, made by national funding agencies in such resources, as well as by the operational user community and the broad Earth science user community. This presentation will provide an overview of the approaches, data, and schedule for the validation of the NPP VIIRS aerosol and cloud environmental data products.
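Validation against ground-based correlative data such as AERONET ultimately reduces to matchup statistics between paired satellite and ground retrievals. A minimal sketch with hypothetical AOT pairs (not NPP or AERONET data) is below.

```python
# Illustrative matchup statistics for satellite-vs-ground validation of an
# EDR such as aerosol optical thickness (AOT). The paired values are
# hypothetical, not NPP/AERONET retrievals.
import math

def bias(sat: list[float], ground: list[float]) -> float:
    """Mean satellite-minus-ground difference across matchups."""
    return sum(s - g for s, g in zip(sat, ground)) / len(sat)

def rmse(sat: list[float], ground: list[float]) -> float:
    """Root-mean-square error across matchups."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(sat, ground)) / len(sat))

sat_aot    = [0.12, 0.30, 0.45, 0.08]   # hypothetical satellite retrievals
ground_aot = [0.10, 0.28, 0.50, 0.09]   # hypothetical ground truth
print(round(bias(sat_aot, ground_aot), 3))
print(round(rmse(sat_aot, ground_aot), 3))
```

Real matchup protocols also constrain the comparison in space and time (e.g., averaging satellite pixels around the ground site within a coincidence window) before computing such statistics.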

  2. Reconstructed plutonium fallout in the GV7 firn core from Northern Victoria Land, East Antarctica

    Science.gov (United States)

    Hwang, H.; Han, Y.; Kang, J.; Lee, K.; Hong, S.; Hur, S. D.; Narcisi, B.; Frezzotti, M.

    2017-12-01

    Atmospheric nuclear explosions during the period from the 1940s to the 1980s are the major anthropogenic source of plutonium (Pu) in the environment. In this work, we analyzed fg g-1 levels of artificial Pu released predominantly by atmospheric nuclear weapons tests. We measured 351 samples collected from a 78 m-deep firn core at the site of GV7 (S 70°41'17.1", E 158°51'48.9", 1950 m a.s.l.), Northern Victoria Land, East Antarctica. To determine the Pu concentration in the samples, we used inductively coupled plasma sector field mass spectrometry coupled with an Apex high-efficiency sample introduction system, which has the advantages of small sample consumption and simple sample preparation. The reconstructed firn core Pu fallout record for the period after 1954 CE shows significant fluctuations in agreement with past atmospheric nuclear testing. These data will contribute to ice core research by providing depth-age information.

  3. VizieR Online Data Catalog: GV galaxies UV-optical radial color profiles (Pan+, 2014)

    Science.gov (United States)

    Pan, Z.; Li, J.; Lin, W.; Wang, J.; Kong, X.

    2017-04-01

    Our parent sample is drawn from Schawinski et al. (2014MNRAS.440..889S), which contains ~46000 galaxies in the redshift range z = [0.02, 0.05]. This sample is magnitude-complete to Mz,Petro = -19.5 AB mag and has Galaxy Zoo (Lintott 2008MNRAS.389.1179L; 2011, J/MNRAS/410/166) visual morphological classifications (http://data.galaxyzoo.org/). The stellar masses are derived by fitting the five SDSS photometric bands to a library of 6.8x10^6 models of star formation histories generated from the Maraston et al. (1998MNRAS.300..872M; 2005MNRAS.362..799M) stellar models. We follow the process of Schawinski et al. (2014MNRAS.440..889S) to select GV galaxies. First, the galaxies are k-corrected to z = 0 using the KCORRECT code of Blanton & Roweis (2007AJ....133..734B) with the five-band SDSS photometry. Then, the magnitudes are corrected for dust reddening using estimates of internal extinction from the stellar continuum fits by Oh et al. (2011ApJS..195...13O), applying the Cardelli et al. (1989ApJ...345..245C) law. (2 data files).

  4. DETECTION OF CH₄ IN THE GV TAU N PROTOPLANETARY DISK

    Energy Technology Data Exchange (ETDEWEB)

    Gibb, Erika L. [Department of Physics and Astronomy, University of Missouri -St Louis, 503 Benton Hall, One University Blvd, St Louis, MO 63121 (United States); Horne, David, E-mail: gibbe@umsl.edu [Department of Physics, Marietta College, Marietta, OH 45750 (United States)

    2013-10-20

    T Tauri stars are low-mass young stars that may serve as analogs to the early solar system. Observations of organic molecules in the protoplanetary disks surrounding T Tauri stars are important for characterizing the chemical and physical processes that lead to planet formation. Searches for undetected molecules, particularly in the inner, planet-forming regions of these disks, are important for testing protoplanetary disk chemical models and for understanding the evolution of volatiles through the star and planet formation process. We used NIRSPEC on Keck 2 to perform a high-resolution (λ/Δλ ∼ 25,000) L-band survey of the T Tauri star GV Tau N. This object is one of two in which the simple organic molecules HCN and C₂H₂ have been reported in absorption in the warm molecular layer of the protoplanetary disk. In this Letter, we report the first detection of methane, CH₄, in a protoplanetary disk. Specifically, we detected the ν₃ band in absorption. We determined a rotational temperature of 750 ± 50 K and a column density of (2.8 ± 0.2) × 10¹⁷ cm⁻². Our results imply that CH₄ originates in the warm molecular layer of the inner protoplanetary disk.

  5. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-09-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction.

  6. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-01-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction. 43 refs

  7. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-01-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction.

  8. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed-methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive online program consists of a user introduction, an interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong: 85% of algorithm choices and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-and-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  9. Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences.

    Science.gov (United States)

    Harman, Elena; Azzam, Tarek

    2018-02-01

    This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants are asked to read an interview transcript and identify whether program theory components (Activities and Outcomes) are discussed and to highlight the most relevant passage about that component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
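The crowdsourced pattern-matching idea above can be sketched as a simple aggregation of worker judgments: each worker votes on whether a program-theory component appears in a transcript, and components above an agreement threshold count as supported by that participant's experience. All component names, votes, and the threshold below are hypothetical.

```python
# Sketch of aggregating crowdsourced judgments about program-theory
# components. Votes are hypothetical: 1 = "component discussed in the
# transcript", 0 = not discussed.
from collections import defaultdict

def support_scores(judgments: list[tuple[str, int]]) -> dict[str, float]:
    """Fraction of crowd workers who saw each component in the transcript."""
    votes: dict[str, list[int]] = defaultdict(list)
    for component, vote in judgments:
        votes[component].append(vote)
    return {c: sum(v) / len(v) for c, v in votes.items()}

judgments = [
    ("Activity: mentoring sessions", 1), ("Activity: mentoring sessions", 1),
    ("Activity: mentoring sessions", 0),
    ("Outcome: improved self-efficacy", 0),
    ("Outcome: improved self-efficacy", 0),
    ("Outcome: improved self-efficacy", 1),
]
scores = support_scores(judgments)
# Components clearing an (assumed) agreement threshold count as supported.
supported = {c for c, s in scores.items() if s >= 0.6}
print(supported)
```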

  10. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  11. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  12. An integrated approach to validation of safeguards and security program performance

    International Nuclear Information System (INIS)

    Altman, W.D.; Hunt, J.S.; Hockert, J.W.

    1988-01-01

    Department of Energy (DOE) requirements for safeguards and security programs are becoming increasingly performance oriented. Master Safeguards and Security Agreements specify performance levels for systems protecting DOE security interests. In order to measure and validate security system performance, Lawrence Livermore National Laboratory (LLNL) has developed cost-effective validation tools and a comprehensive validation approach that synthesizes information gained from different activities, such as force-on-force exercises, limited-scope performance tests, equipment testing, vulnerability analyses, and computer modeling, into an overall assessment of the performance of the protection system. The analytic approach employs logic diagrams adapted from the fault and event trees used in probabilistic risk assessment. The synthesis of the results from the various validation activities is accomplished using a method developed by LLNL, based upon Bayes' theorem.
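The abstract names a Bayes'-theorem-based synthesis method without giving details. One plausible reading, sketched below purely as an illustration, treats each validation activity as a diagnostic test with an assumed pass rate for effective and for ineffective systems, and sequentially updates the probability that the protection system meets its performance level. The prior and all likelihoods are invented placeholders, not LLNL's actual method or numbers.

```python
# Hedged sketch of Bayes'-theorem evidence synthesis: each validation
# activity (exercise, performance test, model run) updates P(effective).
# The prior and likelihoods are hypothetical placeholders.

def bayes_update(prior: float, p_pass_if_effective: float,
                 p_pass_if_ineffective: float, passed: bool) -> float:
    """P(effective | result) via Bayes' theorem for one pass/fail result."""
    if passed:
        num = p_pass_if_effective * prior
        den = num + p_pass_if_ineffective * (1 - prior)
    else:
        num = (1 - p_pass_if_effective) * prior
        den = num + (1 - p_pass_if_ineffective) * (1 - prior)
    return num / den

p = 0.5  # noncommittal prior
# (pass rate if effective, pass rate if ineffective, observed result)
activities = [(0.9, 0.3, True),   # force-on-force exercise passed
              (0.8, 0.2, True),   # limited-scope performance test passed
              (0.7, 0.4, True)]   # computer-model benchmark passed
for sens, false_pass, result in activities:
    p = bayes_update(p, sens, false_pass, result)
print(round(p, 3))
```

Three independent passes lift the hypothetical posterior from 0.5 to roughly 0.95, which is the sense in which diverse activities "synthesize" into one overall assessment.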

  13. Validation of an age-modified caries risk assessment program (Cariogram) in preschool children

    DEFF Research Database (Denmark)

    Holgerson, Pernilla Lif; Twetman, Svante; Stecksèn-Blicks, Christina

    2009-01-01

    OBJECTIVES: (i) To validate caries risk profiles assessed with a computer program against actual caries development in preschool children, (ii) to study the possible impact of a preventive program on the risk profiles, and (iii) to compare the individual risk profiles longitudinally. MATERIAL...... of sugar. The majority of the children who changed category displayed a lowered risk at 7 years. The intervention program seemed to impair the predictive abilities of Cariogram. CONCLUSION: A modified Cariogram applied on preschool children was not particularly useful in identifying high caries risk...

  14. Intelligent Testing of Traffic Light Programs: Validation in Smart Mobility Scenarios

    Directory of Open Access Journals (Sweden)

    Javier Ferrer

    2016-01-01

Full Text Available In smart cities, the use of intelligent automatic techniques to find efficient cycle programs of traffic lights is becoming an innovative front for traffic flow management. However, this automatic programming of traffic lights requires a validation process of the generated solutions, since they can affect the mobility (and security) of millions of citizens. In this paper, we propose a validation strategy based on genetic algorithms and feature models for the automatic generation of different traffic scenarios checking the robustness of traffic light cycle programs. We have concentrated on an extensive urban area in the city of Malaga (Spain), in which we validate a set of candidate cycle programs generated by means of four optimization algorithms: Particle Swarm Optimization for Traffic Lights, Differential Evolution for Traffic Lights, random search, and Sumo Cycle Program Generator. We can test the cycles of traffic lights considering the different states of the city, weather, congestion, driver expertise, vehicle features, and so forth, while prioritizing the most relevant scenarios among a large and varied set of them. The improvement achieved in solution quality is remarkable, especially for CO2 emissions, for which we have obtained a reduction of 126.99% compared with the experts' solutions.
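
The scenario-prioritization idea can be sketched without the paper's feature-model tooling: enumerate combinations of feature values and keep the most relevant ones. The feature dimensions and relevance weights below are invented for illustration.

```python
import itertools

# Enumerate scenarios from feature dimensions (a stand-in for the paper's
# feature-model machinery) and keep the highest-priority combinations.
FEATURES = {
    "weather": ["dry", "rain"],
    "congestion": ["low", "high"],
    "driver": ["novice", "expert"],
}
# Hypothetical relevance weight per feature value.
WEIGHT = {"dry": 1, "rain": 2, "low": 1, "high": 3, "novice": 2, "expert": 1}

def prioritized_scenarios(top_n):
    names = list(FEATURES)
    combos = itertools.product(*(FEATURES[n] for n in names))
    scored = [(sum(WEIGHT[v] for v in c), dict(zip(names, c))) for c in combos]
    scored.sort(key=lambda s: -s[0])
    return [scenario for _, scenario in scored[:top_n]]
```

A candidate cycle program would then be simulated only against the top-ranked scenarios rather than the full combinatorial set.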

  15. Development of an Auto-Validation Program for MARS Code Assessments

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2006-01-01

MARS (Multi-dimensional Analysis of Reactor Safety) code is a best-estimate thermal hydraulic system analysis code developed at KAERI. It is important for a thermal hydraulic computer code to be assessed against theoretical and experimental data to verify and validate the performance and the integrity of the structure, models, and correlations of the code. The code assessment effort for a complex thermal hydraulics code such as MARS can be tedious and time-consuming, and requires a large amount of human intervention in data transfer to see the results in graphic form. Code developers produce many versions of a code during development, and each version needs to be verified for integrity. Thus, for MARS code developers, it is desirable to have an automatic way of carrying out the code assessment calculations. In the present work, an Auto-Validation program that carries out the code assessment efforts has been developed. The program uses a user-supplied configuration file (with a '.vv' extension) which contains commands to read the input file, to execute the user-selected MARS program, and to generate result graphs. The program is useful when the same set of code assessments is repeated with different versions of the code. The program is written in the Delphi programming language and runs under the Microsoft Windows environment.
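
A rough sketch of such an auto-validation driver is shown below. The real program is written in Delphi; the '.vv' command names and file contents here are invented, and the actual code invocation is abstracted behind a callable.

```python
# Parse 'command argument' lines, skipping blanks and '#' comments.
def parse_vv(text):
    plan = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        cmd, _, arg = line.partition(" ")
        plan.append((cmd.lower(), arg.strip()))
    return plan

def run_plan(plan, runner):
    """Execute the plan; 'runner' abstracts the actual code invocation."""
    return [runner(cmd, arg) for cmd, arg in plan]

demo = """
# assessment case (hypothetical): Edwards pipe blowdown
input edwards.inp
execute mars_v3
plot edwards_pressure.png
"""
plan = parse_vv(demo)
```

Re-running the same plan against a new code version is then a one-line change to the `runner`, which is the point of automating a repeated assessment suite.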

  16. Validation of a fracture mechanics approach to nuclear transportation cask design through a drop test program

    International Nuclear Information System (INIS)

    Sorenson, K.B.

    1986-01-01

    Sandia National Laboratories (SNL), under contract to the Department of Energy, is conducting a research program to develop and validate a fracture mechanics approach to cask design. A series of drop tests of a transportation cask is planned for the summer of 1986 as the method for benchmarking and, thereby, validating the fracture mechanics approach. This paper presents the drop test plan and background leading to the development of the test plan including structural analyses, material characterization, and non-destructive evaluation (NDE) techniques necessary for defining the test plan properly

  17. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  18. Granulosis viruses, with emphasis on the GV of the Indian meal moth, Plodia interpunctella.

    Science.gov (United States)

    Consigli, R A; Tweeten, K A; Anderson, D K; Bulla, L A

    1983-01-01

    The granulosis viruses and nuclear polyhedrosis viruses are being considered for use as biological insecticides for control of their insect hosts. Many of these insect species, which include some of the most serious pests of agriculture and forests, have become difficult to control because they have developed resistance to chemical insecticides. Several laboratory and field studies have demonstrated that the baculoviruses (GV and NPV) are promising alternatives to chemicals for the control of economically important insects. These viruses are highly virulent, selective, and stable, and the impact on the environment following their application is minimal. A decision concerning the application of baculoviruses to stored grain and field crops must be based upon a prudent consideration of the benefits to be obtained and the potential risks of their use. Such decisions should be made only after consideration of the physical, chemical, and biological properties of these viruses. In addition, methods must be developed for the unequivocal identification of these viruses, and their effects on nontarget species at the cellular and molecular levels must be investigated. This can best be accomplished if a sufficient body of knowledge regarding the molecular properties of these viruses and their infection process is accumulated by an extensive quantitative approach. Much of this knowledge is lacking because, prior to their consideration for use as insecticides, the baculoviruses appeared to have little medical or economic importance. As a result, interest in studying them was limited. It has become obvious that the molecular properties of these viruses must be investigated if full advantage is to be taken of using them as insect control agents, and if present and future problems concerning their use as insecticides are to be handled properly. Fundamental research on the biochemical and biophysical properties of baculoviruses has concentrated mainly on a variety of nuclear

  19. Qualitative Validation of the IMM Model for ISS and STS Programs

    Science.gov (United States)

    Kerstman, E.; Walton, M.; Reyes, D.; Boley, L.; Saile, L.; Young, M.; Arellano, J.; Garcia, Y.; Myers, J. G.

    2016-01-01

    To validate and further improve the Integrated Medical Model (IMM), medical event data were obtained from 32 ISS and 122 STS person-missions. Using the crew characteristics from these observed missions, IMM v4.0 was used to forecast medical events and medical resource utilization. The IMM medical condition incidence values were compared to the actual observed medical event incidence values, and the IMM forecasted medical resource utilization was compared to actual observed medical resource utilization. Qualitative comparisons of these parameters were conducted for both the ISS and STS programs. The results of these analyses will provide validation of IMM v4.0 and reveal areas of the model requiring adjustments to improve the overall accuracy of IMM outputs. This validation effort should result in enhanced credibility of the IMM and improved confidence in the use of IMM as a decision support tool for human space flight.

  20. The Development and Validation of a Transformational Leadership Survey for Substance Use Treatment Programs

    Science.gov (United States)

    Edwards, Jennifer R.; Knight, Danica K.; Broome, Kirk M.; Flynn, Patrick M.

    2014-01-01

    Directors in substance use treatment programs are increasingly required to respond to external economic and socio-political pressures. Leadership practices that promote innovation can help offset these challenges. Using focus groups, factor analysis, and validation instruments, the current study developed and established psychometrics for the Survey of Transformational Leadership. In 2008, clinical directors were evaluated on leadership practices by 214 counselors within 57 programs in four U.S. regions. Nine themes emerged: integrity, sensible risk, demonstrates innovation, encourages innovation, inspirational motivation, supports others, develops others, delegates tasks, and expects excellence. Study implications, limitations and suggested future directions are discussed. Funding from NIDA. PMID:20509734
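
Psychometric validation studies of this kind typically report internal consistency for each emergent theme. A minimal Cronbach's alpha computation, with made-up item scores, looks like this:

```python
# Cronbach's alpha from per-item score lists (one list per item, aligned
# across respondents); population variance is used throughout.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Three items rated by four counselors (invented data).
alpha = cronbach_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]])
```

Values near 1 indicate the items covary strongly, supporting treating them as one subscale.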

  1. Development and Validation of the Motivation for Tutoring Questionnaire in Problem-Based Learning Programs

    OpenAIRE

    Kassab, Salah Eldin; Hassan, Nahla; El-Araby, Shimaa; Salem, Abdel Halim; Alrebish, Saleh Ali; Al-Amro, Ahmed S.; Al-Shobaili, Hani A.; Hamdy, Hossam

    2017-01-01

Purpose: There are no published instruments that measure tutor motivation for conducting small-group tutorials in problem-based learning programs. Therefore, we aimed to develop a motivation for tutoring questionnaire in problem-based learning (MTQ-PBL) and evaluate its construct validity. Methods: The questionnaire included 28 items representing four constructs: tutoring self-efficacy (15 items), tutoring interest (6 items), tutoring value (4 items), and tutoring effort (3 items). Tutor...

  2. Secure Programming Cookbook for C and C++ Recipes for Cryptography, Authentication, Input Validation & More

    CERN Document Server

    Viega, John

    2009-01-01

    Secure Programming Cookbook for C and C++ is an important new resource for developers serious about writing secure code for Unix® (including Linux®) and Windows® environments. This essential code companion covers a wide range of topics, including safe initialization, access control, input validation, symmetric and public key cryptography, cryptographic hashes and MACs, authentication and key exchange, PKI, random numbers, and anti-tampering.

  3. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  4. EduSIG: gvSIG applied to the teaching of geography

    OpenAIRE

    Bermejo Domínguez, Juan A.; Anguix Alfaro, Álvaro; Juncos, Raúl

    2009-01-01

EduSIG starts from the idea of providing a GIS as an educational tool for learning geography. On the one hand, EduSIG consists of a simpler gvSIG, without complex or highly technical tools, which allows users to navigate, query, build, and understand maps without any training in Geographic Information Systems. On the other hand, it aims to give a didactic view of the teaching of geography, including both predefined thematic views and various...

  5. Mesenchymal stromal cells for treatment of steroid-refractory GvHD: a review of the literature and two pediatric cases

    Directory of Open Access Journals (Sweden)

    Wernicke Caroline M

    2011-08-01

    Full Text Available Abstract Severe acute graft versus host disease (GvHD is a life-threatening complication after allogeneic hematopoietic stem cell transplantation. Human mesenchymal stromal cells (MSCs play an important role in endogenous tissue repair and possess strong immune-modulatory properties making them a promising tool for the treatment of steroid-refractory GvHD. To date, a few reports exist on the use of MSCs in treatment of GvHD in children indicating that children tend to respond better than adults, albeit with heterogeneous results. We here present a review of the literature and the clinical course of two instructive pediatric patients with acute steroid-refractory GvHD after haploidentical stem cell transplantation, which exemplify the beneficial effects of third-party transplanted MSCs in treatment of acute steroid-refractory GvHD. Moreover, we provide a meta-analysis of clinical studies addressing the outcome of patients with steroid-refractory GvHD and treatment with MSCs in adults and in children (n = 183; 122 adults, 61 children. Our meta-analysis demonstrates that the overall response-rate is high (73.8% and confirms, for the first time, that children indeed respond better to treatment of GvHD with MSCs than adults (complete response 57.4% vs. 45.1%, respectively. These data emphasize the significance of this therapeutic approach especially in children and indicate that future prospective studies are needed to assess the reasons for the observed differential response-rates in pediatric and adult patients.

  6. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

Full text of publication follows: A long-term joint development program for the next generation of nuclear reactor simulation tools was launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the thermal-hydraulics part of this comprehensive program. Along with the ongoing development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale, and the present paper deals with this issue. After a brief recall of the NEPTUNE platform, the general validation strategy to be adopted is first clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and then assessing the whole modelling capability, (iii) using relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and the associated dominant basic models, (ii) an assessment of these models against the available validation information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high-priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  7. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this the following topics are discussed. The basic premises of the assessment approach and the purpose of and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised
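
A toy one-dimensional analogue of the groundwater-flow problem NAMMU solves can make the physics concrete: steady Darcy flow with fixed hydraulic heads at both ends of a uniform-conductivity domain. All values below are invented, and the exact solution of this case is a linear head profile.

```python
# Gauss-Seidel solution of the 1-D Laplace equation for hydraulic head with
# fixed (Dirichlet) heads at both ends; the exact solution is linear.
def solve_heads(h_left, h_right, n, iters=20000):
    h = [h_left] + [0.0] * (n - 2) + [h_right]
    for _ in range(iters):
        for i in range(1, n - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h

def darcy_flux(h, dx, conductivity):
    """q = -K dh/dx, evaluated between the first two nodes."""
    return -conductivity * (h[1] - h[0]) / dx

heads = solve_heads(10.0, 2.0, n=9)        # heads in meters, node spacing 1 m
flux = darcy_flux(heads, dx=1.0, conductivity=2.0)
```

Production codes like NAMMU solve the same balance equations with finite elements in 3-D, with heterogeneous properties and transport coupled in; this sketch only shows the governing idea.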

  8. Quality of life of patients with graft-versus-host disease (GvHD) post-hematopoietic stem cell transplantation

    Directory of Open Access Journals (Sweden)

    Sibéli de Fátima Ferraz Simão Proença

Full Text Available Abstract OBJECTIVE Assessing the quality of life of adult patients with hematological cancer in the 100 days after transplantation of hematopoietic stem cells and verifying whether the variable graft-versus-host disease (GvHD) is predictive of worse results. METHOD An observational, correlational, and quantitative study with 36 adult participants diagnosed with hematologic cancer who underwent hematopoietic stem cell transplantation from September 2013 to June 2015. RESULT The mean age was 37 years, 52.78% were female, and 61.11% were diagnosed with leukemia. Quality of life scores showed a significant impact between pre-transplantation and pre-hospital discharge, and also within the 100 days post-transplantation. The statistical analysis between the scores for the groups with and without GvHD showed a significant association between the presence of the complication and worse results. CONCLUSION Quality of life is altered as a result of hematopoietic stem cell transplantation, especially in patients who have graft-versus-host disease.

  9. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    .... The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP...

  10. C-V and G-V characteristics of ion-implanted MOS structures depending upon the geometrical structure of the implanted region

    International Nuclear Information System (INIS)

    Zohta, Y.

    1977-01-01

    It is found that the capacitance-voltage (C-V) and conductance-voltage (G-V) characteristics of MOS capacitors, into which ions of the opposite conductivity type are implanted, depend strongly upon the geometrical structure of the ion-implanted region. This phenomenon can be analyzed in terms of lateral current flow which connects an inversion layer formed in the ion-implanted region to a surrounding nonimplanted substrate. On the basis of this model, the C-V and G-V characteristics are calculated using a simple equivalent circuit, and general relationships inherent in this model are obtained. MOS capacitors with an ion-implanted layer of different geometries have been prepared to measure their C-V and G-V characteristics. Comparison of experimental measurements with theory substantiates the lateral current flow model
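
The frequency dispersion implied by the lateral current-flow model can be reproduced from the simplest possible equivalent circuit: an inversion-layer capacitance in series with a lateral resistance, converted to the parallel C and G an LCR meter would report. The component values below are illustrative, not taken from the paper.

```python
import math

# Series-RC equivalent circuit: inversion-layer capacitance C in series with
# a lateral resistance R gives frequency-dependent measured parallel values.
def measured_cp_gp(c, r, freq_hz):
    w = 2 * math.pi * freq_hz
    x = (w * r * c) ** 2
    cp = c / (1 + x)                        # apparent parallel capacitance
    gp = (w ** 2) * r * (c ** 2) / (1 + x)  # apparent parallel conductance
    return cp, gp

# Illustrative values: C = 100 pF, R = 10 kOhm.
cp_lo, gp_lo = measured_cp_gp(c=1e-10, r=1e4, freq_hz=1e3)
cp_hi, gp_hi = measured_cp_gp(c=1e-10, r=1e4, freq_hz=1e8)
```

At low frequency the inversion layer keeps up and the full capacitance is seen; at high frequency the lateral resistance cuts it off and the conductance saturates toward 1/R, the qualitative behavior the geometry dependence in this record traces.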

  11. Validation of the Monte Carlo Criticality Program KENO V.a for highly enriched uranium systems

    Energy Technology Data Exchange (ETDEWEB)

    Knight, J.R.

    1984-11-01

    A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach Library. Results are presented in a series of tables and discussed. Results show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.
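
The bookkeeping behind such a validation study reduces to simple statistics on calculated k-eff for critical benchmarks, whose true k-eff is 1.0. The values and the simplified limit formula below are invented for illustration and are not the Y-12 acceptance methodology.

```python
import statistics

# Mean bias of calculated k-eff against critical benchmarks (k-eff = 1.0).
def keff_bias(keffs):
    mean = statistics.mean(keffs)
    return mean - 1.0, statistics.stdev(keffs)

# Simplified acceptance limit: credit negative bias only, subtract two
# standard deviations and an administrative margin. Illustrative only;
# licensed methods use tolerance-interval statistics.
def upper_subcritical_limit(keffs, admin_margin=0.05):
    bias, spread = keff_bias(keffs)
    return 1.0 + min(bias, 0.0) - 2 * spread - admin_margin

keffs = [0.996, 1.002, 0.999, 1.001, 0.998]  # invented benchmark results
bias, spread = keff_bias(keffs)
usl = upper_subcritical_limit(keffs)
```

A plant calculation would then be judged acceptable only if its computed k-eff fell below the derived limit.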

  12. Social Validity of the Social Skills Improvement System--Classwide Intervention Program (SSIS-CIP) in the Primary Grades

    Science.gov (United States)

    Wollersheim Shervey, Sarah; Sandilos, Lia E.; DiPerna, James C.; Lei, Pui-Wa

    2017-01-01

    The purpose of this study was to examine the social validity of the Social Skills Improvement System--Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first and second grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings…

  13. Experimental evaluation of Hiṅgvādi Ghṛta in behavioral despair using animal models

    Directory of Open Access Journals (Sweden)

    Poonam Ashish Gupte

    2016-01-01

Full Text Available Context: Depression, a sustained mood disorder caused by selective diminution of specialized cells in the brain, is increasing at an alarming rate. It will be the second-largest morbid illness by the next decade and is the leading cause of suicidal deaths. The available antidepressant medications benefit only a third of their recipients and have many side effects. Hence, it is imperative to search Ayurveda for leads. Aim: To evaluate the antidepressant activity of Hiṅgvādi Ghṛta in vivo. Settings and Design: Comparative preclinical study. Materials and Methods: Hiṅgvādi Ghṛta (HG) was prepared using a standard operating procedure, physicochemically analyzed, and assessed. The Tail Suspension Test (TST) model with Swiss albino mice and the Forced Swim Test (FST) model with Wistar albino rats were used to assess antidepressant activity. Imipramine hydrochloride, in doses of 15 mg/kg for TST and 10 mg/kg for FST, was the standard drug, and ghee served as vehicle control in doses of 0.1 g/20 g for TST and 0.72 g/200 g for FST, administered orally. Hiṅgvādi Ghṛta in doses of 0.05 g/20 g (x/2), 0.1 g/20 g (x), and 0.2 g/20 g (2x) for TST and 0.36 g/200 g (x/2), 0.72 g/200 g (x), and 1.44 g/200 g (2x) for FST was administered orally to three test groups for 21 days, except for the plain control group, which received only distilled water. Duration of immobility in seconds for TST and number of rotations for FST were noted for assessment. Statistical Analysis Used: One-way ANOVA followed by Dunnett's test and paired t-test. Results: HG was significantly effective at a dose of 0.1 g/20 g for TST (P = 0.0037; P < 0.01) and 0.72 g/200 g for FST (P = 0.0055; P < 0.01), comparable to imipramine hydrochloride. Conclusions: HG displayed potent antidepressant activity comparable to the standard drug imipramine hydrochloride.

  14. Measurement of Health Program Equity Made Easier: Validation of a Simplified Asset Index Using Program Data From Honduras and Senegal.

    Science.gov (United States)

    Ergo, Alex; Ritter, Julie; Gwatkin, Davidson R; Binkin, Nancy

    2016-03-01

    the 8 simplified asset index iterations did this proportion exceed 50%. We conclude that substantially reducing the number of variables and questions used to assess equity is feasible, producing valid results and providing a less burdensome way for program implementers or researchers to evaluate whether their interventions are pro-poor. Developing a standardized, simplified asset questionnaire that could be used across countries may prove difficult, however, given that the variables that contribute the most to the asset index are largely country-specific. © Ergo et al.
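
A minimal sketch of a simplified asset index of the kind this record evaluates: score each household on a handful of indicators and bucket the scores into wealth quintiles. The indicators, weights, and data below are invented; real indices are typically derived by principal components analysis of the full asset questionnaire.

```python
# Score each household on asset indicators, then bucket into quintiles.
def asset_score(household, weights):
    return sum(weights[a] for a, owned in household.items() if owned)

def quintile(scores, s):
    """1 = poorest fifth ... 5 = richest fifth, by rank within 'scores'."""
    rank = sorted(scores).index(s)
    return 1 + (5 * rank) // len(scores)

WEIGHTS = {"electricity": 2.0, "radio": 0.5, "fridge": 3.0, "bicycle": 1.0}
households = [
    {"electricity": True, "radio": True, "fridge": True, "bicycle": True},
    {"electricity": True, "radio": True, "fridge": False, "bicycle": False},
    {"electricity": False, "radio": True, "fridge": False, "bicycle": True},
    {"electricity": False, "radio": False, "fridge": False, "bicycle": True},
    {"electricity": False, "radio": False, "fridge": False, "bicycle": False},
]
scores = [asset_score(h, WEIGHTS) for h in households]
```

Equity analysis then compares service coverage across the resulting quintiles, which is why a shorter questionnaire that preserves the quintile assignments is so useful to program implementers.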

  15. EDF EPR project: operating principles validation and human factor engineering program

    International Nuclear Information System (INIS)

    Lefebvre, B.; Berard, E.; Arpino, J.-M.

    2005-01-01

This article describes the specificities of the operating principles chosen by EDF for the EPR project as a result of an extensive Human Factor Engineering program successfully implemented in an industrial project context. The design process and its achievements benefit from EDF's experience feedback, not only in terms of NPP operation, including the fully computerized control room of the N4 series, but also in terms of NPP design. The elements presented hereafter correspond to the basic design phase of the EPR HMI, which was completed and successfully validated by the end of 2003. The article first recalls the context of the project, which basically consists in designing a modern and efficient HMI that takes operating needs into account while relying on proven and reliable technologies. The Human Factor Engineering program implemented merges both aspects by: 1) being fully integrated within the project activities and scheduling; 2) efficiently taking into account the users' needs as well as the feasibility constraints by relying on a multidisciplinary design team including HF specialists, I and C specialists, process specialists, and experienced operator representatives. The resulting design process makes wide use of experience feedback and experienced operator knowledge to complement existing standards, providing a fully usable and successful design method in an industrial context. The article underlines the design process highlights that largely contributed to the successful implementation of a Human Factor Engineering program for EPR. (authors)

  16. Strategy Choice in Solving Arithmetic Word Problems: Are There Differences between Students with Learning Disabilities, G-V Poor Performance, and Typical Achievement Students?

    Science.gov (United States)

    Gonzalez, Juan E. Jimenez; Espinel, Ana Isabel Garcia

    2002-01-01

    A study was designed to test whether there are differences between Spanish children (ages 7-9) with arithmetic learning disabilities (n=60), garden-variety (G-V) poor performance (n=44), and typical children (n=44) in strategy choice when solving arithmetic word problems. No significant differences were found between children with dyscalculia and…

  17. Rigidity spectrum of Z ≥ 3 cosmic-ray nuclei in the range 4-285 GV and a search for cosmic antimatter

    Science.gov (United States)

    Golden, R. L.; Adams, J. H., Jr.; Marar, T. M. K.; Deney, C. L.; Badhwar, G. D.; Heckman, H. H.; Lindstrom, P. J.

    1974-01-01

A measurement, using the magnetic emulsion spectrometer system, of the differential rigidity spectrum of Z ≥ 3 nuclei of the galactic cosmic radiation is presented. The system was flown on August 22, 1969, from Palestine, Texas; the instrument floated above 125,000 feet for eight hours. The data in the rigidity range 8-285 GV can be represented by a power-law spectrum in rigidity, J(ρ) = Aρ^(-γ), with the exponent γ = 2.6 ± 0.10. The spectrum in the range 15-285 GV is also described by the same exponent, γ = 2.6 ± 0.25. The data below 8 GV cannot be described by the same power law without invoking solar modulation; a set of non-unique modulation parameters is given. The upper limit for the fraction of antimatter in the rigidity range 4-125 GV is 0.005 at the 95% confidence level.
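
The quoted power law J(ρ) = Aρ^(-γ) is linear in log-log space, so the spectral index can be recovered by an ordinary least-squares fit of ln J against ln ρ. The sketch below uses synthetic, noise-free data generated with γ = 2.6.

```python
import math

# Least-squares slope in log-log space gives the spectral index gamma.
def fit_power_law(rigidity, flux):
    xs = [math.log(r) for r in rigidity]
    ys = [math.log(j) for j in flux]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic spectrum J = 12 * rho**(-2.6) over roughly 8-300 GV.
rig = [8.0 * 1.5 ** i for i in range(10)]
flx = [12.0 * r ** -2.6 for r in rig]
gamma = fit_power_law(rig, flx)
```

With real counts one would weight the fit by Poisson errors, which is where the quoted ±0.10 uncertainty on γ comes from.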

  18. Preparation and validation of gross alpha/beta samples used in EML's quality assessment program

    International Nuclear Information System (INIS)

    Scarpitta, S.C.

    1997-10-01

A set of water and filter samples has been incorporated into the existing Environmental Measurements Laboratory (EML) Quality Assessment Program (QAP) for gross alpha/beta determinations by participating DOE laboratories. The participating laboratories are evaluated by comparing their results with the EML value. The preferred EML method for measuring water and filter samples, described in this report, uses gas-flow proportional counters with 2 in. detectors. Procedures for sample preparation, quality control, and instrument calibration are presented. Liquid scintillation (LS) counting is an alternative technique that is suitable for quantifying both the alpha (241Am, 230Th, and 238Pu) and beta (90Sr/90Y) activity concentrations in the solutions used to prepare the QAP water and air filter samples. Three LS counting techniques (Cerenkov, dual dpm, and full spectrum analysis) are compared. These techniques may be used to validate the activity concentrations of each component in the alpha/beta solution before the QAP samples are actually prepared.
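
The activity-concentration arithmetic underlying such gross alpha/beta comparisons is straightforward: net count rate divided by counting efficiency and sample size. The count rates, efficiency, and volume below are invented for illustration.

```python
# Net count rate over efficiency and sample size gives activity concentration.
def activity_conc(gross_cpm, bkg_cpm, efficiency, sample_liters):
    """Activity concentration in disintegrations per minute per liter."""
    return (gross_cpm - bkg_cpm) / (efficiency * sample_liters)

dpm_per_l = activity_conc(gross_cpm=120.0, bkg_cpm=20.0,
                          efficiency=0.25, sample_liters=0.5)
bq_per_l = dpm_per_l / 60.0  # 1 Bq = 60 dpm
```

A participating laboratory's reported concentration is compared against the EML value computed the same way from the known spike.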

  19. Development and Fit-for-Purpose Validation of a Soluble Human Programmed Death-1 Protein Assay.

    Science.gov (United States)

    Ni, Yan G; Yuan, Xiling; Newitt, John A; Peterson, Jon E; Gleason, Carol R; Haulenbeek, Jonathan; Santockyte, Rasa; Lafont, Virginie; Marsilio, Frank; Neely, Robert J; DeSilva, Binodh; Piccoli, Steven P

    2015-07-01

Programmed death-1 (PD-1) protein is a co-inhibitory receptor that negatively regulates immune cell activation and permits tumors to evade normal immune defense. Anti-PD-1 antibodies have been shown to restore immune cell activation and effector function, an exciting breakthrough in cancer immunotherapy. Recent reports have documented a soluble form of PD-1 (sPD-1) in the circulation of normal and disease-state individuals. A clinical assay to quantify sPD-1 would contribute to the understanding of sPD-1 function and facilitate the development of anti-PD-1 drugs. Here, we report the development and validation of a sPD-1 protein assay. The assay validation followed the framework for full validation of a biotherapeutic pharmacokinetic assay. A purified recombinant human PD-1 protein was characterized extensively and was identified as the assay reference material, which mimics the endogenous analyte in structure and function. The lower limit of quantitation (LLOQ) was determined to be 100 pg/mL, with a dynamic range spanning three logs to 10,000 pg/mL. The intra- and inter-assay imprecision were ≤15%, and the assay bias (percent deviation) was ≤10%. Potential matrix effects were investigated in sera from both normal healthy volunteers and selected cancer patients. Bulk-prepared frozen standards and pre-coated streptavidin plates were used in the assay to ensure consistency in assay performance over time. This assay appears to specifically measure total sPD-1 protein, since the human anti-PD-1 antibody nivolumab and the endogenous ligands of PD-1 protein, PDL-1 and PDL-2, do not interfere with the assay.
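
The acceptance criteria quoted above (imprecision ≤15%, bias ≤10%) reduce to two simple statistics computed from replicate measurements of a known standard. The replicate values below are invented.

```python
import statistics

def percent_cv(replicates):
    """Coefficient of variation (imprecision) in percent."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def percent_bias(measured_mean, nominal):
    """Percent deviation from the nominal concentration."""
    return 100.0 * (measured_mean - nominal) / nominal

reps = [980.0, 1010.0, 1005.0, 995.0]  # pg/mL, nominal 1000 pg/mL (invented)
cv = percent_cv(reps)
bias = percent_bias(statistics.mean(reps), 1000.0)
```

An assay run passes when both statistics fall inside the pre-specified limits at every quality-control level.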

  20. Utilizing job/task analysis to establish content validity in the design of training programs

    Energy Technology Data Exchange (ETDEWEB)

    Nay, W.E.

    1988-01-01

The decade of the 1980s was a turbulent time for the Department of Energy. With concern mounting about the terrorist threat, a wave of congressional inquiries and internal inspections crossed the nation and engulfed many of the nuclear laboratories and facilities operated by DOE contractors. A typical finding was the need to improve, and increase, the training of the protective force. The immediate reaction resulted in a wide variety of responses, with most contractors feeling safer with too much, rather than not enough, training. As soon as the initial pressure to upgrade subsided, a task force was established to evaluate the overall training needs. Representatives from the contractor facilities worked together to conduct a job analysis of the protective force. A generic task inventory was established and validated at the different sites. This list has been invaluable for determining the tasks, conditions, and standards needed to develop well-stated learning objectives. The enhanced training programs are being refined to ensure job content validity based on the data collected.

  1. Validation study of the COBRA-WC computer program for LMFBR core thermal-hydraulic analysis

    International Nuclear Information System (INIS)

    Khan, E.U.; Bates, J.M.

    1982-01-01

    The COBRA-WC (Whole Core) computer program has been developed as a benchmark code to predict flow and temperature fields in LMFBR rod bundles. Consequently, an extensive validation study has been conducted to reinforce its credibility. A set of generalized parameters predicts data well for a wide range of geometries and operating conditions, including conventional (current-generation LMFBR) fuel and blanket assembly geometry in the forced, mixed, and natural convection regimes. The data base used for validating COBRA-WC was obtained from out-of-pile and in-pile tests. Most of the data were obtained in fully heated bundles with bundle power skew across flats up to 3:1 (max:min), Reynolds numbers between 500 and 80,000, and coolant mixed-mean temperature rise (ΔT) in the range 78 °F ≤ ΔT ≤ 340 °F. Within the bundle, 95% of the predicted coolant temperature data points fall within ±25 °F for 150 °F ≤ ΔT ≤ 340 °F, and within ±17 °F for 78 °F ≤ ΔT ≤ 150 °F.

  2. Program of neuropsychological stimulation of cognition in students: Emphasis on executive functions - development and evidence of content validity

    Directory of Open Access Journals (Sweden)

    Caroline de Oliveira Cardoso

    Objective: The goal of this study was to describe the construction process and content validity evidence of an early, preventive intervention program for stimulating executive functions (EF) in elementary school children within the school environment. Methods: The process followed the recommended steps for creating neuropsychological instruments: internal phase of program organization, with literature search and analyses of materials available in the classroom; program construction; analysis by expert judges; and data integration and program finalization. To determine the level of agreement among the judges, a Content Validity Index (CVI) was calculated. Results: Content validity was evidenced by the agreement among the experts with regard to the program, both in general and for each activity. All steps taken were deemed necessary because they contributed to the identification of positive aspects and possible flaws in the process. Conclusion: The steps also helped to adapt stimuli and improve program tasks and activities. The methodological procedures implemented in this study can be adopted by other researchers to create or adapt neuropsychological stimulation and rehabilitation programs. Furthermore, the methodological approach allows the reader to understand, in detail, the technical and scientific rigor adopted in devising this program.
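The abstract does not give the CVI formulas the authors used. A common convention (an assumption here, not necessarily this study's exact method) counts an item as relevant when a judge rates it 3 or 4 on a 4-point relevance scale; the item-level CVI is the share of judges who do so, and the scale-level CVI (averaging method) is the mean across items. A minimal sketch with hypothetical ratings:

```python
def item_cvi(ratings):
    """Item-level CVI: share of judges rating the item 3 or 4 on a 4-point scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_ave(all_ratings):
    """Scale-level CVI, averaging method: mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in all_ratings]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from 5 expert judges for 3 program activities
ratings = [
    [4, 4, 3, 4, 3],  # all judges rate relevant -> I-CVI = 1.0
    [4, 3, 2, 4, 3],  # one judge rates not relevant -> I-CVI = 0.8
    [4, 4, 4, 3, 4],  # I-CVI = 1.0
]
overall = scale_cvi_ave(ratings)
```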

  3. Report of the Department of Energy, Office of Environmental Management, Gamma Spectrometry Data Validation Program

    International Nuclear Information System (INIS)

    Decker, K.; Sanderson, C.G.; Greenlaw, P.

    1996-11-01

    This report presents the results of analyses received on or before August 15, 1996 for the first annual Gamma Spectrometry Data Validation Program (May 1996), designed to assess the capability of DOE laboratories and DOE contractors to perform routine gamma spectral analyses. Data reduction of gamma spectra is normally performed with computer codes supplied by commercial manufacturers or developed in house. Earlier evaluations of commercial codes gave spurious results for complex spectra. A calibration spectrum, a background spectrum and three sample spectra of increasing complexity were included for each format. The calibration spectrum contained nuclides covering the energy range from 59.5 keV to 1836 keV. The first two samples contained fallout nuclides with half-lives of over 30 days; naturally occurring nuclides were also present. The third sample contained both short- and long-lived fission product nuclides. The participants were asked to report values and uncertainties in becquerels per sample with no decay correction. Sixteen software packages were evaluated. In general, the results do not appear to be dependent on the software used. Based on the control limits established for the Program, 62%, 63% and 53% of the reported results for the three sample spectra, respectively, were evaluated as acceptable.
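The report scores each laboratory's reported activity against control limits around the known value and tallies the fraction acceptable. As a simplified illustration (the ±25% window and the reported activities below are hypothetical, not the Program's actual limits or data), that evaluation could be sketched as:

```python
def within_limits(reported, reference, rel_limit=0.25):
    """Flag a reported activity (Bq) acceptable if it falls within
    +/- rel_limit of the known reference value (a simplified stand-in
    for the Program's control limits)."""
    low, high = reference * (1 - rel_limit), reference * (1 + rel_limit)
    return low <= reported <= high

# Hypothetical reported activities for one nuclide, reference value 1000 Bq
reports = [980.0, 1210.0, 740.0, 1050.0]
flags = [within_limits(r, 1000.0) for r in reports]
fraction_acceptable = sum(flags) / len(flags)
```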

  4. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid-engine-powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical-condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible after 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.
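Amdahl's law is the usual way to reason about the kind of scalability plateau described above (no speedup beyond 32 CPUs). The 97% parallel fraction below is illustrative only, not a measured property of the tools reviewed; it simply shows how a modest serial fraction caps speedup regardless of core count:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_efficiency(p, n):
    """Speedup per processor; drops quickly once the serial fraction dominates."""
    return amdahl_speedup(p, n) / n

# A code that is only ~97% parallel plateaus quickly: going from 32 to 1024
# cores gains less than a factor of two.
s32 = amdahl_speedup(0.97, 32)
s1024 = amdahl_speedup(0.97, 1024)
```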

  5. Social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) in the primary grades.

    Science.gov (United States)

    Wollersheim Shervey, Sarah; Sandilos, Lia E; DiPerna, James C; Lei, Pui-Wa

    2017-09-01

    The purpose of this study was to examine the social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first and second grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings indicated that teachers generally perceived the SSIS-CIP as a socially valid and feasible intervention for primary grades; however, teachers' ratings regarding ease of implementation and relevance and sequence demonstrated differences across grade levels in the second year of implementation.

  6. Designing and Assessing the Validity and Reliability of the Hospital Readiness Assessment Tools to Conducting Quality Improvement Program

    Directory of Open Access Journals (Sweden)

    Kamal Gholipoor

    2016-09-01

    Background and objectives: Identifying the readiness of a hospital, with its strengths and weaknesses, can be useful for situation analysis and for developing appropriate planning and management of effective clinical audit programs. The aim of this study was to design and assess the validity of a hospital readiness assessment tool for conducting quality improvement and clinical audit programs. Material and Methods: Based on the results of a systematic review of the literature, an initial questionnaire with 77 items was designed. Questionnaire content validity was reviewed by experts in the field of hospital management and quality improvement at Tabriz University of Medical Sciences. For this purpose, 20 questionnaires were sent to experts; 15 participants returned completed questionnaires. Questionnaire validity was reviewed and confirmed based on the Content Validity Index and Content Validity Ratio. Questionnaire reliability was confirmed based on Cronbach's alpha (α = 0.96) in a pilot study with the participation of 30 hospital managers. Results: The final questionnaire contains 54 questions in nine categories: data and information (9 items), teamwork (12 questions), resources (5 questions), patient and education (5 questions), intervention design and implementation (5 questions), clinical audit management (4 questions), human resources (6 questions), evidence and standards (4 items), and evaluation and feedback (4 items). The final questionnaire content validity index was 0.91 and the final questionnaire Cronbach's alpha coefficient was 0.96. Conclusion: Considering the relatively good validity and reliability of the tool designed in this study, it appears that the questionnaire can be used to identify and assess the readiness of hospitals for quality improvement and clinical audit program implementation.
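The reliability figure above (α = 0.96) is a Cronbach's alpha. For reference, a minimal sketch of the standard computation, k/(k-1) × (1 − Σ item variances / variance of totals), on hypothetical item scores, not the study's data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item,
    one entry per respondent)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical questionnaire: 3 items answered by 4 respondents
items = [[2, 4, 3, 5], [3, 5, 4, 5], [2, 5, 3, 4]]
alpha = cronbach_alpha(items)
```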

  7. A synthetic eicosanoid LX-mimetic unravels host-donor interactions in allogeneic BMT-induced GvHD to reveal an early protective role for host neutrophils.

    Science.gov (United States)

    Devchand, Pallavi R; Schmidt, Birgitta A; Primo, Valeria C; Zhang, Qing-yin; Arnaout, M Amin; Serhan, Charles N; Nikolic, Boris

    2005-02-01

    Lipoxin A4 (LXA4) and aspirin-triggered 15-epi-LXA4 are potent endogenous lipid mediators thought to define the inflammatory set-point. We used single prophylactic administrations of a synthetic aspirin-triggered lipoxin A4 signal mimetic, ATLa, to probe the dynamics of early host-donor interactions in a mouse model of the inflammation-associated multifactorial disease of allogeneic bone marrow transplant (BMT)-induced graft-vs.-host disease (GvHD). We first demonstrated that both host and donor are responsive to ATLa signals. The simple and restricted regimen of a single prophylactic administration of ATLa [100 ng/mL to donor cells or 1 microg (approximately 50 microg/kg) i.v. to the host] was sufficient to delay death. Clinical indicators of weight, skin lesions, diarrhea and eye inflammation were monitored. Histological analyses on day 45 post-BMT showed that the degree of cellular trafficking, particularly neutrophil infiltrate, and the protection of end-organ target pathology differ depending on whether the host or the donor was treated with ATLa. Taken together, these results chart some of the protective effects of ATLa on GvHD cellular dynamics over time and identify a previously unrecognized effect of host neutrophils in the early phase post-BMT as an important determinant in the dynamics of GvHD onset and progression.

  8. Validating competencies for an undergraduate training program in rural medicine using the Delphi technique.

    Science.gov (United States)

    Gouveia, Eneline Ah; Braga, Taciana D; Heráclio, Sandra A; Pessoa, Bruno Henrique S

    2016-01-01

    were suggested. Of the competencies that failed to reach a consensus in the first round, seven were excluded from the framework in the second round, with most of these being associated with hospital procedures. A framework of competencies for a program in rural medicine was developed and validated. It consists of 26 core competencies and 158 secondary competencies that should be useful when constructing competency-based curricula in rural medicine for medical education in Brazil.

  9. Parent-completed developmental screening in premature children: a valid tool for follow-up programs.

    Directory of Open Access Journals (Sweden)

    Cyril Flamant

    Our goals were to (1) validate the parental Ages and Stages Questionnaire (ASQ) as a screening tool for psychomotor development in a cohort of ex-premature infants reaching 2 years, and (2) analyse the influence of parental socio-economic status and maternal education on the efficacy of the questionnaire. A regional population of 703 very preterm infants (<35 weeks gestational age) born between 2003 and 2006 were evaluated at 2 years by their parents, who completed the ASQ, by a pediatric clinical examination, and by the revised Brunet-Lezine psychometric test with establishment of a DQ score. Detailed information regarding parental socio-economic status was available for 419 infants. At 2 years corrected age, 630 infants (89.6%) had an optimal neuromotor examination. Overall ASQ scores for predicting a DQ score ≤85 produced an area under the receiver operating characteristic curve of 0.85 (95% confidence interval: 0.82-0.87). An ASQ cut-off score of ≤220 had optimal discriminatory power for identifying a DQ score ≤85, with a sensitivity of 0.85 (95% CI: 0.75-0.91), a specificity of 0.72 (95% CI: 0.69-0.75), a positive likelihood ratio of 3, and a negative likelihood ratio of 0.21. The median ASQ value was not significantly associated with socio-economic level or maternal education. The ASQ is an easy and reliable tool, regardless of the socio-economic status of the family, for predicting normal neurologic outcome in ex-premature infants at 2 years of age. The ASQ may benefit some follow-up programs at low cost, and helps to establish a genuine sense of parental involvement.
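The reported operating point (sensitivity 0.85, specificity 0.72, positive likelihood ratio about 3, negative likelihood ratio about 0.21) follows from the standard 2x2 screening-table definitions. The cell counts below are hypothetical numbers chosen only so that they reproduce that operating point, not the study's actual counts:

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 screening table."""
    sens = tp / (tp + fn)        # true positives among all with the condition
    spec = tn / (tn + fp)        # true negatives among all without it
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_pos": sens / (1 - spec),   # LR+ = sens / (1 - spec)
        "lr_neg": (1 - sens) / spec,   # LR- = (1 - sens) / spec
    }

# Hypothetical table mirroring the reported ASQ <= 220 operating point
stats = screening_stats(tp=85, fp=168, fn=15, tn=432)
```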

  10. Status Update on the GPM Ground Validation Iowa Flood Studies (IFloodS) Field Experiment

    Science.gov (United States)

    Petersen, Walt; Krajewski, Witold

    2013-04-01

    The overarching objective of integrated hydrologic ground validation activities supporting the Global Precipitation Measurement Mission (GPM) is to provide better understanding of the strengths and limitations of the satellite products, in the context of hydrologic applications. To this end, the GPM Ground Validation (GV) program is conducting the first of several hydrology-oriented field efforts: the Iowa Flood Studies (IFloodS) experiment. IFloodS will be conducted in the central to northeastern part of Iowa in Midwestern United States during the months of April-June, 2013. Specific science objectives and related goals for the IFloodS experiment can be summarized as follows: 1. Quantify the physical characteristics and space/time variability of rain (rates, DSD, process/"regime") and map to satellite rainfall retrieval uncertainty. 2. Assess satellite rainfall retrieval uncertainties at instantaneous to daily time scales and evaluate propagation/impact of uncertainty in flood-prediction. 3. Assess hydrologic predictive skill as a function of space/time scales, basin morphology, and land use/cover. 4. Discern the relative roles of rainfall quantities such as rate and accumulation as compared to other factors (e.g. transport of water in the drainage network) in flood genesis. 5. Refine approaches to "integrated hydrologic GV" concept based on IFloodS experiences and apply to future GPM Integrated GV field efforts. These objectives will be achieved via the deployment of the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms with attendant soil moisture and temperature probes, a large network of both 2D Video and Parsivel disdrometers, and USDA-ARS gauge and soil-moisture measurements (in collaboration with the NASA SMAP mission). The aforementioned measurements will be used to complement existing operational WSR-88D S-band polarimetric radar measurements

  11. Evaluating health inequity interventions: applying a contextual (external) validity framework to programs funded by the Canadian Health Services Research Foundation.

    Science.gov (United States)

    Phillips, Kaye; Müller-Clemm, Werner; Ysselstein, Margaretha; Sachs, Jonathan

    2013-02-01

    Including context in the measurement and evaluation of health inequity interventions is critical to understanding how events that occur in an intervention's environment might contribute to or impede its success. This study adapted and piloted a contextual validity assessment framework on a selection of health inequity-related programs funded by the Canadian Health Services Research Foundation (CHSRF) between 1998 and 2006. The two overarching objectives of this study were (1) to determine the relative amount and quality of attention given to conceptualizing, measuring and validating context within CHSRF-funded research final reports related to health inequity; and (2) to contribute evaluative evidence towards the incorporation of context into the assessment and measurement of health inequity interventions. The study found that of the 42/146 CHSRF programs and projects judged to be related to health inequity, 20 adequately reported on the conceptualization, measurement and validation of context. Among these health inequity-related project reports, the greatest emphasis was placed on describing the socio-political and economic context rather than on actually measuring and validating contextual evidence. Applying a contextual validity assessment framework was useful for distinguishing between the descriptive (conceptual) and the empirical (measurement and validation) inclusion of documented contextual evidence. Although contextual validity measurement frameworks need further development, this study contributes insight into identifying funded research related to health inequities and preliminary criteria for assessing interventions targeted at specific populations and jurisdictions. This study also feeds a larger critical dialogue (albeit beyond the scope of this study) regarding the relevance and utility of using evaluative techniques for understanding how specific external conditions support or impede the successful implementation of health inequity interventions.

  12. TCV software test and validation tools and technique. [Terminal Configured Vehicle program for commercial transport aircraft operation

    Science.gov (United States)

    Straeter, T. A.; Williams, J. R.

    1976-01-01

    The paper describes techniques for testing and validating software for the TCV (Terminal Configured Vehicle) program, which is intended to solve problems associated with operating a commercial transport aircraft in the terminal area. The TCV research test bed is a Boeing 737 specially configured with digital computer systems to carry out research on automatic navigation, guidance, flight controls, and electronic displays. The techniques developed for time and cost reduction include automatic documentation aids, automatic software configuration, and an all-software generation and validation system.

  13. [Effect of Medicated-catgut Embedding at "Changqiang" (GV 1) on Mechanical Pain Threshold and P 38 MAPK Expression in Spinal Cord Tissue in Anus Incisional Pain Rats].

    Science.gov (United States)

    Shu, Tao; Zhang, Shi-Ti; Yan, Feng; Ke, Yu-Pei; Wang, Jun

    2017-10-25

    To observe the effect of medicated-catgut embedding at "Changqiang" (GV 1) on regional pain reaction and the expression of p38 MAPK in the dorsal horn of the spinal cord in rats with anus incisional pain, so as to explore its analgesic mechanism. Forty male SD rats were randomly divided into control, model, GV 1-embedding and sham-acupoint embedding groups (n = 10 rats in each group). The anus incisional pain model was established by making a radial incision (about 10 mm in length) at the left lithotomy position of the anus with a surgical knife, and the mechanical pain threshold (PT) was measured using a von Frey apparatus before and 4, 8, 12 and 24 h after operation. The medicated catgut (about 12.5 mm length/kg body weight) was implanted in the subcutaneous tissue of the GV 1 region. The immunoactivity of p38 MAPK was determined by immunohistochemistry. Compared with the control group, the mechanical PTs in the model group were significantly decreased 4, 8, 12 and 24 h after operation, both at the site of incision and about 15 mm proximal to it (P < 0.05), and the immunoactivity of phosphorylated (p-)p38 MAPK in the superficial layer of the dorsal horns of the lumbar spinal cord was significantly increased 24 h after operation (P < 0.05). Compared with the model group, the PTs in the GV 1-embedding group were significantly increased 8, 12 and 24 h after operation at the site of incision, and 12 and 24 h after operation at the site about 15 mm proximal to the incision region (P < 0.05), and the immunoactivity level of p-p38 MAPK was significantly down-regulated (P < 0.05). No significant changes were found in the PT and p-p38 MAPK immunoactivity levels in the sham-acupoint embedding group (P > 0.05). Medicated-catgut embedding at "Changqiang" (GV 1) has an analgesic effect in anus incisional pain model rats, which may be related to its effect in down-regulating the expression of p38 MAPK in the dorsal horn of the lumbar spinal cord.

  14. Precision Measurement of the Helium Flux in Primary Cosmic Rays of Rigidities 1.9 GV to 3 TV with the Alpha Magnetic Spectrometer on the International Space Station

    Science.gov (United States)

    Aguilar, M.; Aisa, D.; Alpat, B.; Alvino, A.; Ambrosi, G.; Andeen, K.; Arruda, L.; Attig, N.; Azzarello, P.; Bachlechner, A.; Barao, F.; Barrau, A.; Barrin, L.; Bartoloni, A.; Basara, L.; Battarbee, M.; Battiston, R.; Bazo, J.; Becker, U.; Behlmann, M.; Beischer, B.; Berdugo, J.; Bertucci, B.; Bindi, V.; Bizzaglia, S.; Bizzarri, M.; Boella, G.; de Boer, W.; Bollweg, K.; Bonnivard, V.; Borgia, B.; Borsini, S.; Boschini, M. J.; Bourquin, M.; Burger, J.; Cadoux, F.; Cai, X. D.; Capell, M.; Caroff, S.; Casaus, J.; Castellini, G.; Cernuda, I.; Cerreta, D.; Cervelli, F.; Chae, M. J.; Chang, Y. H.; Chen, A. I.; Chen, G. M.; Chen, H.; Chen, H. S.; Cheng, L.; Chou, H. Y.; Choumilov, E.; Choutko, V.; Chung, C. H.; Clark, C.; Clavero, R.; Coignet, G.; Consolandi, C.; Contin, A.; Corti, C.; Gil, E. Cortina; Coste, B.; Creus, W.; Crispoltoni, M.; Cui, Z.; Dai, Y. M.; Delgado, C.; Della Torre, S.; Demirköz, M. B.; Derome, L.; Di Falco, S.; Di Masso, L.; Dimiccoli, F.; Díaz, C.; von Doetinchem, P.; Donnini, F.; Duranti, M.; D'Urso, D.; Egorov, A.; Eline, A.; Eppling, F. J.; Eronen, T.; Fan, Y. Y.; Farnesini, L.; Feng, J.; Fiandrini, E.; Fiasson, A.; Finch, E.; Fisher, P.; Formato, V.; Galaktionov, Y.; Gallucci, G.; García, B.; García-López, R.; Gargiulo, C.; Gast, H.; Gebauer, I.; Gervasi, M.; Ghelfi, A.; Giovacchini, F.; Goglov, P.; Gong, J.; Goy, C.; Grabski, V.; Grandi, D.; Graziani, M.; Guandalini, C.; Guerri, I.; Guo, K. H.; Haas, D.; Habiby, M.; Haino, S.; Han, K. C.; He, Z. H.; Heil, M.; Hoffman, J.; Hsieh, T. H.; Huang, Z. C.; Huh, C.; Incagli, M.; Ionica, M.; Jang, W. Y.; Jinchi, H.; Kanishev, K.; Kim, G. N.; Kim, K. S.; Kirn, Th.; Korkmaz, M. A.; Kossakowski, R.; Kounina, O.; Kounine, A.; Koutsenko, V.; Krafczyk, M. S.; La Vacca, G.; Laudi, E.; Laurenti, G.; Lazzizzera, I.; Lebedev, A.; Lee, H. T.; Lee, S. C.; Leluc, C.; Li, H. L.; Li, J. Q.; Li, J. Q.; Li, Q.; Li, Q.; Li, T. X.; Li, W.; Li, Y.; Li, Z. H.; Li, Z. Y.; Lim, S.; Lin, C. 
H.; Lipari, P.; Lippert, T.; Liu, D.; Liu, H.; Liu, Hu; Lolli, M.; Lomtadze, T.; Lu, M. J.; Lu, S. Q.; Lu, Y. S.; Luebelsmeyer, K.; Luo, F.; Luo, J. Z.; Lv, S. S.; Majka, R.; Mañá, C.; Marín, J.; Martin, T.; Martínez, G.; Masi, N.; Maurin, D.; Menchaca-Rocha, A.; Meng, Q.; Mo, D. C.; Morescalchi, L.; Mott, P.; Müller, M.; Nelson, T.; Ni, J. Q.; Nikonov, N.; Nozzoli, F.; Nunes, P.; Obermeier, A.; Oliva, A.; Orcinha, M.; Palmonari, F.; Palomares, C.; Paniccia, M.; Papi, A.; Pauluzzi, M.; Pedreschi, E.; Pensotti, S.; Pereira, R.; Picot-Clemente, N.; Pilo, F.; Piluso, A.; Pizzolotto, C.; Plyaskin, V.; Pohl, M.; Poireau, V.; Putze, A.; Quadrani, L.; Qi, X. M.; Qin, X.; Qu, Z. Y.; Räihä, T.; Rancoita, P. G.; Rapin, D.; Ricol, J. S.; Rodríguez, I.; Rosier-Lees, S.; Rozhkov, A.; Rozza, D.; Sagdeev, R.; Sandweiss, J.; Saouter, P.; Schael, S.; Schmidt, S. M.; von Dratzig, A. Schulz; Schwering, G.; Scolieri, G.; Seo, E. S.; Shan, B. S.; Shan, Y. H.; Shi, J. Y.; Shi, X. Y.; Shi, Y. M.; Siedenburg, T.; Son, D.; Song, J. W.; Spada, F.; Spinella, F.; Sun, W.; Sun, W. H.; Tacconi, M.; Tang, C. P.; Tang, X. W.; Tang, Z. C.; Tao, L.; Tescaro, D.; Ting, Samuel C. C.; Ting, S. M.; Tomassetti, N.; Torsti, J.; Türkoǧlu, C.; Urban, T.; Vagelli, V.; Valente, E.; Vannini, C.; Valtonen, E.; Vaurynovich, S.; Vecchi, M.; Velasco, M.; Vialle, J. P.; Vitale, V.; Vitillo, S.; Wang, L. Q.; Wang, N. H.; Wang, Q. L.; Wang, R. S.; Wang, X.; Wang, Z. X.; Weng, Z. L.; Whitman, K.; Wienkenhöver, J.; Willenbrock, M.; Wu, H.; Wu, X.; Xia, X.; Xie, M.; Xie, S.; Xiong, R. Q.; Xu, N. S.; Xu, W.; Yan, Q.; Yang, J.; Yang, M.; Yang, Y.; Ye, Q. H.; Yi, H.; Yu, Y. J.; Yu, Z. Q.; Zeissler, S.; Zhang, C.; Zhang, J. H.; Zhang, M. T.; Zhang, S. D.; Zhang, S. W.; Zhang, X. B.; Zhang, Z.; Zheng, Z. M.; Zhuang, H. L.; Zhukov, V.; Zichichi, A.; Zimmermann, N.; Zuccon, P.; AMS Collaboration

    2015-11-01

    Knowledge of the precise rigidity dependence of the helium flux is important in understanding the origin, acceleration, and propagation of cosmic rays. A precise measurement of the helium flux in primary cosmic rays with rigidity (momentum/charge) from 1.9 GV to 3 TV based on 50 million events is presented and compared to the proton flux. The detailed variation with rigidity of the helium flux spectral index is presented for the first time. The spectral index progressively hardens at rigidities larger than 100 GV. The rigidity dependence of the helium flux spectral index is similar to that of the proton spectral index though the magnitudes are different. Remarkably, the spectral index of the proton to helium flux ratio increases with rigidity up to 45 GV and then becomes constant; the flux ratio above 45 GV is well described by a single power law.
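A "single power law" description of the flux ratio above 45 GV means ratio(R) = C·R^Δ, so the spectral index Δ is the slope of the ratio in log-log space. A minimal sketch of recovering such an index by least-squares on synthetic data; the value Δ = -0.077 and normalization below are illustrative choices, not the AMS measurement:

```python
import math

def fit_power_law(rigidity, ratio):
    """Least-squares slope/intercept in log-log space: ratio ~ C * R**delta.
    Returns (delta, C)."""
    xs = [math.log(r) for r in rigidity]
    ys = [math.log(f) for f in ratio]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    delta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return delta, math.exp(my - delta * mx)

# Synthetic p/He ratio generated from an exact power law (illustrative values)
rig = [50, 100, 300, 1000, 3000]          # rigidity bins in GV
ratio = [5.0 * r ** -0.077 for r in rig]  # ratio ~ C * R**delta
delta, c = fit_power_law(rig, ratio)
```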

  15. Validity evidence for the Fundamentals of Laparoscopic Surgery (FLS) program as an assessment tool: a systematic review.

    Science.gov (United States)

    Zendejas, Benjamin; Ruparel, Raaj K; Cook, David A

    2016-02-01

    The Fundamentals of Laparoscopic Surgery (FLS) program uses five simulation stations (peg transfer, precision cutting, loop ligation, and suturing with extracorporeal and intracorporeal knot tying) to teach and assess laparoscopic surgery skills. We sought to summarize evidence regarding the validity of scores from the FLS assessment. We systematically searched for studies evaluating the FLS as an assessment tool (last search update February 26, 2013). We classified validity evidence using the currently standard validity framework (content, response process, internal structure, relations with other variables, and consequences). From a pool of 11,628 studies, we identified 23 studies reporting validity evidence for FLS scores. Studies involved residents (n = 19), practicing physicians (n = 17), and medical students (n = 8), in specialties of general (n = 17), gynecologic (n = 4), urologic (n = 1), and veterinary (n = 1) surgery. Evidence was most common in the form of relations with other variables (n = 22, most often expert-novice differences). Only three studies reported internal structure evidence (inter-rater or inter-station reliability), two studies reported content evidence (i.e., derivation of assessment elements), and three studies reported consequences evidence (definition of pass/fail thresholds). Evidence nearly always supported the validity of FLS total scores. However, the loop ligation task lacks discriminatory ability. Validity evidence confirms expected relations with other variables and acceptable inter-rater reliability, but other validity evidence is sparse. Given the high-stakes use of this assessment (required for board eligibility), we suggest that more validity evidence is required, especially to support its content (selection of tasks and scoring rubric) and the consequences (favorable and unfavorable impact) of assessment.

  16. The United States Department of Energy's Regional Carbon Sequestration Partnerships Program Validation Phase

    Energy Technology Data Exchange (ETDEWEB)

    Litynski, J.T.; Plasynski, S.; McIlvried, H.G.; Mahoney, C.; Srivastava, R.D. [US DOE, Morgantown, WV (United States). National Energy Technology Laboratory

    2008-01-15

    This paper reviews the Validation Phase (Phase II) of the Department of Energy's Regional Carbon Sequestration Partnerships initiative. During the Validation Phase, the seven regional partnerships will put the knowledge learned during the Characterization Phase into practice through field tests that will validate carbon sequestration technologies that are best suited to their respective regions of the country. These tests will verify technologies developed through DOE's core R&D effort and enable implementation of CO₂ sequestration on a large scale, should that become necessary. Pilot projects will have a site-specific focus to test technology; assess formation storage capacity and injectivity; validate and refine existing CO₂ formation models used to determine the transport and fate of CO₂ in the formation; demonstrate the integrity of geologic seals to contain CO₂; validate monitoring, mitigation, and verification (MMV) technologies; define project costs and compare costs of alternatives; assess potential operational and long-term storage risks; address regulatory requirements; and engage and evaluate public acceptance of sequestration technologies. Field validation tests involving both sequestration in geologic formations and terrestrial sequestration are being developed. The results from the Validation Phase will help to confirm the estimates made during the Characterization Phase and will be used to update the regional atlases and NatCarb.

  17. [Validation of the structure and resources of nosocomial infection control team in hospitals ascribed to VINCat program in Catalonia, Spain].

    Science.gov (United States)

    Limón, Enrique; Pujol, Miquel; Gudiol, Francesc

    2014-07-01

    The main objective of this study was to validate the structure of the infection control teams (ICT) in the hospitals participating in the VINCat program; a secondary objective was to establish the consistency of each center's resources with the requirements established by the program. Qualitative research consisting of an ethnographic study using participant observation during the years 2008-2010. The centers were stratified into three groups by complexity and number of beds. The instrument was a semi-structured interview with members of the ICT. The transcription of the interview was sent to the informants for validation. In November 2010, a questionnaire regarding human resources and the number of hours dedicated to the ICT was sent. During 2008-2010, 65 centers joined the VINCat program. In 2010, the ICTs of Group I hospitals had a mean of two physicians, one of them full-time, and one nurse for every 230 beds; Group II had one part-time physician and one nurse per 180 beds; and Group III had a physician and a nurse for every 98 beds, both part-time. In 2010, all hospitals had a structured ICT, an operative infection committee, a hospital member representing the center in the program, and sufficient electronic resources. The hospitals participating in the VINCat program now have an adequate surveillance structure and meet the minimum technical and human resources required to provide high-quality data. However, human resources are not guaranteed.

  18. Validity and reliability of portfolio assessment of competency in a baccalaureate dental hygiene program

    Science.gov (United States)

    Gadbury-Amyot, Cynthia C.

This study examined the validity and reliability of portfolio assessment using Messick's (1996, 1995) unified framework of construct validity. Theoretical and empirical evidence was sought for six aspects of construct validity. The sample included twenty student portfolios. Each portfolio was evaluated by seven faculty raters using a primary trait analysis scoring rubric. There was a significant relationship among raters' scores (r = .81-.95), and portfolio scores were significantly related to scores on the Dental Hygiene Board Examination (r = .60), whereas the relationship with the Dental Testing Service examination was both weak and nonsignificant (r = .19; p > .05). An open-ended survey was used to elicit student feedback on portfolio development. A majority of the students (76%) perceived value in the development of programmatic portfolios. In conclusion, the pattern of findings from this study suggests that portfolios can serve as a valid and reliable measure for assessing student competency.

  19. Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals and General Validation Principles

    Science.gov (United States)

    This document was developed by the EPA to provide guidance to staff and managers regarding the EDSP universe of chemicals and general validation principles for consideration of computational toxicology tools for chemical prioritization.

  20. Reliability and validity of a novel Kinect-based software program for measuring posture, balance and side-bending.

    Science.gov (United States)

    Grooten, Wilhelmus Johannes Andreas; Sandberg, Lisa; Ressman, John; Diamantoglou, Nicolas; Johansson, Elin; Rasmussen-Barr, Eva

    2018-01-08

Clinical examinations are subjective and often show low validity and reliability. Objective and highly reliable quantitative assessments are available in laboratory settings using 3D motion analysis, but these systems are too expensive for simple clinical examinations. Qinematic™ is an interactive movement analysis system based on the Kinect camera and is an easy-to-use clinical measurement system for assessing posture, balance and side-bending. The aim of the study was to test the test-retest reliability and construct validity of Qinematic™ in a healthy population, and to calculate the minimal clinical differences for the variables of interest. A further aim was to identify the discriminative validity of Qinematic™ in people with low-back pain (LBP). We performed a test-retest reliability study (n = 37) with around 1 week between the occasions, a construct validity study (n = 30) in which Qinematic™ was tested against a 3D motion capture system, and a discriminative validity study in which a group of people with LBP (n = 20) was compared to healthy controls (n = 17). We tested a large range of psychometric properties of 18 variables in three sections: posture (head and pelvic position, weight distribution), balance (sway area and velocity in single- and double-leg stance), and side-bending. The majority of the variables in the posture and balance sections showed poor/fair reliability and validity, whereas side-bending showed excellent reliability (ICC = 0.898) and excellent validity (r = 0.943), and Qinematic™ could differentiate between people with LBP and healthy individuals (p = 0.012). This paper shows that a novel software program (Qinematic™) based on the Kinect camera for measuring balance, posture and side-bending has largely poor psychometric properties, indicating that the variables on balance and posture should not be used for monitoring individual changes over time or in research. Future research on the dynamic tasks of Qinematic™ is warranted.

  1. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    Energy Technology Data Exchange (ETDEWEB)

    Wingefors, S.; Andersson, J.; Norrby, S. [Swedish Nuclear Power lnspectorate, Stockholm (Sweden). Office of Nuclear Waste Safety; Eisenberg, N.A.; Lee, M.P.; Federline, M.V. [U.S. Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Material Safety and Safeguards; Sagar, B.; Wittmeyer, G.W. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)

    1999-03-01

Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  2. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    International Nuclear Information System (INIS)

    Wingefors, S.; Andersson, J.; Norrby, S.

    1999-03-01

Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  3. Validation of the Positive Parenting Scale (PPS) for evaluating face-to-face and online parenting support programs

    Directory of Open Access Journals (Sweden)

    Arminda Suárez

    2016-11-01

Following the study presenting the Online Parental Support Scale, as part of the evaluation of the ‘Positive Parent’ online program (http://educarenpositivo.es), this article describes the validation of a new scale that evaluates the principles of positive parenting in users of face-to-face and online parenting support programs. To validate the Positive Parenting Scale (PPS), 323 Spanish and Latin American parents enrolled in the online program participated. To obtain the factor structure, we used exploratory structural equation modeling (ESEM) with oblimin rotation, and for confirmatory purposes we used as the estimation method the Weighted Least Squares Mean and Variance Adjusted with moving measurement window (WLSMW). We also performed a ROC analysis of rating and continuous diagnostic test results by means of the area under the curve (AUC), and tested the scale by multivariate analysis of covariance (MANCOVA). The main results showed an optimal factorization of the construct involving a four-factor model with adequate reliability: family involvement, affection and recognition, communication and stress management, and shared activities. Furthermore, the discriminative capacity of the scale was demonstrated depending on the levels of Internet experience and educational use of the Internet. The scale shows adequate psychometric properties and its content covers the key aspects of the exercise of positive parenting, making it very useful for evaluating the effectiveness of programs based on this approach.
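The ROC/AUC step used above to assess discriminative capacity can be sketched without any statistics library, since the AUC equals the probability that a randomly chosen member of one group scores higher than a randomly chosen member of the other (the Mann-Whitney statistic). The scores and group labels below are invented for illustration, not data from the PPS study.

```python
# Sketch: area under the ROC curve (AUC) from the Mann-Whitney statistic.
# Scores and groups are invented illustrative data, not from the PPS study.

def auc(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical scale scores for two groups (e.g., high vs. low Internet experience).
high = [4.2, 3.9, 4.5, 3.7, 4.8]
low = [3.1, 3.9, 2.8, 3.5, 2.6]

print(round(auc(high, low), 3))  # AUC near 1.0 means good group separation
```

An AUC of 0.5 means the scale does not separate the groups at all; values near 1.0 indicate strong discriminative capacity.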

  4. Additional Support for the Information Systems Analyst Exam as a Valid Program Assessment Tool

    Science.gov (United States)

    Carpenter, Donald A.; Snyder, Johnny; Slauson, Gayla Jo; Bridge, Morgan K.

    2011-01-01

    This paper presents a statistical analysis to support the notion that the Information Systems Analyst (ISA) exam can be used as a program assessment tool in addition to measuring student performance. It compares ISA exam scores earned by students in one particular Computer Information Systems program with scores earned by the same students on the…

  5. Implicit structural inversion of gravity data using linear programming, a validation study

    NARCIS (Netherlands)

    Zon, A.T. van; Roy Chowdhury, K.

    2010-01-01

    In this study, a regional scale gravity data set has been inverted to infer the structure (topography) of the top of the basement underlying sub-horizontal strata. We apply our method to this real data set for further proof of concept, validation and benchmarking against results from an earlier

  6. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  7. Study to validate the outcome goal, competencies and educational objectives for use in intensive care orientation programs.

    Science.gov (United States)

    Boyle, M; Butcher, R; Kenney, C

    1998-03-01

    Intensive care orientation programs have become an accepted component of intensive care education. To date, however, there have been no Australian-based standards defining the appropriate level of competence to be attained upon completion of orientation. The aim of this study was to validate a set of aims, competencies and educational objectives that could form the basis of intensive care orientation and which would ensure an outcome standard of safe and effective practice. An initial document containing a statement of the desired outcome goal, six competency statements and 182 educational objectives was developed through a review of the orientation programs developed by the investigators. The Delphi technique was used to gain consensus among 13 nurses recognised for their expertise in intensive care education. The expert group rated the acceptability of each of the study items and provided suggestions for objectives to be included. An approval rating of 80 per cent was required to retain each of the study items, with the document refined through three Delphi rounds. The final document contains a validated statement of outcome goal, competencies and educational objectives for intensive care orientation programs.
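The 80 per cent approval criterion applied across the Delphi rounds above can be sketched as a simple filter over panel votes; the competency items and expert votes below are hypothetical, invented for illustration.

```python
# Sketch of the Delphi retention rule described above: keep an item only if
# at least 80% of the expert panel approves it. Items and votes are hypothetical.

APPROVAL_THRESHOLD = 0.80

def retained_items(ratings):
    """ratings maps item -> list of booleans (one approval vote per expert)."""
    kept = []
    for item, votes in ratings.items():
        if sum(votes) / len(votes) >= APPROVAL_THRESHOLD:
            kept.append(item)
    return kept

panel = {
    "demonstrates safe drug administration": [True] * 12 + [False],      # ~92%
    "interprets arterial blood gases":       [True] * 10 + [False] * 3,  # ~77%
    "performs basic airway management":      [True] * 13,                # 100%
}

print(retained_items(panel))
```

In a real Delphi study the rejected items would be revised and re-rated in the next round rather than simply discarded.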

  8. Assessment model validity document - HYDRASTAR. A stochastic continuum program for groundwater flow

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Eriksson, Lars [Equa Simulation AB, Sundbyberg (Sweden)

    2001-12-01

    The prevailing document addresses validation of the stochastic continuum model HYDRASTAR designed for Monte Carlo simulations of groundwater flow in fractured rocks. Here, validation is defined as a process to demonstrate that a model concept is fit for its purpose. Preferably, the validation is carried out by comparison of model predictions with independent field observations and experimental measurements. In addition, other sources can also be used to confirm that the model concept gives acceptable results. One method is to compare results with the ones achieved using other model concepts for the same set of input data. Another method is to compare model results with analytical solutions. The model concept HYDRASTAR has been used in several studies including performance assessments of hypothetical repositories for spent nuclear fuel. In the performance assessments, the main tasks for HYDRASTAR have been to calculate groundwater travel time distributions, repository flux distributions, path lines and their exit locations. The results have then been used by other model concepts to calculate the near field release and far field transport. The aim and framework for the validation process includes describing the applicability of the model concept for its purpose in order to build confidence in the concept. Preferably, this is made by comparisons of simulation results with the corresponding field experiments or field measurements. Here, two comparisons with experimental results are reported. In both cases the agreement was reasonably fair. In the broader and more general context of the validation process, HYDRASTAR results have been compared with other models and analytical solutions. Commonly, the approximation calculations agree well with the medians of model ensemble results. Additional indications that HYDRASTAR is suitable for its purpose were obtained from the comparisons with results from other model concepts. Several verification studies have been made for
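The kind of Monte Carlo travel-time ensemble that HYDRASTAR produces can be caricatured in a few lines: sample a hydraulic conductivity realization from a lognormal distribution, compute an advective travel time along a path, and collect the distribution. All parameter values below are invented for illustration and are not site data or HYDRASTAR's actual algorithm, which operates on a full stochastic continuum field.

```python
# Toy Monte Carlo of advective groundwater travel time over a 1-D path,
# in the spirit of a stochastic continuum ensemble. All parameters invented.
import math
import random

random.seed(1)

L = 500.0         # path length (m)
porosity = 0.01   # kinematic porosity (-)
gradient = 0.003  # hydraulic gradient (-)
mu_lnK, sigma_lnK = math.log(1e-8), 1.0  # lognormal conductivity (m/s)

def travel_time_years():
    K = random.lognormvariate(mu_lnK, sigma_lnK)
    velocity = K * gradient / porosity        # Darcy flux / porosity (m/s)
    return L / velocity / (3600 * 24 * 365)   # seconds -> years

times = sorted(travel_time_years() for _ in range(10_000))
median = times[len(times) // 2]
print(f"median travel time ~ {median:.0f} years")
```

The resulting distribution (not just the median) is what a downstream transport model would consume, as the abstract describes for the near-field release and far-field transport calculations.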

  9. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the target protein's inhibition by a small molecule, the ligand, is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to the development of new inhibitors, the accuracy of binding energy calculations should be higher than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important factors limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

This work was carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1, PMATCH: A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions, and their use in the program is discussed. When insufficient thermodynamic data are available to describe a species' behaviour under all conceivable conditions, the problems arising are thoroughly discussed and the available data are handled by approximating expressions. Part 2, The Experimental Validation of Geochemical Computer Models, presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well, and that its database was of sufficient validity. However, it was observed that experimental difficulties could hardly be avoided when, as here, a gaseous component took part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, abnormal effects were produced when manganese and calcium carbonates were mixed, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months whereby the tracer became more strongly adsorbed onto calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the 'species' NaEu(CO3)2.

  11. ANSYS program and re-validation of the thermal analysis of the Cornell silicon crystal

    International Nuclear Information System (INIS)

    Khounsary, A.; Kuzay, T.

    1992-01-01

Thermal analysis of the Cornell three-channel silicon crystal is carried out using the ANSYS finite element program. Results are in general agreement with those previously obtained using the Transient Heat Transfer, version B (THTB) program. The main thrust of the present study has been to (a) explore the potential of the ANSYS program for solving thermal-hydraulic problems in the APS beamline design, (b) compare the ANSYS results with those obtained by THTB for a specific test crystal, and (c) obtain some cost benchmarks for the ANSYS program. On the basis of a limited number of test runs for the silicon crystal problem, it can be concluded that (a) except for conduction problems with simple boundary conditions, the utility of ANSYS for solving a variety of three-dimensional thermal-hydraulic problems is at best limited, (b) in comparison with the THTB program, ANSYS requires more detailed modeling (with increasing computation time) for comparably accurate results, and (c) no firm statement regarding the cost factor can be made at this time, although the ANSYS program appears to be more expensive than any other code we have used so far.

  12. The United States Department of Energy's Regional Carbon Sequestration Partnerships Program Validation Phase.

    Science.gov (United States)

    Litynski, John T; Plasynski, Sean; McIlvried, Howard G; Mahoney, Christopher; Srivastava, Rameshwar D

    2008-01-01

This paper reviews the Validation Phase (Phase II) of the Department of Energy's Regional Carbon Sequestration Partnerships initiative. In 2003, the U.S. Department of Energy created a nationwide network of seven Regional Carbon Sequestration Partnerships (RCSP) to help determine and implement the technology, infrastructure, and regulations most appropriate to promote carbon sequestration in different regions of the nation. The objectives of the Characterization Phase (Phase I) were to characterize the geologic and terrestrial opportunities for carbon sequestration; to identify CO(2) point sources within the territories of the individual partnerships; to assess the transportation infrastructure needed for future deployment; to evaluate CO(2) capture technologies for existing and future power plants; and to identify the most promising sequestration opportunities that would need to be validated through a series of field projects. The Characterization Phase was highly successful, with the following achievements: established a national network of companies and professionals working to support sequestration deployment; created regional and national carbon sequestration atlases for the United States and portions of Canada; evaluated available and developing technologies for the capture of CO(2) from point sources; developed an improved understanding of the permitting requirements that future sequestration activities will need to address, as well as defined the gaps in permitting requirements for large-scale deployment of these technologies; raised awareness of, and support for, carbon sequestration as a greenhouse gas (GHG) mitigation option, both within industry and among the general public; identified the most promising carbon sequestration opportunities for future field tests; and established protocols for project implementation, accounting, and management. Economic evaluation was started, is continuing, and will be a factor in project selection.

  13. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    Science.gov (United States)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

Fire resistance of the same steel frame bearing structure is validated here using three different numerical models: a bar model prepared in the SAFIR environment, and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations are compared with experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results. The experimental and numerically determined estimates of the critical temperature specific to the considered frame, related to the limit state of bearing capacity in fire, have been verified as well.

  14. Development and Validation of an Online Program for Promoting Self-Management among Korean Patients with Chronic Hepatitis B

    Directory of Open Access Journals (Sweden)

    Jinhyang Yang

    2013-01-01

The hepatitis B virus is second only to tobacco as a known human carcinogen. However, chronic hepatitis B usually does not produce symptoms, and people feel healthy even in the early stages of liver cancer. Chronically infected people should therefore perceive it as a serious health problem and adopt appropriate health behaviour. The purpose of this paper is to develop and validate an online program for promoting self-management among Korean patients with chronic hepatitis B. The online program was developed using a prototyping approach and the system development life cycle method, and was evaluated by users for their satisfaction with the website and by experts for the quality of the site. To evaluate the application of the online program, knowledge and self-management compliance of the subjects were measured and compared before and after its application. There were statistically significant increases in knowledge and self-management compliance in the user group. An online program with high accessibility and applicability, including information, motivation, and behaviour skill factors, can promote self-management by patients with chronic hepatitis B. Findings from this study allow Korean patients with chronic hepatitis B to engage in proactive and effective health management in the community or clinical practice.

  15. Application and validation of predictive computer programs describing the chemistry of radionuclides in the geosphere

    International Nuclear Information System (INIS)

    Waters, M.; Duffield, J.R.; Griffiths, P.J.F.; Williams, D.R.

    1991-01-01

    Chemval is an international project concerned with improving the data used to model the speciation chemistry of radionuclide migration from underground waste disposal sites. Chemval has two main aims: to produce a reliable database of thermodynamic equilibrium constants for use in such chemical modelling; to perform a series of test-case modelling exercises based upon real site and field data to verify and validate the existing tools used for simulating the chemical speciation and the transport of radionuclides in the environment

  16. Validation of a solid-fluid interaction computer program for the earthquake analysis of nuclear power reactors

    International Nuclear Information System (INIS)

    Dubois, J.; Descleve, P.; Dupont, Y.

    1978-01-01

This paper evaluates a numerical method for the analysis of the mechanical response of nuclear reactor components composed of steel structures and fluids, during normal or accidental conditions. The method consists of computing the mode shapes and frequencies of the coupled system, with the assumption of small acoustic movements and incompressibility for the fluid. The paper validates the theory and its implementation in the computer program NOVAX (axisymmetric geometry, non-axisymmetric loads and response for earthquake response studies) by comparison with known theoretical and experimental results. (author)

  17. PBSCT is associated with poorer survival and increased chronic GvHD than BMT in Japanese paediatric patients with acute leukaemia and an HLA-matched sibling donor.

    Science.gov (United States)

    Shinzato, Aki; Tabuchi, Ken; Atsuta, Yoshiko; Inoue, Masami; Inagaki, Jiro; Yabe, Hiromasa; Koh, Katsuyoshi; Kato, Koji; Ohta, Hideaki; Kigasawa, Hisato; Kitoh, Toshiyuki; Ogawa, Atsushi; Takahashi, Yoshiyuki; Sasahara, Yoji; Kato, Shun-Ichi; Adachi, Souichi

    2013-09-01

Peripheral blood stem cells (PBSC) may be used as an alternative to bone marrow (BM) for allogeneic transplantation. Since a peripheral blood stem cell bank of unrelated volunteer donors has been started in Japan, the use of PBSC allografts may increase. We therefore surveyed the outcomes of Japanese children with leukaemia after PBSC and BM transplantation. This retrospective study compared the outcomes of 661 children (0-18 years) with acute lymphoblastic leukaemia (ALL) or acute myeloid leukaemia (AML) who received their first allogeneic peripheral blood stem cell transplantation (PBSCT; n = 90) or bone marrow transplantation (BMT; n = 571) from HLA-matched siblings between January 1996 and December 2007. Neutrophil recovery was faster after PBSCT than after BMT for both ALL and AML. Chronic GvHD was more frequent after PBSCT than after BMT (ALL: 9.9% after BMT, P = 0.0066; AML: 41.6% vs. 11.1%), and 5-year disease-free survival (DFS) for ALL was lower after PBSCT (57.1% after BMT, P = 0.0257). The 5-year overall survival (OS) was lower after PBSCT than after BMT for ALL (42.4% vs. 63.7%, P = 0.0032) and AML (49.8% vs. 71.8%, P = 0.0163). Multivariate analysis revealed that the use of PBSC was a significant risk factor for DFS and OS. PBSCT and BMT did not differ in relapse rate, in acute GvHD for ALL and AML, or in DFS for AML. PBSC allografts in Japanese children engraft faster but are associated with poorer survival and increased chronic GvHD. Copyright © 2013 Wiley Periodicals, Inc.

  18. Validation of the Monte Carlo Criticality Program KENO V.a for highly-enriched uranium systems

    International Nuclear Information System (INIS)

    Knight, J.R.

    1984-11-01

A series of calculations based on critical experiments have been performed using the KENO V.a Monte Carlo Criticality Program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds. Various geometrical shapes were used in the experiments. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach library. Results are presented in a series of tables and discussed. They show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results.

  19. Reading Success: Validation of a Specialized Literacy Program (1978-2007)

    Science.gov (United States)

    Idol, Lorna

    2010-01-01

    Reading Success is an individualized teacher-guided literacy program proven for 663 students who experienced difficulty with reading. The students had learning disabilities, mild mental retardation, and behavior challenges; were at risk for school failure; or were transitioning from speaking Spanish to English and experiencing literacy problems.…

  20. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    Science.gov (United States)

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

Nutrient profiling ranks foods based on their nutrient content and may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between the NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional-quality-for-price indicator was higher in the modeled diets. This agreement between the linear programming and nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
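The least-cost diet models described above can be sketched as a tiny linear program: minimize total food cost subject to lower bounds on nutrients. The four foods, prices, and nutrient contents below are invented for illustration (the study used the full INCA food and price database), and SciPy's `linprog` stands in for whatever solver the authors actually used.

```python
# Minimal sketch of a least-cost diet LP: minimize sum(price_i * x_i)
# subject to nutrient lower bounds. Foods, prices, and nutrient values
# are invented; scipy.optimize.linprog is an assumed stand-in solver.
import numpy as np
from scipy.optimize import linprog

# Columns: foods; rows: nutrients per 100 g (energy kcal, protein g, calcium mg).
foods = ["bread", "milk", "lentils", "cheese"]
price = np.array([0.15, 0.10, 0.25, 0.60])  # euro per 100 g
nutrients = np.array([
    [265, 64, 116, 400],   # energy (kcal)
    [9.0, 3.3, 9.0, 25.0], # protein (g)
    [30, 120, 19, 720],    # calcium (mg)
])
minimums = np.array([2000, 60, 900])  # illustrative daily requirements

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate both
# sides to express the lower bounds nutrients @ x >= minimums.
res = linprog(c=price, A_ub=-nutrients, b_ub=-minimums, bounds=[(0, None)] * 4)
assert res.success
print({f: round(x, 1) for f, x in zip(foods, res.x)})  # 100 g units per food
```

Raising the right-hand-side requirements (tighter nutritional constraints) is the analogue of the study's "increasing levels of nutritional constraints", and inspecting which foods enter the optimal basket is how the NDS:LIM profile was checked against the LP selection.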

  1. The jmzQuantML programming interface and validator for the mzQuantML data standard.

    Science.gov (United States)

    Qi, Da; Krishna, Ritesh; Jones, Andrew R

    2014-03-01

    The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. An experimental program for testing the validity of flow and transport models in unsaturated tuff: The Yucca Mountain Project

    International Nuclear Information System (INIS)

    Shephard, L.E.; Glass, R.J.; Siegel, M.D.; Tidwell, V.C.

    1990-01-01

    Groundwater flow and contaminant transport through the unsaturated zone are receiving increased attention as options for waste disposal in saturated media continue to be considered as a potential means for resolving the nation's waste management concerns. An experimental program is being developed to test the validity of conceptual flow and transport models that are being formulated to predict the long-term performance at Yucca Mountain. This program is in the developmental stage and will continue to evolve as information is acquired and knowledge is improved with reference to flow and transport in unsaturated fractured media. The general approach for directing the validation effort entails identifying those processes which may cause the site to fail relative to imposed regulatory requirements, evaluating the key assumptions underlying the conceptual models used or developed to describe these processes, and developing new conceptual models as needed. Emphasis is currently being placed in four general areas: flow and transport in unsaturated fractures; fracture-matrix interactions; infiltration flow instability; and evaluation of scale effects in heterogeneous fractured media. Preliminary results and plans for each of these areas, for both the laboratory and field investigation components, will be presented in the manuscript. 1 ref

  3. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    Science.gov (United States)

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

    A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents, "Act in case of Depression" (AiD), was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs). Before the effect analyses, AiD process data were evaluated on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed), which can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on personal files of residents, interviews of nursing home professionals, and a research database were analyzed according to the following process evaluation components: sampling quality and intervention quality. The pattern of residents' informed consent rates differed for dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and was satisfied with the program content, individual AiD components may have different feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, our first-order process data evaluation confirmed internal and external validity of the AiD trial, and this evaluation enabled further statistical fine tuning. The importance of

  4. Validation of Evidence-Based Fall Prevention Programs for Adults with Intellectual and/or Developmental Disorders: A Modified Otago Exercise Program.

    Science.gov (United States)

    Renfro, Mindy; Bainbridge, Donna B; Smith, Matthew Lee

    2016-01-01

    Evidence-based fall prevention (EBFP) programs significantly decrease fall risk, falls, and fall-related injuries in community-dwelling older adults. To date, EBFP programs are only validated for use among people with normal cognition and, therefore, are not evidence-based for adults with intellectual and/or developmental disorders (IDD) such as Alzheimer's disease and related dementias, cerebral vascular accident, or traumatic brain injury. Adults with IDD experience not only a higher rate of falls than their community-dwelling, cognitively intact peers but also higher rates and earlier onset of chronic diseases, also known to increase fall risk. Adults with IDD experience many barriers to health care and health promotion programs. As the lifespan for people with IDD continues to increase, issues of aging (including falls with associated injury) are on the rise and require effective and efficient prevention. A modified group-based version of the Otago Exercise Program (OEP) was developed and implemented at a worksite employing adults with IDD in Montana. Participants were tested pre- and post-intervention using the Center for Disease Control and Prevention's (CDC) Stopping Elderly Accidents Deaths and Injuries (STEADI) tool kit. Participants participated in progressive once weekly, 1-h group exercise classes and home programs over a 7-week period. Discharge planning with consumers and caregivers included home exercise, walking, and an optional home assessment. Despite the limited number of participants (n = 15) and short length of participation, improvements were observed in the 30-s Chair Stand Test, 4-Stage Balance Test, and 2-Minute Walk Test. Additionally, three individuals experienced an improvement in ambulation independence. Participants reported no falls during the study period. Promising results of this preliminary project underline the need for further study of this modified OEP among adults with IDD. Future multicenter study should include more

  5. On the reason for the kink in the rigidity spectra of cosmic-ray protons and helium nuclei near 230 GV

    Energy Technology Data Exchange (ETDEWEB)

    Loznikov, V. M., E-mail: loznikov@yandex.ru; Erokhin, N. S.; Zol’nikova, N. N.; Mikhailovskaya, L. A. [Russian Academy of Sciences, Space Research Institute (Russian Federation)

    2016-07-15

    A three-component phenomenological model describing the specific features of the spectrum of cosmic-ray protons and helium nuclei in the rigidity range of 30–2×10^5 GV is proposed. The first component corresponds to the constant background; the second, to the variable “soft” (30–500 GV) heliospheric source; and the third, to the variable “hard” (0.5–200 TV) source located inside a local bubble. The existence and variability of both sources are provided by the corresponding “surfatron accelerators,” whose operation requires the presence of an extended region with an almost uniform (in both magnitude and direction) magnetic field, orthogonally (or obliquely) to which electromagnetic waves propagate. The maximum energy to which cosmic rays can be accelerated is determined by the source size. The soft source with a size of ∼100 AU is located at the periphery of the heliosphere, behind the front of the solar wind shock wave. The hard source with a size of >0.1 pc is located near the boundary of an interstellar cloud at a distance of ∼0.01 pc from the Sun. The presence of a kink in the rigidity spectra of p and He near 230 GV is related to the variability of the physical conditions in the acceleration region and depends on the relation between the amplitudes and power-law exponents in the dependences of the background, the soft heliospheric source, and the hard near-galactic source. The ultrarelativistic acceleration of p and He by an electromagnetic wave propagating in space plasma across the external magnetic field is numerically analyzed. Conditions for particle trapping by the wave and the dynamics of the particle velocity and momentum components are considered. The calculations show that, in contrast to electrons and positrons (e^+), the trapped protons relatively rapidly escape from the effective potential well and cease to accelerate. Due to this effect, the p and He spectra are softer than that of e^+. The possibility that the
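As a quick numerical companion to the rigidity figures above: rigidity is R = pc/(Ze), so for protons (Z = 1) the momentum pc in GeV equals R in GV, and the kinetic energy at the ~230 GV kink follows from the relativistic energy-momentum relation. This is standard kinematics, not a calculation from the paper:

```python
import math

M_P = 0.938272  # proton rest mass, GeV

def kinetic_energy_gev(rigidity_gv, z=1):
    """Kinetic energy (GeV) of a particle of charge z and rest mass M_P
    at the given magnetic rigidity (GV), via E^2 = (pc)^2 + m^2."""
    pc = rigidity_gv * z  # pc in GeV
    return math.sqrt(pc**2 + M_P**2) - M_P

# At the ~230 GV kink, a proton is ultrarelativistic: pc >> m, so the
# kinetic energy is very close to the rigidity itself.
print(round(kinetic_energy_gev(230.0), 2))
```

For helium the charge is Z = 2 and the rest mass is ~3.73 GeV, so the same rigidity corresponds to roughly twice the momentum; rigidity (rather than energy) is the natural variable because particles of equal rigidity follow the same trajectories in a magnetic field.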

  6. Prevention validation and accounting platform: a framework for establishing accountability and performance measures of substance abuse prevention programs.

    Science.gov (United States)

    Kim, S; McLeod, J H; Williams, C; Hepler, N

    2000-01-01

    The field of substance abuse prevention has neither an overarching conceptual framework nor a set of shared terminologies for establishing the accountability and performance outcome measures of substance abuse prevention services rendered. Hence, there is a wide gap between the data we currently have on one hand and the information required to meet the performance goals and accountability measures set by the Government Performance and Results Act of 1993 on the other. The task before us is: How can we establish the accountability and performance measures of substance abuse prevention programs and transform the field of prevention into prevention science? The intent of this volume is to serve that purpose and accelerate this transformation by identifying its requisite components (i.e., theory, methodology, convention on terms, and data) and by introducing an open forum called the Prevention Validation and Accounting (PREVA) Platform. The entire PREVA Platform (for short, the Platform) is designed as an analytic framework, formulated from a collectivity of common concepts, terminologies, accounting units, protocols for counting the units, data elements, operationalizations of various constructs, and other summary measures intended to bring about an efficient and effective measurement of process input, program capacity, process output, performance outcome, and societal impact of substance abuse prevention programs. The measurement units and summary data elements are designed to be measured across time and across jurisdictions, i.e., from local to regional to state to national levels. In the Platform, the process input is captured by two dimensions of time and capital. Time is conceptualized in terms of service delivery time and time spent for research and development. Capital is measured by the monies expended for the delivery of program activities during a fiscal or reporting period.
Program capacity is captured
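The process-input accounting described above (time and capital per reporting period) could be encoded along these lines; the class and field names are hypothetical illustrations, not the Platform's actual data elements.

```python
from dataclasses import dataclass

@dataclass
class ProcessInput:
    """One reporting period's process input, per the two PREVA dimensions:
    time (split into delivery and R&D) and capital. Field names invented."""
    delivery_hours: float   # service delivery time
    rnd_hours: float        # time spent on research and development
    capital_spent: float    # monies expended during the fiscal period

    def cost_per_delivery_hour(self) -> float:
        """A simple derived accountability measure."""
        return self.capital_spent / self.delivery_hours

pi = ProcessInput(delivery_hours=1200.0, rnd_hours=300.0, capital_spent=90000.0)
print(pi.cost_per_delivery_hour())
```

Because the units are defined per reporting period, records like this could be aggregated across time and across jurisdictions (local to national), which is the comparability the Platform is after.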

  7. Validation of a Numerical Program for Analyzing Kinetic Energy Potential in the Bangka Strait, North Sulawesi, Indonesia

    Science.gov (United States)

    Rompas, P. T. D.; Taunaumang, H.; Sangari, F. J.

    2018-02-01

    The paper presents a validation of the numerical program that computes the distribution of marine current velocities in the Bangka strait and the kinetic energy potential, expressed as the distribution of available power per unit area. The numerical program uses a RANS model in which the vertical pressure distribution is assumed to be hydrostatic. The 2D and 3D numerical results were compared with measurements taken at moments of low and high tide currents, and no significant difference was found between them. The kinetic energy potential in the Bangka strait is 0.97-2.2 kW/m2 of available power per area during low tide currents and 1.02-2.1 kW/m2 during high tide currents. The results indicate the feasibility of installing marine current turbines for a power plant in the Bangka strait, North Sulawesi, Indonesia.
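The available-power-per-area figures quoted above follow from the standard kinetic power density of a flow, P/A = ½ρv³. A small sketch (the current speeds and the seawater density of 1025 kg/m³ are assumed values, chosen only to show that speeds of roughly 1.2-1.6 m/s land in the reported 1-2 kW/m² range, not measurements from the strait):

```python
RHO_SEAWATER = 1025.0  # assumed seawater density, kg/m^3

def power_density_kw_per_m2(v):
    """Available kinetic power per unit swept area, kW/m^2, for a
    current of speed v (m/s): P/A = 0.5 * rho * v**3."""
    return 0.5 * RHO_SEAWATER * v**3 / 1000.0

for v in (1.25, 1.6):  # illustrative tidal-current speeds, m/s
    print(f"v = {v} m/s -> {power_density_kw_per_m2(v):.2f} kW/m^2")
```

The cubic dependence on speed is why the low-tide and high-tide power ranges differ even for modest velocity differences; an actual turbine would extract only a fraction of this (bounded by the Betz limit).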

  8. Validity and consistency analysis of a social transformation scale for the impact evaluation of the ViraVida program

    Directory of Open Access Journals (Sweden)

    Rodrigo Campos Crivelaro

    2014-12-01

    According to estimates by the United Nations Children's Fund (UNICEF), about one million children worldwide are directly affected by sexual violence, and nearly a third of all cases occur in Brazil. The ViraVida Program acts to reduce the problem in the country, rescuing teenagers and young people in this situation by providing psychological, educational, and vocational assistance, including support for job placement and follow-up in the labor market. In this context, the main goal of the study is to analyze the validity and consistency of the Social Transformation Scale of the ViraVida Program. The study represents the second stage of the impact evaluation of the Program, designed to measure possible impacts on the employability, autonomy, self-esteem, and community and family ties of young people aged 16 to 24 in situations of sexual exploitation. The methodology is based on Factor Analysis procedures, including verification of the internal consistency of the full scale and its specific domains. Both proved consistent, with Cronbach's alpha greater than 0.7. The results provide a sound basis for the subsequent stage of the ViraVida evaluation: evaluative research on adolescents and young people in the 11 states and 14 cities where ViraVida is ongoing.
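The internal-consistency criterion used above (Cronbach's alpha > 0.7) can be computed directly from an item-response matrix; the toy Likert data below are invented for illustration, not ViraVida survey responses.

```python
# Cronbach's alpha for a set of scale items:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of responses per item (same respondents in order)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# 4 items x 6 respondents, 5-point Likert responses (made-up data):
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
    [4, 5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # values above 0.7 are conventionally "acceptable"
```

When items move together across respondents, the total-score variance dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0.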

  9. Contribution to the physical validation of computer programs for reactor cores flows

    International Nuclear Information System (INIS)

    Bourgeois, Pierre

    1998-01-01

    A κ-ε turbulence model was implemented in the FLICA computer code, which is devoted to thermal-hydraulic analysis of nuclear reactor core flows. Foreseen applications concern single-phase flows in rod bundles. First-moment closure principles are reviewed. Low-Reynolds-number wall effects are accounted for by a two-layer approach, for which a computational method for the distance from the wall had to be developed. Two two-layer κ-ε models are proposed and studied: the classical isotropic version, based on the Boussinesq hypothesis, and an original anisotropic version that assumes a non-linear relation between the Reynolds stresses and the mean deformation rate. The latter permits the treatment of the anisotropy encountered in non-circular ducts in general, and in rod bundles in particular. The turbulent solver is linearized implicit, based on a finite volume method: a VF9 scheme for the viscous part, an upwind scheme for passive scalars in the convective part, and a centered scheme for the source terms. Several numerical simulations of 2D and 3D configurations were conducted (standard validation tests, industrial application). (author) [fr]
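For readers unfamiliar with the closure being validated: in the isotropic two-layer version, the Boussinesq hypothesis ties the Reynolds stresses to the mean strain through an eddy viscosity, conventionally ν_t = C_μ k²/ε with C_μ = 0.09. A minimal sketch with illustrative values (standard k-ε algebra, not FLICA code or data):

```python
# Standard k-epsilon eddy-viscosity relation: nu_t = C_mu * k**2 / eps.
C_MU = 0.09  # conventional model constant

def eddy_viscosity(k, eps):
    """Turbulent kinematic viscosity (m^2/s) from turbulent kinetic
    energy k (m^2/s^2) and its dissipation rate eps (m^2/s^3)."""
    return C_MU * k**2 / eps

# Illustrative near-core values, invented for the example:
print(eddy_viscosity(0.05, 0.01))
```

The anisotropic variant studied in the paper replaces this single scalar viscosity with a non-linear stress-strain relation, which is what allows secondary flows in non-circular ducts to be captured.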

  10. Environmental Impact Statement. Space Nuclear Thermal Propulsion Program. Particle Bed reactor Propulsion Technology Development and Validation

    Science.gov (United States)

    1993-05-01


  11. An Inductive Logic Programming Approach to Validate Hexose Binding Biochemical Knowledge.

    Science.gov (United States)

    Nassif, Houssam; Al-Ali, Hassan; Khuri, Sawsan; Keirouz, Walid; Page, David

    2010-01-01

    Hexoses are simple sugars that play a key role in many cellular pathways, and in the regulation of development and disease mechanisms. Current protein-sugar computational models are based, at least partially, on prior biochemical findings and knowledge. They incorporate different parts of these findings in predictive black-box models. We investigate the empirical support for biochemical findings by comparing Inductive Logic Programming (ILP) induced rules to actual biochemical results. We mine the Protein Data Bank for a representative data set of hexose binding sites, non-hexose binding sites and surface grooves. We build an ILP model of hexose-binding sites and evaluate our results against several baseline machine learning classifiers. Our method achieves an accuracy similar to that of other black-box classifiers while providing insight into the discriminating process. In addition, it confirms wet-lab findings and reveals a previously unreported Trp-Glu amino acids dependency.

  12. Validation of use of the low energies library in the GATE program: assessment of the effective mass attenuation coefficient

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria Marques da; Hoff, Gabriela

    2010-01-01

    The Geant4 Application for Emission Tomography (GATE) program is a versatile toolkit for nuclear medicine simulations of SPECT and PET studies. GATE takes advantage of the well-validated libraries of physics process models, geometry description, particle tracking through materials, detector response, and visualization tools offered by Geant4 (version 4.0). One package available to simulate electromagnetic interactions is the low energy electromagnetic processes (LEP) package. The purpose of this work was to evaluate the LEP package as used by GATE 4 for nuclear medicine shielding simulations. Several simulations were made involving a monodirectional, monoenergetic 140 keV point-source beam passing through barriers of variable thickness of water and lead. The results showed good agreement with the theoretical model, indicating that GATE 4 correctly uses the LEP package. (author)
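The quantity being assessed follows from narrow-beam Beer-Lambert attenuation, I = I₀ exp(-μx), so an effective attenuation coefficient can be recovered from simulated transmission as μ_eff = -ln(I/I₀)/x. The counts and barrier thickness below are invented, not GATE output; the lead half-value layer they imply (~0.26 mm at 140 keV) is only an approximate textbook figure.

```python
import math

def mu_eff(i0, i, thickness_cm):
    """Effective linear attenuation coefficient (1/cm) recovered from
    transmitted vs incident intensity through a barrier of given thickness,
    via the narrow-beam relation I = I0 * exp(-mu * x)."""
    return -math.log(i / i0) / thickness_cm

# Hypothetical example: a lead barrier halves a 140 keV beam per ~0.026 cm:
mu = mu_eff(10000.0, 5000.0, 0.026)
rho_pb = 11.35  # lead density, g/cm^3
print(f"mu_eff = {mu:.1f} 1/cm, mass coefficient = {mu / rho_pb:.2f} cm^2/g")
```

Dividing by the material density converts the linear coefficient into the mass attenuation coefficient (cm²/g), which is the form usually tabulated and the one compared against theory in validations like this.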

  13. Verification and validation of predictive computer programs describing the near and far-field chemistry of radioactive waste disposal systems

    International Nuclear Information System (INIS)

    Read, D.; Broyd, T.W.

    1988-01-01

    This paper provides an introduction to CHEMVAL, an international project concerned with establishing the applicability of chemical speciation and coupled transport models to the simulation of realistic waste disposal situations. The project aims to validate computer-based models quantitatively by comparison with laboratory and field experiments. Verification of the various computer programs employed by research organisations within the European Community is ensured through close inter-laboratory collaboration. The compilation and review of thermodynamic data forms an essential aspect of this work and has led to the production of an internally consistent standard CHEMVAL database. The sensitivity of results to variation in fundamental constants is being monitored at each stage of the project and, where feasible, complementary laboratory studies are used to improve the data set. Currently, thirteen organisations from five countries are participating in CHEMVAL which forms part of the Commission of European Communities' MIRAGE 2 programme of research. (orig.)

  14. Validation of the Ventgraph program for use in metal/non-metal mines

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, C.J. [National Inst. for Occupational Safety and Health, Spokane, WA (United States)

    2010-07-01

    Ventgraph is ventilation software developed by the Polish Academy of Sciences. It has features similar to other ventilation programs, such as network simulation and contaminant dispersal. Its additional capabilities include mine fire simulation, compressible flow modelling, and real-time on-screen visualization of mine ventilation and fire effects. For that reason, it has been widely used around the world for studying coal mine fires, fighting fires with inert gases, spontaneous combustion, and mine emergency exercises. Ventgraph has been used to a much lesser extent in metal/non-metal (M/NM) mines. The National Institute for Occupational Safety and Health determined that applying Ventgraph to hardrock mining methods would be beneficial for studying M/NM ventilation effects, mine evacuation training, risk analysis of potential mine ventilation changes, airborne contaminants, recirculation, and mine fires. Ventgraph was used to simulate the 1972 Sunshine Mine fire, in which 91 miners perished. The Sunshine Mine was chosen because of its deep, complex ventilation system. Calibration of Ventgraph's fire simulation module against known events of the fire showed close correlation with observed contaminant levels and the real-time movement of fire combustion products through the mine. It was concluded that Ventgraph is a valuable tool for M/NM mine ventilation, fire, and evacuation planning. 13 refs., 3 figs.

  15. Validation of the REL2005 code package on Gd-poisoned PWR type assemblies through the CAMELEON experimental program

    International Nuclear Information System (INIS)

    Blaise, Patrick; Vidal, Jean-Francois; Santamarina, Alain

    2009-01-01

    This paper details the validation of Gd-poisoned 17x17 PWR lattices, through several configurations of the CAMELEON experimental program, using the newly qualified REL2005 French code package. After a general presentation of the CAMELEON program, which took place in the EOLE critical facility in Cadarache, we describe the new REL2005 code package, relying on the deterministic transport code APOLLO2.8, based on the method of characteristics (MOC), and its new CEA2005 library based on the latest JEFF-3.1.1 nuclear data evaluation. For critical masses, the average calculation-to-experiment (C/E) discrepancies on k-eff are (136 ± 80) pcm and (300 ± 76) pcm for the reference 281-group MOC and optimized 26-group MOC schemes, respectively. These values include a drastic improvement of about 250 pcm due to the change in library from JEF2.2 to JEFF3.1. For pin-by-pin radial power distributions, reference and REL2005 results are very close, with maximum discrepancies on the order of 2%, i.e., within the experimental uncertainty limits. The optimized REL2005 code package predicts the reactivity worth of the Gd clusters (averaged over 9 experimental configurations) as C/E Δρ(Gd clusters) = +1.3% ± 2.3%. (author)
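For readers unfamiliar with the units: the C/E discrepancies above are differences in reactivity, ρ = (k_eff - 1)/k_eff, expressed in pcm (10⁻⁵). A minimal sketch with invented k_eff values (chosen only so the difference lands near the quoted 136 pcm; they are not CAMELEON results):

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

# Hypothetical calculated vs experimental multiplication factors:
calc, expt = 1.00250, 1.00114
delta = reactivity_pcm(calc) - reactivity_pcm(expt)
print(f"C - E = {delta:.0f} pcm")
```

Working in reactivity rather than raw k_eff makes discrepancies additive across effects, which is why validation results like the library improvement above are quoted directly in pcm.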

  16. Validation of Evidence-Based Fall Prevention Programs for Adults with Intellectual and/or Developmental Disorders (FallPAIDD: A Modified Otago Exercise Program

    Directory of Open Access Journals (Sweden)

    Mindy Renfro

    2016-12-01

    INTRODUCTION: Evidence-based fall prevention (EBFP) programs significantly decrease fall risk, falls, and fall-related injuries in community-dwelling older adults. To date, EBFP programs are only validated for use among people with normal cognition and, therefore, are not evidence-based for adults with intellectual and/or developmental disorders (IDD) such as Alzheimer’s disease and related dementias (ADRD), cerebral vascular accident (CVA), or traumatic brain injury (TBI). BACKGROUND: Adults with IDD experience not only a higher rate of falls than their community-dwelling, cognitively intact peers, but also higher rates and earlier onset of chronic diseases, also known to increase fall risk. Adults with IDD experience many barriers to healthcare and health promotion programs. As the lifespan for people with IDD continues to increase, issues of aging (including falls with associated injury) are on the rise and require effective and efficient prevention. METHODS: A modified group-based version of the Otago Exercise Program (OEP) was developed and implemented at a worksite employing adults with IDD in Montana. Participants were tested pre and post-intervention using the Center for Disease Control and Prevention’s (CDC) STopping Elderly Accidents Deaths and Injuries (STEADI) tool kit. Participants participated in progressive once weekly, one-hour group exercise classes and home programs over a 7-week period. Discharge planning with consumers and caregivers included home exercise, walking, and an optional home assessment. RESULTS: Despite the limited number of participants (n=15) and short length of participation, improvements were observed in the 30-Second Chair Stand Test, 4-Stage Balance Test, and 2-Minute Walk Test. Additionally, three individuals experienced an improvement in ambulation independence. Participants reported no falls during the study period. DISCUSSION: Promising results of this preliminary project underline the need for further study

  17. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program : validation of results in a large multireader, multicase study

    OpenAIRE

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-01-01

    OBJECTIVES: Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience. METHODS: All women referred from the Dutch breast cancer screening program were eligible for CESM. 199 consecutive cases were viewed by ten radiologists. Four had extensive CESM experience, three had no C...

  18. GvSIG in the academic education of heterogeneous target groups – experiences in lectures, exercises and eLearning

    Directory of Open Access Journals (Sweden)

    Wolfgang Dorner

    2012-03-01

    Thanks to easier operability and a growing range of functions, open source products are increasingly being used in teaching GIS to students of various course programs. The elaboration of such courses poses the challenge of taking into account different study paths, allowing for student autonomy (e-learning), and choosing the right software. The article suggests answers to these questions by presenting the classes offered at the University of Applied Sciences in Deggendorf and the University of Passau since winter 2010/11, as well as ideas for future course offers.

  19. ¿Exito en California? A Validity Critique of Language Program Evaluations and Analysis of English Learner Test Scores

    Directory of Open Access Journals (Sweden)

    Marilyn S. Thompson

    2002-01-01

    Several states have recently faced ballot initiatives that propose to functionally eliminate bilingual education in favor of English-only approaches. Proponents of these initiatives have argued that an overall rise in standardized achievement scores of California's limited English proficient (LEP) students is largely due to the implementation of English immersion programs mandated by Proposition 227 in 1998; hence, they claim Exito en California (Success in California). However, many such arguments presented in the media were based on flawed summaries of these data. We first discuss the background, media coverage, and previous research associated with California's Proposition 227. We then present a series of validity concerns regarding the use of Stanford-9 achievement data to address policy for educating LEP students; these concerns include the language of the test, alternative explanations, sample selection, and data analysis decisions. Finally, we present a comprehensive summary of scaled-score achievement means and trajectories for California's LEP and non-LEP students for 1998-2000. Our analyses indicate that although scores have risen overall, the achievement gap between LEP and EP students does not appear to be narrowing.

  20. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program : validation of results in a large multireader, multicase study

    NARCIS (Netherlands)

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-01-01

    OBJECTIVES: Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience.

  1. Using a Non-Equivalent Groups Quasi Experimental Design to Reduce Internal Validity Threats to Claims Made by Math and Science K-12 Teacher Recruitment Programs

    Science.gov (United States)

    Moin, Laura

    2009-10-01

    The American Recovery and Reinvestment Act national policy established in 2009 calls for "meaningful data" that demonstrate educational improvements, including the recruitment of high-quality teachers. The scant data available and the low credibility of many K-12 math/science teacher recruitment program evaluations remain the major barriers for the identification of effective recruitment strategies. Our study presents a methodology to better evaluate the impact of recruitment programs on increasing participants' interest in teaching careers. The research capitalizes on the use of several control groups and presents a non-equivalent groups quasi-experimental evaluation design that produces program effect claims with higher internal validity than claims generated by current program evaluations. With this method that compares responses to a teaching career interest question from undergraduates all along a continuum from just attending an information session to participating (or not) in the recruitment program, we were able to compare the effect of the program in increasing participants' interest in teaching careers versus the evolution of the same interest but in the absence of the program. We were also able to make suggestions for program improvement and further research. While our findings may not apply to other K-12 math/science teacher recruitment programs, we believe that our evaluation methodology does and will contribute to conduct stronger program evaluations. In so doing, our evaluation procedure may inform recruitment program designers and policy makers.

  2. Ground Validation Assessments of GPM Core Observatory Science Requirements

    Science.gov (United States)

    Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail

    2017-04-01

    NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV], to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) Polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) Multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. 
For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements
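
    The bias and random-error requirements referred to above are, in practice, evaluated on matched satellite/GV rain-rate pairs. A minimal sketch under assumed definitions (relative bias of the totals; debiased RMSE normalized by the GV mean), using invented rain rates rather than actual GPM/MRMS data:

```python
import math

def bias_and_random_error(sat, gv):
    """Relative bias and normalized random error (debiased RMSE / GV mean)
    of satellite rain-rate estimates against matched GV estimates."""
    n = len(sat)
    mean_gv = sum(gv) / n
    bias = (sum(sat) - sum(gv)) / sum(gv)            # relative bias
    resid = [s - g for s, g in zip(sat, gv)]
    mean_resid = sum(resid) / n                      # remove the bias component
    rmse = math.sqrt(sum((r - mean_resid) ** 2 for r in resid) / n)
    return bias, rmse / mean_gv

sat = [2.1, 5.0, 0.9, 10.4]   # mm/h, invented satellite retrievals
gv = [2.0, 4.8, 1.0, 10.0]    # mm/h, matched ground-validation rates
print(bias_and_random_error(sat, gv))
```

    The exact error metrics used in the GPM assessments are defined in the mission science requirements; this sketch only illustrates the separation of systematic and random components.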

  3. Development of an assessment tool to measure students' perceptions of respiratory care education programs: Item generation, item reduction, and preliminary validation

    Directory of Open Access Journals (Sweden)

    Ghazi Alotaibi

    2013-01-01

    Full Text Available Objectives: Students who perceive their learning environment positively are more likely to develop effective learning strategies and adopt a deep learning approach. Currently, there is no validated instrument for measuring the educational environment of educational programs on respiratory care (RC). The aim of this study was to develop an instrument to measure students' perception of the RC educational environment. Materials and Methods: Based on a literature review and an assessment of content validity by multiple focus groups of RC educationalists, potential items of the instrument relevant to the RC educational environment construct were generated by the research group. The initial 71-item questionnaire was then field-tested on all students from the 3 RC programs in Saudi Arabia and was subjected to multi-trait scaling analysis. Cronbach's alpha was used to assess internal consistency reliabilities. Results: Two hundred and twelve students (100%) completed the survey. The initial instrument of 71 items was reduced to 65 across 5 scales. Convergent and discriminant validity assessment demonstrated that the majority of items correlated more highly with their intended scale than with a competing one. Cronbach's alpha exceeded the standard criterion of >0.70 in all scales except one. There was no floor or ceiling effect for scale or overall score. Conclusions: This instrument is the first assessment tool developed to measure the RC educational environment. There was evidence of its good feasibility, validity, and reliability. This first validation of the instrument supports its use by RC students to evaluate the educational environment.
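
    The internal-consistency statistic reported above follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up response data (not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Two perfectly parallel items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

    In practice alpha is computed per subscale, as in the study, and judged against the >0.70 criterion.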

  4. Validation of satellite data through the remote sensing techniques and the inclusion of them into agricultural education pilot programs

    Science.gov (United States)

    Papadavid, Georgios; Kountios, Georgios; Bournaris, T.; Michailidis, Anastasios; Hadjimitsis, Diofantos G.

    2016-08-01

    Nowadays, remote sensing techniques play a significant role in all fields of agricultural extension, as well as in agricultural economics and education, but they are used more specifically in hydrology. The aim of this paper is to demonstrate the use of field spectroscopy for the validation of satellite data, and how the combination of remote sensing techniques and field spectroscopy can yield more accurate results for irrigation purposes. For this reason, vegetation indices are used, which are mostly empirical equations describing vegetation parameters during the lifecycle of the crops. These numbers are generated from some combination of remote sensing bands and may have some relationship to the amount of vegetation in a given image pixel. Because most of the commonly used vegetation indices are concerned only with the red and near-infrared spectrum, and can be divided into perpendicular and ratio-based indices, the specific goal of the research is to illustrate the effect of the atmosphere on those indices, in both categories. In this frame, field spectroscopy is employed to derive the spectral signatures of different crops in the red and infrared spectrum after a campaign of ground measurements with a GER 1500 spectro-radiometer. The main indices have been calculated from satellite images taken at interval dates during the whole lifecycle of the crops. These indices were compared to those extracted from the satellite images after applying an atmospheric correction algorithm (darkest pixel) at a pre-processing level, so that the indices would be in a form comparable to those of the ground measurements. Furthermore, research was conducted concerning the perspectives of including the above-mentioned remote sensing techniques in agricultural education pilot programs.
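
    The ratio-based indices and darkest-pixel correction mentioned above can be sketched as follows. The reflectance values are illustrative only (the study used GER 1500 field spectra and satellite bands), and the correction shown is the simplest dark-object-subtraction variant:

```python
import numpy as np

def darkest_pixel_correction(band):
    # Dark-object subtraction: the darkest pixel in the scene is assumed
    # to be truly dark, so its value estimates the additive atmospheric
    # path signal, which is subtracted from the whole band.
    return band - band.min()

def ndvi(red, nir):
    # Ratio-based vegetation index: NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

red = np.array([0.12, 0.15, 0.10])   # hypothetical red reflectances
nir = np.array([0.42, 0.45, 0.48])   # hypothetical near-infrared reflectances
print(ndvi(red, nir))                 # index before atmospheric correction
print(darkest_pixel_correction(red))  # per-band additive offset removed
```

    Because NDVI is a ratio, an uncorrected additive atmospheric offset shifts its value, which is why the study compares indices before and after the darkest-pixel correction.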

  5. DNA Commission of the International Society for Forensic Genetics: Recommendations on the validation of software programs performing biostatistical calculations for forensic genetics applications.

    Science.gov (United States)

    Coble, M D; Buckleton, J; Butler, J M; Egeland, T; Fimmers, R; Gill, P; Gusmão, L; Guttman, B; Krawczak, M; Morling, N; Parson, W; Pinto, N; Schneider, P M; Sherry, S T; Willuweit, S; Prinz, M

    2016-11-01

    The use of biostatistical software programs to assist in data interpretation and calculate likelihood ratios is essential to forensic geneticists and part of the daily casework flow for both kinship and DNA identification laboratories. Previous recommendations issued by the DNA Commission of the International Society for Forensic Genetics (ISFG) covered the application of biostatistical evaluations for STR typing results in identification and kinship cases, and this is now being expanded to provide best practices regarding validation and verification of the software required for these calculations. With larger multiplexes, more complex mixtures, and increasing requests for extended family testing, laboratories are relying more than ever on specific software solutions, and sufficient validation, training and extensive documentation are of utmost importance. Here, we present recommendations for the minimum requirements to validate biostatistical software to be used in forensic genetics. We distinguish between developmental validation and the responsibilities of the software developer or provider, and the internal validation studies to be performed by the end user. Recommendations for the software provider address, for example, the documentation of the underlying models used by the software, validation data expectations, version control, implementation and training support, as well as continuity and user notifications. For the internal validations the recommendations include: creating a validation plan, requirements for the range of samples to be tested, Standard Operating Procedure development, and internal laboratory training and education. To ensure that all laboratories have access to a wide range of samples for validation and training purposes the ISFG DNA commission encourages collaborative studies and public repositories of STR typing results. Published by Elsevier Ireland Ltd.
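
    As a toy illustration of the kind of likelihood-ratio calculation such software performs (not an ISFG-validated implementation), consider a single-source identity comparison under Hardy-Weinberg assumptions, with hypothetical allele labels and frequencies:

```python
def genotype_frequency(p, q=None):
    # Hardy-Weinberg expectations: p^2 for a homozygote, 2pq for a heterozygote
    return p * p if q is None else 2 * p * q

def identity_lr(genotypes, allele_freqs):
    """LR = P(match | same source) / P(match | unrelated) = 1 / product
    of the matching genotype frequencies across independent loci."""
    lr = 1.0
    for a, b in genotypes:
        f = (genotype_frequency(allele_freqs[a]) if a == b
             else genotype_frequency(allele_freqs[a], allele_freqs[b]))
        lr /= f
    return lr

freqs = {"12": 0.1, "14": 0.2, "9": 0.25}               # hypothetical frequencies
print(identity_lr([("12", "14"), ("9", "9")], freqs))   # ≈ 400
```

    Real casework software adds corrections (e.g., for population substructure) and mixture models; validating those behaviors against known inputs like the one above is exactly the kind of internal validation the recommendations call for.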

  6. HLA-inferred extended haplotype disparity level is more relevant than the level of HLA mismatch alone for patient survival and GvHD in T cell-replete hematopoietic stem cell transplantation from unrelated donors.

    Science.gov (United States)

    Nowak, Jacek; Nestorowicz, Klaudia; Graczyk-Pol, Elzbieta; Mika-Witkowska, Renata; Rogatko-Koros, Marta; Jaskula, Emilia; Koscinska, Katarzyna; Madej, Sylwia; Tomaszewska, Agnieszka; Nasilowska-Adamska, Barbara; Szczepinski, Andrzej; Halaburda, Kazimierz; Dybko, Jaroslaw; Kuliczkowski, Kazimierz; Czerw, Tomasz; Giebel, Sebastian; Holowiecki, Jerzy; Baranska, Malgorzata; Pieczonka, Anna; Wachowiak, Jacek; Czyz, Anna; Gil, Lidia; Lojko-Dankowska, Anna; Komarnicki, Mieczyslaw; Bieniaszewska, Maria; Kucharska, Agnieszka; Hellmann, Andrzej; Gronkowska, Anna; Jedrzejczak, Wieslaw W; Markiewicz, Miroslaw; Koclega, Anna; Kyrcz-Krzemien, Slawomira; Mielcarek, Monika; Kalwak, Krzysztof; Styczynski, Jan; Wysocki, Mariusz; Drabko, Katarzyna; Wojcik, Beata; Kowalczyk, Jerzy; Gozdzik, Jolanta; Pawliczak, Daria; Gwozdowicz, Slawomir; Dziopa, Joanna; Szlendak, Urszula; Witkowska, Agnieszka; Zubala, Marta; Gawron, Agnieszka; Warzocha, Krzysztof; Lange, Andrzej

    2018-06-01

    Serious risks in unrelated hematopoietic stem cell transplantation (HSCT) including graft versus host disease (GvHD) and mortality are associated with HLA disparity between donor and recipient. The increased risks might be dependent on disparity in not-routinely-tested multiple polymorphisms in genetically dense MHC region, being organized in combinations of two extended MHC haplotypes (Ehp). We assessed the clinical role of donor-recipient Ehp disparity levels in N = 889 patients by the population-based detection of HLA allele phase mismatch. We found increased GvHD incidences and mortality rates with increasing Ehp mismatch level even with the same HLA mismatch level. In multivariate analysis HLA mismatch levels were excluded from models and Ehp disparity level remained independent prognostic factor for high grade acute GvHD (p = 0.000037, HR = 10.68, 95%CI 5.50-32.5) and extended chronic GvHD (p < 0.000001, HR = 15.51, CI95% 5.36-44.8). In group with single HLA mismatch, patients with double Ehp disparity had worse 5-year overall survival (45% vs. 56%, p = 0.00065, HR = 4.05, CI95% 1.69-9.71) and non-relapse mortality (40% vs. 31%, p = 0.00037, HR = 5.63, CI95% 2.04-15.5) than patients with single Ehp disparity. We conclude that Ehp-linked factors contribute to the high morbidity and mortality in recipients given HLA-mismatched unrelated transplant and Ehp matching should be considered in clinical HSCT. Copyright © 2018. Published by Elsevier Inc.

  7. Precision Measurement of the Proton Flux in Primary Cosmic Rays from Rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station

    Science.gov (United States)

    Aguilar, M.; Aisa, D.; Alpat, B.; Alvino, A.; Ambrosi, G.; Andeen, K.; Arruda, L.; Attig, N.; Azzarello, P.; Bachlechner, A.; Barao, F.; Barrau, A.; Barrin, L.; Bartoloni, A.; Basara, L.; Battarbee, M.; Battiston, R.; Bazo, J.; Becker, U.; Behlmann, M.; Beischer, B.; Berdugo, J.; Bertucci, B.; Bigongiari, G.; Bindi, V.; Bizzaglia, S.; Bizzarri, M.; Boella, G.; de Boer, W.; Bollweg, K.; Bonnivard, V.; Borgia, B.; Borsini, S.; Boschini, M. J.; Bourquin, M.; Burger, J.; Cadoux, F.; Cai, X. D.; Capell, M.; Caroff, S.; Casaus, J.; Cascioli, V.; Castellini, G.; Cernuda, I.; Cerreta, D.; Cervelli, F.; Chae, M. J.; Chang, Y. H.; Chen, A. I.; Chen, H.; Cheng, G. M.; Chen, H. S.; Cheng, L.; Chou, H. Y.; Choumilov, E.; Choutko, V.; Chung, C. H.; Clark, C.; Clavero, R.; Coignet, G.; Consolandi, C.; Contin, A.; Corti, C.; Gil, E. Cortina; Coste, B.; Creus, W.; Crispoltoni, M.; Cui, Z.; Dai, Y. M.; Delgado, C.; Della Torre, S.; Demirköz, M. B.; Derome, L.; Di Falco, S.; Di Masso, L.; Dimiccoli, F.; Díaz, C.; von Doetinchem, P.; Donnini, F.; Du, W. J.; Duranti, M.; D'Urso, D.; Eline, A.; Eppling, F. J.; Eronen, T.; Fan, Y. Y.; Farnesini, L.; Feng, J.; Fiandrini, E.; Fiasson, A.; Finch, E.; Fisher, P.; Galaktionov, Y.; Gallucci, G.; García, B.; García-López, R.; Gargiulo, C.; Gast, H.; Gebauer, I.; Gervasi, M.; Ghelfi, A.; Gillard, W.; Giovacchini, F.; Goglov, P.; Gong, J.; Goy, C.; Grabski, V.; Grandi, D.; Graziani, M.; Guandalini, C.; Guerri, I.; Guo, K. H.; Haas, D.; Habiby, M.; Haino, S.; Han, K. C.; He, Z. H.; Heil, M.; Hoffman, J.; Hsieh, T. H.; Huang, Z. C.; Huh, C.; Incagli, M.; Ionica, M.; Jang, W. Y.; Jinchi, H.; Kanishev, K.; Kim, G. N.; Kim, K. S.; Kirn, Th.; Kossakowski, R.; Kounina, O.; Kounine, A.; Koutsenko, V.; Krafczyk, M. S.; La Vacca, G.; Laudi, E.; Laurenti, G.; Lazzizzera, I.; Lebedev, A.; Lee, H. T.; Lee, S. C.; Leluc, C.; Levi, G.; Li, H. L.; Li, J. Q.; Li, Q.; Li, Q.; Li, T. X.; Li, W.; Li, Y.; Li, Z. H.; Li, Z. Y.; Lim, S.; Lin, C. 
H.; Lipari, P.; Lippert, T.; Liu, D.; Liu, H.; Lolli, M.; Lomtadze, T.; Lu, M. J.; Lu, S. Q.; Lu, Y. S.; Luebelsmeyer, K.; Luo, J. Z.; Lv, S. S.; Majka, R.; Mañá, C.; Marín, J.; Martin, T.; Martínez, G.; Masi, N.; Maurin, D.; Menchaca-Rocha, A.; Meng, Q.; Mo, D. C.; Morescalchi, L.; Mott, P.; Müller, M.; Ni, J. Q.; Nikonov, N.; Nozzoli, F.; Nunes, P.; Obermeier, A.; Oliva, A.; Orcinha, M.; Palmonari, F.; Palomares, C.; Paniccia, M.; Papi, A.; Pauluzzi, M.; Pedreschi, E.; Pensotti, S.; Pereira, R.; Picot-Clemente, N.; Pilo, F.; Piluso, A.; Pizzolotto, C.; Plyaskin, V.; Pohl, M.; Poireau, V.; Postaci, E.; Putze, A.; Quadrani, L.; Qi, X. M.; Qin, X.; Qu, Z. Y.; Räihä, T.; Rancoita, P. G.; Rapin, D.; Ricol, J. S.; Rodríguez, I.; Rosier-Lees, S.; Rozhkov, A.; Rozza, D.; Sagdeev, R.; Sandweiss, J.; Saouter, P.; Sbarra, C.; Schael, S.; Schmidt, S. M.; von Dratzig, A. Schulz; Schwering, G.; Scolieri, G.; Seo, E. S.; Shan, B. S.; Shan, Y. H.; Shi, J. Y.; Shi, X. Y.; Shi, Y. M.; Siedenburg, T.; Son, D.; Spada, F.; Spinella, F.; Sun, W.; Sun, W. H.; Tacconi, M.; Tang, C. P.; Tang, X. W.; Tang, Z. C.; Tao, L.; Tescaro, D.; Ting, Samuel C. C.; Ting, S. M.; Tomassetti, N.; Torsti, J.; Türkoǧlu, C.; Urban, T.; Vagelli, V.; Valente, E.; Vannini, C.; Valtonen, E.; Vaurynovich, S.; Vecchi, M.; Velasco, M.; Vialle, J. P.; Vitale, V.; Vitillo, S.; Wang, L. Q.; Wang, N. H.; Wang, Q. L.; Wang, R. S.; Wang, X.; Wang, Z. X.; Weng, Z. L.; Whitman, K.; Wienkenhöver, J.; Wu, H.; Wu, X.; Xia, X.; Xie, M.; Xie, S.; Xiong, R. Q.; Xin, G. M.; Xu, N. S.; Xu, W.; Yan, Q.; Yang, J.; Yang, M.; Ye, Q. H.; Yi, H.; Yu, Y. J.; Yu, Z. Q.; Zeissler, S.; Zhang, J. H.; Zhang, M. T.; Zhang, X. B.; Zhang, Z.; Zheng, Z. M.; Zhuang, H. L.; Zhukov, V.; Zichichi, A.; Zimmermann, N.; Zuccon, P.; Zurbach, C.; AMS Collaboration

    2015-05-01

    A precise measurement of the proton flux in primary cosmic rays with rigidity (momentum/charge) from 1 GV to 1.8 TV is presented based on 300 million events. Knowledge of the rigidity dependence of the proton flux is important in understanding the origin, acceleration, and propagation of cosmic rays. We present the detailed variation with rigidity of the flux spectral index for the first time. The spectral index progressively hardens at high rigidities.
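
    The flux spectral index described above is the local slope of the flux in log-log space, assuming flux ~ R**gamma over a narrow rigidity interval. A minimal sketch with synthetic flux values (not AMS data):

```python
import math

def local_spectral_index(r1, flux1, r2, flux2):
    # gamma = d(log flux) / d(log R) between two rigidity bins,
    # assuming a local power law flux ~ R**gamma
    return math.log(flux2 / flux1) / math.log(r2 / r1)

# Synthetic power-law flux with a known index of -2.7
r1, r2 = 10.0, 100.0                  # rigidities in GV
f1, f2 = r1 ** -2.7, r2 ** -2.7
print(local_spectral_index(r1, f1, r2, f2))
```

    "Hardening" of the spectrum, as reported for high rigidities, corresponds to this index becoming less negative with increasing R.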

  8. Multicenter Validation of a Customizable Scoring Tool for Selection of Trainees for a Residency or Fellowship Program. The EAST-IST Study.

    Science.gov (United States)

    Bosslet, Gabriel T; Carlos, W Graham; Tybor, David J; McCallister, Jennifer; Huebert, Candace; Henderson, Ashley; Miles, Matthew C; Twigg, Homer; Sears, Catherine R; Brown, Cynthia; Farber, Mark O; Lahm, Tim; Buckley, John D

    2017-04-01

    Few data have been published regarding scoring tools for selection of postgraduate medical trainee candidates that have wide applicability. The authors present a novel scoring tool developed to assist postgraduate programs in generating an institution-specific rank list derived from selected elements of the U.S. Electronic Residency Application System (ERAS) application. The authors developed and validated an ERAS and interview day scoring tool at five pulmonary and critical care fellowship programs: the ERAS Application Scoring Tool-Interview Scoring Tool. This scoring tool was then tested for intrarater correlation versus subjective rankings of ERAS applications. The process for development of the tool was performed at four other institutions, and it was performed alongside and compared with the "traditional" ranking methods at the five programs and compared with the submitted National Residency Match Program rank list. The ERAS Application Scoring Tool correlated highly with subjective faculty rankings at the primary institution (average Spearman's r = 0.77). The ERAS Application Scoring Tool-Interview Scoring Tool method correlated well with traditional ranking methodology at all five institutions (Spearman's r = 0.54, 0.65, 0.72, 0.77, and 0.84). This study validates a process for selecting and weighting components of the ERAS application and interview day to create a customizable, institution-specific tool for ranking candidates to postgraduate medical education programs. This scoring system can be used in future studies to compare the outcomes of fellowship training.
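
    A customizable application-scoring tool of this type amounts to a weighted sum of rated components followed by sorting. A minimal sketch with invented component names and weights (the actual EAST-IST components and weights are institution-specific):

```python
def composite_score(ratings, weights):
    # Weighted sum over the components an institution chooses to score
    return sum(weights[c] * ratings.get(c, 0) for c in weights)

weights = {"letters": 0.3, "research": 0.2, "interview": 0.5}   # hypothetical
applicants = {
    "A": {"letters": 8, "research": 6, "interview": 9},
    "B": {"letters": 9, "research": 9, "interview": 6},
}
rank_list = sorted(applicants,
                   key=lambda a: composite_score(applicants[a], weights),
                   reverse=True)
print(rank_list)
```

    The validation question studied in the paper is then whether such a list correlates (e.g., by Spearman's r) with the traditionally produced rank list.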

  9. El "Diario" de GV. Papini

    Directory of Open Access Journals (Sweden)

    Francesco Casnati

    1963-06-01

    Full Text Available For some time there had been news of this "Diario": Ridolfi, for example, had consulted it for the biography of its author, and extracts from it had even appeared in some newspapers. What reasons were there for delaying its publication so long? Two main and evident ones: first, that people who are treated in the "Diario" with something less than kid gloves were still living; second, that it was necessary to wait for the animosities and rejections aroused by certain sympathies of Papini to subside, sympathies which, once the war was over, revived intrigues and threats against him.

  10. Validation of a simplified food frequency questionnaire for the assessment of dietary habits in Iranian adults: Isfahan Healthy Heart Program, Iran.

    Science.gov (United States)

    Mohammadifard, Noushin; Sajjadi, Firouzeh; Maghroun, Maryam; Alikhasi, Hassan; Nilforoushzadeh, Farzaneh; Sarrafzadegan, Nizal

    2015-03-01

    Dietary assessment is the first step of dietary modification in community-based interventional programs. This study was performed to validate a simple food frequency questionnaire (SFFQ) for the assessment of selected food items in epidemiological studies with large sample sizes, as well as in community trials. This validation study was carried out on 264 healthy adults aged ≥ 41 years living in 3 central districts of Iran: Isfahan, Najafabad, and Arak. Selected food intakes were assessed using a 48-item food frequency questionnaire (FFQ). The FFQ was interviewer-administered and was completed twice: at the beginning of the study and 2 weeks thereafter. The validity of the SFFQ was examined against the amounts estimated by a single 24 h dietary recall and a 2-day dietary record. Validity was evaluated using Spearman correlation coefficients between daily frequency of consumption of food groups as assessed by the FFQ and the amount of daily food group intake assessed by the dietary reference methods. Intraclass correlation coefficients (ICC) were used to determine reproducibility. Spearman correlation coefficients between the amounts of food group intake estimated by the examined and reference methods ranged from 0.105 (P = 0.378) for pickles to 0.48 (P < 0.001) for plant protein. ICCs for the reproducibility of the FFQ were between 0.47 and 0.69 for the different food groups (P < 0.001). The designed SFFQ thus has good relative validity and reproducibility for the assessment of selected food group intakes and can serve as a valid tool in epidemiological studies and clinical trials with large numbers of participants.

  11. Validation of a simplified food frequency questionnaire for the assessment of dietary habits in Iranian adults: Isfahan Healthy Heart Program, Iran

    Directory of Open Access Journals (Sweden)

    Noushin Mohammadifard

    2015-03-01

    Full Text Available BACKGROUND: Dietary assessment is the first step of dietary modification in community-based interventional programs. This study was performed to validate a simple food frequency questionnaire (SFFQ) for the assessment of selected food items in epidemiological studies with large sample sizes, as well as in community trials. METHODS: This validation study was carried out on 264 healthy adults aged ≥ 41 years living in 3 central districts of Iran: Isfahan, Najafabad, and Arak. Selected food intakes were assessed using a 48-item food frequency questionnaire (FFQ). The FFQ was interviewer-administered and was completed twice: at the beginning of the study and 2 weeks thereafter. The validity of the SFFQ was examined against the amounts estimated by a single 24 h dietary recall and a 2-day dietary record. Validity was evaluated using Spearman correlation coefficients between daily frequency of consumption of food groups as assessed by the FFQ and the amount of daily food group intake assessed by the dietary reference methods. Intraclass correlation coefficients (ICC) were used to determine reproducibility. RESULTS: Spearman correlation coefficients between the amounts of food group intake estimated by the examined and reference methods ranged from 0.105 (P = 0.378) for pickles to 0.48 (P < 0.001) for plant protein. ICCs for the reproducibility of the FFQ were between 0.47 and 0.69 for the different food groups (P < 0.001). CONCLUSION: The designed SFFQ has good relative validity and reproducibility for the assessment of selected food group intakes. Thus, it can serve as a valid tool in epidemiological studies and clinical trials with large numbers of participants.
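
    The two statistics used above (Spearman correlation against the reference method for validity; ICC across the two administrations for reproducibility) can be sketched as follows, with made-up intake data rather than the study's:

```python
import numpy as np

def ranks(x):
    # Rank transform with average ranks for ties
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):
        r[x == v] = r[x == v].mean()
    return r

def spearman(a, b):
    # Pearson correlation of the rank-transformed data
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

def icc_consistency(Y):
    # ICC(3,1): two-way mixed, consistency, from the ANOVA decomposition
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between administrations
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

ffq = [3, 7, 1, 5, 9]          # hypothetical weekly servings from the FFQ
recall = [2, 8, 1, 6, 10]      # matched 24 h recall amounts
print(spearman(ffq, recall))   # validity against the reference method
print(icc_consistency(np.array([[3, 3], [7, 6], [1, 2], [5, 5], [9, 9]])))
```

    The ICC model variant used (consistency vs. absolute agreement) is an assumption here; the paper does not specify which ICC form was computed.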

  12. Development, validation, and utility of an instrument to assess core competencies in the Leadership Education in Neurodevelopmental and Related Disabilities (LEND) program.

    Science.gov (United States)

    Leff, Stephen S; Baum, Katherine T; Bevans, Katherine B; Blum, Nathan J

    2015-02-01

    To describe the development and psychometric evaluation of the Core Competency Measure (CCM), an instrument designed to assess professional competencies as defined by the Maternal Child Health Bureau (MCHB) and targeted by Leadership Education in Neurodevelopmental and Related Disabilities (LEND) programs. The CCM is a 44-item self-report measure comprising six subscales that assess clinical, interdisciplinary, family-centered/cultural, community, research, and advocacy/policy competencies. The CCM was developed in an iterative fashion through participatory action research, and then nine cohorts of LEND trainees (N = 144) from 14 different disciplines completed the CCM during the first week of the training program. A 6-factor confirmatory factor analysis model was fit to data from the 44 original items. After three items were removed, the model adequately fit the data (comparative fit index = .93, root mean square error of approximation = .06), with all factor loadings exceeding .55. The measure was determined to be quite reliable, as adequate internal consistency and test-retest reliability were found for each subscale. The instrument's construct validity was supported by expected differences in self-rated competencies among fellows representing various disciplines, and the convergent validity was supported by the pattern of inter-correlations between subscale scores. The CCM appears to be a reliable and valid measure of MCHB core competencies for our sample of LEND trainees. It provides an assessment of key training areas addressed by the LEND program. Although the measure was developed within only one LEND Program, with additional research it has the potential to serve as a standardized tool to evaluate the strengths and limitations of MCHB training, both within and between programs.

  13. Development and content validity of the CENA Program for Educational Training on the Neuropsychology of Learning, with an emphasis on executive functions and attention

    Science.gov (United States)

    Pureza, Janice R.; Fonseca, Rochele P.

    2017-01-01

    Introduction: The importance of executive functions (EF) in childhood development, and their role as indicators of health, well-being, and professional and academic success, has been demonstrated by several studies in the literature. EF are cognitive processes that control and manage behavior in pursuit of a specific goal, and include planning, inhibition, cognitive flexibility, (executive) attention, and the central executive component of working memory (WM). In the context of education, EF are crucial for continued learning and efficient academic performance due to their involvement in several components of the educational process. Objective: The aim of this article was to describe the development and content validity of the CENA Program for Educational Training on the Neuropsychology of Learning, with an emphasis on executive functions and attention. Methods: The study involved seven specialists (four responsible for evaluating the program, and three involved in brainstorming) and was carried out in three stages: background research on neuropsychology and education; program development through author brainstorming; and evaluation by expert judges. Results: The goals, language, and methods of the CENA Program were considered adequate, attesting to its content validity as a school-based neuropsychological intervention. Conclusion: Teacher training in school neuropsychology may be an important area for future investment and could contribute to academic achievement and student development in the Brazilian education system. PMID:29213497

  14. Cross-cultural adaptation of an adolescent HIV prevention program: social validation of social contexts and behavior among Botswana adolescents.

    Science.gov (United States)

    St Lawrence, Janet S; Seloilwe, Esther; Magowe, Mabel; Dithole, Kefalotse; Kgosikwena, Billy; Kokoro, Elija; Lesaane, Dipuo

    2013-08-01

    An evidence-based HIV prevention intervention was adapted for Botswana youth with qualitative interviews, input from an adolescent panel, and social validation. Qualitative interviews were conducted with 40 boys and girls ages 13-19. An adolescent panel then drafted scenarios reflecting social situations described in the interviews that posed risk for HIV. A social validation sample (N = 65) then indicated the prevalence and difficulty of each situation. Youth described informational needs, pressures to use alcohol and drugs, peer pressure for unprotected sex, and intergenerational sex initiations as risk-priming situations. From 17% to 57% of the social validation sample had personally experienced the situations drafted by the adolescent panel. There were no differences in the ratings of boys versus girls, but youth over age 16 more often reported that they had experienced these risky situations. The results were embedded into the intervention. Major changes to the intervention resulted from this three-phase process.

  15. Educating Physicians for Rural America: Validating Successes and Identifying Remaining Challenges With the Rural Medical Scholars Program.

    Science.gov (United States)

    Wheat, John R; Leeper, James D; Murphy, Shannon; Brandon, John E; Jackson, James R

    2018-02-01

    To evaluate the Rural Medical Scholars (RMS) Program's effectiveness to produce rural physicians for Alabama. A nonrandomized intervention study compared RMS (1997-2002) with control groups in usual medical education (1991-2002) at the University of Alabama School of Medicine's main and regional campuses. Participants were RMS and others admitted to regular medical education, and the intervention was the RMS Program. Measures assessed the percentage of graduates practicing in rural areas. Odds ratios compared effectiveness of producing rural Alabama physicians. The RMS Program (N = 54), regional campuses (N = 182), and main campus (N = 649) produced 48.1% (odds ratio 6.4, P rural physicians, respectively. The RMS Program, contrasted to other local programs of medical education, was effective in producing rural physicians. These results were comparable to benchmark programs in the Northeast and Midwest USA on which the RMS Program was modeled, justifying the assumption that model programs can be replicated in different regions. However, this positive effect was not shared by a disparate rural minority population, suggesting that models for rural medical education must be adjusted to meet the challenge of such communities for physicians. © 2017 National Rural Health Association.

  16. Ostomy Home Skills Program

    Medline Plus


  17. Comparison of Stepped Care Delivery Against a Single, Empirically Validated Cognitive-Behavioral Therapy Program for Youth With Anxiety: A Randomized Clinical Trial.

    Science.gov (United States)

    Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine

    2017-10-01

    Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Evidences of Validity of the Brazilian Scale of Learner's Attitude towards Distance Education Programs

    Science.gov (United States)

    Coelho, Francisco Antonio, Jr.; Cortat, Mariane; Flores, Clarissa Leite; Santos, Flávio Augusto Mendes; Alves, Gleidilson Costa; Faiad, Cristiane; Ramos, Wilsa Maria; Rodrigues da Silva, Alan

    2018-01-01

    Online learning is one of the fastest growing trends in educational uses of technology. In this study, an instrument to measure the social attitudes of Brazilian students toward distance education programs was developed and validated. The study population consisted of public administration undergraduate students in a program that has been provided by distance…

  19. Content and face validity of a comprehensive robotic skills training program for general surgery, urology, and gynecology.

    Science.gov (United States)

    Dulan, Genevieve; Rege, Robert V; Hogg, Deborah C; Gilberg-Fisher, Kristine K; Tesfay, Seifu T; Scott, Daniel J

    2012-04-01

    The authors previously developed a comprehensive, proficiency-based robotic training curriculum that aimed to address 23 unique skills identified via task deconstruction of robotic operations. The purpose of this study was to determine the content and face validity of this curriculum. Expert robotic surgeons (n = 12) rated each deconstructed skill regarding relevance to robotic operations, were oriented to the curricular components, performed 3 to 5 repetitions on the 9 exercises, and rated each exercise. In terms of content validity, experts rated all 23 deconstructed skills as highly relevant (4.5 on a 5-point scale). Ratings for the 9 inanimate exercises indicated moderate to thorough measurement of designated skills. For face validity, experts indicated that each exercise effectively measured relevant skills (100% agreement) and was highly effective for training and assessment (4.5 on a 5-point scale). These data indicate that the 23 deconstructed skills accurately represent the appropriate content for robotic skills training and strongly support content and face validity for this curriculum. Copyright © 2012. Published by Elsevier Inc.

  20. Using business plan development as a capstone project for MPH programs in Canada: validation through the student perspective.

    Science.gov (United States)

    Papadopoulos, Andrew; Britten, Nicole; Hatcher, Meghan; Rainville, Keira

    2013-10-01

    Master of Public Health (MPH) programs have been developed across Canada as a response to the need for adequately trained individuals to work in the public health sector. Educational institutions that deliver MPH programs have a responsibility to ensure that graduates of their program have the essential knowledge, skills and attitudes to begin a successful career in public health. The Public Health Agency of Canada has created the core competencies for public health to guide the development, delivery and evaluation of MPH programs. In Canada, a capstone project is the recommended method of evaluating the MPH graduate's ability to demonstrate proficiency in the public health core competencies. A business plan that develops the framework for a public health program is an ideal capstone project currently used in practice within the University of Guelph MPH program. This group assignment incorporates all 36 of the public health core competencies while providing students with a real-world public health experience, and should be considered for inclusion within MPH programs across Canada. Business planning provides students the opportunity to engage in practice-based learning, applying theoretical knowledge to practice. Further, the ability to develop realistic but financially feasible public health programs is an invaluable skill for MPH graduates. As the development of new programs becomes more restricted and the continuation of other programs is under constant threat, the ability to develop a sound business plan is a required skill for individuals entering the public health sector, and will ensure students are able to maximize outcomes given tight fiscal budgets and limited resources.

  1. Benchmarking and Validation of IR Signature Programs: SensorVision, CAMEO-SIM and RadThermIR

    National Research Council Canada - National Science Library

    Nelsson, Claes; Hermansson, Patrik; Winzell, Thomas; Sjoekvist, Stefan

    2005-01-01

    Computer programs for prediction of optical signatures of targets in background are valuable tools for several applications such as study of new platform concepts, new coatings, and assessments of new...

  2. Ostomy Home Skills Program

    Medline Plus


  3. Validation of photon-heating calculations in irradiation reactor with the experimental AMMON program and the CARMEN device

    International Nuclear Information System (INIS)

    Lemaire, Matthieu

    2015-01-01

The temperature in the different core structures of Material-Testing Reactors (MTRs) is a key physical parameter for MTR performance and safety. In nuclear reactors, where neutron and photon fluxes are sustained by fission chain reactions, neutrons and photons steadily deposit energy in the structures they cross, raising the temperature of those structures. In non-fissile core structures (such as material samples, experimental devices, control rods and fuel claddings), most nuclear heating is induced by photon interactions. This photon heating must therefore be calculated accurately, as it is a key input for MTR thermal studies, whose purpose is, for instance, to size the cooling power, electrical heaters and insulation gaps in MTR irradiation devices. The Jules Horowitz Reactor (JHR) is the next international MTR, under construction in the south of France at the CEA Cadarache research center (French Alternative Energies and Atomic Energy Commission). The JHR will be a major research infrastructure for testing structural-material and fuel behavior under irradiation. It will also produce from 25% to 50% of the European demand for medical radioisotopes for diagnostic purposes. High levels of nuclear heating are expected in the JHR core, with an absorbed-dose rate up to 20 watts per gram of hafnium at nominal power (100 MW). Compared to a Pressurized-Water Reactor (PWR), the JHR is made of a specific array of materials (aluminum rack, beryllium reflector, hafnium control rods), and feedback on photon-heating calculations with these features is limited. It is therefore necessary to validate photon-heating calculation tools (calculation codes and the European JEFF3.1.1 nuclear-data library) for use in the JHR, that is, to determine the biases and uncertainties relevant to the photon-heating values calculated with these tools in the JHR. This topic constitutes the core of the present

  4. Psychometric validation and reliability analysis of a Spanish version of the patient satisfaction with cancer-related care measure: a patient navigation research program study.

    Science.gov (United States)

    Jean-Pierre, Pascal; Fiscella, Kevin; Winters, Paul C; Paskett, Electra; Wells, Kristen; Battaglia, Tracy

    2012-09-01

Patient satisfaction (PS), a key measure of quality of cancer care, is a core study outcome of the multi-site National Cancer Institute-funded Patient Navigation Research Program. Despite the large number of underserved monolingual Spanish speakers (MSS) residing in the USA, there is no validated Spanish measure of PS that spans the whole spectrum of cancer-related care. The present study reports on the validation of the Patient Satisfaction with Cancer Care (PSCC) measure for Spanish (PSCC-Sp) speakers receiving diagnostic and therapeutic cancer-related care. The original PSCC items were professionally translated and back-translated to ensure cultural appropriateness, meaningfulness, and equivalence. The resulting 18-item PSCC-Sp measure was then administered to 285 MSS. We evaluated the latent structure and internal consistency of the PSCC-Sp using principal components analysis (PCA) and Cronbach's coefficient alpha (α). We used correlation analyses to demonstrate divergence and convergence of the PSCC-Sp with a Spanish version of the Patient Satisfaction with Interpersonal Relationship with Navigator (PSN-I-Sp) measure and patients' demographics. The PCA revealed a coherent set of items that explains 47% of the variance in PS. Reliability assessment demonstrated that the PSCC-Sp had high internal consistency (α = 0.92). The PSCC-Sp demonstrated good face validity, and convergent and divergent validity as indicated by a moderate correlation with the PSN-I-Sp (p = 0.003) and nonsignificant correlations with marital status and household income (all p > 0.05). The PSCC-Sp is a valid and reliable measure of PS and should be tested in other MSS populations.
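The internal-consistency figure reported above (Cronbach's α = 0.92) is a simple variance ratio over the item-score matrix. A minimal sketch in Python, using an entirely synthetic 285 × 18 matrix as a stand-in for the (unavailable) PSCC-Sp responses, so the printed value will not match the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic Likert-type data: 285 respondents x 18 items driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(285, 1))
scores = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(285, 18))), 1, 5)
print(round(cronbach_alpha(scores), 2))  # high alpha, since items share a factor
```

When every column is identical, the formula returns exactly 1, which is a convenient sanity check on an implementation.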

  5. Validation of two scales for measuring participation and perceived stigma in Chinese community-based rehabilitation programs.

    Science.gov (United States)

    Chung, Eva Yin-Han; Lam, Gigi

    2018-05-29

The World Health Organization has asserted the importance of enhancing participation of people with disabilities within the International Classification of Functioning, Disability and Health framework. Participation is regarded as a vital outcome in community-based rehabilitation, but the actualization of the right to participate is limited by social stigma and discrimination. To date, there is no validated instrument for use in Chinese communities to measure participation restriction or self-perceived stigma. This study aimed to translate and validate the Participation Scale and the Explanatory Model Interview Catalogue (EMIC) Stigma Scale for use in Chinese communities with people with physical disabilities. The Chinese versions of the Participation Scale and the EMIC stigma scale were administered to 264 adults with physical disabilities, and the two scales were examined separately. Reliability analysis was conducted to assess internal consistency and item-total correlation, in conjunction with construct validity. Exploratory factor analysis was conducted to investigate the latent patterns of relationships among variables, and a Rasch model analysis was conducted to test the dimensionality, internal validity, item hierarchy, and scoring category structure of the two scales. Both the Participation Scale and the EMIC stigma scale were confirmed to have good internal consistency and high item-total correlation. Exploratory factor analysis revealed the factor structure of the two scales, demonstrating a coherent pattern of variables within the studied construct. The Participation Scale was found to be multidimensional, whereas the EMIC stigma scale was confirmed to be unidimensional. The item hierarchies of the Participation Scale and the EMIC stigma scale were discussed and regarded as compatible with the cultural characteristics of Chinese communities. The Chinese versions of the Participation Scale and the EMIC

  6. EURISOL-DS Multi-MW Target: Experimental program associated to validation of CFD simulations of the mercury loop

    CERN Document Server

    Blumenfeld, Laure; Kadi, Yacine; Samec, Karel; Lindroos, Mats

At the core of the EURISOL project facility, the neutron source produces spallation neutrons from a proton beam impacting a dense liquid. The liquid circulates at high speed inside the source, a closed vessel with beam windows. This technical note summarises the need for the hydraulic METEX 1 and METEX 2 test data to help validate CFD simulations of turbulent liquid-metal flow with the LES model and the FEM structural model, as well as a dimensional analysis of Laser Doppler Velocimetry for cavitation measurements.

  7. Process Evaluation to Explore Internal and External Validity of the "Act in Case of Depression" Care Program in Nursing Homes

    NARCIS (Netherlands)

    Leontjevas, R.; Gerritsen, D.L.; Koopmans, R.T.C.M.; Smalbrugge, M.; Vernooij-Dassen, M.F.J.

    2012-01-01

    Background: A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs): " Act in case of Depression" (AiD). Objective: Before effect analyses, to evaluate AiD

  8. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    NARCIS (Netherlands)

    Leontjevas, R.; Gerritsen, D.L.; Koopmans, R.T.C.M.; Smalbrugge, M.; Vernooij-Dassen, M.J.F.J.

    2012-01-01

    BACKGROUND: A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs): "Act in case of Depression" (AiD). OBJECTIVE: Before effect analyses, to evaluate AiD

  9. A Preliminary Verification and Validation (V and V) Methodology for the Artifacts Programmed with a Hardware Description Language (HDL)

    International Nuclear Information System (INIS)

    Suh, Yong Suk; Keum, Jong Yong; Park, Je Youn; Jo, Ki Ho; Jo, Chang Whan

    2008-01-01

Nowadays, the FPGA (Field Programmable Gate Array) is widely used in various fields of industry. The FPGA evolved from PLD (Programmable Logic Device) technology: an FPGA provides more logic gates than a PLD, integrating millions of programmable logic gates into a chip, and offers massive, fast and reliable processing performance. A system's functions can therefore be integrated into an FPGA, yielding a SoC (System on Chip). Furthermore, an FPGA-based DSP can be built, in which the DSP functions are implemented in the FPGA. With these merits, the FPGA is also used in the nuclear industry; for example, safety-critical I and C components are manufactured with FPGAs. An FPGA is programmed with a HDL, and the quality of the artifacts programmed with a HDL affects the quality of the FPGA: if a hazardous fault exists in an FPGA artifact and is activated during operation, the fault can cause an accident. It is therefore necessary to ensure the quality of these artifacts. This paper presents a preliminary V and V methodology for HDL-programmed artifacts, for the purpose of applying it to the SMART (System-integrated Modular Advanced ReacTor) MMIS project. To this end, we reviewed the following items: - Characteristics of HDL programming - Applicable requirements for a HDL program used in safety-critical systems - Fault modes of an FPGA. Based on the review, we establish the preliminary V and V methodology

  10. Systems analysis programs for Hands-on integrated reliability evaluations (SAPHIRE) Version 5.0: Verification and validation (V ampersand V) manual. Volume 9

    International Nuclear Information System (INIS)

    Jones, J.L.; Calley, M.B.; Capps, E.L.; Zeigler, S.L.; Galyean, W.J.; Novack, S.D.; Smith, C.L.; Wolfram, L.M.

    1995-03-01

A verification and validation (V ampersand V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) Version 5.0. SAPHIRE is a set of four computer programs that the NRC developed for performing probabilistic risk assessments. They allow an analyst to perform many of the functions necessary to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs are the Integrated Reliability and Risk Analysis System (IRRAS), System Analysis and Risk Assessment (SARA), Models And Results Database (MAR-D), and the Fault tree, Event tree, and Piping and instrumentation diagram (FEP) graphical editor. The intent of this program is to perform a V ampersand V of successive versions of SAPHIRE; the previous effort was the V ampersand V of SAPHIRE Version 4.0. The SAPHIRE 5.0 V ampersand V plan is based on the SAPHIRE 4.0 plan, with revisions to incorporate lessons learned from the previous effort. Likewise, the SAPHIRE 5.0 vital and nonvital test procedures are based on the SAPHIRE 4.0 test procedures, revised to cover the new SAPHIRE 5.0 features and to incorporate lessons learned. Most results from the testing were acceptable; however, some discrepancies between expected code operation and actual code operation were identified. Modifications made to SAPHIRE are identified

  11. Assessing food selection in a health promotion program: validation of a brief instrument for American Indian children in the southwest United States.

    Science.gov (United States)

    Koehler, K M; Cunningham-Sabo, L; Lambert, L C; McCalman, R; Skipper, B J; Davis, S M

    2000-02-01

Brief dietary assessment instruments are needed to evaluate behavior changes of participants in dietary intervention programs. The purpose of this project was to design and validate an instrument for children participating in Pathways to Health, a culturally appropriate cancer-prevention curriculum: validation of a brief food selection instrument, Yesterday's Food Choices (YFC), which contained 33 questions about foods eaten the previous day with response choices of yes, no, or not sure. Reference data for validation were 24-hour dietary recalls administered individually to 120 randomly selected students. The YFC and 24-hour dietary recalls were administered to American Indian children in fifth- and seventh-grade classes in the Southwest United States. Dietary recalls were coded for food items in the YFC, and results were compared for each item using percentage agreement and the kappa statistic. Percentage agreement for all items was greater than 60%; for most items it was greater than 70%, and for several items it was greater than 80%. The amount of agreement beyond that explained by chance (kappa statistic) was generally small. Three items showed substantial agreement beyond chance (kappa > or = 0.6); 2 items showed moderate agreement (kappa = 0.40 to 0.59), and most items showed fair agreement (kappa = 0.20 to 0.39). The food items showing substantial agreement were hot or cold cereal, low-fat milk, and mutton or chile stew. Fried or scrambled eggs and deep-fried foods showed moderate agreement beyond chance. Previous development and validation of brief food selection instruments for children participating in health promotion programs has had limited success. In this study, instrument-related factors that apparently contributed to poor agreement between the YFC and the 24-hour dietary recall were the inclusion of categories of foods vs specific foods; food knowledge, preparation, and vocabulary; item length; and overreporting of attractive foods. Collecting and
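The two agreement statistics used above take only a few lines to compute. A sketch for a single hypothetical YFC item coded against the 24-hour recall; the binary vectors below are invented for illustration, not study data:

```python
def percent_agreement(a, b):
    """Fraction of paired ratings that match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (1 = yes, 0 = no) ratings of the same items."""
    po = percent_agreement(a, b)                   # observed agreement
    pa, pb = sum(a) / len(a), sum(b) / len(b)      # marginal 'yes' rates
    pe = pa * pb + (1 - pa) * (1 - pb)             # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical single YFC item: child's answer vs. coded 24-hour recall
yfc    = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
recall = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(percent_agreement(yfc, recall))       # 0.8
print(round(cohens_kappa(yfc, recall), 2))  # 0.6
```

Note how 80% raw agreement shrinks to kappa = 0.6 once chance agreement is removed, which is exactly the gap the abstract describes between percentage agreement and the kappa statistic.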

  12. Fatty acid ethyl esters (FAEEs) as markers for alcohol in meconium: method validation and implementation of a screening program for prenatal drug exposure.

    Science.gov (United States)

    Hastedt, Martin; Krumbiegel, Franziska; Gapert, René; Tsokos, Michael; Hartwig, Sven

    2013-09-01

    Alcohol consumption during pregnancy is a widespread problem and can cause severe fetal damage. As the diagnosis of fetal alcohol syndrome is difficult, the implementation of a reliable marker for alcohol consumption during pregnancy into meconium drug screening programs would be invaluable. A previously published gas chromatography mass spectrometry method for the detection of fatty acid ethyl esters (FAEEs) as alcohol markers in meconium was optimized and newly validated for a sample size of 50 mg. This method was applied to 122 cases from a drug-using population. The meconium samples were also tested for common drugs of abuse. In 73 % of the cases, one or more drugs were found. Twenty percent of the samples tested positive for FAEEs at levels indicating significant alcohol exposure. Consequently, alcohol was found to be the third most frequently abused substance within the study group. This re-validated method provides an increase in testing sensitivity, is reliable and easily applicable as part of a drug screening program. It can be used as a non-invasive tool to detect high alcohol consumption in the last trimester of pregnancy. The introduction of FAEEs testing in meconium screening was found to be of particular use in a drug-using population.

  13. Developing and establishing the validity and reliability of the perceptions toward Aviation Safety Action Program (ASAP) and Line Operations Safety Audit (LOSA) questionnaires

    Science.gov (United States)

    Steckel, Richard J.

Aviation Safety Action Program (ASAP) and Line Operations Safety Audits (LOSA) are voluntary safety reporting programs developed by the Federal Aviation Administration (FAA) to assist air carriers in discovering and fixing threats, errors and undesired aircraft states during normal flights that could result in a serious or fatal accident. These programs depend on the voluntary participation and reporting of air carrier pilots to be successful. The purpose of the study was to develop and validate a measurement scale of U.S. air carrier pilots' perceived benefits of and/or barriers to participating in ASAP and LOSA programs. Data from these surveys could be used to make changes to these programs or to correct pilot misperceptions of them, improving participation and the flow of data. ASAP and LOSA a priori models were developed based on previous research in aviation and healthcare. Sixty thousand ASAP and LOSA paper surveys were sent to 60,000 current U.S. air carrier pilots selected at random from an FAA database of pilot certificates. Two thousand usable ASAP and 1,970 usable LOSA surveys were returned and analyzed using confirmatory factor analysis. Analysis of the data using confirmatory factor analysis and model generation resulted in a five-factor ASAP model (Ease of Use, Value, Improve, Trust and Risk) and a five-factor LOSA model (Value, Improve, Program Trust, Risk and Management Trust). Because the ASAP and LOSA data were not normally distributed, bootstrapping was used. While both final models exhibited acceptable fit with approximate fit indices, the exact fit hypothesis and the Bollen-Stine p value indicated possible model mis-specification for both the ASAP and LOSA models.

  14. MSR - SPHINX concept program Eros (Experimental zero power Salt reactor SR-0) - The proposed experimental program as a basis for validation of reactor physics methods

    Energy Technology Data Exchange (ETDEWEB)

    Hron, M.; Juricek, V.; Kyncl, J.; Mikisek, M.; Rypar, V. [Nuclear Research Institute Rez plc, Rez (Czech Republic)

    2007-07-01

The Molten Salt Reactor (MSR) - SPHINX (SPent Hot fuel Incinerator by Neutron fluX) concept addresses the principal problem of spent fuel treatment by means of so-called nuclear incineration: burning the fissionable part of the inventory and transmuting other problematic radionuclides through neutron-induced nuclear reactions in an MSR-SPHINX system. This reactor system is an actinide burner (mostly in a resonance neutron spectrum) and a radionuclide transmuter in a well-thermalized neutron spectrum. The physical part of the program comprises computational analyses and experimental activities. In its first stage, the experimental program has focused on short-term irradiation of small samples of molten-salt systems, as well as structural materials proposed for the MSR blanket, in the high neutron flux of research reactors. The proposed next stage of the program will focus on large-scale experimental verification of design inputs by inserting MSR-type zones into the existing light-water-moderated experimental reactor LR-0, which may allow us to modify it into the experimental zero-power salt reactor SR-0. A detailed description of the proposed program is given in the paper, together with the experiments performed so far and their first results. These experiments also help us verify the computational codes used and recognize some anomalies related to molten fluoride utilization. (authors)

  15. CRAB-II: a computer program to predict hydraulics and scram dynamics of LMFBR control assemblies and its validation

    International Nuclear Information System (INIS)

    Carelli, M.D.; Baker, L.A.; Willis, J.M.; Engel, F.C.; Nee, D.Y.

    1982-01-01

This paper presents an analytical method, the computer code CRAB-II, which calculates the hydraulics and scram dynamics of LMFBR control assemblies of the rod-bundle type, and its validation against prototypic data obtained for the Clinch River Breeder Reactor (CRBR) primary control assemblies. The physical-mathematical model of the code is presented, followed by a description of the testing of prototypic CRBR control assemblies in water and sodium to characterize, respectively, their hydraulic and scram dynamics behavior. Comparisons of code predictions against the experimental data are presented in detail; excellent agreement was found. Also reported are experimental data and empirical correlations for the friction factor of the absorber bundle over the entire flow range (laminar to turbulent), which represent an extension of the state of the art, since only friction factor correlations for fuel and blanket assemblies were previously reported in the open literature

  16. Validation of use of the low energies library in the GATE program: assessment of the effective mass attenuation coefficient

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.; Hoff, G.

    2009-01-01

GATE (Geant4 Application for Tomographic Emission) is a versatile toolkit for nuclear medicine simulations of SPECT and PET studies. GATE takes advantage of the well-validated libraries of physics process models, geometry description, tracking of particles through materials, detector response and visualization tools offered by Geant4. One package available for simulating electromagnetic interactions is LEP (Low Energy Electromagnetic Processes). The purpose of this work was to evaluate the LEP package used by GATE 4 for nuclear medicine shielding simulations. Several simulations were made involving a monodirectional, monoenergetic 140 keV point-source beam passing through barriers of variable thickness of water and lead. The results showed good agreement with the theoretical model, indicating that GATE 4 uses the LEP package correctly. (author)
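The quantity being assessed, the effective mass attenuation coefficient, is recovered from narrow-beam transmission through the barrier. A minimal sketch; the 20% transmission figure below is an assumed illustrative number, not a result from the GATE simulations:

```python
import math

def effective_mass_attenuation(transmission, density_g_cm3, thickness_cm):
    """mu/rho (cm^2/g) from narrow-beam transmission I/I0 through a barrier,
    via the Beer-Lambert law I = I0 * exp(-(mu/rho) * rho * x)."""
    return -math.log(transmission) / (density_g_cm3 * thickness_cm)

# Illustrative only: a 10 cm water barrier transmitting 20% of a 140 keV beam
mu_rho = effective_mass_attenuation(0.20, density_g_cm3=1.0, thickness_cm=10.0)
print(round(mu_rho, 3))  # 0.161 cm^2/g
```

In a validation study of this kind, the value extracted from simulated transmission would be compared against tabulated theoretical coefficients for water and lead at the beam energy.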

  17. Research use of the AIDA www.2aida.org diabetes software simulation program: a review--part 2. Generating simulated blood glucose data for prototype validation.

    Science.gov (United States)

    Lehmann, Eldon D

    2003-01-01

    The purpose of this review is to describe research applications of the AIDA diabetes software simulator. AIDA is a computer program that permits the interactive simulation of insulin and glucose profiles for teaching, demonstration, and self-learning purposes. Since March/April 1996 it has been made freely available on the Internet as a noncommercial contribution to continuing diabetes education. Up to May 2003 well over 320,000 visits have been logged at the main AIDA Website--www.2aida.org--and over 65,000 copies of the AIDA program have been downloaded free-of-charge. This review (the second of two parts) overviews research projects and ventures, undertaken for the most part by other research workers in the diabetes computing field, that have made use of the freeware AIDA program. As with Part 1 of the review (Diabetes Technol Ther 2003;5:425-438) relevant research work was identified in three main ways: (i) by personal (e-mail/written) communications from researchers, (ii) via the ISI Web of Science citation database to identify published articles which referred to AIDA-related papers, and (iii) via searches on the Internet. Also, in a number of cases research students who had sought advice about AIDA, and diabetes computing in general, provided copies of their research dissertations/theses upon the completion of their projects. Part 2 of this review highlights some more of the research projects that have made use of the AIDA diabetes simulation program to date. A wide variety of diabetes computing topics are addressed. These range from learning about parameter interactions using simulated blood glucose data, to considerations of dietary assessments, developing new diabetes models, and performance monitoring of closed-loop insulin delivery devices. 
Other topics include evaluation/validation research usage of such software, applying simulated blood glucose data for prototype training/validation, and other research uses of placing technical information on the Web

  18. From reactive to proactive: developing a valid clinical ethics needs assessment survey to support ethics program strategic planning (part 1 of 2).

    Science.gov (United States)

    Frolic, Andrea; Jennings, Barb; Seidlitz, Wendy; Andreychuk, Sandy; Djuric-Paulin, Angela; Flaherty, Barb; Peace, Donna

    2013-03-01

    As ethics committees and programs become integrated into the "usual business" of healthcare organizations, they are likely to face the predicament of responding to greater demands for service and higher expectations, without an influx of additional resources. This situation demands that ethics committees and programs allocate their scarce resources (including their time, skills and funds) strategically, rather than lurching from one ad hoc request to another; finding ways to maximize the effectiveness, efficiency, impact and quality of ethics services is essential in today's competitive environment. How can Hospital Ethics Committees (HECs) begin the process of strategic priority-setting to ensure they are delivering services where and how they are most needed? This paper describes the creation of the Clinical Ethics Needs Assessment Survey (CENAS) as a tool to understand interprofessional staff perceptions of the organization's ethical climate, challenging ethical issues and educational priorities. The CENAS was designed to support informed resource allocation and advocacy by HECs. By sharing our process of developing and validating this ethics needs assessment survey we hope to enable strategic priority-setting in other resource-strapped ethics programs, and to empower HECs to shift their focus to more proactive, quality-focused initiatives.

  19. Validación de un programa de vigilancia de infecciones nosocomiales Validation of a nosocomial infections surveillance program

    Directory of Open Access Journals (Sweden)

    M. Sigfrido Rangel-Frausto

    1999-01-01

Full Text Available OBJECTIVE. To validate the nosocomial infections surveillance program and establish its impact on morbidity and mortality. MATERIAL AND METHODS. A specially trained physician performed intensive surveillance of all patients admitted to the hospital. Possible infection cases were discussed with two other hospital epidemiologists, and the result was compared against the routine surveillance performed by the nurses. All patients hospitalized between 11 July and 12 August 1995 who had no active infectious process, and no infection within its incubation period, at admission were included, following the CDC (Atlanta, GA) nosocomial infection definitions. Patients were followed daily, and data were recorded on age, sex and admitting diagnosis, together with information on antimicrobial treatment, isolated microorganisms and susceptibility. The final clinical status was evaluated and the length of hospital stay was estimated. RESULTS. Of 429 patients, 45 developed a nosocomial infection (cases) and 384 did not (controls). The incidence of nosocomial infections was 10.48/100. The sensitivity and specificity of the program were 93.3% and 98.7%, respectively. Mortality was 11.11% in the infected group and 2.4% in the non-infected group. Mean hospital stay was 20 days for infected and 11 days for non-infected patients.
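The reported sensitivity and specificity reduce to ratios over the 2x2 table of intensive versus routine surveillance. A sketch with back-calculated counts: 42/45 detected cases and 379/384 true negatives are assumptions consistent with the reported 93.3% and 98.7%, not figures taken from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Assumed counts consistent with the abstract's 45 cases and 384 controls
sens, spec = sensitivity_specificity(tp=42, fn=3, tn=379, fp=5)
print(round(100 * sens, 1), round(100 * spec, 1))  # 93.3 98.7
```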

  20. Laboratory research program to aid in developing and testing the validity of conceptual models for flow and transport through unsaturated porous media

    International Nuclear Information System (INIS)

    Glass, R.J.

    1991-01-01

    As part of the Yucca Mountain Project, a laboratory research program is being developed at Sandia National Laboratories that will integrate fundamental physical experimentation with conceptual model formulation and mathematical modeling and aid in subsequent model validation for unsaturated zone water and contaminant transport. Experimental systems are being developed to explore flow and transport processes and assumptions of fundamental importance to various conceptual models. Experimentation will run concurrently in two types of systems: fractured and nonfractured tuffaceous systems; and analogue systems having specific characteristics of the tuff systems but designed to maximize experimental control and resolution of data measurement. Areas in which experimentation currently is directed include infiltration flow instability, water and solute movement in unsaturated fractures, fracture-matrix interaction, and scaling laws to define effective large-scale properties for heterogeneous, fractured media. 16 refs

  1. Laboratory research program to aid in developing and testing the validity of conceptual models for flow and transport through unsaturated porous media

    International Nuclear Information System (INIS)

    Glass, R.J.

    1990-01-01

    As part of the Yucca Mountain Project, a laboratory research program is being developed at Sandia National Laboratories that will integrate fundamental physical experimentation with conceptual formulation and mathematical modeling and aid in subsequent model validation for unsaturated zone water and contaminant transport. Experimental systems are being developed to explore flow and transport processes and assumptions of fundamental importance to various conceptual models. Experimentation will run concurrently in two types of systems: fractured and nonfractured tuffaceous systems; and analogue systems having specific characteristics of the tuff systems but designed to maximize experimental control and resolution of data measurement. Questions to which experimentation currently is directed include infiltration flow instability, water and solute movement in unsaturated fractures, fracture-matrix interaction, and the definition of effective large-scale properties for heterogeneous, fractured media. 16 refs

  2. Contrast-enhanced spectral mammography in recalls from the Dutch breast cancer screening program: validation of results in a large multireader, multicase study.

    Science.gov (United States)

    Lalji, U C; Houben, I P L; Prevos, R; Gommers, S; van Goethem, M; Vanwetswinkel, S; Pijnappel, R; Steeman, R; Frotscher, C; Mok, W; Nelemans, P; Smidt, M L; Beets-Tan, R G; Wildberger, J E; Lobbes, M B I

    2016-12-01

    Contrast-enhanced spectral mammography (CESM) is a promising problem-solving tool in women referred from a breast cancer screening program. We aimed to study the validity of preliminary results of CESM using a larger panel of radiologists with different levels of CESM experience. All women referred from the Dutch breast cancer screening program were eligible for CESM. 199 consecutive cases were viewed by ten radiologists. Four had extensive CESM experience, three had no CESM experience but were experienced breast radiologists, and three were residents. All readers provided a BI-RADS score for the low-energy CESM images first, after which the score could be adjusted when viewing the entire CESM exam. BI-RADS 1-3 were considered benign and BI-RADS 4-5 malignant. With this cutoff, we calculated sensitivity, specificity and area under the ROC curve. CESM increased diagnostic accuracy in all readers. The performance for all readers using CESM was: sensitivity 96.9 % (+3.9 %), specificity 69.7 % (+33.8 %) and area under the ROC curve 0.833 (+0.188). CESM is superior to conventional mammography, with excellent problem-solving capabilities in women referred from the breast cancer screening program. Previous results were confirmed even in a larger panel of readers with varying CESM experience. • CESM is consistently superior to conventional mammography • CESM increases diagnostic accuracy regardless of a reader's experience • CESM is an excellent problem-solving tool in recalls from screening programs.
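The area under the ROC curve that the readers were scored on can be estimated directly from ordinal BI-RADS ratings via the Mann-Whitney statistic, without fitting a curve. A sketch with invented scores for five malignant and five benign lesions (not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """ROC area as P(score_pos > score_neg) + 0.5 * P(tie) over all pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Invented BI-RADS assessments: malignant lesions vs. benign lesions
malignant = [5, 4, 5, 4, 3]
benign = [1, 2, 3, 2, 1]
print(auc_from_scores(malignant, benign))  # 0.98
```

An AUC near 1 means the reader's BI-RADS scores almost always rank malignant lesions above benign ones; the BI-RADS 1-3 vs 4-5 cutoff used in the abstract is one operating point on that curve.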

  3. Culture-sensitive adaptation and validation of the community-oriented program for the control of rheumatic diseases methodology for rheumatic disease in Latin American indigenous populations.

    Science.gov (United States)

    Peláez-Ballestas, Ingris; Granados, Ysabel; Silvestre, Adriana; Alvarez-Nemegyei, José; Valls, Evart; Quintana, Rosana; Figuera, Yemina; Santiago, Flor Julian; Goñi, Mario; González, Rosa; Santana, Natalia; Nieto, Romina; Brito, Irais; García, Imelda; Barrios, Maria Cecilia; Marcano, Manuel; Loyola-Sánchez, Adalberto; Stekman, Ivan; Jorfen, Marisa; Goycochea-Robles, Maria Victoria; Midauar, Fadua; Chacón, Rosa; Martin, Maria Celeste; Pons-Estel, Bernardo A

    2014-09-01

    The purpose of the study is to validate a culturally sensitive adaptation of the community-oriented program for the control of rheumatic diseases (COPCORD) methodology in several Latin American indigenous populations. The COPCORD Spanish questionnaire was translated and back-translated into seven indigenous languages: Warao, Kariña and Chaima (Venezuela), Mixteco, Maya-Yucateco and Raramuri (Mexico) and Qom (Argentina). The questionnaire was administered to almost 100 subjects in each community with the assistance of bilingual translators. Individuals with pain, stiffness or swelling in any part of the body in the previous 7 days and/or at any point in life were evaluated by physicians to confirm a diagnosis according to criteria for rheumatic diseases. Overall, individuals did not understand the use of a 0-10 visual analog scale for pain intensity and severity grading and preferred a Likert scale comprising four items for pain intensity (no pain, minimal pain, strong pain, and intense pain). They were unable to discriminate between pain intensity and pain severity, so only pain intensity was included. For validation, 702 subjects (286 male, 416 female, mean age 42.7 ± 18.3 years) were interviewed in their own language. In the last 7 days, 198 (28.2 %) subjects reported having musculoskeletal pain, and 90 (45.4 %) of these had intense pain. Compared with the physician-confirmed diagnosis, the COPCORD questionnaire had 73.8 % sensitivity, 72.9 % specificity, a positive likelihood ratio of 2.7 and area under the receiver operating characteristic curve of 0.73. The COPCORD questionnaire is a valid screening tool for rheumatic diseases in indigenous Latin American populations.
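The reported positive likelihood ratio is not an independent quantity; it follows directly from the sensitivity and specificity above, as this short sketch shows:

```python
# Sketch: the positive likelihood ratio follows from the reported
# sensitivity and specificity of the COPCORD questionnaire.

sensitivity = 0.738   # 73.8 %
specificity = 0.729   # 72.9 %

# LR+ = P(positive screen | disease) / P(positive screen | no disease)
lr_positive = sensitivity / (1.0 - specificity)

print(round(lr_positive, 1))  # 2.7, matching the reported value
```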

  4. The X-Ray Pebble Recirculation Experiment (X-PREX): Facility Description, Preliminary Discrete Element Method Simulation Validation Studies, and Future Test Program

    International Nuclear Information System (INIS)

    Laufer, Michael R.; Bickel, Jeffrey E.; Buster, Grant C.; Krumwiede, David L.; Peterson, Per F.

    2014-01-01

    This paper presents a facility description, preliminary results, and future test program of the new X-Ray Pebble Recirculation Experiment (X-PREX), which is now operational and being used to collect data on the behavior of slow dense granular flows relevant to pebble bed reactor core designs. The X-PREX facility uses digital x-ray tomography methods to track both the translational and rotational motion of spherical pebbles, which provides unique experimental results that can be used to validate discrete element method (DEM) simulations of pebble motion. The validation effort supported by the X-PREX facility provides a means to build confidence in analysis of pebble bed configuration and residence time distributions that impact the neutronics, thermal hydraulics, and safety analysis of pebble bed reactor cores. Preliminary experimental and DEM simulation results are reported for silo drainage, a classical problem in the granular flow literature, at several hopper angles. These studies include conventional converging and novel diverging geometries that provide additional flexibility in the design of pebble bed reactor cores. Excellent agreement is found between the X-PREX experimental and DEM simulation results. Finally, this paper discusses additional studies in progress relevant to the design and analysis of pebble bed reactor cores including pebble recirculation in cylindrical core geometries and evaluation of forces on shut down blades inserted directly into a packed pebble bed. (author)

  5. Empirical validation of building simulation programs - Swiss contribution to IEA Task 34, Annex 43; Empirische Validierung von Gebaeudesimulationsprogrammen. Schweizer Beitrag zu IEA Task 34 / Annex 43. Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Loutzenhiser, P.; Manz, H. (eds.)

    2006-11-15

This comprehensive, illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on work carried out on the validation of building simulation programs. The purpose of this project was to create a data set for use when evaluating the accuracy of models for glazing units and windows with and without shading devices. A series of eight experiments of successively increasing complexity were performed in an outdoor test cell located on the Swiss Federal Laboratories for Materials Testing and Research (EMPA) campus in Duebendorf, Switzerland. Particular emphasis was placed on accurately determining the test cell characteristics. The report presents information on the experimental set-ups, their validation and the methodology used. Further chapters describe the particular experiments made, including transient characterisation, evaluation of irradiation models on tilted facades, as well as experiments on glazing units with various types of shading and blinds. The thermal properties of windows are also examined. The results of experiments made with four different models, HELIOS, EnergyPlus, DOE-2.1E and IDA-ICE, are discussed.

  6. Internal validity of a household food security scale is consistent among diverse populations participating in a food supplement program in Colombia

    Directory of Open Access Journals (Sweden)

    Melgar-Quinonez Hugo

    2008-05-01

Full Text Available Abstract Objective We assessed the validity of a locally adapted Colombian Household Food Security Scale (CHFSS) used as a part of the 2006 evaluation of the food supplement component of the Plan for Improving Food and Nutrition in Antioquia, Colombia (MANA – Plan Departamental de Seguridad Alimentaria y Nutricional de Antioquia). Methods Subjects included low-income families with pre-school age children in MANA that responded affirmatively to at least one CHFSS item (n = 1,319). Rasch Modeling was used to evaluate the psychometric characteristics of the items through measure and INFIT values. Differences in CHFSS performance were assessed by area of residency, socioeconomic status and number of children enrolled in MANA. Unidimensionality of the scale by group was further assessed using Differential Item Functioning (DIF). Results Most CHFSS items presented good fitness, with most INFIT values within the adequate range of 0.8 to 1.2. Consistency in item measure values between groups was found for all but two items in the comparison by area of residency. Only two adult items exhibited DIF between urban and rural households. Conclusion The results indicate that the adapted CHFSS is a valid tool to assess the household food security of participants in food assistance programs like MANA.

  7. Internal validity of a household food security scale is consistent among diverse populations participating in a food supplement program in Colombia.

    Science.gov (United States)

    Hackett, Michelle; Melgar-Quinonez, Hugo; Uribe, Martha C Alvarez

    2008-05-23

    We assessed the validity of a locally adapted Colombian Household Food Security Scale (CHFSS) used as a part of the 2006 evaluation of the food supplement component of the Plan for Improving Food and Nutrition in Antioquia, Colombia (MANA - Plan Departamental de Seguridad Alimentaria y Nutricional de Antioquia). Subjects included low-income families with pre-school age children in MANA that responded affirmatively to at least one CHFSS item (n = 1,319). Rasch Modeling was used to evaluate the psychometric characteristics of the items through measure and INFIT values. Differences in CHFSS performance were assessed by area of residency, socioeconomic status and number of children enrolled in MANA. Unidimensionality of a scale by group was further assessed using Differential Item Functioning (DIF). Most CHFSS items presented good fitness with most INFIT values within the adequate range of 0.8 to 1.2. Consistency in item measure values between groups was found for all but two items in the comparison by area of residency. Only two adult items exhibited DIF between urban and rural households. The results indicate that the adapted CHFSS is a valid tool to assess the household food security of participants in food assistance programs like MANA.

  8. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    Science.gov (United States)

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
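The position errors quoted above are root-mean-square deviations between the Kinect-tracked trajectory and the stereo-photogrammetric gold standard. A minimal sketch of that comparison; the two short trajectories below (in millimetres) are hypothetical:

```python
# Sketch: RMSD between a tracked trajectory and a gold-standard
# reference, as used to validate the Kinect-based foot tracker.
# The sample trajectories are hypothetical (millimetres).
import math

def rmsd(tracked, reference):
    """Root-mean-square deviation between two equal-length series."""
    n = len(tracked)
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(tracked, reference)) / n)

kinect = [101.0, 104.5, 99.0, 97.5]
mocap  = [100.0, 105.0, 100.0, 98.0]
err = rmsd(kinect, mocap)
```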

  9. Validity of air-displacement plethysmography in the assessment of body composition changes in a 16-month weight loss program

    Directory of Open Access Journals (Sweden)

    Hull Holly R

    2006-08-01

Full Text Available Abstract Objective To compare the accuracy of air displacement plethysmography (ADP) and dual energy x-ray absorptiometry (DXA) in tracking changes in body composition after a 16-month weight loss intervention in overweight and obese females. Methods 93 healthy female subjects (38.9 ± 5.7 yr, 159.8 ± 5.6 cm, 76.7 ± 9.9 kg, 30.0 ± 3.4 kg/m2) completed a 16-month weight loss intervention. Eligible subjects attended 15 treatment sessions over the course of 4 months, with educational content including topics relating to physical activity and exercise, diet and eating behavior, and behavior modification. In the remaining 12 months, subjects underwent a lifestyle program designed to increase physical activity and improve eating habits. Before and after the intervention, subjects had their percent body fat (%fat), fat mass (FM), and fat-free mass (FFM) assessed by DXA and ADP. Results Significant differences (p ≤ 0.001) were found between DXA and ADP at baseline for %fat (46.0 % fat vs. 42.0 % fat), FM (35.3 kg vs. 32.5 kg) and FFM (40.8 kg vs. 44.2 kg), as well as at post intervention for %fat (42.1 % fat vs. 38.3 % fat), FM (30.9 kg vs. 28.4 kg) and FFM (41.7 kg vs. 44.7 kg). At each time point, ADP %fat and total FM were significantly lower (p ≤ 0.001) than DXA, while FFM was significantly higher (p ≤ 0.001). However, both techniques tracked %fat changes similarly, considering that there were no differences between the two mean changes. Furthermore, a Bland-Altman analysis was performed and no significant bias was observed, thus demonstrating the ability of ADP to measure body fat across a wide range of fatness. Conclusion At baseline and post weight loss, a significant difference was found between ADP and DXA. However, the results indicate both methods are highly related and track changes in %fat similarly after a weight loss program in overweight and obese females. Additionally, the mean changes in %fat were similar between the two techniques, suggesting
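The Bland-Altman analysis mentioned above summarizes method agreement as the mean of the paired differences (the bias) with 95 % limits of agreement at bias ± 1.96 SD. A minimal sketch; the paired %fat values below are hypothetical, not data from the study:

```python
# Sketch: Bland-Altman bias and 95 % limits of agreement for two
# body-composition methods. The paired %fat values are hypothetical.
import statistics

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    # 95 % limits of agreement
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

adp = [42.0, 38.5, 45.2, 40.1]   # %fat by ADP
dxa = [46.0, 42.1, 48.9, 44.3]   # %fat by DXA
bias, lo, hi = bland_altman(adp, dxa)
```

"No significant bias" in the abstract means the confidence interval for `bias` includes zero and the differences show no trend across the range of measurement.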

  10. Validity of the international physical activity questionnaire and the Singapore prospective study program physical activity questionnaire in a multiethnic urban Asian population

    Directory of Open Access Journals (Sweden)

    Tai E Shyong

    2011-10-01

Full Text Available Abstract Background Physical activity patterns of a population remain mostly assessed by questionnaires. However, few physical activity questionnaires have been validated in Asian populations. We previously utilized a combination of different questionnaires to assess leisure time, transportation, occupational and household physical activity in the Singapore Prospective Study Program (SP2). The International Physical Activity Questionnaire (IPAQ) has been developed for a similar purpose. In this study, we compared estimates from these two questionnaires with an objective measure of physical activity in a multi-ethnic Asian population. Methods Physical activity was measured in 152 Chinese, Malay and Asian Indian adults using an accelerometer over five consecutive days, including a weekend. Participants completed both the physical activity questionnaire used in SP2 (SP2PAQ) and the IPAQ long form. 43 subjects underwent a second set of measurements on average 6 months later to assess reproducibility of the questionnaires and the accelerometer measurements. Spearman correlations were used to evaluate validity and reproducibility, and correlations for validity were corrected for within-person variation of the accelerometer measurements. Agreement between the questionnaires and the accelerometer measurements was also evaluated using Bland-Altman plots. Results The corrected correlation with accelerometer estimates of energy expenditure from physical activity was better for the SP2PAQ (vigorous activity: r = 0.73; moderate activity: r = 0.27) than for the IPAQ (vigorous activity: r = 0.31; moderate activity: r = 0.15). For moderate activity, the corrected correlation between SP2PAQ and the accelerometer was higher for Chinese (r = 0.38) and Malays (r = 0.57) than for Indians (r = -0.09). Both questionnaires overestimated energy expenditure from physical activity to a greater extent at higher levels of physical activity than at lower levels of physical activity. The

  11. Advancement of compressible multiphase flows and sodium-water reaction analysis program SERAPHIM. Validation of a numerical method for the simulation of highly underexpanded jets

    International Nuclear Information System (INIS)

    Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira

    2010-01-01

    SERAPHIM is a computer program for the simulation of the compressible multiphase flow involving the sodium-water chemical reaction under a tube failure accident in a steam generator of sodium cooled fast reactors. In this study, the numerical analysis of the highly underexpanded air jets into the air or into the water was performed as a part of validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method considering a compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate the multiphase flow including supersonic gaseous jets. In the case of the air jet into the air, the calculated pressure, the shape of the jet and the location of a Mach disk agreed with the existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into the water was also reproduced successfully by the proposed numerical method. (author)

  12. Programming

    International Nuclear Information System (INIS)

    Jackson, M.A.

    1982-01-01

    The programmer's task is often taken to be the construction of algorithms, expressed in hierarchical structures of procedures: this view underlies the majority of traditional programming languages, such as Fortran. A different view is appropriate to a wide class of problem, perhaps including some problems in High Energy Physics. The programmer's task is regarded as having three main stages: first, an explicit model is constructed of the reality with which the program is concerned; second, this model is elaborated to produce the required program outputs; third, the resulting program is transformed to run efficiently in the execution environment. The first two stages deal in network structures of sequential processes; only the third is concerned with procedure hierarchies. (orig.)

  13. Programming

    OpenAIRE

    Jackson, M A

    1982-01-01

    The programmer's task is often taken to be the construction of algorithms, expressed in hierarchical structures of procedures: this view underlies the majority of traditional programming languages, such as Fortran. A different view is appropriate to a wide class of problem, perhaps including some problems in High Energy Physics. The programmer's task is regarded as having three main stages: first, an explicit model is constructed of the reality with which the program is concerned; second, thi...

  14. Relative Validity and Reliability of a 1-Week, Semiquantitative Food Frequency Questionnaire for Women Participating in the Supplemental Nutrition Assistance Program.

    Science.gov (United States)

    Sanjeevi, Namrata; Freeland-Graves, Jeanne; George, Goldy Chacko

    2017-12-01

    The Supplemental Nutrition Assistance Program (SNAP) plays a critical role in reducing food insecurity by distribution of benefits at a monthly interval to participants. Households that receive assistance from SNAP spend at least three-quarters of benefits within the first 2 weeks of receipt. Because this expenditure pattern may be associated with lower food intake toward the end of the month, it is important to develop a tool that can assess the weekly diets of SNAP participants. The goal of this study was to develop and assess the relative validity and reliability of a semiquantitative 1-week food frequency questionnaire (FFQ) tailored to a population of women participating in SNAP. The FFQ was derived from an existing 195-item FFQ that was based on a reference period of 1 month. This 195-item FFQ has been validated in a population of low-income postpartum women who were recruited from central Texas during 2004. Mean daily servings of each food item in the 195-item FFQ completed by women who took part in the 2004 validation study were calculated to determine the most frequently consumed food items. Emphasis on these items led to the creation of a shorter, 1-week FFQ of only 95 items. This new 1-week instrument was compared with 3-day diet records to evaluate relative validity in a sample of women participating in SNAP. For reliability, the FFQ was administered a second time, separated by a 1-month time interval. The validity study included 70 female SNAP participants who were recruited from the partner agencies of the Central Texas Food Bank from March to June 2015. A subsample of 40 women participated in the reliability study. Outcome measures were mean nutrient intake values obtained from the two tests of the 95-item FFQ and 3-day diet records. Deattenuated Pearson correlation coefficients examined relationships in nutrient intake between the 95-item FFQ and 3-day diet records, and a paired samples t test determined differences in mean nutrient intake. 
Weighted
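The deattenuated Pearson correlations used in the study above correct the observed FFQ-vs-record correlation for day-to-day (within-person) variation in the diet records, which otherwise biases the correlation toward zero. A common form of this correction (the numbers below are hypothetical, not values from the study):

```python
# Sketch: deattenuating an observed FFQ vs. diet-record correlation
# for within-person variation in the records, using
#   r_true = r_obs * sqrt(1 + lambda / n)
# where lambda is the within- to between-person variance ratio of the
# records and n is the number of record days. All values hypothetical.
import math

def deattenuate(r_obs, var_ratio, n_days):
    return r_obs * math.sqrt(1.0 + var_ratio / n_days)

# e.g. observed r = 0.40, variance ratio 1.5, 3-day diet records
r = deattenuate(0.40, var_ratio=1.5, n_days=3)
```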

  15. An app for patient education and self-audit within an enhanced recovery program for bowel surgery: a pilot study assessing validity and usability.

    Science.gov (United States)

    Pecorelli, Nicolò; Fiore, Julio F; Kaneva, Pepa; Somasundram, Abarna; Charlebois, Patrick; Liberman, A Sender; Stein, Barry L; Carli, Franco; Feldman, Liane S

    2018-05-01

    While patient engagement and clinical audit are key components of successful enhanced recovery programs (ERPs), they require substantial resource allocation. The objective of this study was to assess the validity and usability of a novel mobile device application for education and self-reporting of adherence for patients undergoing bowel surgery within an established ERP. Prospectively recruited patients undergoing bowel surgery within an ERP used a novel app specifically designed to provide daily recovery milestones and record adherence to 15 different ERP processes and six patient-reported outcomes (PROs). Validity was measured by the agreement index (Cohen's kappa coefficient for categorical, and interclass correlation coefficient (ICC) for continuous variables) between patient-reported data through the app and data recorded by a clinical auditor. Acceptability and usability of the app were measured by the System Usability Scale (SUS). Forty-five patients participated in the study (mean age 61, 64% male). Overall, patients completed 159 of 179 (89%) of the available questionnaires through the app. Median time to complete a questionnaire was 2 min 49 s (i.q.r. 2'32″-4'36″). Substantial (kappa > 0.6) or almost perfect agreement (kappa > 0.8) and strong correlation (ICC > 0.7) between data collected through the app and by the clinical auditor was found for 14 ERP processes and four PROs. Patient-reported usability was high; mean SUS score was 87 (95% CI 83-91). Only 6 (13%) patients needed technical support to use the app. Forty (89%) patients found the app was helpful to achieve their daily goals, and 34 (76%) thought it increased their motivation to recover after surgery. This novel application provides a tool to record patient adherence to care processes and PROs, with high agreement with traditional clinical audit, high usability, and patient satisfaction. Future studies should investigate the use of mobile device apps as strategies to increase
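For the categorical adherence items above, agreement between the app and the clinical auditor is summarized with Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch; the paired yes/no adherence records below are hypothetical:

```python
# Sketch: Cohen's kappa for two raters on categorical items, as used
# to compare app self-report against the clinical auditor.
# The paired adherence records (1 = adherent, 0 = not) are hypothetical.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lbl) / n) * (rater_b.count(lbl) / n) for lbl in labels
    )
    return (observed - expected) / (1.0 - expected)

app     = [1, 1, 0, 1, 0, 1, 1, 0]
auditor = [1, 1, 0, 1, 1, 1, 1, 0]
kappa = cohens_kappa(app, auditor)
```

For the continuous PROs the study uses the intraclass correlation coefficient (ICC) instead, since kappa applies only to categorical ratings.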

  16. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 05: A novel respiratory motion simulation program for VMAT treatment plans: a phantom validation study

    International Nuclear Information System (INIS)

    Hubley, Emily; Pierce, Greg; Ploquin, Nicolas

    2016-01-01

Purpose: To develop and validate a computational method to simulate craniocaudal respiratory motion in a VMAT treatment plan. Methods: Three 4DCTs of the QUASAR respiratory motion phantom were acquired with a 2 cm water-density spherical tumour embedded in cedar to simulate lung. The phantom oscillated sinusoidally with an amplitude of 2 cm and periods of 3, 4, and 5 seconds. An ITV was contoured and a 5 mm PTV margin was added. High- and low-modulation-factor VMAT plans were created for each scan. An in-house program was developed to simulate respiratory motion in the treatment plans by shifting the MLC leaf positions relative to the phantom. Each plan was delivered to the phantom and the dose was measured using Gafchromic film. The measured and calculated plans were compared using an absolute dose gamma analysis (3%/3mm). Results: The average gamma pass rates for the low- and high-modulation plans were 91.1% and 51.4%, respectively. The difference between the two pass rates is likely related to the different sampling frequency of the respiratory curve and the higher MLC leaf speeds in the high-modulation plan. A high-modulation plan has a slower gantry speed and therefore samples the breathing cycle at a coarser frequency, leading to inaccuracies between the measured and planned doses. Conclusion: A simple program, including a novel method for increasing sampling frequency beyond the control-point frequency, has been developed to simulate respiratory motion in VMAT plans by shifting the MLC leaf positions.

  17. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 05: A novel respiratory motion simulation program for VMAT treatment plans: a phantom validation study

    Energy Technology Data Exchange (ETDEWEB)

    Hubley, Emily; Pierce, Greg; Ploquin, Nicolas [University of Calgary, Tom Baker Cancer Centre, Tom Baker Cancer Centre (Canada)

    2016-08-15

Purpose: To develop and validate a computational method to simulate craniocaudal respiratory motion in a VMAT treatment plan. Methods: Three 4DCTs of the QUASAR respiratory motion phantom were acquired with a 2 cm water-density spherical tumour embedded in cedar to simulate lung. The phantom oscillated sinusoidally with an amplitude of 2 cm and periods of 3, 4, and 5 seconds. An ITV was contoured and a 5 mm PTV margin was added. High- and low-modulation-factor VMAT plans were created for each scan. An in-house program was developed to simulate respiratory motion in the treatment plans by shifting the MLC leaf positions relative to the phantom. Each plan was delivered to the phantom and the dose was measured using Gafchromic film. The measured and calculated plans were compared using an absolute dose gamma analysis (3%/3mm). Results: The average gamma pass rates for the low- and high-modulation plans were 91.1% and 51.4%, respectively. The difference between the two pass rates is likely related to the different sampling frequency of the respiratory curve and the higher MLC leaf speeds in the high-modulation plan. A high-modulation plan has a slower gantry speed and therefore samples the breathing cycle at a coarser frequency, leading to inaccuracies between the measured and planned doses. Conclusion: A simple program, including a novel method for increasing sampling frequency beyond the control-point frequency, has been developed to simulate respiratory motion in VMAT plans by shifting the MLC leaf positions.
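The 3%/3mm gamma analysis used above scores each measured dose point by its best combined dose-difference and distance-to-agreement match against the planned distribution; a point passes when its gamma value is at most 1. A simplified 1-D sketch (real implementations work on 2-D/3-D dose grids; the profiles and 1 mm grid spacing here are hypothetical):

```python
# Sketch: simplified 1-D global gamma analysis with 3%/3mm criteria.
# Real film-vs-plan comparisons operate on 2-D dose maps; the dose
# profiles and grid spacing below are hypothetical illustration data.
import math

def gamma_pass_rate(measured, planned, spacing_mm=1.0,
                    dose_crit=0.03, dist_crit_mm=3.0):
    d_max = max(planned)  # global dose normalization
    passed = 0
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dp in enumerate(planned):
            dose_term = ((dm - dp) / (dose_crit * d_max)) ** 2
            dist_term = (((i - j) * spacing_mm) / dist_crit_mm) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        if best <= 1.0:  # gamma <= 1 means the point passes
            passed += 1
    return 100.0 * passed / len(measured)

planned  = [1.00, 1.02, 1.05, 1.03, 1.00]
measured = [1.01, 1.03, 1.04, 1.02, 0.99]
rate = gamma_pass_rate(measured, planned)
```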

  18. A cross-validation trial of an Internet-based prevention program for alcohol and cannabis: Preliminary results from a cluster randomised controlled trial.

    Science.gov (United States)

    Champion, Katrina E; Newton, Nicola C; Stapinski, Lexine; Slade, Tim; Barrett, Emma L; Teesson, Maree

    2016-01-01

    Replication is an important step in evaluating evidence-based preventive interventions and is crucial for establishing the generalizability and wider impact of a program. Despite this, few replications have occurred in the prevention science field. This study aims to fill this gap by conducting a cross-validation trial of the Climate Schools: Alcohol and Cannabis course, an Internet-based prevention program, among a new cohort of Australian students. A cluster randomized controlled trial was conducted among 1103 students (Mage: 13.25 years) from 13 schools in Australia in 2012. Six schools received the Climate Schools course and 7 schools were randomized to a control group (health education as usual). All students completed a self-report survey at baseline and immediately post-intervention. Mixed-effects regressions were conducted for all outcome variables. Outcomes assessed included alcohol and cannabis use, knowledge and intentions to use these substances. Compared to the control group, immediately post-intervention the intervention group reported significantly greater alcohol (d = 0.67) and cannabis knowledge (d = 0.72), were less likely to have consumed any alcohol (even a sip or taste) in the past 6 months (odds ratio = 0.69) and were less likely to intend on using alcohol in the future (odds ratio = 0.62). However, there were no effects for binge drinking, cannabis use or intentions to use cannabis. These preliminary results provide some support for the Internet-based Climate Schools: Alcohol and Cannabis course as a feasible way of delivering alcohol and cannabis prevention. Intervention effects for alcohol and cannabis knowledge were consistent with results from the original trial; however, analyses of longer-term follow-up data are needed to provide a clearer indication of the efficacy of the intervention, particularly in relation to behavioral changes. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  19. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  20. Ostomy Home Skills Program

    Medline Plus

Full Text Available

  1. Effects of acupuncture at GV20 and ST36 on the expression of matrix metalloproteinase 2, aquaporin 4, and aquaporin 9 in rats subjected to cerebral ischemia/reperfusion injury.

    Directory of Open Access Journals (Sweden)

    Hong Xu

Full Text Available BACKGROUND/PURPOSE: Ischemic stroke is characterized by high morbidity and mortality worldwide. Matrix metalloproteinase 2 (MMP2), aquaporin 4 (AQP4), and aquaporin 9 (AQP9) are linked to permeabilization of the blood-brain barrier (BBB) in cerebral ischemia/reperfusion injury (CIRI). BBB disruption, tissue inflammation, and MMP/AQP upregulation jointly provoke brain edema/swelling after CIRI, while acupuncture and electroacupuncture can alleviate CIRI symptoms. This study evaluated the hypothesis that acupuncture and electroacupuncture can similarly exert neuroprotective actions in a rat model of middle cerebral artery occlusion (MCAO) by modulating MMP2/AQP4/AQP9 expression and inflammatory cell infiltration. METHODS: Eighty 8-week-old Sprague-Dawley rats were randomly divided into sham group S, MCAO model group M, acupuncture group A, electroacupuncture group EA, and edaravone group ED. The MCAO model was established by placement of a suture to block the middle cerebral artery, and reperfusion was triggered by suture removal in all groups except group S. Acupuncture and electroacupuncture were administered at acupoints GV20 (governing vessel-20) and ST36 (stomach-36). Rats in groups A, EA, and ED received acupuncture, electroacupuncture, or edaravone, respectively, immediately after MCAO. Neurological function (assessed using the Modified Neurological Severity Score), infarct volume, MMP2/AQP4/AQP9 mRNA and protein expression, and inflammatory cell infiltration were all evaluated at 24 h post-reperfusion. RESULTS: Acupuncture and electroacupuncture significantly decreased infarct size and improved neurological function. Furthermore, target mRNA and protein levels and inflammatory cell infiltration were significantly reduced in groups A, EA, and ED vs. group M. However, MMP2/AQP levels and inflammatory cell infiltration were generally higher in groups A and EA than in group ED, except for MMP2 mRNA levels.
CONCLUSIONS: Acupuncture and electroacupuncture at GV20 and ST36

  2. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

A detailed strategy to validate fuel channel thermal-mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The discussion focuses on the validation plan for the code FACTAR, as applied to assessing fuel channel integrity safety concerns during a large-break loss-of-coolant accident (LOCA). (author)

  3. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    Science.gov (United States)

    Wolff, David B.; Fisher, Brad L.

    2011-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB was further stratified into ocean, land and coast categories using a 0.25deg terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates, not easily detectable by its scatter-only frequencies. AMSU

  4. Systematic Model for Validating Equipment Uses in Selected Marketing and Distribution Education Programs. Final Report, February 1, 1980-June 30, 1981.

    Science.gov (United States)

    Gildan, Kate; Buckner, Leroy

    Research was conducted to provide a model for selecting equipment for marketing and distributive education programs, equipment required for developing the skills or competencies needed to perform in marketing and distribution occupations. A review of the literature identified both competency statements for three program areas--Fashion…

  5. Software for validating parameters retrieved from satellite

    Digital Repository Service at National Institute of Oceanography (India)

    Muraleedharan, P.M.; Sathe, P.V.; Pankajakshan, T.

    -channel Scanning Microwave Radiometer (MSMR) onboard the Indian satellite Oceansat-1 during 1999-2001 were validated using this software as a case study. The program has several added advantages over the conventional method of validation that involves strenuous...

  6. Mercury and Cyanide Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  7. ICP-MS Data Validation

    Science.gov (United States)

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  8. Construct Validation of a Program to Increase Use of Self-Regulation for Physical Activity among Overweight and Obese Adults with Type 2 Diabetes Mellitus

    Science.gov (United States)

    Petosa, R. Lingyak; Silfee, Valerie

    2016-01-01

    Background: Studies have revealed that overweight adults with type 2 diabetes have low rates of physical activity and are resistant to change. Purpose: The purpose of this study was to use construct validation of intervention methods to examine the impact of a 4-week behavioral intervention on the use of self-regulation skills for physical…

  9. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O'Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  10. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within an umbrella of an existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  11. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
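
    One practical consequence of the Dm-Sm power law deserves a sketch: for a normalized gamma DSD, the mass-spectrum standard deviation satisfies Sm = Dm/sqrt(4 + mu), so an observed power law Sm = a*Dm^b pins down the shape parameter mu from Dm alone, leaving (Dm, Nw) as the only free parameters. A minimal Python illustration, where the coefficients a and b are placeholders rather than the Working Group's fitted values:

```python
def shape_from_powerlaw(dm_mm, a=0.29, b=1.0):
    """Infer the gamma-DSD shape parameter mu from the mass-weighted mean
    diameter Dm (mm), given an assumed power law Sm = a * Dm**b.
    For a normalized gamma DSD, Sm = Dm / sqrt(4 + mu), hence
    mu = (Dm / Sm)**2 - 4.  The coefficients a and b are placeholders,
    not the GPM DSD Working Group's fitted values."""
    sm = a * dm_mm ** b
    return (dm_mm / sm) ** 2 - 4.0

# With the constraint in place, a three-parameter DSD collapses to (Dm, Nw):
mu = shape_from_powerlaw(1.5)
```

    With such a constraint, an integral table needs to be indexed only by Dm and scaled by Nw, which is what makes the two-parameter retrieval tractable.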

  12. Validation of the Monte Carlo criticality program KENO IV and the Hansen-Roach sixteen-energy-group-cross sections for high-assay uranium systems

    International Nuclear Information System (INIS)

    Handley, G.R.; Masters, L.C.; Stachowiak, R.V.

    1981-01-01

    Validation of the Monte Carlo criticality code, KENO IV, and the Hansen-Roach sixteen-energy-group cross sections was accomplished by calculating the effective neutron multiplication constant, k_eff, of 29 experimentally critical assemblies which had uranium enrichments of 92.6% or higher in the uranium-235 isotope. The experiments were chosen so that a large variety of geometries and of neutron energy spectra were covered. Problems in calculating the k_eff of minimally reflected or unreflected systems containing high-uranium-concentration uranyl nitrate solution led to the separate examination of five cases

  13. Teaching Applied Behavior Analysis Knowledge Competencies to Direct-Care Service Providers: Outcome Assessment and Social Validation of a Training Program

    Science.gov (United States)

    Luiselli, James K.; Bass, Jennifer D.; Whitcomb, Sara A.

    2010-01-01

    Staff training is a critical performance improvement objective within behavioral health care organizations. This study evaluated a systematic training program for teaching applied behavior analysis knowledge competencies to newly hired direct-care employees at a day and residential habilitation services agency for adults with intellectual and…

  14. Validity Study of the "Preschool Language Scale-4" with English-Speaking Hispanic and European American Children in Head Start Programs

    Science.gov (United States)

    Qi, Cathy H.; Marley, Scott C.

    2011-01-01

    The purpose of the study was to examine the psychometric properties of the "Preschool Language Scale-4" (PLS-4) with a sample of English-speaking Hispanic and European American children who attended Head Start programs. Participants were 440 children between the ages of 3 and 5 years (52% male; 86% Hispanic and 14% European American).…

  15. Development and validation of an interactive efficient dose rates distribution calculation program ARShield for visualization of radiation field in nuclear power plants

    International Nuclear Information System (INIS)

    He, Shuxiang; Zhang, Han; Wang, Mengqi; Zang, Qiyong; Zhang, Jingyu; Chen, Yixue

    2017-01-01

    The point kernel integration (PKI) method is widely used for visualization of the radiation field in engineering applications because it can quickly handle large-scale problems with complicated geometries. However, traditional PKI programs carry many restrictions, such as complicated modeling, complicated source specification, statistics of 3D fine-mesh results, and limited large-scale computing efficiency. To break these traditional restrictions on visualization of the radiation field, ARShield was developed. The results show that ARShield can handle complicated plant radiation shielding problems for visualization of the radiation field. Comparison with SuperMC and QAD shows that the program is reliable and efficient. ARShield also meets the demands of fast calculation and of interactive modeling and display of 3D geometries on a graphical user interface, avoiding modeling errors in calculation and visualization. (authors)

  16. Joining U.S. NRC international round robin for weld residual stress analysis. Stress analysis and validation in PWSCC mitigation program

    International Nuclear Information System (INIS)

    Maekawa, Akira; Serizawa, Hisashi; Murakawa, Hidekazu

    2012-01-01

    It is necessary to establish properly reliable weld residual stress analysis methods for accurate crack initiation and growth assessment of primary water stress corrosion cracking (PWSCC), which may occur in nickel-based dissimilar metal welds in pressurized water reactors. The U.S. Nuclear Regulatory Commission conducted an international round robin for weld residual stress analysis to improve stress analysis methods and to examine the uncertainties involved in the calculated stress values. In this paper, the results from the authors' participation in the round robin were reported. In the round robin, the weld residual stress in a nickel-based dissimilar metal weld of a pressurizer surge nozzle mock-up was computed under various analysis conditions. Based on these residual stress analysis results, a welding simulation code currently being developed that uses the iterative substructure method was validated and affecting factors on the analysis results were identified. (author)

  17. Validation of an ambulatory capacity measure in Parkinson disease: a construct derived from the Unified Parkinson's Disease Rating Scale.

    Science.gov (United States)

    Parashos, Sotirios A; Elm, Jordan; Boyd, James T; Chou, Kelvin L; Dai, Lin; Mari, Zoltan; Morgan, John C; Sudarsky, Lewis; Wielinski, Catherine L

    2015-01-01

    A construct calculated as the sum of items 13-15, 29, 30 of the Unified Parkinson's Disease Rating Scale (UPDRS) has been used as an "Ambulatory Capacity Measure" (ACM) in Parkinson disease (PD). Its construct validity has never been examined. A similar construct, consisting of the mean value of the same UPDRS items, has been used under the acronym PIGD as a measure of postural instability and gait disorder in PD. To examine the construct validity of the ACM and PIGD in PD, we analyzed data in an existing database of 340 PD patients, Hoehn and Yahr stages (HYS) 1-5, who participated in a study of falls. Number of falls (NOF) was recorded over 4 weeks, and UPDRS (mental, ADL, and motor subscales), HYS, Activities-specific Balance Confidence Scale (ABC), Freezing of Gait Questionnaire (FOG), Five Times Sit-to-Stand (FTSS), Timed Up-and-Go (TUG), Gait Velocity (GV), and Berg Balance Scale (BBS) evaluations were performed. Internal consistency was assessed by Cronbach's alpha. Construct validity was assessed through correlations of the ACM and PIGD to these measures and to their summed ranks. A coefficient of determination was calculated through linear regression. Mean age was 71.4 years, mean age at diagnosis 61.4 years; 46% were women; mean UPDRS subscale scores were: mental 3.7; ADL 15.7; motor 27.1; mean ACM was 6.51, and mean PIGD 1.30. Cronbach's alpha was 0.78 for both ACM and PIGD. Spearman correlation coefficients between the ACM/PIGD and ABC, FOG, TUG, GV and BBS were 0.69, 0.72, 0.67, 0.58, and 0.70 respectively. Correlation between the ACM/PIGD and summed ranks of HYS, NOF, ABC, FOG, FTSS, TUG, GV and BBS was high (Spearman r = 0.823, p < 0.0001); 68% of the variability in the summed ranks was explained by ACM/PIGD. The ACM and the PIGD are valid global measures and accurately reflect the combined effects of the various components of ambulatory capacity in PD patients with HY stages 1-4.
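
    The ACM and PIGD constructs above are straightforward to compute from a scored UPDRS; a minimal Python sketch follows (item numbering per the abstract; the example scores are invented for illustration):

```python
def acm(updrs):
    """Ambulatory Capacity Measure: sum of UPDRS items 13-15, 29, 30
    (each item scored 0-4), per the construct described in the abstract."""
    return sum(updrs[i] for i in (13, 14, 15, 29, 30))

def pigd(updrs):
    """PIGD score: mean of the same five items, so PIGD == ACM / 5."""
    return acm(updrs) / 5.0

# Invented item scores for one patient, for illustration only:
scores = {13: 2, 14: 1, 15: 2, 29: 1, 30: 2}
print(acm(scores), pigd(scores))  # prints: 8 1.6
```

    Because PIGD is simply ACM/5, the two constructs necessarily rank patients identically (note that the reported means, 6.51 and 1.30, differ by that factor of 5), which is why a single Cronbach's alpha and identical correlation coefficients can describe both.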

  18. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background, and validation examples for the Collision Module for the computer program DAMAGE.

  19. The need for a public information program to promote understanding of the validity of the safety of IAEA transport regulations for shipment of radioactive material

    International Nuclear Information System (INIS)

    Kubo, M.

    2004-01-01

    It is important to convey basic knowledge that demonstrates to the general public and to public officials that transport of radioactive materials is safe. Data, analysis, and testing for certification in IAEA Member States, as well as experience with packages involved in accidents, demonstrate the margin of safety when radioactive material is transported. In addition, experience with the TranSAS activity has shown it to be an effective and transparent means for Member States to demonstrate to the public their commitment to the safe transport of RAM. Therefore, in the future, the IAEA must continue and expand its public efforts to make the public aware of the very high certainty of safe transport that is the consequence of following the regulations. I would like to ask the IAEA to have transportation specialist groups designated by each Member State. These transportation specialist groups, working with the IAEA transport regulations in each country, should have as a central activity an information program that conveys the margin of safety inherent in the IAEA transport regulations. Finally, I would like to ask the IAEA to produce a program on public perception of RAM transport for the public throughout the world, and to send the transportation specialist groups to Member States and other concerned countries to explain and demonstrate the adequacy of the IAEA Regulations

  20. The need for a public information program to promote understanding of the validity of the safety of IAEA transport regulations for shipment of radioactive material

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, M. [Japan Nuclear Cycle Development Inst., Ibaraki (Japan)

    2004-07-01

    It is important to convey basic knowledge that demonstrates to the general public and to public officials that transport of radioactive materials is safe. Data, analysis, and testing for certification in IAEA Member States, as well as experience with packages involved in accidents, demonstrate the margin of safety when radioactive material is transported. In addition, experience with the TranSAS activity has shown it to be an effective and transparent means for Member States to demonstrate to the public their commitment to the safe transport of RAM. Therefore, in the future, the IAEA must continue and expand its public efforts to make the public aware of the very high certainty of safe transport that is the consequence of following the regulations. I would like to ask the IAEA to have transportation specialist groups designated by each Member State. These transportation specialist groups, working with the IAEA transport regulations in each country, should have as a central activity an information program that conveys the margin of safety inherent in the IAEA transport regulations. Finally, I would like to ask the IAEA to produce a program on public perception of RAM transport for the public throughout the world, and to send the transportation specialist groups to Member States and other concerned countries to explain and demonstrate the adequacy of the IAEA Regulations.

  1. Development and Validation of NODAL-LAMBDA Program for the Calculation of the Sub-criticality of LAMDA MODES By Nodal Methods in BWR reactors

    International Nuclear Information System (INIS)

    Munoz-Cobo, J. L.; Merino, R.; Escriva, A.; Melara, J.; Concejal, A.

    2014-01-01

    We have developed a 3D, two-energy-group diffusion-theory code, NODAL-LAMBDA, that calculates the lambda eigenvalues of a BWR reactor using nodal methods, with albedo boundary conditions that the code itself derives from the properties of the reflector. The code calculates the sub-criticality of the first harmonic, which governs the stability of the reactor against out-of-phase oscillations and is needed for calculating the decay ratio of out-of-phase oscillation data. The code is very fast: in a few seconds it computes the first eigenvalues and eigenvectors of the discretized problem, exploiting the zero matrix elements. The code uses the LAPACK and ARPACK libraries. It was necessary to modify the LAPACK routines to perform various operations on five non-diagonal matrices simultaneously, in order to reduce the number of library calls and to simplify the procedure for assembling the matrices in compressed sparse row (CSR) format. The code is validated by comparison with SIMULATE results for different cases and with the IAEA 3D benchmark. (Author)
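
    The eigenvalue extraction the abstract describes can be sketched compactly. A production code would call ARPACK on the sparse CSR matrices; as a stand-in under that simplifying assumption, plain power iteration with deflation pulls out the fundamental and first-harmonic modes, whose separation is what the sub-criticality of the first harmonic measures:

```python
import numpy as np

def leading_modes(A, k=2, iters=5000):
    """Return the k largest eigenvalues of a symmetric matrix by power
    iteration with deflation.  This is only a stand-in: the NODAL-LAMBDA
    code described above uses LAPACK/ARPACK on five-band CSR matrices."""
    B = np.array(A, dtype=float)
    rng = np.random.default_rng(0)
    vals = []
    for _ in range(k):
        v = rng.standard_normal(B.shape[0])
        for _ in range(iters):
            w = B @ v
            v = w / np.linalg.norm(w)
        lam = v @ (B @ v)       # Rayleigh quotient of the converged mode
        vals.append(lam)
        B -= lam * np.outer(v, v)  # deflate so the next mode dominates
    return vals

# Toy symmetric operator standing in for the discretized diffusion problem:
lam0, lam1 = leading_modes(np.diag([1.00, 0.95, 0.60]))
separation = lam0 - lam1  # fundamental vs. first harmonic
```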

  2. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  3. Simplified Asset Indices to Measure Wealth and Equity in Health Programs: A Reliability and Validity Analysis Using Survey Data From 16 Countries.

    Science.gov (United States)

    Chakraborty, Nirali M; Fry, Kenzo; Behl, Rasika; Longfield, Kim

    2016-03-01

    Social franchising programs in low- and middle-income countries have tried using the standard wealth index, based on the Demographic and Health Survey (DHS) questionnaire, in client exit interviews to assess clients' relative wealth compared with the national wealth distribution to ensure equity in service delivery. The large number of survey questions required to capture the wealth index variables has proved cumbersome for programs. Using an adaptation of the Delphi method, we developed shortened wealth indices and in February 2015 consulted 15 stakeholders in equity measurement. Together, we selected the best of 5 alternative indices, accompanied by 2 measures of agreement (percent agreement and Cohen's kappa statistic) comparing wealth quintile assignment in the new indices to the full DHS index. The panel agreed that reducing the number of assets was more important than standardization across countries because a short index would provide strong indication of client wealth and be easier to collect and use in the field. Additionally, the panel agreed that the simplified index should be highly correlated with the DHS for each country (kappa ≥ 0.75) for both national and urban-specific samples. We then revised indices for 16 countries and selected the minimum number of questions and question options required to achieve a kappa statistic ≥ 0.75 for both national and urban populations. After combining the 5 wealth quintiles into 3 groups, which the expert panel deemed more programmatically meaningful, reliability between the standard DHS wealth index and each of 3 simplified indices was high (median kappa = 0.81, 0.86, and 0.77, respectively, for index B that included only the common questions from the DHS VI questionnaire, index D that included the common questions plus country-specific questions, and index E that found the shortest list of common and country-specific questions that met the minimum reliability criteria of kappa ≥ 0.75). Index E was the

  4. Simplified Asset Indices to Measure Wealth and Equity in Health Programs: A Reliability and Validity Analysis Using Survey Data From 16 Countries

    Science.gov (United States)

    Chakraborty, Nirali M; Fry, Kenzo; Behl, Rasika; Longfield, Kim

    2016-01-01

    ABSTRACT Background: Social franchising programs in low- and middle-income countries have tried using the standard wealth index, based on the Demographic and Health Survey (DHS) questionnaire, in client exit interviews to assess clients’ relative wealth compared with the national wealth distribution to ensure equity in service delivery. The large number of survey questions required to capture the wealth index variables has proved cumbersome for programs. Methods: Using an adaptation of the Delphi method, we developed shortened wealth indices and in February 2015 consulted 15 stakeholders in equity measurement. Together, we selected the best of 5 alternative indices, accompanied by 2 measures of agreement (percent agreement and Cohen’s kappa statistic) comparing wealth quintile assignment in the new indices to the full DHS index. The panel agreed that reducing the number of assets was more important than standardization across countries because a short index would provide strong indication of client wealth and be easier to collect and use in the field. Additionally, the panel agreed that the simplified index should be highly correlated with the DHS for each country (kappa ≥ 0.75) for both national and urban-specific samples. We then revised indices for 16 countries and selected the minimum number of questions and question options required to achieve a kappa statistic ≥ 0.75 for both national and urban populations. Findings: After combining the 5 wealth quintiles into 3 groups, which the expert panel deemed more programmatically meaningful, reliability between the standard DHS wealth index and each of 3 simplified indices was high (median kappa = 0.81, 0.86, and 0.77, respectively, for index B that included only the common questions from the DHS VI questionnaire, index D that included the common questions plus country-specific questions, and index E that found the shortest list of common and country-specific questions that met the minimum reliability
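
    The kappa-based agreement criterion used throughout this study is easy to reproduce without a statistics package; a minimal pure-Python sketch follows (the label lists used for testing are invented, not survey data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length lists of categorical labels,
    e.g. wealth-quintile assignments from a simplified index versus the
    full DHS index.  kappa = (p_obs - p_exp) / (1 - p_exp)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of positions where the labels match.
    p_obs = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

    A value of 1.0 means perfect agreement after correcting for chance; the study's acceptance threshold of kappa ≥ 0.75 corresponds to substantial-to-excellent agreement on conventional interpretation scales.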

  5. Validation of the use of exogenous gonadotropins (PG600) to increase the efficiency of gilt development programs without affecting lifetime productivity in the breeding herd.

    Science.gov (United States)

    Patterson, J; Triemert, E; Gustafson, B; Werner, T; Holden, N; Pinilla, J C; Foxcroft, G

    2016-02-01

    The objective of this study was to validate the use of exogenous gonadotropin (PG600) treatment for stimulating estrus in noncyclic gilts and to compare lifetime productivity of gilts recorded as having natural (NAT) versus PG600-induced (PG600) first estrus in a commercial setting. Prepubertal Camborough gilts (n = 4,489) were delivered to a gilt development unit (GDU) with the goal of delivering known cyclic breeding-eligible females to the sow farm (SF). A boar exposure area (BEAR) was designed to facilitate stimulation and detection of puberty by providing fence-line and direct contact (15 min daily) with mature boars over an intensive 28-d period, starting at approximately d 160 (d 0). At d 14, nonpubertal gilts were mixed in new pen groups. At d 23, noncyclic "opportunity" gilts with no record of vulval development, required to meet breeding targets, were eligible for treatment with PG600 to induce puberty. Overall, 77.6% (n = 3,475) of gilts exhibited standing estrus (NAT, n = 2,654; PG600, n = 821) and were eligible for shipping to the SF at approximately d 35, and 76.6% of gilts that were administered PG600 exhibited the standing reflex within 13 d of treatment. Ultimately, 72.0% of gilts entering the GDU were delivered to the SF as breeding-eligible females. Considering the gilts delivered, a greater proportion of NAT than PG600 gilts were successfully bred (P < 0.05). There was no difference (P > 0.05) in the proportion of NAT and PG600 gilts farrowing a third litter, but a greater proportion of NAT than PG600 gilts farrowed their fourth litter (P < 0.05). A negative correlation ( productivity to parity 4 were not affected by growth rate classification at puberty.

  6. Neutronics experimental validation of the Jules Horowitz reactor fuel by interpretation of the VALMONT experimental program-transposition of the uncertainties on the reactivity of JHR with JEF2.2 and JEFF3.1.1

    International Nuclear Information System (INIS)

    Leray, O.; Hudelot, J.P.; Doederlein, C.; Vaglio-Gaudard, C.; Antony, M.; Santamarina, A.; Bernard, D.

    2012-01-01

    The new European material testing Jules Horowitz Reactor (JHR), currently under construction at the Cadarache center (CEA, France), will use LEU (20% enrichment in 235U) fuels (U3Si2 for the start-up and UMoAl in the future) which are quite different from the industrial oxide fuel, for which an extensive neutronics experimental validation database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-numerical validation-experimental validation methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has been performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility, using the oscillation technique, provides accurate measurements of the reactivity effect of samples. The VALMONT program includes oscillations of samples of UAl∞/Al and UMo/Al with enrichments ranging from 0.2% to 20% and uranium densities from 2.2 to 8 g/cm3. The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness of the neutron spectrum expected for the JHR. By comparing the effect of a sample with that of a known fuel specimen, the reactivity effect can be measured in absolute terms and compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The calculational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries and energy meshes that were tested is presented. The interpretation of the VALMONT experimental program allowed the experimental validation of JHR fuel UMoAl8 (with an enrichment of 19.75% 235U) by the Minerve

  7. DOE responses to the State of New Mexico's comments on ''summary of the results of the evaluation of the WIPP site and preliminary design validation program'' (WIPP-DOE-161)

    International Nuclear Information System (INIS)

    1983-06-01

    During the 60-day period provided for comments on the ''Summary of the Results of the Evaluation of the WIPP Site and Preliminary Design Validation Program'' (WIPP-DOE-161), written submittals and hearing testimony from about 133 individuals, 7 citizens groups and 6 state agencies were received by the Department of Energy (DOE). Approximately 25% of the public comment submittals were positive statements supporting the WIPP, with the remaining 75% reflecting concern with one or more aspects of the project. A portion of the state's comment package (submitted by the Governor of New Mexico) contained concerns relevant to WIPP which were unrelated to site suitability. Supportive comments formed the majority of the submittals from the New Mexico Environmental Evaluation Group (EEG) which ''...is charged with the responsibility of evaluating the suitability of the site for carrying out the mission of WIPP by analyzing all the reports and other information which form the background to the DOE evaluation of the site''

  8. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
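
    The ECOSAR prediction form mentioned above (linear regression of log-transformed ecotoxicity on log Kow within a structural class) can be illustrated with a minimal ordinary-least-squares fit. This is a generic OLS sketch with names of our own choosing, not ECOSAR's actual coefficients or training data:

```python
def fit_line(log_kow, log_tox):
    """Ordinary least squares for y = a*x + b, the form of an ECOSAR-style
    class QSAR relating log Kow to log-transformed ecotoxicity.  Generic
    OLS sketch; coefficients come from whatever training data is supplied."""
    n = len(log_kow)
    mx = sum(log_kow) / n
    my = sum(log_tox) / n
    sxx = sum((x - mx) ** 2 for x in log_kow)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_kow, log_tox))
    a = sxy / sxx
    return a, my - a * mx
```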

  9. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  10. Certification Testing as an Illustration of Argument-Based Validation

    Science.gov (United States)

    Kane, Michael

    2004-01-01

    The theories of validity developed over the past 60 years are quite sophisticated, but the methodology of validity is not generally very effective. The validity evidence for major testing programs is typically much weaker than the evidence for more technical characteristics such as reliability. In addition, most validation efforts have a strong…

  11. Realtime validation of treatment programs on reconfigurable ...

    African Journals Online (AJOL)

    We present in this work an Algorithm/Architecture adequation experience to prototype a real-time image coder to be used in surveillance applications. The latter uses two algorithms: the first for compression and storage of the filmed scenes, the second to extract the edges of moving objects. For the implementation, we ...

  12. Construct Validity and Case Validity in Assessment

    Science.gov (United States)

    Teglasi, Hedwig; Nebbergall, Allison Joan; Newman, Daniel

    2012-01-01

    Clinical assessment relies on both "construct validity", which focuses on the accuracy of conclusions about a psychological phenomenon drawn from responses to a measure, and "case validity", which focuses on the synthesis of the full range of psychological phenomena pertaining to the concern or question at hand. Whereas construct validity is…

  13. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This document will be a living document that will track progress on CASL to do verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and for the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  14. Soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  15. Development and Validation of Reentry Simulation Using MATLAB

    National Research Council Canada - National Science Library

    Jameson, Jr, Robert E

    2006-01-01

    This research effort develops a program using MATLAB to solve the equations of motion for atmospheric reentry and analyzes the validity of the program for use as a tool to expeditiously predict reentry profiles...

  16. Swainsonine, a novel fungal metabolite: optimization of fermentative production and bioreactor operations using evolutionary programming.

    Science.gov (United States)

    Singh, Digar; Kaur, Gurvinder

    2014-08-01

    The optimization of bioreactor operations towards swainsonine production was performed using an artificial neural network coupled evolutionary program (EP)-based optimization algorithm fitted with experimental one-factor-at-a-time (OFAT) results. The effects of varying agitation (300-500 rpm) and aeration (0.5-2.0 vvm) rates for different incubation hours (72-108 h) were evaluated in a bench-top bioreactor. Prominent scale-up parameters, gassed power per unit volume (Pg/VL, W/m³) and volumetric oxygen mass transfer coefficient (KLa, s⁻¹), were correlated with optimized conditions. A maximum of 6.59 ± 0.10 μg/mL of swainsonine production was observed at 400 rpm-1.5 vvm at 84 h in OFAT experiments, with corresponding Pg/VL and KLa values of 91.66 W/m³ and 341.48 × 10⁻⁴ s⁻¹, respectively. The EP optimization algorithm predicted a maximum of 10.08 μg/mL of swainsonine at 325.47 rpm, 1.99 vvm and 80.75 h, against an experimental production of 7.93 ± 0.52 μg/mL, at constant KLa (349.25 × 10⁻⁴ s⁻¹) and significantly reduced Pg/VL (33.33 W/m³) drawn by the impellers.
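    The evolutionary-programming search over the three operating variables can be illustrated with a minimal (1+1)-style loop. The surrogate objective below is a made-up smooth stand-in for the study's trained neural-network response surface, with its peak placed near the reported optimum; it is not the authors' model.

```python
import random

# Toy evolutionary-programming (EP) loop over the three bioreactor variables:
# agitation (rpm), aeration (vvm) and incubation time (h).

BOUNDS = {"rpm": (300.0, 500.0), "vvm": (0.5, 2.0), "h": (72.0, 108.0)}

def surrogate(x):
    # hypothetical response surface peaking near (325 rpm, 2.0 vvm, 81 h)
    return (-((x["rpm"] - 325) / 100) ** 2
            - (x["vvm"] - 2.0) ** 2
            - ((x["h"] - 81) / 15) ** 2)

def clip(name, v):
    lo, hi = BOUNDS[name]
    return max(lo, min(hi, v))

def evolve(generations=500, seed=1):
    rng = random.Random(seed)
    best = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    for _ in range(generations):
        # Gaussian mutation of every variable, sigma = 10% of its range
        child = {k: clip(k, v + rng.gauss(0, 0.1 * (BOUNDS[k][1] - BOUNDS[k][0])))
                 for k, v in best.items()}
        if surrogate(child) > surrogate(best):
            best = child  # (1+1) selection: keep the fitter of parent and mutant
    return best

opt = evolve()
```

    A production EP would typically use a population, self-adaptive mutation rates and the real fitted model in place of `surrogate`.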

  17. Hydrogen program overview

    Energy Technology Data Exchange (ETDEWEB)

    Gronich, S. [Dept. of Energy, Washington, DC (United States). Office of Utility Technologies

    1997-12-31

    This paper consists of viewgraphs which summarize the following: Hydrogen program structure; Goals for hydrogen production research; Goals for hydrogen storage and utilization research; Technology validation; DOE technology validation activities supporting hydrogen pathways; Near-term opportunities for hydrogen; Market for hydrogen; and List of solicitation awards. It is concluded that a full transition toward a hydrogen economy can begin in the next decade.

  18. A qualidade percebida em programas municipais de actividade física para idosos: validação estatística para Portugal The perceived quality of physical activity programs for elderly: statistical validation for Portugal

    Directory of Open Access Journals (Sweden)

    Isilda Barata Dias

    2011-03-01

    Full Text Available Ageing is an increasing phenomenon all over the world, especially in Europe. The Portuguese INE (2002) confirms this trend for Portugal, where local authorities play a crucial political and cultural role in promoting the quality of life of their citizens. In this paper we describe the validation process of a reliable and simple instrument fitted to assess the perceived quality, among elderly people, of the municipal physical activity programs across the main district capitals of Portugal. Besides a systematic literature review on inquiring services' quality, we present in detail the statistical procedure that allowed us to classify the explanatory power of the several quality dimensions studied. The results came from a questionnaire applied to a sample of 210 elderly people (0.5% of the Portuguese population) and demonstrate that the factors related to the "Variety" and "Human Resources" dimensions are strong and those related to the "General Aspects" dimension are weak.

  19. Lesson 6: Signature Validation

    Science.gov (United States)

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  20. Site characterization and validation

    International Nuclear Information System (INIS)

    Olsson, O.; Eriksson, J.; Falk, L.; Sandberg, E.

    1988-04-01

    The borehole radar investigation program of the SCV-site (Site Characterization and Validation) has comprised single hole reflection measurements with centre frequencies of 22, 45, and 60 MHz. The radar range obtained in the single hole reflection measurements was approximately 100 m for the lower frequency (22 MHz) and about 60 m for the centre frequency 45 MHz. In the crosshole measurements transmitter-receiver separations from 60 to 200 m have been used. The radar investigations have given a three dimensional description of the structure at the SCV-site. A generalized model of the site has been produced which includes three major zones, four minor zones and a circular feature. These features are considered to be the most significant at the site. Smaller features than the ones included in the generalized model certainly exist but no additional features comparable to the three major zones are thought to exist. The results indicate that the zones are not homogeneous but rather that they are highly irregular containing parts of considerably increased fracturing and parts where their contrast to the background rock is quite small. The zones appear to be approximately planar at least at the scale of the site. At a smaller scale the zones can appear quite irregular. (authors)

  1. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  2. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction-, classification-, time series forecasting-, modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allow insight......) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one data set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  3. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: The forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages
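    The forward mode mentioned in the abstract propagates a derivative alongside every value. A minimal dual-number sketch of the idea (an illustration only, not the FADBAD/TADIFF implementation) is:

```python
# Forward-mode automatic differentiation with dual numbers: each quantity
# carries (value, derivative), and arithmetic propagates both.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot  # function value and its derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot  # seed dx/dx = 1, read off f'(x)

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7
slope = derivative(lambda x: x * x + 3 * x, 2.0)
```

    The backward and Taylor-expansion modes named in the abstract propagate the same derivative information in reverse order or to higher order, respectively.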

  4. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  5. Marketing Plan for Demonstration and Validation Assets

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-05-30

    The National Security Preparedness Project (NSPP) is to be sustained by various programs, including technology demonstration and evaluation (DEMVAL). This project assists companies in developing technologies under the National Security Technology Incubator program (NSTI) through demonstration and validation of technologies applicable to national security created by incubators and other sources. The NSPP will also support the creation of an integrated demonstration and validation environment. This report documents the DEMVAL marketing and visibility plan, which will focus on collecting information about, and expanding the visibility of, DEMVAL assets serving businesses with national security technology applications in southern New Mexico.

  6. Experimental validation of UTDefect

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, A.S. [ABB Tekniska Roentgencentralen AB, Taeby (Sweden); Bostroem, A.; Wirdelius, H. [Chalmers Univ. of Technology, Goeteborg (Sweden). Div. of Mechanics

    1997-01-01

    This study reports on conducted experiments and computer simulations of ultrasonic nondestructive testing (NDT). Experiments and simulations are compared with the purpose of validating the simulation program UTDefect. UTDefect simulates ultrasonic NDT of cracks and some other defects in isotropic and homogeneous materials. Simulations for the detection of surface breaking cracks are compared with experiments in pulse-echo mode on surface breaking cracks in carbon steel plates. The echo dynamics are plotted and compared with the simulations. The experiments are performed on a plate with thickness 36 mm and the crack depths are 7.2 mm and 18 mm. L- and T-probes with frequencies of 1, 2 and 4 MHz and angles of 45, 60 and 70 deg are used. In most cases the probe and the crack are on opposite sides of the plate, but in some cases they are on the same side. Several cracks are scanned from two directions. In total, 53 experiments are reported for 33 different combinations. Generally the simulations agree well with the experiments, and UTDefect is shown to be able to, within certain limits, perform simulations that are close to experiments. It may be concluded that: For corner echoes the eight 45 deg cases and the eight 60 deg cases show good agreement between experiments and UTDefect, especially for the 7.2 mm crack. The amplitudes differ more for some cases where the defect is close to the probe and for the corner of the 18 mm crack. For the two 70 deg cases there are too few experimental values to compare the curve shapes, but the amplitudes do not differ too much. The tip diffraction echoes also agree well in general. For some cases, where the defect is close to the probe, the amplitudes differ more than 10-15 dB, but for all but two cases the difference in amplitude is less than 7 dB. 6 refs.

  7. Validity in Qualitative Evaluation

    OpenAIRE

    Vasco Lub

    2015-01-01

    This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of con...

  8. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  9. Validation of the Spanish Version of the Quality of Dying and Death Questionnaire (QODD-ESP) in a Home-Based Cancer Palliative Care Program and Development of the QODD-ESP-12.

    Science.gov (United States)

    Pérez-Cruz, Pedro E; Padilla Pérez, Oslando; Bonati, Pilar; Thomsen Parisi, Oliva; Tupper Satt, Laura; Gonzalez Otaiza, Marcela; Ceballos Yáñez, Diego; Maldonado Morgado, Armando

    2017-06-01

    Improving quality of death (QOD) is a key goal in palliative care (PC). To our knowledge, no instruments to measure QOD have been validated in Spanish. The goals of this study were to validate the Spanish version of the quality of dying and death (QODD) questionnaire and to develop and validate a shortened version of this instrument for use by phone interview. We enrolled caregivers (CGs) of consecutive deceased cancer patients who participated in a single PC clinic. CGs were contacted by phone between 4 and 12 weeks after patients' death and completed the Spanish QODD (QODD-ESP). A question assessing quality of life during the last week of life was included. A 12-item QODD (QODD-ESP-12) was developed. Reliability, convergent validity, and construct validity were estimated for both versions. About 150 (50%) of 302 CGs completed the QODD-ESP. Patients' mean age (SD) was 67 (14); 71 (47%) were females, and 131 (87%) died at home. CGs' mean age (SD) was 51 (13); 128 (85%) were females. Mean QODD-ESP score was 69 (range 35-96). The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.322, not supporting the use of factorial analysis to assess the existence of an underlying construct. Mean QODD-ESP-12 score was 69 (range 31-97). Correlation with last-week quality of life was 0.306 (P < 0.01). Confirmatory factorial analysis of the QODD-ESP-12 showed that the data fitted Downey's four factors well: Chi-square = 6.32 (degrees of freedom = 60), P = 0.394; comparative fit index = 0.988; Tucker-Lewis index = 0.987; and root mean square error of approximation = 0.016 (95% CI 0-0.052). The QODD-ESP-12 is a reliable and valid instrument with good psychometric properties and can be used to assess QOD in a Spanish-speaking cancer PC population by phone interview. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
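    Reliability for a multi-item scale such as the QODD-ESP-12 is commonly summarized with Cronbach's alpha: alpha = k/(k-1) × (1 − Σ item variances / variance of totals). The abstract does not state which statistic the authors used, and the 4-item response matrix below is invented purely for illustration.

```python
# Cronbach's alpha for a respondents-by-items rating matrix.

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items

    def var(xs):  # population variance (the n/(n-1) factor cancels in the ratio)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [  # rows = respondents, columns = items (e.g. 1-5 ratings)
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 4, 5],
    [1, 2, 2, 2],
]
alpha = cronbach_alpha(responses)
```

    Values near 1 indicate internally consistent items; values below about 0.7 are usually taken to signal poor reliability.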

  10. Making Validated Educational Models Central in Preschool Standards.

    Science.gov (United States)

    Schweinhart, Lawrence J.

    This paper presents some ideas to preschool educators and policy makers about how to make validated educational models central in standards for preschool education and care programs that are available to all 3- and 4-year-olds. Defining an educational model as a coherent body of program practices, curriculum content, program and child, and teacher…

  11. Cosmic ray fluctuations at rigidities 4 to 180 GV

    International Nuclear Information System (INIS)

    Benko, G.; Erdoes, G.; Stehlik, M.; Katz, M.E.; Nosov, S.F.

    1986-07-01

    The power spectral density of cosmic ray fluctuations observed at both underground and ground level during the years 1976-1980 was calculated. The spectral index is independent of the phase of the solar cycle in the frequency range 5×10⁻⁷ to 5×10⁻⁵ Hz, and its value is equal to 2. The level of fluctuations shows a weak dependence on the rigidity (R) of the particles, P ∼ R^(-2/3). The obtained experimental results are in agreement with the theoretical predictions. (author)
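    The analysis above (power spectral density plus a spectral-index fit) can be sketched with a plain periodogram and a least-squares slope in log-log coordinates. The functions below are illustrative only, not the authors' processing chain.

```python
import cmath
import math

def periodogram(x):
    """Return (frequency, power) pairs for a real series sampled at unit rate."""
    n = len(x)
    out = []
    for k in range(1, n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out.append((k / n, abs(s) ** 2 / n))
    return out

def spectral_index(psd):
    """Negated least-squares slope of log P against log f."""
    lf = [math.log(f) for f, _ in psd]
    lp = [math.log(p) for _, p in psd]
    mf, mp = sum(lf) / len(lf), sum(lp) / len(lp)
    slope = (sum((a - mf) * (b - mp) for a, b in zip(lf, lp))
             / sum((a - mf) ** 2 for a in lf))
    return -slope

# A pure P(f) = f^-2 spectrum recovers the index 2 reported in the abstract:
index = spectral_index([(f, f ** -2.0) for f in (0.001, 0.01, 0.1)])
```

    On real data one would average periodograms over segments (Welch's method) before fitting, to reduce the variance of the estimate.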

  12. Charge composition of cosmic rays between 4 and 100 GV

    Science.gov (United States)

    Golden, R. L.; Adams, J. H.; Badhwar, G. D.; Deney, C. L.; Lindstrom, P. J.; Heckman, H. H.

    1974-01-01

    Balloon-flight measurements were used to determine ratios of cosmic-ray L nuclei (charge Z ranging from 3 to 5) to M nuclei (Z ranging from 6 to 8) and of VH nuclei (Z from 20 to 27) to M nuclei using a magnetic spectrometer. The purpose of the measurements was to establish whether both ratios vary with rigidity as this would provide evidence for more than one basic acceleration mechanism. The results provide no indication that the VH spectrum is steeper than the M spectrum.

  13. 78 FR 77718 - Comment Request for Information Collection for Information Collection for the Data Validation...

    Science.gov (United States)

    2013-12-24

    ... Collection for Information Collection for the Data Validation Requirement for Employment and Training... collection of data validation information for the following employment and training programs: Workforce... information on program activities and outcomes is available. Data validation is intended to accomplish the...

  14. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  15. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  16. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  17. NDE reliability and advanced NDE technology validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Hutton, P.H.; Reid, L.D.; Simonen, F.A.; Spanner, J.C.; Vo, T.V.

    1989-01-01

    This paper reports on progress for three programs: (1) evaluation and improvement in nondestructive examination reliability for inservice inspection of light water reactors (LWR) (NDE Reliability Program), (2) field validation acceptance, and training for advanced NDE technology, and (3) evaluation of computer-based NDE techniques and regional support of inspection activities. The NDE Reliability Program objectives are to quantify the reliability of inservice inspection techniques for LWR primary system components through independent research and establish means for obtaining improvements in the reliability of inservice inspections. The areas of significant progress will be described concerning ASME Code activities, re-analysis of the PISC-II data, the equipment interaction matrix study, new inspection criteria, and PISC-III. The objectives of the second program are to develop field procedures for the AE and SAFT-UT techniques, perform field validation testing of these techniques, provide training in the techniques for NRC headquarters and regional staff, and work with the ASME Code for the use of these advanced technologies. The final program's objective is to evaluate the reliability and accuracy of interpretation of results from computer-based ultrasonic inservice inspection systems, and to develop guidelines for NRC staff to monitor and evaluate the effectiveness of inservice inspections conducted on nuclear power reactors. This program started in the last quarter of FY89, and the extent of the program was to prepare a work plan for presentation to and approval from a technical advisory group of NRC staff

  18. Ovid MEDLINE Instruction can be Evaluated Using a Validated Search Assessment Tool. A Review of: Rana, G. K., Bradley, D. R., Hamstra, S. J., Ross, P. T., Schumacher, R. E., Frohna, J. G., & Lypson, M. L. (2011). A validated search assessment tool: Assessing practice-based learning and improvement in a residency program. Journal of the Medical Library Association, 99(1), 77-81. doi:10.3163/1536-5050.99.1.013

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-01-01

    Full Text Available Objective – To determine the construct validity of a search assessment instrument that is used to evaluate search strategies in Ovid MEDLINE. Design – Cross-sectional, cohort study. Setting – The Academic Medical Center of the University of Michigan. Subjects – All 22 first-year residents in the Department of Pediatrics in 2004 (cohort 1); 10 senior pediatric residents in 2005 (cohort 2); and 9 faculty members who taught evidence-based medicine (EBM) and published on EBM topics. Methods – Two methods were employed to determine whether the University of Michigan MEDLINE Search Assessment Instrument (UMMSA) could show differences between searchers' construction of a MEDLINE search strategy. The first method tested the search skills of all 22 incoming pediatrics residents (cohort 1) after they received MEDLINE training in 2004, and again upon graduation in 2007. Only 15 of these residents were tested upon graduation; seven were either no longer in the residency program, or had quickly left the institution after graduation. The search test asked study participants to read a clinical scenario, identify the search question in the scenario, and perform an Ovid MEDLINE search. Two librarians scored the blinded search strategies. The second method compared the scores of the 22 residents with the scores of ten senior residents (cohort 2) and nine faculty volunteers. Unlike the first cohort, the ten senior residents had not received any MEDLINE training. The faculty members' search strategies were used as the gold standard comparison for scoring the search skills of the two cohorts. Main Results – The search strategy scores of the 22 first-year residents, who received training, improved from 2004 to 2007 (mean improvement: 51.7 to 78.7; t(14) = 5.43, P …). Conclusion – According to the authors, "the results of this study provide evidence for the validity of an instrument to evaluate MEDLINE search strategies" (p. 81), since the instrument under
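    The reported improvement rests on a paired comparison of pre- and post-training scores, t = mean(d) / (sd(d) / √n) on the per-subject differences d. The eight score pairs below are invented for illustration and do not reproduce the study's data (which gave t(14) = 5.43).

```python
import math

def paired_t(before, after):
    """Paired t statistic and degrees of freedom for matched score pairs."""
    d = [b - a for a, b in zip(before, after)]   # per-subject improvements
    n = len(d)
    m = sum(d) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in d) / (n - 1))  # sample std dev
    return m / (sd / math.sqrt(n)), n - 1        # (t statistic, deg. of freedom)

before = [50, 55, 48, 60, 52, 47, 58, 53]  # hypothetical 2004 search scores
after = [78, 80, 70, 85, 76, 72, 84, 79]   # hypothetical 2007 search scores
t_stat, df = paired_t(before, after)
```

    The t statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the P value.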

  19. Validity in Qualitative Evaluation

    Directory of Open Access Journals (Sweden)

    Vasco Lub

    2015-12-01

    Full Text Available This article provides a discussion on the question of validity in qualitative evaluation. Although validity in qualitative inquiry has been widely reflected upon in the methodological literature (and is still often a subject of debate), the link with evaluation research is underexplored. Elaborating on epistemological and theoretical conceptualizations by Guba and Lincoln, and Creswell and Miller, the article explores aspects of validity of qualitative research with the explicit objective of connecting them with aspects of evaluation in social policy. It argues that different purposes of qualitative evaluations can be linked with different scientific paradigms and perspectives, thus transcending unproductive paradigmatic divisions as well as providing a flexible yet rigorous validity framework for researchers and reviewers of qualitative evaluations.

  20. 75 FR 59294 - Comment Request for Information Collection for The Data Validation Requirement for Employment and...

    Science.gov (United States)

    2010-09-27

    ... and reliable information on program activities and outcomes is available. Data validation is intended... handbooks provide detailed information on software installation, building and importing a validation file... DEPARTMENT OF LABOR Employment and Training Administration Comment Request for Information...

  1. A knowledge-driven approach to cluster validity assessment.

    Science.gov (United States)

    Bolshakova, Nadia; Azuaje, Francisco; Cunningham, Pádraig

    2005-05-15

    This paper presents an approach to assessing cluster validity based on similarity knowledge extracted from the Gene Ontology. The program is freely available for non-profit use on request from the authors.

  2. Determining Composite Validity Coefficients for Army Jobs and Job Families

    National Research Council Canada - National Science Library

    Zeidner, Joseph

    2002-01-01

    ...) is to compute composite validity coefficients, using criterion data derived from the 1987-1989 Skill Qualifications Test program, for the 7-test ASVAB for 150, 17, and 9 job family structures...

  3. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble.
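
    The replicated training sessions that make exact leave-one-out expensive, and that LULOO approximates away, can be seen in a plain exact-LOO loop. The sketch below is a generic illustration using least-squares regression, not the LULOO algorithm itself; the data are synthetic.

    ```python
    import numpy as np

    def loo_mse(X, y):
        """Exact leave-one-out cross-validation: one retraining per held-out example."""
        n = len(y)
        errs = []
        for i in range(n):
            mask = np.arange(n) != i
            # least-squares fit on the n-1 remaining examples
            w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
            errs.append((X[i] @ w - y[i]) ** 2)
        return float(np.mean(errs))

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(20), rng.normal(size=20)])  # intercept + one feature
    y = 2.0 + 3.0 * X[:, 1] + 0.1 * rng.normal(size=20)
    print(loo_mse(X, y))
    ```

    The loop performs n full refits; LULOO's point is to approximate each held-out prediction from a single trained model via a linearized "unlearning" step.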

  4. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
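
    The manufactured-solutions idea the abstract recommends for code verification can be sketched briefly: pick an exact solution, derive the forcing term it implies, and check that the solver's error shrinks at the theoretical rate. The 1-D Poisson solver below is an assumed toy example, not from the paper.

    ```python
    import numpy as np

    def solve_poisson(f, n):
        """Solve -u'' = f on (0, pi) with u(0)=u(pi)=0 via second-order central differences."""
        h = np.pi / n
        x = np.linspace(0.0, np.pi, n + 1)
        # tridiagonal system for the n-1 interior nodes
        main = 2.0 * np.ones(n - 1)
        off = -1.0 * np.ones(n - 2)
        A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
        return x, u

    # Manufactured solution u(x) = sin(x)  =>  forcing term f(x) = -u''(x) = sin(x)
    u_exact = np.sin
    f = np.sin

    errors = []
    for n in (16, 32, 64):
        x, u = solve_poisson(f, n)
        errors.append(np.max(np.abs(u - u_exact(x))))

    # Observed order of accuracy should approach the scheme's theoretical order, 2
    orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
    print(orders)
    ```

    Matching the observed order to the theoretical one verifies the coding; it says nothing about physics-modeling accuracy, which is the separate job of validation against experiments.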

  5. Transient FDTD simulation validation

    OpenAIRE

    Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando

    2010-01-01

    In computational electromagnetic simulations, most validation methods developed to date operate in the frequency domain. However, frequency-domain EMC analysis is often insufficient for evaluating the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing rapid and objective quantification of simulation results.

  6. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  10. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  11. Psychology, cognition and school success: validation of the learning strategies program / Psicologia, cognição e sucesso escolar: concepção e validação dum programa de estratégias de aprendizagem

    Directory of Open Access Journals (Sweden)

    Margarida Maria Ferreira Diogo Dias Pocinho

    2010-01-01

    Full Text Available This article presents a Learning Strategies Program developed to provide academic success as well as personal well-being. Each content of learning is based on a strategy with 8 stages: Pre-test and Contract, Description, Modeling, Verbal Practice, Practice and Feedback Control, Advanced Practice and Feedback, Post-test and Contracts, and Generalization. This is a quasi-experimental design, with pre- and post-test, an experimental group (n=110) and a control group (n=99). Its application was assessed on (a) reading and writing; (b) school achievement; (c) self-esteem and study methods; (d) school achievement causal attributions; and (e) opinion of the teachers. The EG improved significantly compared to the CG, indicating that such a Program can bring not only school performance benefits but also personal ones for Portuguese students.

  12. Motivação para aprender: validação dum programa de estratégias para adolescentes com insucesso escolar/Learning motivation: validation of a motivation learning strategies program for adolescences with low achievement

    Directory of Open Access Journals (Sweden)

    Margarida Pocinho

    2009-10-01

    Full Text Available This article presents a motivation learning strategies program for promoting the academic success of adolescents with low achievement. This is a quasi-experimental study, with pre- and post-test, carried out with 9th-grade students aged 14-15 years, with an experimental group (n=110) and a control group (n=99). The program's effects were evaluated on self-esteem, causal attributions of success, study methods, school achievement, and the opinions of the teachers. The program also includes profiles of the students' vocational interests, as well as profiles of the teachers best able to motivate students to learn, taking into account the variables described above. The EG improved significantly compared to the CG, indicating that the Program brings benefits in terms of motivation to learn and, consequently, in terms of academic and personal success, particularly for Portuguese students facing school failure.

  13. Verification and validation of decision support software: Expert Choice{trademark} and PCM{trademark}

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Q.H.; Martin, J.D.

    1994-11-04

    This report documents the verification and validation of two decision support programs: EXPERT CHOICE{trademark} and PCM{trademark}. Both programs use the Analytic Hierarchy Process (AHP), or pairwise comparison technique, developed by Dr. Thomas L. Saaty. In order to provide an independent method for validating the two programs, the pairwise comparison algorithm was implemented in a standard mathematical program. A standard data set, selecting a car to purchase, was used with each of the three programs for validation. The results show that both commercial programs performed correctly.
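
    The AHP pairwise-comparison step the report describes can be sketched in a few lines: the priority weights are the principal eigenvector of the reciprocal comparison matrix, and Saaty's consistency ratio flags inconsistent judgments. The 3x3 matrix below is a hypothetical car-purchase comparison for illustration, not the report's actual data set.

    ```python
    import numpy as np

    def ahp_priorities(A, iters=100):
        """Principal-eigenvector priority weights for a reciprocal pairwise comparison matrix."""
        n = A.shape[0]
        w = np.ones(n) / n
        for _ in range(iters):        # power iteration converges to the Perron eigenvector
            w = A @ w
            w /= w.sum()
        lam = (A @ w / w).mean()      # estimate of the principal eigenvalue
        ci = (lam - n) / (n - 1)      # consistency index
        ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
        cr = ci / ri if ri else 0.0   # consistency ratio; < 0.1 is conventionally acceptable
        return w, cr

    # Hypothetical comparison of three criteria (e.g. price, comfort, fuel economy)
    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 2.0],
                  [1/5.0, 1/2.0, 1.0]])
    w, cr = ahp_priorities(A)
    print(w.round(3), round(cr, 3))
    ```

    An independent re-implementation like this, run on the same input matrix, is the kind of cross-check the report used to validate the two commercial programs.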

  14. Italian Validation of Homophobia Scale (HS)

    Directory of Open Access Journals (Sweden)

    Giacomo Ciocca, PsyD, PhD

    2015-09-01

    Conclusions: The Italian validation of the HS revealed that this self-report test has good psychometric properties. This study offers a new tool to assess homophobia. In this regard, the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior. Ciocca G, Capuano N, Tuziak B, Mollaioli D, Limoncin E, Valsecchi D, Carosa E, Gravina GL, Gianfrilli D, Lenzi A, and Jannini EA. Italian validation of Homophobia Scale (HS). Sex Med 2015;3:213–218.

  15. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal-hydraulic code validation (NEA/CSNI/R(1993)14) and in-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of PWR, BWR, CANDU and VVER reactors. It also provides an overview of ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA, and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test, along with a test description.

  16. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions requires developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed, based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of validation
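
    The basic acceptance screening described above, scoring each stochastic realization against validation data and asking whether enough realizations conform, can be sketched as follows. The RMSE metric, tolerance, and 50% acceptance threshold here are illustrative assumptions, not Hassan's actual five measures or decision tree; the data are synthetic.

    ```python
    import numpy as np

    def acceptable_fraction(realizations, field_data, tol):
        """Fraction of stochastic realizations whose RMSE against validation data is within tol."""
        rmse = np.sqrt(np.mean((realizations - field_data) ** 2, axis=1))
        return float(np.mean(rmse <= tol))

    rng = np.random.default_rng(1)
    field = np.array([1.0, 1.5, 2.0, 2.5])                 # hypothetical head observations
    reals = field + rng.normal(scale=0.2, size=(500, 4))   # 500 synthetic model realizations
    frac = acceptable_fraction(reals, field, tol=0.25)
    model_ok = frac >= 0.5   # illustrative acceptance threshold on the conforming fraction
    print(round(frac, 2), model_ok)
    ```

    In the hierarchical approach, a screening score like this would be only the first of several metrics feeding the decision tree on whether the realization ensemble is acceptable.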

  17. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    Full Text Available The application of games for behavioral change has seen a surge in popularity, but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value, whereas most quantitative findings from randomized controlled trials (RCTs) are negative or difficult to interpret. One cause for the contradictory evidence could be that standard RCT validation methods are not sensitive to serious games' effects. To be able to adapt validation methods to the properties of serious games, we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim, as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we apply this model to develop guidelines for setting up validation methods for serious games. In this way, we offer game designers and researchers practical guidance on how to develop tailor-made validation methods.

  18. ASTEC validation on PANDA SETH

    International Nuclear Information System (INIS)

    Bentaib, Ahmed; Bleyer, Alexandre; Schwarz, Siegfried

    2009-01-01

    The ASTEC code, developed by IRSN and GRS, is intended to provide an integral code for the simulation of the whole course of severe accidents in light-water reactors. ASTEC is a complex system of codes for reactor safety assessment. In this validation, only the thermal-hydraulic module of the ASTEC code is used. ASTEC is a lumped-parameter code able to represent multi-compartment containments. It uses the following main elements: zones (compartments), junctions (liquid and atmospheric) and structures. The zones are connected by junctions and contain steam, water and non-condensable gases. They exchange heat with structures through different heat transfer regimes: convection, radiation and condensation. This paper presents the validation of ASTEC V1.3 on tests T9 and T9bis of the PANDA OECD/SETH experimental program, investigating the impact of injection velocity and steam condensation on the plume shape and on the gas distribution. Dedicated meshes were developed to simulate the test facility with the two vessels DW1 and DW2 and the interconnection pipe. The numerical results obtained are analyzed and compared to the experiments. The comparison shows good agreement between experiments and calculations. (author)

  19. Man-in-the-loop validation plan for the Millstone Unit 3 SPDS

    International Nuclear Information System (INIS)

    Blanch, P.M.; Wilkinson, C.D.

    1985-01-01

    This paper describes the man-in-the-loop validation plan for the Millstone Point Unit 3 (MP3) Safety Parameter Display System (SPDS). MP3 is a pressurized water reactor scheduled to load fuel in November 1985. The SPDS is being implemented as part of plant construction. This paper provides an overview of the validation process. Detailed validation procedures, scenarios, and evaluation forms will be incorporated into the validation plan to produce the detailed validation program. The program document will provide all of the detailed instructions necessary to perform the man-in-the-loop validation

  20. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions. The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included

  1. Shift Verification and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Davidson, Gregory G [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation of Shift, we are confident in Shift to provide reference results for CASL benchmarking.

  2. Fuel Cell and Hydrogen Technologies Program | Hydrogen and Fuel Cells |

    Science.gov (United States)

    Through its Fuel Cell and Hydrogen Technologies Program, NREL researches, develops, analyzes, and validates fuel cell and hydrogen production, delivery, and storage technologies for transportation

  3. Efficacy of a comprehensive dental education program regarding management of avulsed permanent teeth as a valid indicator of increased success rate of treatment of avulsion in a North Indian population

    Directory of Open Access Journals (Sweden)

    Navneet Grewal

    2015-01-01

    Full Text Available Aims: To assess whether educating the parents, teachers, and intermediate school children of Amritsar city about the emergency management of tooth avulsion was an effective method of increasing the success rate of treatment of avulsion. Subjects and Methods: Self-administered questionnaires were prepared for 200 parents, teachers, and intermediate school children to assess baseline knowledge. The sociodemographic distribution of the targeted group was determined by applying the Kuppuswamy scale. Two months later, flip cards and posters were distributed to the selected sample, followed by a reinforcement session conducted after 1 month in the form of slide presentations on dental trauma. After 3 months, reassessment questionnaires were distributed to the same participants to reassess any change in baseline knowledge. Further analysis of knowledge, attitude, and practices was carried out after 6 months. Scores based on a Likert scale ranging from 0 to 3 were obtained and subjected to statistical analysis to assess the efficacy of the program 12 months from baseline. Results and Conclusion: The Wilcoxon signed-rank test was applied to the nonparametric data to compare knowledge before and after the educational intervention. There was a significant change in the knowledge level of children, teachers, and parents after the campaign, and teachers showed the most positive change in the practice of emergency management of tooth avulsion, endorsing the fact that comprehensive dental education programs targeting school teachers and children can change individuals' perspectives toward treatment needs for dental trauma involving avulsion.

  4. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology, according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified, and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  5. Validation Process Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); English, Christine M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gesick, Joshua C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mukkamala, Saikrishna [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2018-01-04

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  6. Functional Programming

    OpenAIRE

    Chitil, Olaf

    2009-01-01

    Functional programming is a programming paradigm like object-oriented programming and logic programming. Functional programming comprises both a specific programming style and a class of programming languages that encourage and support this programming style. Functional programming enables the programmer to describe an algorithm on a high-level, in terms of the problem domain, without having to deal with machine-related details. A program is constructed from functions that only map inputs to ...
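As a minimal sketch of the style the abstract describes (this example is illustrative and not taken from the record), a program can be built entirely from pure functions that only map inputs to outputs:

```python
# Functional-style sketch: the computation is a composition of pure
# functions with no mutable state or machine-level detail.

def square(x):
    return x * x

def total(xs):
    # sum is itself a reduction (fold) over the sequence
    return sum(xs)

def sum_of_squares(xs):
    # composed from smaller functions; the same input always yields
    # the same output, which is what makes the parts reusable
    return total(map(square, xs))

print(sum_of_squares([1, 2, 3]))  # 14
```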

  7. Nursing Informatics Competency Program

    Science.gov (United States)

    Dunn, Kristina

    2017-01-01

    Currently, C Hospital lacks a standardized nursing informatics competency program to validate nurses' skills and knowledge in using electronic medical records (EMRs). At the study locale, the organization is about to embark on the implementation of a new, more comprehensive EMR system. All departments will be required to use the new EMR, unlike…

  8. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    Full Text Available The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  9. SSCL quality program overview

    International Nuclear Information System (INIS)

    Hedderick, R.V.; Threatt, D.C.

    1992-01-01

    The Quality Program for the Superconducting Super Collider Laboratory (SSCL) was developed for a number of reasons. A quality program is not only a contractual requirement; it also makes good economic sense to implement one. The quality program is the device used to coordinate the activities of different Laboratory organizations, such as Engineering and Procurement, and to improve operational reliability and safety. To be successful, the QA Program must not only satisfy Department of Energy (DOE) requirements and provide for flowdown of requirements to performing organizations, but must also be flexible enough to be tailored to the needs of each internal organization. The keys to success are management support, acceptance by personnel, and cost effectiveness. These three items are assured by involving appropriate management at each step of program development, by personnel training and feedback, and by programs to reduce defects and improve quality. Equally valuable is the involvement of key organizations in program development. We describe the basic SSCL Quality Program requirements, how the requirements are tailored to the needs of Laboratory organizations, and how the effectiveness of the program is validated.

  10. The dialogic validation

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2005-01-01

    This paper is inspired by dialogism and the title is a paraphrase on Bakhtin's (1981) "The Dialogic Imagination". The paper investigates how dialogism can inform the process of validating inquiry-based qualitative research. The paper stems from a case study on the role of recognition...

  11. A valid licence

    NARCIS (Netherlands)

    Spoolder, H.A.M.; Ingenbleek, P.T.M.

    2010-01-01

    A valid licence Tuesday, April 20, 2010 Dr Hans Spoolder and Dr Paul Ingenbleek, of Wageningen University and Research Centres, share their thoughts on improving farm animal welfare in Europe At the presentation of the European Strategy 2020 on 3rd March, President Barroso emphasised the need for

  12. The Chimera of Validity

    Science.gov (United States)

    Baker, Eva L.

    2013-01-01

    Background/Context: Education policy over the past 40 years has focused on the importance of accountability in school improvement. Although much of the scholarly discourse around testing and assessment is technical and statistical, understanding of validity by a non-specialist audience is essential as long as test results drive our educational…

  13. Validating year 2000 compliance

    NARCIS (Netherlands)

    A. van Deursen (Arie); P. Klint (Paul); M.P.A. Sellink

    1997-01-01

    Validating year 2000 compliance involves the assessment of the correctness and quality of a year 2000 conversion. This entails inspecting both the quality of the conversion process followed and of the result obtained, i.e., the converted system. This document provides an

  14. Validation and test report

    DEFF Research Database (Denmark)

    Pedersen, Jens Meldgaard; Andersen, T. Bull

    2012-01-01

    As a consequence of extensive movement artefacts seen during dynamic contractions, the following validation and test report consists of a report that investigates the physiological responses to a static contraction in a standing and a supine position. Eight subjects performed static contractions of the ankle...

  15. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are

  16. Validity and Fairness

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    This paper presents the author's critique on Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…

  17. Validation of dengue infection severity score

    Directory of Open Access Journals (Sweden)

    Pongpan S

    2014-03-01

    Full Text Available Surangrat Pongpan,1,2 Jayanton Patumanond,3 Apichart Wisitwong,4 Chamaiporn Tawichasri,5 Sirianong Namwongprom1,6 1Clinical Epidemiology Program, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand; 2Department of Occupational Medicine, Phrae Hospital, Phrae, Thailand; 3Clinical Epidemiology Program, Faculty of Medicine, Thammasat University, Bangkok, Thailand; 4Department of Social Medicine, Sawanpracharak Hospital, Nakorn Sawan, Thailand; 5Clinical Epidemiology Society at Chiang Mai, Chiang Mai, Thailand; 6Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand Objective: To validate a simple scoring system for classifying dengue viral infection severity in patients in different settings. Methods: The scoring system, developed from 777 patients at three tertiary-care hospitals, was applied to 400 patients in a validation dataset obtained from another three tertiary-care hospitals. Percentages of correct classification, underestimation, and overestimation were compared. The discriminative performance of the score in the two datasets was compared by analysis of areas under the receiver operating characteristic curves. Results: Patients in the validation data differed from those in the development data in some respects. In the validation data, classifying patients into three severity levels (dengue fever, dengue hemorrhagic fever, and dengue shock syndrome) yielded 50.8% correct prediction (versus 60.7% in the development data), with clinically acceptable underestimation (18.6% versus 25.7%) and overestimation (30.8% versus 13.5%). Despite the difference in predictive performance between the validation and development data, the overall prediction of the scoring system is considered high. Conclusion: The developed severity score may be applied to classify patients with dengue viral infection into three severity levels with clinically acceptable under- or overestimation. Its impact when used in routine
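The three-level classification and the correct/under/over-estimation rates it reports can be sketched as follows; the published score cutoffs are not given in this abstract, so the thresholds and example data below are hypothetical:

```python
# Severity levels: 0 = dengue fever (DF), 1 = dengue hemorrhagic
# fever (DHF), 2 = dengue shock syndrome (DSS).
# The cutoff values are invented for illustration only.

def classify(score, cut_dhf=2.5, cut_dss=5.5):
    if score < cut_dhf:
        return 0  # DF
    if score < cut_dss:
        return 1  # DHF
    return 2      # DSS

def evaluate(pairs):
    """pairs: (score, true_level); returns % correct, under, over."""
    n = len(pairs)
    correct = sum(classify(s) == t for s, t in pairs)
    under = sum(classify(s) < t for s, t in pairs)   # underestimation
    over = n - correct - under                        # overestimation
    return 100 * correct / n, 100 * under / n, 100 * over / n

data = [(1.0, 0), (3.0, 1), (6.0, 2), (3.0, 2), (6.0, 1)]
print(evaluate(data))  # (60.0, 20.0, 20.0)
```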

  18. Valid and Reliable Science Content Assessments for Science Teachers

    Science.gov (United States)

    Tretter, Thomas R.; Brown, Sherri L.; Bush, William S.; Saderholm, Jon C.; Holmes, Vicki-Lynn

    2013-01-01

    Science teachers' content knowledge is an important influence on student learning, highlighting an ongoing need for programs, and assessments of those programs, designed to support teacher learning of science. Valid and reliable assessments of teacher science knowledge are needed for direct measurement of this crucial variable. This paper…

  19. Content Validation of Athletic Therapy Clinical Presentations in Canada

    Science.gov (United States)

    Lafave, Mark R.; Yeo, Michelle; Westbrook, Khatija; Valdez, Dennis; Eubank, Breda; McAllister, Jenelle

    2016-01-01

    Context: Competency-based education requires strong planning and a vehicle to deliver and track students' progress across their undergraduate programs. Clinical presentations (CPs) are proposed as 1 method to deliver a competency-based curriculum in a Canadian undergraduate athletic therapy program. Objective: Validation of 253 CPs. Setting:…

  20. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  1. GCOM-W soil moisture and temperature algorithms and validation

    Science.gov (United States)

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  2. Social Validity of a Positive Behavior Interventions and Support Model

    Science.gov (United States)

    Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane

    2011-01-01

    As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were…

  3. Experimental validation of the HARMONIE code

    International Nuclear Information System (INIS)

    Bernard, A.; Dorsselaere, J.P. van

    1984-01-01

    An experimental program of deformation, in air, of different groups of subassemblies (7 to 41 subassemblies), was performed on a scale 1 mock-up in the SPX1 geometry, in order to achieve a first experimental validation of the code HARMONIE. The agreement between tests and calculations was suitable, qualitatively for all the groups and quantitatively for regular groups of 19 subassemblies at most. The differences come mainly from friction between pads, and secondly from the foot gaps. (author)

  4. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, lightweight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
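FastaValidator itself is a Java library; the sketch below is not its API but a minimal Python illustration of the kind of checks such a validator performs (header detection and alphabet checking; the permitted alphabet is an assumption):

```python
# Minimal FASTA validation sketch (NOT the FastaValidator Java API):
# each record must start with a '>' header line, followed by one or
# more sequence lines drawn from a permitted alphabet.

VALID_CHARS = set("ACGTUNacgtun-")  # assumed nucleotide alphabet

def validate_fasta(text):
    """Return (ok, message) for a FASTA-formatted string."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith(">"):
        return False, "file must begin with a '>' header"
    have_seq = False
    for ln in lines:
        if ln.startswith(">"):
            have_seq = False  # new record; expect sequence lines next
        else:
            bad = set(ln) - VALID_CHARS
            if bad:
                return False, f"invalid characters: {sorted(bad)}"
            have_seq = True
    if not have_seq:
        return False, "last record has no sequence data"
    return True, "ok"

print(validate_fasta(">seq1\nACGT\n"))  # (True, 'ok')
```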

  5. Psicologia, cognição e sucesso escolar: concepção e validação dum programa de estratégias de aprendizagem Psychology, cognition and school success: validation of the learning strategies program

    Directory of Open Access Journals (Sweden)

    Margarida Maria Ferreira Diogo Dias Pocinho

    2010-01-01

    Full Text Available This article presents the design and validation of a Learning Strategies Program developed to promote academic success as well as personal and school well-being. Each content of learning is based on a strategy with 8 stages: Pre-test and Contract, Description, Modeling, Verbal Practice, Controlled Practice and Feedback, Advanced Practice and Feedback, Post-test and Contracts, and Generalization. This is a quasi-experimental study, with pre- and post-test, an experimental group (n=110) and a control group (n=99). The effect of the program was assessed on (a) verbal comprehension and expression; (b) school achievement; (c) self-esteem and study habits; (d) causal attributions of success; and (e) the opinions of the teachers. The EG improved significantly compared to the CG, indicating that such a program can bring not only school performance benefits but also personal ones for Portuguese students.

  6. IAEA coordinated research program on 'Harmonization and validation of fast reactor thermomechanical and thermohydraulic codes using experimental data'. 1. Thermohydraulic benchmark analysis of high-cycle thermal fatigue events that occurred at the French fast breeder reactor Phenix

    Energy Technology Data Exchange (ETDEWEB)

    Muramatsu, Toshiharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1997-06-01

    A benchmark exercise on 'Tee junction of Liquid Metal Fast Reactor (LMFR) secondary circuit' was proposed by France within the scope of the said Coordinated Research Program (CRP) via the International Atomic Energy Agency (IAEA). The physical phenomenon chosen here deals with the mixing of two flows of different temperature. In an LMFR, several areas of the reactor are subject to this problem. They are often difficult to design because of the complexity of the phenomena involved. This is one of the major problems of LMFRs. The problem was encountered in the Phenix reactor on the secondary loop, where defects in a tee junction zone were detected during a campaign of inspections after 90,000 hours of reactor operation. The present benchmark is based on an industrial problem and deals with thermal striping phenomena. Problems in pipes induced by thermal striping have been observed in the coolant circuits of some reactors and experimental facilities. This report presents numerical results on the thermohydraulic characteristics of the benchmark problem, obtained using the direct numerical simulation code DINUS-3 and the boundary element code BEMSET. The analysis with both codes confirmed that the hot sodium from the small pipe rises into the cold sodium of the main pipe with thermal instabilities. Furthermore, the coolant mixing region, including the instabilities, agrees approximately with the results of visual inspections. (author)

  7. From primer design to validation of results - is it possible by using free software only? / De la proiectarea primerilor la validarea rezultatelor - este posibil utilizând doar programe gratuite?

    Directory of Open Access Journals (Sweden)

    Man Adrian

    2015-06-01

    Full Text Available To succeed in molecular biology experiments, good knowledge of the various methods and protocols in this field is required. In molecular biology, proper handling of nucleic acids is essential, from the quality of the extraction to the actual assays (PCR, RT-PCR, qPCR, PCR array, cloning, etc.). Although many of the protocols are "standardized", in practice there are many variables that can influence the results of experiments. Given the importance of correct primer design in PCR-based methods, we focus on how primers are designed and verified with computer programs, but we also present other useful freely available programs that can help researchers in the field of molecular biology.
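As a small illustration of the arithmetic that primer-design tools automate, the sketch below applies the Wallace rule, a common rule of thumb for short oligos; the primer sequence is invented, and real free tools (e.g. Primer3, Primer-BLAST) check far more (hairpins, dimers, specificity):

```python
# Wallace rule for short primers (roughly < 14 nt):
#   Tm = 2*(A+T) + 4*(G+C)   (degrees Celsius)
# Illustration only; production tools use nearest-neighbor models.

def wallace_tm(primer):
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content(primer):
    p = primer.upper()
    return 100.0 * (p.count("G") + p.count("C")) / len(p)

primer = "ACGTGCTAGCTA"            # hypothetical 12-mer
print(wallace_tm(primer))          # 36
print(round(gc_content(primer)))   # 50
```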

  8. IDAHO NATIONAL LABORATORY PROGRAM TO OBTAIN BENCHMARK DATA ON THE FLOW PHENOMENA IN A SCALED MODEL OF A PRISMATIC GAS-COOLED REACTOR LOWER PLENUM FOR THE VALIDATION OF CFD CODES

    International Nuclear Information System (INIS)

    Hugh M. McIlroy Jr.; Donald M. McEligot; Robert J. Pink

    2008-01-01

    The experimental program being conducted at the Matched Index-of-Refraction (MIR) Flow Facility at Idaho National Laboratory (INL) to obtain benchmark data on flow phenomena in a scaled model of a typical prismatic gas-cooled reactor (GCR) lower plenum using 3-D Particle Image Velocimetry (PIV) is presented. A detailed description of the model, scaling, the experimental facility, the 3-D PIV system, measurement uncertainties and analysis, experimental procedures, and samples of the data sets that have been obtained is included. The samples presented include mean-velocity-field and turbulence data in an approximately 1:7 scale model of a region of the lower plenum of a typical prismatic GCR design. This experiment has been selected as the first Standard Problem endorsed by the Generation IV International Forum. Results concentrate on the region of the lower plenum near its far reflector wall (away from the outlet duct). Inlet jet Reynolds numbers (based on the jet diameter and the time-mean average flow rate) are approximately 4,300 and 12,400. The measurements reveal undeveloped, non-uniform flow in the inlet jets and complicated flow patterns in the model lower plenum. Data include three-dimensional vector plots, data displays along the coordinate planes (slices), and charts that describe the component flows at specific regions in the model. Information on inlet flow is also presented.
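The defining relation behind the quoted Reynolds numbers is Re = rho * V * D / mu; the fluid properties, jet diameter, and velocity below are assumed values chosen only to show the arithmetic, not the facility's actual conditions:

```python
# Reynolds number: Re = rho * V * D / mu
# All numeric values here are illustrative assumptions.

def reynolds(rho, velocity, diameter, mu):
    return rho * velocity * diameter / mu

rho = 1000.0   # kg/m^3, water-like working fluid (assumed)
mu = 1.0e-3    # Pa*s, dynamic viscosity (assumed)
D = 0.01       # m, jet diameter (assumed)
V = 0.43       # m/s, time-mean jet velocity (assumed)
print(round(reynolds(rho, V, D, mu)))  # 4300
```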

  9. Flight code validation simulator

    Science.gov (United States)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  10. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  11. CIPS Validation Data Plan

    Energy Technology Data Exchange (ETDEWEB)

    Nam Dinh

    2012-03-01

    This report documents the analysis, findings, and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for its implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view of and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, in order to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  12. CIPS Validation Data Plan

    International Nuclear Information System (INIS)

    Dinh, Nam

    2012-01-01

    This report documents the analysis, findings, and recommendations resulting from the task 'CIPS Validation Data Plan (VDP)', formulated as a POR4 activity in the CASL VUQ Focus Area (FA), to develop a Validation Data Plan for the Crud-Induced Power Shift (CIPS) challenge problem and to provide guidance for its implementation. The main reason and motivation for carrying out this task at this time in the VUQ FA is to bring together (i) knowledge of the modern view of and capability in VUQ, (ii) knowledge of the physical processes that govern CIPS, and (iii) knowledge of the codes, models, and data available, used, potentially accessible, and/or being developed in CASL for CIPS prediction, in order to devise a practical VDP that effectively supports CASL's mission in CIPS applications.

  13. Validating MEDIQUAL Constructs

    Science.gov (United States)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs across different media users in help desk service. In previous research, only two end-user constructs were used: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated through factor analysis; that is, measures of the same construct obtained using different methods have relatively high correlations, while measures of constructs that are expected to differ have low correlations; and 2) the five MEDIQUAL constructs have a statistically significant effect on media users' satisfaction in help desk service, as shown by regression analysis.

  14. DDML Schema Validation

    Science.gov (United States)

    2016-02-08

    XML schemas govern DDML instance documents. For information about XML, refer to RCC 125-15, XML Style Guide. Figure 4 provides an XML snippet of a... we have documented three main types of information. User Stories: A user story describes a specific requirement of the schema in the terms of a... instance document is a schema-valid XML file that completely describes the information in the test case in a manner that satisfies the user story.
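Full schema (XSD) validation of instance documents requires a schema-aware library (for example, lxml's XMLSchema); as a minimal sketch, the Python standard library can at least confirm that an instance document is well-formed XML, which is the first step of any validation pipeline:

```python
# Well-formedness check with the standard library.  This is NOT full
# schema validation; it only verifies the document parses as XML.
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<record><field>42</field></record>"))  # True
print(is_well_formed("<record><field>42</record>"))          # False
```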

  15. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in "American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors," ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard.

  16. PRA (Probabilistic Risk Assessments) Participation versus Validation

    Science.gov (United States)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop the models include approved and accepted design information regarding equipment, components, systems, and failure data. This methodology essentially validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle in order to identify, eliminate, or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk-mitigation changes. This can become an iterative process in which the PRA model is used to determine whether a mitigation technique is effective in reducing risk, resulting in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information that can positively influence final system and equipment design, the PRA tool can also participate in design development, producing a safe and cost-effective product.
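A toy example of the arithmetic behind comparing design alternatives in a PRA: failure probability of independent components arranged in series (any failure fails the system) versus in parallel (all must fail). The component probabilities are invented, not taken from the abstract:

```python
# Reliability arithmetic for independent components (illustrative only).

def p_fail_series(ps):
    # system fails if ANY component fails: 1 - prod(1 - p_i)
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_fail_parallel(ps):
    # system fails only if ALL components fail: prod(p_i)
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

pumps = [0.01, 0.01]                       # hypothetical failure probabilities
print(round(p_fail_series(pumps), 6))      # 0.0199
print(round(p_fail_parallel(pumps), 6))    # 0.0001 -- redundancy cuts risk sharply
```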

  17. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost effective environmental data collection process should produce analytical data which meets regulatory and program specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested. The verification process determines whether all the requirements were met. Validation is more complicated than verification. It attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consists of a sample result with an associated error. Therefore, radiochemical validation is different and more quantitative than is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainty that can be statistically compared to identify significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish the differences between the two operations

  18. NRC performance indicator program

    International Nuclear Information System (INIS)

    Singh, R.N.

    1987-01-01

    The performance indicator development work of the US Nuclear Regulatory Commission (NRC) interoffice task group involved several major activities that included selection of candidate indicators for a trial program, data collection and review, validation of the trial indicators, display method development, interactions with the industry, and selection of an optimum set of indicators for the program. After evaluating 27 potential indicators against certain ideal attributes, the task group selected 17 for the trial program. The pertinent data for these indicators were then collected from 50 plants at 30 sites. The validation of the indicators consisted of two primary processes: logical validity and statistical analysis. The six indicators currently in the program are scrams, safety system actuations, significant events, safety system failures, forced outage rate, and equipment forced outages per 100 critical hours. A report containing data on the six performance indicators and some supplemental information is issued on a quarterly basis. The NRC staff is also working on refinements of existing indicators and development of additional indicators as directed by the commission
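Among the six indicators, the forced outage rate lends itself to a one-line formula; a commonly used definition (assumed here, not quoted from the NRC program) is forced-outage hours divided by forced-outage plus service hours, expressed as a percentage:

```python
# Forced outage rate (assumed definition, percent):
#   FOR = 100 * FOH / (FOH + SH)
# where FOH = forced-outage hours and SH = service hours.

def forced_outage_rate(forced_outage_hours, service_hours):
    return 100.0 * forced_outage_hours / (forced_outage_hours + service_hours)

# hypothetical quarter: 120 forced-outage hours, 7880 service hours
print(round(forced_outage_rate(120.0, 7880.0), 1))  # 1.5
```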

  19. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity helps ensure construct validity and gives readers and researchers confidence in an instrument. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of the 13 that documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item for relevance, clarity, simplicity and ambiguity on the four-point scale, and the CVI for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 were retained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
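
    The item-level CVI used above is straightforward to compute: each expert rates an item on the 4-point scale, and the CVI is the fraction of experts rating it 3 or 4. A minimal sketch (the 0.75 cutoff follows the abstract; the ratings, item names, and function name are invented for illustration):

```python
def item_cvi(ratings):
    """Content Validity Index for one item: the fraction of experts
    giving it a rating of 3 or 4 on a 4-point scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings from three experts for four candidate items.
items = {
    "item1": [4, 4, 3],
    "item2": [2, 3, 4],
    "item3": [1, 2, 2],
    "item4": [4, 3, 4],
}

# Retain items whose CVI exceeds 0.75, as in the abstract.
retained = [name for name, ratings in items.items() if item_cvi(ratings) > 0.75]
print(retained)  # ['item1', 'item4']; item2 scores 2/3, below the cutoff
```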

  20. Validation of Evolution 220 (Validering av Evolution 220)

    OpenAIRE

    Krakeli, Tor-Arne

    2013-01-01

    - A new spectrophotometer (Evolution 220, Thermo Scientific) has been purchased for BioLab Nofima. In that connection, a validation was carried out involving calibration standards from the manufacturer and a test for normal distribution (t-test) on two methods (total phosphorus, tryptophan). This validation found the Evolution 220 to be an acceptable alternative to the spectrophotometer already in use (Helios Beta). Owing to some instrument limitations, the relevant an...

  1. Simulation Validation for Societal Systems

    National Research Council Canada - National Science Library

    Yahja, Alex

    2006-01-01

    .... There are, however, substantial obstacles to validation. The nature of modeling means that there are implicit model assumptions, a complex model space and interactions, emergent behaviors, and uncodified and inoperable simulation and validation knowledge...

  2. Nuclear data to support computer code validation

    International Nuclear Information System (INIS)

    Fisher, S.E.; Broadhead, B.L.; DeHart, M.D.; Primm, R.T. III

    1997-04-01

    The rate of plutonium disposition will be a key parameter in determining the degree of success of the Fissile Materials Disposition Program. Estimates of the disposition rate are dependent on neutronics calculations. To ensure that these calculations are accurate, the codes and data should be validated against applicable experimental measurements. Further, before mixed-oxide (MOX) fuel can be fabricated and loaded into a reactor, the fuel vendors, fabricators, fuel transporters, reactor owners and operators, regulatory authorities, and the Department of Energy (DOE) must accept the validity of design calculations. This report presents sources of neutronics measurements that have potential application for validating reactor physics (predicting the power distribution in the reactor core), predicting the spent fuel isotopic content, predicting the decay heat generation rate, certifying criticality safety of fuel cycle facilities, and ensuring adequate radiation protection at the fuel cycle facilities and the reactor. The U.S. in-reactor experience with MOX fuel is presented first, followed by other MOX fuel performance information that is valuable to this program; however, that data base remains largely proprietary, so the information is not reported here. It is expected that the selected consortium will make the necessary arrangements to procure or have access to the requisite information

  3. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

    James Ambrosek; James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  4. Audit Validation Using Ontologies

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2015-01-01

    Full Text Available Requirements for increasing the quality of audit processes in enterprises are defined, and the need to assess and manage audit processes using ontologies is substantiated. Sets of rules, and ways to assess the consistency of rules and behavior within the organization, are defined; using ontologies, qualifications that assess the organization's audit are then obtained. Elaborating audit reports is an algorithm-based activity characterized by generality, determinism, reproducibility and accuracy: auditors assign actual levels, while the ontologies produce calculated levels. Because an audit report is a qualitative structure of information and knowledge, it is very hard for different groups of users (shareholders, managers or stakeholders) to analyze and interpret. Developing an ontology for audit report validation will therefore be a useful instrument for both auditors and report users. In this paper we propose such an instrument for the validation of audit reports: reports contain keywords; for each keyword an indicator is calculated, with associated qualitative levels; and an interpreter builds a table of indicators comparing actual and calculated levels.

  5. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM has been proposed as a profile for UML models of the Web Ontology Language (OWL. In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits to give a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach we will show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions to properties of the transformation (invariants. The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER to Relational Model (RM transformation.

  6. Validating Savings Claims of Cold Climate Zero Energy Ready Homes

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, J. [Consortium for Advanced Residential Buildings, Norwalk, CT (United States); Puttagunta, S. [Consortium for Advanced Residential Buildings, Norwalk, CT (United States)

    2015-06-05

    This study was intended to validate the actual performance of three ZERHs in the Northeast against energy models created in REM/Rate v14.5 (one of the certified software programs used to generate a HERS Index) and in the National Renewable Energy Laboratory’s Building Energy Optimization (BEopt™) v2.3 E+ (a more sophisticated hourly energy simulation software). This report details the validation methods used to analyze energy consumption at each home.

  7. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  8. National HTGR safety program

    International Nuclear Information System (INIS)

    Davis, D.E.; Kelley, A.P. Jr.

    1982-01-01

    This paper presents an overview of the National HTGR Program in the US with emphasis on the safety and licensing strategy being pursued. This strategy centers upon the development of an integrated approach to organizing and classifying the functions needed to produce safe and economical nuclear power production. At the highest level, four plant goals are defined - Normal Operation, Core and Plant Protection, Containment Integrity and Emergency Preparedness. The HTGR features which support the attainment of each goal are described and finally a brief summary is provided of the current status of the principal safety development program supporting the validation of the four plant goals

  9. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes require to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
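
    A quantitative comparison of the kind advocated here could, for example, use a root-mean-square deviation normalized by the mean measurement, so that competing codes can be ranked by a single figure of merit. The metric choice, code names, and all data below are illustrative assumptions, not taken from the paper.

```python
import math

def nrmsd(predicted, measured):
    """Root-mean-square deviation of prediction from measurement,
    normalized by the mean measured value (dimensionless)."""
    n = len(measured)
    rms = math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)
    return rms / (sum(measured) / n)

# Invented experimental data and predictions from two competing codes.
measured = [100.0, 120.0, 140.0, 160.0]
codes = {
    "codeA": [98.0, 123.0, 138.0, 165.0],
    "codeB": [110.0, 115.0, 150.0, 150.0],
}

# Rank the codes in order of merit: smaller NRMSD is better.
ranking = sorted(codes, key=lambda c: nrmsd(codes[c], measured))
print(ranking)  # codeA fits the data more closely than codeB
```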

  10. Development validation and use of computer codes for inelastic analysis

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    A finite element scheme is a system which provides routines to carry out the operations which are common to all finite element programs. The list of items that can be provided as standard by a finite element scheme is surprisingly large, and the list provided by the UNCLE finite element scheme is unusually comprehensive. This presentation covers the following: construction of the program, setting up a finite element mesh, generation of coordinates, and incorporation of boundary and load conditions. Program validation was done by creep calculations performed using the CAUSE code. Program use is illustrated by calculating a typical inelastic analysis problem, including a computer model of the PFR intermediate heat exchanger

  11. Program specialization

    CERN Document Server

    Marlet, Renaud

    2013-01-01

    This book presents the principles and techniques of program specialization - a general method to make programs faster (and possibly smaller) when some inputs can be known in advance. As an illustration, it describes the architecture of Tempo, an offline program specializer for C that can also specialize code at runtime, and provides figures for concrete applications in various domains. Technical details address issues related to program analysis precision, value reification, incomplete program specialization, strategies to exploit specialized programs, incremental specialization, and data speci

  12. Validação de proposta de avaliação de programas de controle de infecção hospitalar Validación de propuesta de evaluación de programas de control de infección hospitalaria Validation of a proposal for evaluating hospital infection control programs

    Directory of Open Access Journals (Sweden)

    Cristiane Pavanello Rodrigues Silva

    2011-02-01

    OBJECTIVE: To validate the construct and discriminant properties of a hospital infection prevention and control program. METHODS: The program consisted of four indicators: technical-operational structure; operational prevention and control guidelines; epidemiological surveillance system; and prevention and control activities. These indicators, with previously validated content, were applied to 50 healthcare institutions in the city of São Paulo, Southeastern Brazil, in 2009. Descriptive statistics were used to characterize the hospitals and indicator scores, and Cronbach's α coefficient was used to evaluate the internal consistency. The discriminant validity was analyzed by comparing indicator scores between groups of hospitals: with versus without quality certification. The construct validity analysis was based on exploratory factor analysis with a tetrachoric correlation matrix. RESULTS: The indicators for the technical-operational structure and epidemiological surveillance presented almost 100% conformity in the whole sample. The indicators for the operational prevention and control guidelines and the prevention and control activities presented internal consistency ranging from 0.67 to 0.80. The discriminant validity of these indicators indicated higher and statistically significant mean conformity scores among the group of institutions with healthcare certification or accreditation processes. In the construct validation, two dimensions were identified for the operational prevention and control guidelines
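
    The internal-consistency figures quoted above (0.67 to 0.80) are Cronbach's alpha values. A minimal sketch of the standard formula, applied to invented 0/1 conformity scores (the data, names, and group sizes are purely illustrative):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents, each a list of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(scores[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 conformity scores for 4 hospitals on 3 indicator items.
data = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 1]]
print(round(cronbach_alpha(data), 2))  # 0.63
```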

  13. Transferability of Skills: Convergent, Postdictive, Criterion-Related, and Construct Validation of Cross-Job Retraining Time Estimates

    National Research Council Canada - National Science Library

    Kavanagh, Michael

    1997-01-01

    ... (job learning difficulty and cross-AFS differences in aptitude requirements), (b) XJRThs exhibited some postdictive validity when evaluated against Airman Retraining Program Survey retraining ease criteria, (c...

  14. Interior LED Lighting Technology. Navy Energy Technology Validation (Techval) Program

    Science.gov (United States)

    2015-09-01

    Consider replacing existing CFL, high-intensity discharge (HID), or halogen light fixtures/lamps with LED fixtures/lamps, especially where lights are usually on most of the time. What is the Technology? An LED is a semiconductor diode that emits light when power is applied. A driver is used, much as a ballast is, to ... LEDs are available in integrated luminaires that can be used to replace existing luminaires, and as direct replacement lamps for many

  15. Validation of the Voice of America Coverage Analysis Program (VOACAP)

    Science.gov (United States)

    2013-02-01


  16. Small Portable Analyzer Diagnostic Equipment (SPADE) Program -- Diagnostic Software Validation

    Science.gov (United States)

    1984-07-01

    Electronic Equipment Electromagnetic Emission and Susceptibility Requirements for the Control of Electromagnetic Interference ... Significant efforts were expended to simulate spalling failures associated with naturally

  17. Validation of an Automated Torsional and Warping Stress Analysis Program

    Science.gov (United States)

    1992-08-19


  18. Design for validation: An approach to systems validation

    Science.gov (United States)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, showing how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  19. Instrument validation system of general application

    International Nuclear Information System (INIS)

    Filshtein, E.L.

    1990-01-01

    This paper describes the Instrument Validation System (IVS) as a software system which has the capability of evaluating the performance of a set of functionally related instrument channels to identify failed instruments and to quantify instrument drift. Under funding from Combustion Engineering (C-E), the IVS has been developed to the extent that a computer program exists whose use has been demonstrated. The initial development work shows promise for success and for wide application, not only to power plants, but also to industrial manufacturing and process control. Applications in the aerospace and military sector are also likely
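
    The underlying idea, using redundancy among functionally related channels to identify a failed or drifting instrument, can be sketched as a comparison of each channel against the median of the group. The 3% tolerance, channel names, and readings below are illustrative assumptions, not details of the IVS.

```python
import statistics

def flag_channels(readings, tolerance=0.03):
    """Flag channels whose reading deviates from the group median
    by more than `tolerance` (as a fraction of the median).
    Returns a dict mapping flagged channel names to their fractional drift."""
    median = statistics.median(readings.values())
    return {
        name: (value - median) / median
        for name, value in readings.items()
        if abs(value - median) / median > tolerance
    }

# Four redundant pressure channels; channel "PT-3" has drifted high.
readings = {"PT-1": 1000.0, "PT-2": 1004.0, "PT-3": 1062.0, "PT-4": 998.0}
print(flag_channels(readings))  # only PT-3 is flagged
```

Using the median of the redundant set (rather than the mean) keeps a single grossly failed channel from dragging the reference value toward itself.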

  20. HTML Validation of Context-Free Languages

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2011-01-01

    We present an algorithm that generalizes HTML validation of individual documents to work on context-free sets of documents. Together with a program analysis that soundly approximates the output of Java Servlets and JSP web applications as context-free languages, we obtain a method for statically checking that such web applications never produce invalid HTML at runtime. Experiments with our prototype implementation demonstrate that the approach is useful: on 6 open source web applications consisting of a total of 104 pages, our tool finds 64 errors in less than a second per page, with 0 false...
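
    For a single document, one small piece of what HTML validation entails, checking that non-void tags are strictly nested and closed, can be sketched as follows. Real HTML permits tag omission (an unclosed <li> is legal), which a full validator and the paper's context-free generalization must handle; this toy checker and its abbreviated void-element list are illustrative only.

```python
from html.parser import HTMLParser

# Void elements take no closing tag (abbreviated list for illustration).
VOID = {"br", "img", "hr", "meta", "link", "input"}

class BalanceChecker(HTMLParser):
    """Collects strict-nesting errors: end tags that do not match the
    most recent open tag, and tags left open at end of input."""
    def __init__(self):
        super().__init__()
        self.stack, self.errors = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check(html):
    checker = BalanceChecker()
    checker.feed(html)
    checker.errors.extend(f"unclosed <{t}>" for t in checker.stack)
    return checker.errors

print(check("<ul><li>ok</li></ul>"))  # []
print(check("<ul><li>bad</ul>"))      # nesting errors reported
```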

  1. Spare Items validation

    International Nuclear Information System (INIS)

    Fernandez Carratala, L.

    1998-01-01

    There is increasing difficulty in purchasing safety-related spare items with manufacturer certifications that maintain the original qualifications of the equipment of destination. The main reasons are, on top of the logical evolution of technology applied to newly manufactured components, the quitting of nuclear-specific production lines and the evolution of manufacturers' quality systems, originally based on nuclear codes and standards, toward conventional industry standards. To face this problem, different Dedication processes have been implemented for many years to verify whether a commercial grade item is acceptable for use in safety-related applications. In the same way, owing to our particular position regarding spare part supplies, mainly from markets other than the American, C.N. Trillo has developed a methodology called Spare Items Validation. This methodology, originally based on dedication processes, is not a single process but a group of coordinated processes involving engineering, quality and management activities. These are performed on the spare item itself, its design control, its fabrication and its supply, to allow its use in destinations with specific requirements. The scope of application is focused not only on safety-related items but also on complex-design, high-cost or plant-reliability-related components. The implementation in C.N. Trillo has been carried out mainly by merging, modifying and making the most of processes and activities which were already being performed in the company. (Author)

  2. SHIELD verification and validation report

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    This document outlines the verification and validation effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system code. Along with its predecessors, SHIELD has been in use at the Savannah River Site (SRS) for more than ten years. During this time the code has been extensively tested and a variety of validation documents have been issued. The primary function of this report is to specify the features and capabilities for which SHIELD is to be considered validated, and to reference the documents that establish the validation

  3. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high and intermediate level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge by testing models against experiments and thereby increasing confidence in the models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programs and underground research laboratories, as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  4. EPRI MOV performance prediction program

    International Nuclear Information System (INIS)

    Hosler, J.F.; Damerell, P.S.; Eidson, M.G.; Estep, N.E.

    1994-01-01

    An overview of the EPRI Motor-Operated Valve (MOV) Performance Prediction Program is presented. The objectives of this Program are to better understand the factors affecting the performance of MOVs and to develop and validate methodologies to predict MOV performance. The Program involves valve analytical modeling, separate-effects testing to refine the models, and flow-loop and in-plant MOV testing to provide a basis for model validation. The ultimate product of the Program is an MOV Performance Prediction Methodology applicable to common gate, globe, and butterfly valves. The methodology predicts thrust and torque requirements at design-basis flow and differential pressure conditions, assesses the potential for gate valve internal damage, and provides test methods to quantify potential variations in actuator output thrust with loading condition. Key findings and their potential impact on MOV design and engineering application are summarized

  5. Steam generator tube integrity program

    International Nuclear Information System (INIS)

    Dierks, D.R.; Shack, W.J.; Muscara, J.

    1996-01-01

    A new research program on steam generator tubing degradation is being sponsored by the U.S. Nuclear Regulatory Commission (NRC) at Argonne National Laboratory. This program is intended to support a performance-based steam generator tube integrity rule. Critical areas addressed by the program include evaluation of the processes used for the in-service inspection of steam generator tubes, with recommendations for improving the reliability and accuracy of inspections; validation and improvement of correlations for evaluating the integrity and leakage of degraded steam generator tubes; and validation and improvement of correlations and models for predicting degradation in steam generator tubes as aging occurs. The studies will focus on mill-annealed Alloy 600 tubing; however, tests will also be performed on replacement materials such as thermally-treated Alloy 600 or 690. An overview of the technical work planned for the program is given

  6. Towards automatic verification of ladder logic programs

    OpenAIRE

    Zoubek , Bohumir; Roussel , Jean-Marc; Kwiatkowska , Martha

    2003-01-01

    Control system programs are usually validated by testing prior to their deployment. Unfortunately, testing is not exhaustive and therefore it is possible that a program which passed all the required tests still contains errors. In this paper we apply techniques of automatic verification to a control program written in ladder logic. A model is constructed mechanically from the ladder logic program and subjected to automatic verification against requirements that include...

  7. Validation testing of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Hang Bae; Han, Jae Bok

    1995-01-01

    A software engineering process has been developed for the design of safety-critical software for the Wolsung 2/3/4 project to satisfy the requirements of the regulatory body. Within that process, this paper describes the detailed validation testing performed to ensure that the software, with its hardware, developed by the design group satisfies the requirements of the functional specification prepared by the independent functional group. To perform the tests, a test facility and test software were developed and the actual safety system computer was connected. Three kinds of test cases, i.e., functional tests, performance tests and self-check tests, were programmed and run to verify each functional specification. Test failures were fed back to the design group to revise the software, and test results were analyzed and documented in the report submitted to the regulatory body. The test methodology and procedure were efficient and satisfactory for performing systematic and automatic testing. The test results were also acceptable and successful in verifying that the software acts as specified in the program functional specification. This methodology can be applied to the validation of other safety-critical software. 2 figs., 2 tabs., 14 refs. (Author)

  8. Cleaning Validation of Fermentation Tanks

    DEFF Research Database (Denmark)

    Salo, Satu; Friis, Alan; Wirtanen, Gun

    2008-01-01

    Reliable test methods for checking cleanliness are needed to evaluate and validate the cleaning process of fermentation tanks. Pilot-scale tanks were used to test the applicability of various methods for this purpose. The methods found to be suitable for validation of the cleanliness were visual...

  9. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  10. Validity in SSM: neglected areas

    NARCIS (Netherlands)

    Pala, O.; Vennix, J.A.M.; Mullekom, T.L. van

    2003-01-01

    Contrary to the prevailing notion in hard OR, in soft system methodology (SSM), validity seems to play a minor role. The primary reason for this is that SSM models are of a different type, they are not would-be descriptions of real-world situations. Therefore, establishing their validity, that is

  11. The Consequences of Consequential Validity.

    Science.gov (United States)

    Mehrens, William A.

    1997-01-01

    There is no agreement at present about the importance or meaning of the term "consequential validity." It is important that the authors of revisions to the "Standards for Educational and Psychological Testing" recognize the debate and relegate discussion of consequences to a context separate from the discussion of validity.…

  12. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  13. The Mistra experiment for field containment code validation first results

    International Nuclear Information System (INIS)

    Caron-Charles, M.; Blumenfeld, L.

    2001-01-01

    The MISTRA facility is a large scale experiment, designed for the purpose of thermal-hydraulics multi-D codes validation. A short description of the facility, the set up of the instrumentation and the test program are presented. Then, the first experimental results, studying helium injection in the containment and their calculations are detailed. (author)

  14. Validation of SSC using the FFTF natural-circulation tests

    International Nuclear Information System (INIS)

    Horak, W.C.; Guppy, J.G.; Kennett, R.J.

    1982-01-01

    As part of the Super System Code (SSC) validation program, the 100% power FFTF natural circulation test has been simulated using SSC. A detailed 19 channel, 2 loop model was used in SSC. Comparisons showed SSC calculations to be in good agreement with the Fast Flux Test Facility (FFTF) test data. Simulation of the test was obtained in real time

  15. A Validity Study of the Self-Esteem Inventory.

    Science.gov (United States)

    Landis, H. John

    Results of this validation study of a slightly modified version of the Coppersmith Self-Esteem Inventory substantiate its use with seventh graders to assess Goal I (concerning self-understanding and appreciation of self-worth) of the Educational Quality Assessment Program in Pennsylvania. Appendixes include the definition and rationale for Goal I,…

  16. Construct Validation--Community College Instructional Development Inventory

    Science.gov (United States)

    Xiong, Soua; Delgado, Nexi; Wood, J. Luke; Harris, Frank, III

    2017-01-01

    This white paper describes the construct validation of the Community College Instructional Development Inventory (CC-IDI). The CC-IDI is an institutional assessment tool designed to inform professional development programming for instructional faculty. The instrument was developed to serve as a standardized assessment tool to determine the…

  17. A Validation Study of the Student Oral Proficiency Assessment (SOPA).

    Science.gov (United States)

    Thompson, Lynn E.; Kenyon, Dorry M.; Rhodes, Nancy C.

    This study validated the Student Oral Proficiency Assessment (SOPA), an oral proficiency instrument designed for students in elementary foreign language programs. Elementary students who were tested with the SOPA were also administered other instruments designed to measure proficiency. These instruments included the Stanford Foreign Language Oral…

  18. Seismic analysis program group: SSAP

    International Nuclear Information System (INIS)

    Uchida, Masaaki

    2002-05-01

    A group of programs, SSAP, has been developed, each member of which performs seismic calculations using a simple single-mass system model or a multi-mass system model. For the response of structures to a transverse s-wave, a single-mass model program calculating the response spectrum and a multi-mass model program are available. They perform calculations using the output of another program, which produces simulated earthquakes having the so-called Ohsaki-spectrum characteristic. A further program has been added, which calculates the response of one-dimensional multi-mass systems to vertical p-wave input. It places particular emphasis on the analysis of the phenomena observed in some shallow earthquakes in which stones jump off the ground. Through a series of test calculations using these programs, some interesting information has been derived concerning the validity of superimposing single-mass model calculations, and also the condition for stones to jump. (author)
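    The single-mass response-spectrum calculation described above can be sketched generically. The following is an illustrative sketch, not SSAP's actual code: a damped single-degree-of-freedom oscillator under base ground acceleration, integrated with the standard Newmark average-acceleration scheme, with the peak displacement collected over a grid of periods to form a response spectrum. The ground motion and the period grid are toy values.

```python
import numpy as np

def sdof_peak_response(ag, dt, period, zeta=0.05):
    """Peak displacement of a damped single-mass oscillator driven at its base
    by ground acceleration ag (per unit mass), integrated with the Newmark
    average-acceleration scheme (beta = 1/4, gamma = 1/2)."""
    w = 2.0 * np.pi / period            # natural circular frequency
    k, c = w * w, 2.0 * zeta * w        # stiffness and damping per unit mass
    beta, gamma = 0.25, 0.5
    u, v, a = 0.0, 0.0, -ag[0]          # initial state: u'' = -ag(0)
    keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
    umax = 0.0
    for p in -ag[1:]:                   # effective load at each time step
        rhs = (p
               + u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a
               + c * (gamma * u / (beta * dt)
                      + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        un = rhs / keff
        an = (un - u) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        vn = v + dt * ((1.0 - gamma) * a + gamma * an)
        u, v, a = un, vn, an
        umax = max(umax, abs(u))
    return umax

# displacement response spectrum of a toy 1 Hz ground motion
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = np.sin(2.0 * np.pi * t)
spectrum = {T: sdof_peak_response(ag, dt, T) for T in (0.5, 1.0, 2.0)}
```

    As expected, the spectrum peaks where the oscillator period matches the 1 s forcing period.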

  19. The measurement of instrumental ADL: content validity and construct validity

    DEFF Research Database (Denmark)

    Avlund, K; Schultz-Larsen, K; Kreiner, S

    1993-01-01

    ...showed that 14 items could be combined into two qualitatively different additive scales. The IADL-measure complies with demands for content validity, distinguishes between what the elderly actually do and what they are capable of doing, and is a good discriminator among the group of elderly persons who do not depend on help. It is also possible to add the items in a valid way. However, to obtain valid IADL-scales, we omitted items that were highly relevant to especially elderly women, such as house-work items. We conclude that the criteria employed for this IADL-measure are somewhat contradictory.

  20. Italian Validation of Homophobia Scale (HS).

    Science.gov (United States)

    Ciocca, Giacomo; Capuano, Nicolina; Tuziak, Bogdan; Mollaioli, Daniele; Limoncin, Erika; Valsecchi, Diana; Carosa, Eleonora; Gravina, Giovanni L; Gianfrilli, Daniele; Lenzi, Andrea; Jannini, Emmanuele A

    2015-09-01

    The Homophobia Scale (HS) is a valid tool to assess homophobia. This self-report test is composed of 25 items and yields a total score plus three factors linked to homophobia: behavior/negative affect, affect/behavioral aggression, and negative cognition. The aim of this study was to validate the HS in the Italian context. An Italian translation of the HS was carried out by two bilingual people, after which an English native speaker translated the test back into English. A psychologist and sexologist checked the translated items from a clinical point of view. We recruited 100 subjects aged 18-65 for the Italian validation of the HS. The Pearson coefficient and Cronbach's α coefficient were computed to assess test-retest reliability and internal consistency. A sociodemographic questionnaire covering age, geographic distribution, partnership status, education, religious orientation, and sexual orientation was administered together with the translated version of the HS. The analysis of internal consistency showed an overall Cronbach's α coefficient of 0.92; the coefficient was 0.90 for behavior/negative affect, 0.94 for affect/behavioral aggression, 0.92 for negative cognition, and 0.86 for the total score. Test-retest reliability was r = 0.93 for the HS total score and r = 0.75 for negative cognition (both significant). The Italian validation of the HS thus showed good psychometric properties. This study offers a new tool to assess homophobia; the HS can be introduced into clinical praxis and into programs for the prevention of homophobic behavior.
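    The internal-consistency statistic reported in this record, Cronbach's α, has a standard closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A generic illustration on synthetic data (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)

# toy check: perfectly correlated items give alpha = 1
base = np.arange(1, 11, dtype=float)
perfect = np.column_stack([base, base, base])
print(round(cronbach_alpha(perfect), 3))  # -> 1.0
```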

  1. Predictive validity of the Slovene Matura

    Directory of Open Access Journals (Sweden)

    Valentin Bucik

    2001-09-01

    Passing the Matura is the last step of secondary-school graduation, but it is also the entrance ticket to university. In addition, the summary score of the Matura exam enters the selection process for particular university studies in the case of 'numerus clausus'. In discussing either aim of the Matura, important dilemmas arise: is the Matura examination a sufficiently exact and rightful procedure, first, for using its results to set starting study conditions and, second, for validly, reliably and sensibly selecting the best candidates for university studies? Several questions concerning the predictive validity of the Matura should be answered, e.g. (i) does the Matura as an enrollment procedure add to the quality of the study; (ii) is it a better selection tool than the entrance examinations formerly used by different faculties in the case of 'numerus clausus'; and (iii) is it reasonable to expect high predictive validity of Matura results for success at the university at all? Recent results show that in the last few years the dropout rate is lower than before, the pass rate between the first and the second year is higher, and the average duration of study per student is shorter. It is clear, however, that study success cannot simply be predicted from Matura results; too many factors influence success in university studies. In most examined study programs the correlation between Matura results and study success is positive but moderate, so it cannot be said categorically that only candidates accepted according to Matura results are (or will be) the best students. Yet it has been shown that the Matura is a standardized procedure, comparable across candidates entering university, and that, compared with entrance examinations, it is more objective and reliable, and hence a more valid and fair procedure. In addition, comparable procedures of university recruiting and selection can be

  2. The assessment validity of the curriculum in gender and education

    Directory of Open Access Journals (Sweden)

    Rosa María González

    2013-01-01

    This study assesses the validity of the specialization in gender and education offered by the National Pedagogical University since 1999, based on two major areas: external validity and internal validity. The first contemplates four sections: a review of gender studies in education as a field of knowledge, a comparative analysis of this educational program with similar national and international programs, a review of its relevance to public gender and education policy, and an overview of the demands of the professional workplace for specialists in the field. The assessment of internal validity followed a strategy of verifying the consistency of course objectives with the overall objective of the curriculum, as well as reviewing the correspondence between the scope proposed for each course theme and the characteristics of the graduate profile.

  3. Program History

    Science.gov (United States)

    Learn how the National Cancer Institute transitioned the former Cooperative Groups Program to the National Clinical Trials Network (NCTN) program. The NCTN gives funds and other support to cancer research organizations to conduct cancer clinical trials.

  4. Program auto

    International Nuclear Information System (INIS)

    Rawool-Sullivan, M.W.; Plagnol, E.

    1990-01-01

    The program AUTO was developed for use in the analysis of dE vs E type spectra. The program is written in FORTRAN and calculates dE vs E lines in MeV. Provision is also made in the program to convert these lines from MeV to ADC channel numbers to facilitate comparison with the raw data from the experiments. Currently the output of this program can be plotted with the display program VISU, but it can also be used independently of VISU, with little or no modification of the actual Fortran code. The program AUTO has many useful applications. In this article the program AUTO is described along with its applications
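    The MeV-to-ADC-channel conversion mentioned above is conventionally a linear energy calibration, channel = gain · E + offset. A minimal sketch with made-up gain and offset values (AUTO's actual calibration handling is not specified in this record):

```python
def mev_to_channel(energy_mev, gain=250.0, offset=12.0):
    """Map an energy in MeV to an ADC channel using a linear calibration
    channel = gain * E + offset (gain and offset here are invented examples)."""
    return int(round(gain * energy_mev + offset))

def line_to_channels(points, gain_de, off_de, gain_e, off_e):
    """Convert a dE-vs-E line, given as (dE, E) pairs in MeV, into
    (dE_channel, E_channel) pairs for comparison with raw spectra."""
    return [(mev_to_channel(de, gain_de, off_de),
             mev_to_channel(e, gain_e, off_e)) for de, e in points]

print(mev_to_channel(2.0))  # -> 512
```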

  5. An Eco-Systemic Family Assessment Scale for Social Programs: Reliability and Validity of the NCFAS in a High Psychosocial Risk Population

    Directory of Open Access Journals (Sweden)

    Edgar Valencia

    2010-05-01

    Evidence about the internal consistency and construct validity of the Spanish version of the North Carolina Family Assessment Scale (NCFAS) is presented. The study used records of 8 Chilean family preservation programs for the prevention of child abuse and neglect (N = 528). The results indicate that the scale has appropriate internal consistency, behaving similarly to the original version of the scale. The exploratory factor analysis partially supports the theoretical domains of the assessment tool. Redistributing the items of the Family Interactions domain and including a new factor called Caregiver's Well-being is suggested. The necessity of adapting the NCFAS to a Latin-American context in future research is discussed.

  6. 38 CFR 1.15 - Standards for program evaluation.

    Science.gov (United States)

    2010-07-01

    ... program operates. (3) Validity. The degree of statistical validity should be assessed within the research... intent, contain a method to measure fulfillment of the objectives, ascertain the degree to which goals... the data. (f) Each program evaluation requires a systematic research design to collect the data...

  7. Convergent validity test, construct validity test and external validity test of the David Liberman algorithm

    Directory of Open Access Journals (Sweden)

    David Maldavsky

    2013-08-01

    The author first presents a complement to a previous test of convergent validity, then a construct validity test, and finally an external validity test of the David Liberman algorithm (DLA). The first part of the paper focuses on a complementary aspect, the differential sensitivity of the DLA (1) in an external comparison, to other methods, and (2) in an internal comparison, between two ways of using the same method, the DLA. The construct validity test presents the concepts underlying the DLA, their operationalization, and some corrections emerging from several empirical studies we carried out. The external validity test examines the possibility of using the investigation of a single case and its relation to the investigation of a more extended sample.

  8. Validation of EAF-2005 data

    International Nuclear Information System (INIS)

    Kopecky, J.

    2005-01-01

    Validation procedures applied to the EAF-2003 starter file, which led to the production of the EAF-2005 library, are described. The results, in terms of reactions with assigned quality scores in EAF-2005, are given. Further, the extensive validation against recent integral data is discussed, together with the status of the final report 'Validation of EASY-2005 using integral measurements'. Finally, the novel 'cross section trend analysis' is presented with some examples of its use. This action will lead to the release of the improved library EAF-2005.1 at the end of 2005, which shall be used as the starter file for EAF-2007. (author)

  9. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  10. XML Graphs in Program Analysis

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael I.

    2011-01-01

    XML graphs have proven to be a simple and effective formalism for representing sets of XML documents in program analysis. The formalism has evolved through a six-year period, with variants tailored for a range of applications. We present a unified definition, outline the key properties including validation of XML graphs against different XML schema languages, and provide a software package that enables others to make use of these ideas. We also survey the use of XML graphs for program analysis with four very different languages: XACT (XML in Java), Java Servlets (Web application programming), XSugar...

  11. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only 1 part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  12. Validation of Autonomous Space Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — System validation addresses the question "Will the system do the right thing?" When system capability includes autonomy, the question becomes more pointed. As NASA...

  13. Magnetic Signature Analysis & Validation System

    National Research Council Canada - National Science Library

    Vliet, Scott

    2001-01-01

    The Magnetic Signature Analysis and Validation (MAGSAV) System is a mobile platform that is used to measure, record, and analyze the perturbations to the earth's ambient magnetic field caused by objects such as armored vehicles...

  14. Contextual Validity in Hybrid Logic

    DEFF Research Database (Denmark)

    Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin

    2013-01-01

    Indexicals give rise to a special kind of validity, contextual validity, that interacts with ordinary logical validity in interesting and often unexpected ways. In this paper we model these interactions by combining standard techniques from hybrid logic with insights from the work of Hans Kamp and David Kaplan. We introduce a simple proof rule, which we call the Kamp Rule, and first we show that it is all we need to take us from logical validities involving now to contextual validities involving now too. We then go on to show that this deductive bridge is strong enough to carry us to contextual validities involving yesterday, today and tomorrow as well.

  15. Validating Farmers' Indigenous Social Networks for Local Seed Supply in Central Rift Valley of Ethiopia

    NARCIS (Netherlands)

    Seboka, B.; Deressa, A.

    2000-01-01

    Indigenous social networks of Ethiopian farmers participate in seed exchange based on mutual interdependence and trust. A government-imposed extension program must validate the role of local seed systems in developing a national seed industry

  16. Validity, reliability, and feasibility of clinical staging scales in dementia: a systematic review

    DEFF Research Database (Denmark)

    Rikkert, Marcel G M Olde; Tona, Klodiana Daphne; Janssen, Lieneke

    2011-01-01

    New staging systems of dementia require adaptation of disease management programs and adequate staging instruments. Therefore, we systematically reviewed the literature on validity and reliability of clinically applicable, multidomain, and dementia staging instruments. A total of 23 articles...

  17. Translation and validation of the Malay version of the Stroke Knowledge Test

    Directory of Open Access Journals (Sweden)

    Siti Noorkhairina Sowtali

    2016-04-01

    Conclusions: Malay version Stroke Knowledge Test was a valid and reliable tool to assess educational needs and to evaluate stroke knowledge among participants of group-based stroke education programs in Malaysia.

  18. Precision Glass Molding: Validation of an FE Model for Thermo-Mechanical Simulation

    DEFF Research Database (Denmark)

    Sarhadi, Ali; Hattel, Jesper Henri; Hansen, Hans Nørgaard

    2014-01-01

    glass molding process including heating, pressing, and cooling stages. Temperature-dependent viscoelastic and structural relaxation behavior of the glass material is implemented through a FORTRAN material subroutine (UMAT) in the commercial FEM program ABAQUS, and the FE model is validated...

  19. Validity and Reliability of the Korean Version of the Health Empowerment Scale (K-HES) for Older Adults

    Directory of Open Access Journals (Sweden)

    Chorong Park, MSN, RN

    2013-09-01

    Conclusion: The K-HES had acceptable validity and reliability. The brevity and ease of administration of the K-HES makes it a suitable tool for evaluating empowerment-based education programs targeted towards older populations.

  20. MARS Validation Plan and Status

    International Nuclear Information System (INIS)

    Ahn, Seung-hoon; Cho, Yong-jin

    2008-01-01

    The KINS Reactor Thermal-hydraulic Analysis System (KINS-RETAS) under development is directed toward a realistic analysis approach of best-estimate (BE) codes and realistic assumptions. In this system, MARS is pivotal, providing the BE thermal-hydraulic (T-H) response in the core and reactor coolant system to various operational transients and accident conditions. As required for other BE codes, qualification is essential to ensure reliable and reasonable accuracy for a targeted MARS application. Validation is a key element of code qualification, and determines the capability of a computer code in predicting the major phenomena expected to occur. The MARS validation was made by its developer KAERI, on the basic premise that its backbone code RELAP5/MOD3.2 is well qualified against analytical solutions and test or operational data. A screening was made to select the test data for MARS validation; some models transplanted from RELAP5, if already validated and found to be acceptable, were screened out from assessment. This seems reasonable, but does not demonstrate whether code adequacy complies with the software QA guidelines. In particular, there may be much difficulty in validating life-cycle products such as code updates or modifications. This paper presents the plan for MARS validation and the current implementation status

  1. Validation: an overview of definitions

    International Nuclear Information System (INIS)

    Pescatore, C.

    1995-01-01

    The term validation features prominently in the literature on high-level radioactive waste disposal and is generally understood to relate to model testing using experiments. In a first class of definitions, validation is linked to the goal of predicting the physical world as faithfully as possible, but this goal is unattainable and unsuitable for setting goals for safety analyses. In a second class, validation is associated with split-sampling or blind-test predictions. In a third class, validation focuses on the quality of the decision-making process. Most prominent in the present review is the observed lack of use of the term validation in the field of low-level radioactive waste disposal. The continued informal use of the term validation in the field of high-level waste disposal can become a cause of misperceptions and endless speculations. The paper proposes either abandoning the use of this term or agreeing on a definition common to all. (J.S.). 29 refs
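    The split-sampling idea in the second class of definitions can be illustrated generically: calibrate a model on one random half of the observations and judge it only on the unseen half. A toy sketch (the linear model and all names are illustrative, not from the paper):

```python
import random

def split_sample_validate(data, fit, predict, err):
    """Split-sampling validation: calibrate on a random half of the
    observations, then score predictions only on the unseen half."""
    data = list(data)
    random.shuffle(data)
    half = len(data) // 2
    calib, blind = data[:half], data[half:]
    model = fit(calib)
    return sum(err(predict(model, x), y) for x, y in blind) / len(blind)

def fit_slope(pairs):
    """Least-squares slope a for the toy model y = a * x."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

random.seed(0)
obs = [(x, 3.0 * x) for x in range(1, 21)]            # noiseless y = 3x
mean_err = split_sample_validate(obs, fit_slope,
                                 lambda a, x: a * x,
                                 lambda p, y: abs(p - y))
print(mean_err)  # -> 0.0
```

    With noiseless data the held-out error is zero; with real data it quantifies how well the calibrated model generalizes.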

  2. RisoeScan 1.0 - User manual and toolset for retrospective validation

    Energy Technology Data Exchange (ETDEWEB)

    Helt-Hansen, J

    2004-12-01

    The RisoeScan software is used for dose measurements with radiochromic films, which develop a visible color. This report consists of two documents for use with the RisoeScan software: the User Manual, which describes how to use the program, and the Toolset for Retrospective Validation, which describes how to perform a retrospective validation of the software. (au)
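    Radiochromic film dosimetry of the kind RisoeScan supports conventionally converts scanner readings to a net optical density and then to dose through a film-batch calibration curve. A generic sketch (the OD definition is standard; the calibration constants are invented, and this is not RisoeScan's actual algorithm):

```python
import math

def net_optical_density(pv_exposed, pv_unexposed):
    """Net optical density from scanner pixel values: OD = log10(PV0 / PV)."""
    return math.log10(pv_unexposed / pv_exposed)

def dose_from_od(od, a=0.5, b=2.0):
    """Dose from net OD via a power-law calibration dose = a * OD**b
    (a and b are invented here; a real film batch needs its own fit)."""
    return a * od ** b

print(round(net_optical_density(1000, 10000), 3))  # -> 1.0
```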

  3. Validating the Alcohol Use Disorders Identification Test with Persons Who Have a Serious Mental Illness

    Science.gov (United States)

    O'Hare, Thomas; Sherrer, Margaret V.; LaButti, Annamaria; Emrick, Kelly

    2004-01-01

    Objective/Method: The use of brief, reliable, valid, and practical measures of substance use is critical for conducting individual assessments and program evaluation for integrated mental health-substance abuse services for persons with serious mental illness. This investigation examines the internal consistency reliability, concurrent validity,…

  4. Validating a Finite Element Model of a Structure Subjected to Mine Blast with Experimental Modal Analysis

    Science.gov (United States)

    2017-11-01

    Howle, Dmitriy Krayterman, Justin E. Pritchett, and Ryan Sorenson. The Under-body Blast Methodology (UBM) for the Test and Evaluation (T&E) program was established to provide a capability for the US Army Test and... and must be validated. The UBM for the T&E program has completed efforts to validate soil models but not structural dynamics models. Modal testing

  5. Site characterization and validation - validation drift fracture data, stage 4

    International Nuclear Information System (INIS)

    Bursey, G.; Gale, J.; MacLeod, R.; Straahle, A.; Tiren, S.

    1991-08-01

    This report describes the mapping procedures and the data collected during fracture mapping in the validation drift. Fracture characteristics examined include orientation, trace length, termination mode, and fracture minerals. These data have been compared and analysed together with fracture data from the D-boreholes to determine the adequacy of the borehole mapping procedures and to assess the nature and degree of orientation bias in the borehole data. The analysis of the validation drift data also includes a series of corrections to account for orientation, truncation, and censoring biases. This analysis has identified at least 4 geologically significant fracture sets in the rock mass defined by the validation drift. An analysis of the fracture orientations in both the good rock and the H-zone has defined groups of 7 clusters and 4 clusters, respectively. Subsequent analysis of the fracture patterns in five consecutive sections along the validation drift further identified heterogeneity through the rock mass, with respect to fracture orientations. These results are in stark contrast to the results from the D-borehole analysis, where a strong orientation bias resulted in a consistent pattern of measured fracture orientations through the rock. In the validation drift, fractures in the good rock also display a greater mean variance in length than those in the H-zone. These results provide strong support for a distinction being made between fractures in the good rock and the H-zone, and possibly between different areas of the good rock itself, for discrete modelling purposes. (au) (20 refs.)
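    A standard way to correct the kind of borehole orientation bias discussed above is Terzaghi weighting, in which each fracture is up-weighted by 1/cos θ, where θ is the angle between the fracture normal and the scanline or borehole axis. A sketch of that correction (not necessarily the exact procedure used in this study):

```python
import math

def terzaghi_weight(angle_deg, max_weight=10.0):
    """Terzaghi weight 1/cos(theta) for a fracture whose normal makes angle
    theta with the scanline; capped to avoid blow-up near 90 degrees."""
    c = math.cos(math.radians(angle_deg))
    return min(max_weight, 1.0 / c) if c > 0 else max_weight

def weighted_set_frequency(angles_deg):
    """Bias-corrected relative frequency of a fracture set sampled
    along a single scanline or borehole."""
    return sum(terzaghi_weight(a) for a in angles_deg)

# fractures nearly parallel to the borehole (large theta) are up-weighted
print(round(terzaghi_weight(0.0), 2), round(terzaghi_weight(60.0), 2))  # -> 1.0 2.0
```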

  6. Material Programming

    DEFF Research Database (Denmark)

    Vallgårda, Anna; Boer, Laurens; Tsaknaki, Vasiliki

    2017-01-01

    ...and color, but additionally being capable of sensing, actuating, and computing. Indeed, computers will not be things in and by themselves, but embedded into the materials that make up our surroundings. This also means that the way we interact with computers and the way we program them will change. Consequently we ask what the practice of programming and giving form to such materials would be like. How would we be able to familiarize ourselves with the dynamics of these materials and their different combinations of cause and effect? Which tools would we need and what would they look like? Will we program these computational composites through external computers and then transfer the code to them, or will the programming happen closer to the materials? In this feature we outline a new research program that floats between imagined futures and the development of a material programming practice.

  7. Functional Programming in Computer Science

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Loren James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Davis, Marion Kei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-19

    We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.
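    The record's point that every functional language is essentially an implementation of lambda calculus can be illustrated with Church numerals, here written directly as Python lambdas (Python standing in for a functional language): a numeral n is the function that applies f to x n times.

```python
# Church numerals: a number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
plus = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # -> 5
```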

  8. Effective Programming

    DEFF Research Database (Denmark)

    Frost, Jacob

    The goal is to investigate the use of VTLoE as a basis for formal derivation of functional programs with effects. As part of the process, a number of issues central to effective formal programming are considered: in particular, how to develop a proof system suitable for practical reasoning, how to implement this system in the generic proof assistant Isabelle, and finally how to apply the logic and the implementation to programming.

  9. Program Fullerene

    DEFF Research Database (Denmark)

    Wirz, Lukas; Peter, Schwerdtfeger,; Avery, James Emil

    2013-01-01

    Fullerene (Version 4.4), is a general purpose open-source program that can generate any fullerene isomer, perform topological and graph theoretical analysis, as well as calculate a number of physical and chemical properties. The program creates symmetric planar drawings of the fullerene graph, an......-Fowler, and Brinkmann-Fowler vertex insertions. The program is written in standard Fortran and C++, and can easily be installed on a Linux or UNIX environment....

  10. Verification and Validation for Flight-Critical Systems (VVFCS)

    Science.gov (United States)

    Graves, Sharon S.; Jacobsen, Robert A.

    2010-01-01

    On March 31, 2009 a Request for Information (RFI) was issued by NASA's Aviation Safety Program to gather input on the subject of Verification and Validation (V&V) of Flight-Critical Systems. The responses were provided to NASA on or before April 24, 2009. The RFI asked for comments in three topic areas: Modeling and Validation of New Concepts for Vehicles and Operations; Verification of Complex Integrated and Distributed Systems; and Software Safety Assurance. There were a total of 34 responses to the RFI, representing a cross-section of academia (26%), small and large industry (47%), and government agencies (27%).

  11. SAS validation and analysis of in-pile TUCOP experiments

    International Nuclear Information System (INIS)

    Morman, J.A.; Tentner, A.M.; Dever, D.J.

    1985-01-01

    The validation of the SAS4A accident analysis code centers on its capability to calculate the wide range of tests performed in the TREAT (Transient Reactor Test Facility) in-pile experiments program. This paper presents the SAS4A analysis of a simulated TUCOP (Transient-Under-Cooled-Over-Power) experiment using seven full-length PFR mixed oxide fuel pins in a flowing sodium loop. Calculations agree well with measured thermal-hydraulic, pin failure time and post-failure fuel motion data. The extent of the agreement confirms the validity of the models used in the SAS4A code to describe TUCOP accidents

  12. Programming F#

    CERN Document Server

    Smith, Chris

    2009-01-01

    Why learn F#? This multi-paradigm language not only offers you an enormous productivity boost through functional programming, it also lets you develop applications using your existing object-oriented and imperative programming skills. With Programming F#, you'll quickly discover the many advantages of Microsoft's new language, which includes access to all the great tools and libraries of the .NET platform. Learn how to reap the benefits of functional programming for your next project -- whether it's quantitative computing, large-scale data exploration, or even a pursuit of your own.

  13. PLC Programming

    International Nuclear Information System (INIS)

    Lee, Seong Jae; Wi, Seong Dong; Yoo, Jong Seon; Kim, Se Chan

    2001-02-01

    This book tells of PLC programming for KGL-WIN with a summary of PLC; the performance and functions of PLC, such as the characteristics of KGL-WIN, the connection method with PLC, the basic performance of K200S/K300S/K1000S, and the input/output H/W diagram; writing a project; starting the program; editing the program; on-line functions; debugging; and instructions covering control, timers and counters, data transmission, comparison, rotation and moving, system, data operation, data conversion, and application programs.

  14. Programming Interactivity

    CERN Document Server

    Noble, Joshua

    2009-01-01

    Make cool stuff. If you're a designer or artist without a lot of programming experience, this book will teach you to work with 2D and 3D graphics, sound, physical interaction, and electronic circuitry to create all sorts of interesting and compelling experiences -- online and off. Programming Interactivity explains programming and electrical engineering basics, and introduces three freely available tools created specifically for artists and designers: Processing, a Java-based programming language and environment for building projects on the desktop, Web, or mobile phones; Arduino, a system t

  15. RELAP-7 Software Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Choi, Yong-Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support; Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk, Reliability, and Regulatory Support

    2014-09-25

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  16. Installation and validation of MCNP-4A

    International Nuclear Information System (INIS)

    Marks, N.A.

    1997-01-01

    MCNP-4A is a multi-purpose Monte Carlo program suitable for the modelling of neutron, photon, and electron transport problems. It is a particularly useful technique when studying systems containing irregular shapes. MCNP has been developed over the last 25 years by Los Alamos, and is distributed internationally via RSIC at Oak Ridge. This document describes the installation of MCNP-4A (henceforth referred to as MCNP) on the Silicon Graphics workstation (bluey.ansto.gov.au). A limited number of benchmarks pertaining to fast and thermal systems were performed to check the installation and validate the code. The results are compared to deterministic calculations performed using the AUS neutronics code system developed at ANSTO. (author)

  17. IV&V Project Assessment Process Validation

    Science.gov (United States)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and, as a validation of the IV&V process, correlates IV&V findings to and from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  18. The validation of an infrared simulation system

    CSIR Research Space (South Africa)

    De Waal, A

    2013-08-01

    Full Text Available theoretical validation framework. This paper briefly describes the procedure used to validate software models in an infrared system simulation, and provides application examples of this process. The discussion includes practical validation techniques...

  19. Fusion Simulation Program

    International Nuclear Information System (INIS)

    Greenwald, Martin

    2011-01-01

    Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one and all are supplemental or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities. (1). Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical

  20. Plasma Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Greenwald, Martin

    2011-10-04

    Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one and all are supplemental or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities. [1]. Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a

  1. Status of CHAP: composite HTGR analysis program

    International Nuclear Information System (INIS)

    Secker, P.A.; Gilbert, J.S.

    1975-12-01

    Development of an HTGR accident simulation program is in progress for the prediction of the overall HTGR plant transient response to various initiating events. The status of the digital computer program named CHAP (Composite HTGR Analysis Program) as of June 30, 1975, is given. The philosophy, structure, and capabilities of the CHAP code are discussed. Mathematical descriptions are given for those HTGR components that have been modeled. Component model validation and evaluation using auxiliary analysis codes are also discussed

  2. Using of Finite Automation at Programming PLC

    Directory of Open Access Journals (Sweden)

    Karol Rastocny

    2004-01-01

    Full Text Available The paper is concerned with a systematic approach to programming programmable logic controllers (PLCs) that starts from an algebraic description of the behaviour of a sequential circuit in the form of a finite automaton. This approach streamlines the programmer's work and makes it possible to use formalisms throughout the whole process of system development, from the analysis of requirements to the verification and validation of the created program. The paper also considers the use of ladder diagrams for programming PLCs.
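
    The automaton-first style the paper advocates can be sketched as a transition table that is written down and checked before any ladder logic is drawn. The start/stop motor controller and all names below are an illustrative assumption, not an example from the paper.

```python
# Hedged sketch: sequential-circuit behaviour captured as a finite
# automaton. A PLC scan cycle then reduces to one table lookup per event.

TRANSITIONS = {
    # (state, input event) -> next state
    ("stopped", "start"): "running",
    ("running", "stop"):  "stopped",
    ("running", "fault"): "stopped",
}

def step(state, event):
    """One scan cycle: stay in the current state if no transition matches."""
    return TRANSITIONS.get((state, event), state)

state = "stopped"
for event in ["start", "fault", "start", "stop"]:
    state = step(state, event)
print(state)  # stopped
```

    Because the table is plain data, the same artifact can drive simulation, formal checks (e.g. reachability of every state), and the eventual translation to ladder rungs.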

  3. Process validation for radiation processing

    International Nuclear Information System (INIS)

    Miller, A.

    1999-01-01

    Process validation concerns the establishment of the irradiation conditions that will lead to the desired changes of the irradiated product. Process validation therefore establishes the link between absorbed dose and the characteristics of the product, such as degree of crosslinking in a polyethylene tube, prolongation of shelf life of a food product, or degree of sterility of the medical device. Detailed international standards are written for the documentation of radiation sterilization, such as EN 552 and ISO 11137, and the steps of process validation that are described in these standards are discussed in this paper. They include material testing for the documentation of the correct functioning of the product, microbiological testing for selection of the minimum required dose and dose mapping for documentation of attainment of the required dose in all parts of the product. The process validation must be maintained by reviews and repeated measurements as necessary. This paper presents recommendations and guidance for the execution of these components of process validation. (author)

  4. Engineering Software Suite Validates System Design

    Science.gov (United States)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDASTAR-created models. Initial commercialization for EDASTAR included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  5. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and that it is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
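
    The core of a hydrostatic column calculation of this kind can be sketched in a few lines: the predicted wellhead pressure is the cavern pressure minus the hydrostatic head of the fluid columns standing in the well. The densities, depths, and pressures below are illustrative assumptions, not SPR data or the HCM's actual formulation.

```python
# Hedged sketch of the hydrostatic-column idea: wellhead pressure equals
# cavern pressure minus the summed rho*g*h head of the stacked fluids.

G = 9.81  # gravitational acceleration, m/s^2

def wellhead_pressure(cavern_pressure_pa, columns):
    """columns: list of (density kg/m^3, column height m), wellhead down."""
    head = sum(rho * G * h for rho, h in columns)
    return cavern_pressure_pa - head

# Nitrogen over crude oil over brine (purely illustrative numbers):
p = wellhead_pressure(
    20e6,                      # 20 MPa cavern pressure
    [(180.0, 300.0),           # compressed nitrogen column
     (850.0, 200.0),           # crude oil column
     (1200.0, 100.0)],         # brine column
)
print(round(p / 1e6, 2))  # 16.63 (MPa)
```

    In a leak-detection setting, the diagnostic signal is the movement of the interfaces between these columns: a small leak shifts the column heights, and thus the predicted wellhead pressure, in a characteristic way.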

  6. RESEM-CA: Validation and testing

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Vineeta; Carroll, William L.; Bourassa, Norman

    2002-09-01

    This report documents the results of an extended comparison of RESEM-CA energy and economic performance predictions with the recognized benchmark tool DOE2.1E to determine the validity and effectiveness of this tool for retrofit design and analysis. The analysis was a two part comparison of patterns of (1) monthly and annual energy consumption of a simple base-case building and controlled variations in it to explore the predictions of load components of each program, and (2) a simplified life-cycle cost analysis of the predicted effects of selected Energy Conservation Measures (ECMs). The study tries to analyze and/or explain the differences that were observed. On the whole, this validation study indicates that RESEM is a promising tool for retrofit analysis. As a result of this study some factors (incident solar radiation, outside air film coefficient, IR radiation) have been identified where there is a possibility of algorithmic improvements. These would have to be made in a way that does not sacrifice the speed of the tool, necessary for extensive parametric search of optimum ECM measures.

  7. Verification and Validation Strategy for LWRS Tools

    Energy Technology Data Exchange (ETDEWEB)

    Carl M. Stoots; Richard R. Schultz; Hans D. Gougar; Thomas K Larson; Michael Corradini; Laura Swiler; David Pointer; Jess Gehin

    2012-09-01

    One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203 the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high fidelity models / codes, and scaling of aging effects.

  8. Computer Programs.

    Science.gov (United States)

    Anderson, Tiffoni

    This module provides information on development and use of a Material Safety Data Sheet (MSDS) software program that seeks to link literacy skills education, safety training, and human-centered design. Section 1 discusses the development of the software program that helps workers understand the MSDSs that accompany the chemicals with which they…

  9. BASIC Programming.

    Science.gov (United States)

    Jennings, Carol Ann

    Designed for use by both secondary- and postsecondary-level business teachers, this curriculum guide consists of 10 units of instructional materials dealing with Beginner's All-Purpose Symbolic Instruction Code (BASIC) programming. Topics of the individual lessons are numbering BASIC programs and using the PRINT, END, and REM statements; system…

  10. Rapid Robot Design Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies will create a comprehensive software infrastructure for rapid validation of robot designs. The software will support push-button validation...

  11. Choreographic Programming

    DEFF Research Database (Denmark)

    Montesi, Fabrizio

    , as they offer a concise view of the message flows enacted by a system. For this reason, in the last decade choreographies have been used in the development of programming languages, giving rise to a programming paradigm that in this dissertation we refer to as Choreographic Programming. Recent studies show...... endpoint described in a choreography can then be automatically generated, ensuring that such implementations are safe by construction. However, current formal models for choreographies do not deal with critical aspects of distributed programming, such as asynchrony, mobility, modularity, and multiparty...... sessions; it remains thus unclear whether choreographies can still guarantee safety when dealing with such nontrivial features. This PhD dissertation argues for the suitability of choreographic programming as a paradigm for the development of safe distributed systems. We proceed by investigating its...
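
    The endpoint-generation idea described above (often called endpoint projection) can be sketched with a toy choreography. The tuple-based syntax and role names below are invented for illustration and are not the dissertation's formal calculus.

```python
# Hedged sketch of endpoint projection: each global interaction
# (sender, receiver, message) projects to a local send or receive
# action for one role, and to nothing for uninvolved roles.

def project(choreography, role):
    """Generate one role's local program from the global choreography."""
    program = []
    for sender, receiver, msg in choreography:
        if role == sender:
            program.append(("send", receiver, msg))
        elif role == receiver:
            program.append(("recv", sender, msg))
        # interactions not involving this role project to nothing
    return program

c = [("buyer", "seller", "quote?"),
     ("seller", "buyer", "price"),
     ("buyer", "seller", "accept")]
print(project(c, "seller"))
# [('recv', 'buyer', 'quote?'), ('send', 'buyer', 'price'), ('recv', 'buyer', 'accept')]
```

    Because every send in the global description is paired with its matching receive by construction, the projected endpoints cannot deadlock on mismatched communications, which is the safety-by-construction property the paradigm targets; handling asynchrony, mobility, and multiparty sessions requires far richer machinery than this sketch.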

  12. All Validity Is Construct Validity. Or Is It?

    Science.gov (United States)

    Kane, Michael

    2012-01-01

    Paul E. Newton's article on the consensus definition of validity tackles a number of big issues and makes a number of strong claims. I agreed with much of what he said, and I disagreed with a number of his claims, but I found his article to be consistently interesting and thought provoking (whether I agreed or not). I will focus on three general…

  13. Canadian hydrogen safety program

    International Nuclear Information System (INIS)

    MacIntyre, I.; Tchouvelev, A.V.; Hay, D.R.; Wong, J.; Grant, J.; Benard, P.

    2007-01-01

    The Canadian hydrogen safety program (CHSP) is a project initiative of the Codes and Standards Working Group of the Canadian transportation fuel cell alliance (CTFCA) that represents industry, academia, government, and regulators. The Program rationale, structure and contents contribute to acceptance of the products, services and systems of the Canadian Hydrogen Industry into the Canadian hydrogen stakeholder community. It facilitates trade through fair insurance policies and rates, effective and efficient regulatory approval procedures and accommodation of the interests of the general public. The Program integrates a consistent quantitative risk assessment methodology with experimental (destructive and non-destructive) failure rates and consequence-of-release data for key hydrogen components and systems into risk assessment of commercial application scenarios. Its current and past six projects include Intelligent Virtual Hydrogen Filling Station (IVHFS), Hydrogen clearance distances, comparative quantitative risk comparison of hydrogen and compressed natural gas (CNG) refuelling options; computational fluid dynamics (CFD) modeling validation, calibration and enhancement; enhancement of frequency and probability analysis, and Consequence analysis of key component failures of hydrogen systems; and fuel cell oxidant outlet hydrogen sensor project. The Program projects are tightly linked with the content of the International Energy Agency (IEA) Task 19 Hydrogen Safety. (author)

  14. Energy Innovation Acceleration Program

    Energy Technology Data Exchange (ETDEWEB)

    Wolfson, Johanna [Fraunhofer USA Inc., Center for Sustainable Energy Systems, Boston, MA (United States)

    2015-06-15

    The Energy Innovation Acceleration Program (IAP) – also called U-Launch – has had a significant impact on early stage clean energy companies in the Northeast and on the clean energy economy in the Northeast, not only during program execution (2010-2014), but continuing into the future. Key results include: Leverage ratio of 105:1; $105M in follow-on funding (upon $1M investment by EERE); At least 19 commercial products launched; At least 17 new industry partnerships formed; At least $6.5M in revenue generated; >140 jobs created; 60% of assisted companies received follow-on funding within 1 year of program completion. In addition to the direct measurable program results summarized above, two primary lessons emerged from our work executing Energy IAP: Validation and demonstration awards have an outsized, ‘tipping-point’ effect for startups looking to secure investments and strategic partnerships. An ecosystem approach is valuable, but an approach that evaluates the needs of individual companies and then draws from diverse ecosystem resources to fill them, is most valuable of all.

  15. A Medical Research and Evaluation Facility (MREF) and Studies Supporting the Medical Chemical Defense Program: Task 95-39: Methods Development and Validation of Two Mouse Bioassays for Use in Quantifying Botulinum Toxins (A, B, C, D and E) and Toxin Antibody Titers

    National Research Council Canada - National Science Library

    Olson, Carl

    1997-01-01

    This task was conducted for the U.S. Army Medical Materiel Development Activity (USAMMDA) to validate two mouse bioassays for quantifying botulinum toxin potency and neutralizing antibodies to botulinum toxins...

  16. Validering av vattenkraftmodeller i ARISTO

    OpenAIRE

    Lundbäck, Maja

    2013-01-01

    This master thesis was made to validate hydropower models of a turbine governor, a Kaplan turbine, and a Francis turbine in the power system simulator ARISTO at Svenska Kraftnät. The validation was made in three steps. The first step was to make sure the models were implemented correctly in the simulator. The second was to compare the simulation results from the Kaplan turbine model to data from a real hydropower plant. The comparison was made to see how well the models could generate simulation results ...

  17. PIV Data Validation Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
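
    The out-of-plane vorticity calculation mentioned in capability (3) can be sketched with NumPy central differences; the function below is an assumed reimplementation of the idea, not code from the package, and the rigid-body-rotation test field is purely illustrative.

```python
import numpy as np

# Hedged sketch of the vorticity step in PIV post-processing:
# omega_z = dv/dx - du/dy, evaluated on a regular grid with
# np.gradient (central differences inside, one-sided at the edges).

def vorticity(u, v, dx=1.0, dy=1.0):
    """Out-of-plane vorticity for 2D fields (rows = y, columns = x)."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy

# Sanity check: rigid-body rotation u = -y, v = x has uniform
# vorticity of exactly 2 everywhere.
y, x = np.mgrid[0:5, 0:5].astype(float)
w = vorticity(-y, x)
print(np.allclose(w, 2.0))  # True
```

    In a full pipeline this step runs after spurious-vector removal and smoothing, since differentiation amplifies any noise left in the vector field.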

  18. Icobj Programming

    OpenAIRE

    Boussinot , Frédéric

    1996-01-01

    A simple and fully graphical programming method is presented, using a powerful means to combine behaviors. This programming is based on the notion of an «icobj» which has a behavioral aspect («object» part), a graphical aspect («icon» part), with an «animation» aspect. Icobj programming provides parallelism, broadcast event communication and migration through the network. An experimental system based on this approach is described in details. Its implementation with reactive scripts is also pr...

  19. Programming Python

    CERN Document Server

    Lutz, Mark

    2011-01-01

    If you've mastered Python's fundamentals, you're ready to start using it to get real work done. Programming Python will show you how, with in-depth tutorials on the language's primary application domains: system administration, GUIs, and the Web. You'll also explore how Python is used in databases, networking, front-end scripting layers, text processing, and more. This book focuses on commonly used tools and libraries to give you a comprehensive understanding of Python's many roles in practical, real-world programming. You'll learn language syntax and programming techniques in a clear and co

  20. Space Suit Joint Torque Measurement Method Validation

    Science.gov (United States)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  1. Verification and validation of control system software

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.; Kisner, R.A.; Bhadtt, S.C.

    1991-01-01

    The following guidelines are proposed for verification and validation (V&V) of nuclear power plant control system software: (a) use risk management to decide what and how much V&V is needed; (b) classify each software application using a scheme that reflects what type and how much V&V is needed; (c) maintain a set of reference documents with current information about each application; (d) use Program Inspection as the initial basic verification method; and (e) establish a deficiencies log for each software application. The following additional practices are strongly recommended: (a) use a computer-based configuration management system to track all aspects of development and maintenance; (b) establish reference baselines of the software, associated reference documents, and development tools at regular intervals during development; (c) use object-oriented design and programming to promote greater software reliability and reuse; (d) provide a copy of the software development environment as part of the package of deliverables; and (e) initiate an effort to use formal methods for preparation of Technical Specifications. The paper provides background information and reasons for the guidelines and recommendations. 3 figs., 3 tabs

  2. Automated reasoning applications to design validation and sneak function analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Argonne National Laboratory (ANL) is actively involved in the LMFBR Man-Machine Integration (MMI) Safety Program. The objective of this program is to enhance the operational safety and reliability of fast-breeder reactors by optimum integration of men and machines through the application of human factors principles and control engineering to the design, operation, and the control environment. ANL is developing methods to apply automated reasoning and computerization in the validation and sneak function analysis process. This project provides the element definitions and relations necessary for an automated reasoner (AR) to reason about design validation and sneak function analysis. This project also provides a demonstration of this AR application on an Experimental Breeder Reactor-II (EBR-II) system, the Argonne Cooling System

  3. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    Science.gov (United States)

    Jacobus, Headley Stewart

As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system is used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies that frequently lack subsequent validation and experimental hybrid system performance studies.

  4. Some guidance on preparing validation plans for the DART Full System Models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy (Sandia National Laboratories, Albuquerque, NM)

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  5. Assessment of juveniles testimonies’ validity

    Directory of Open Access Journals (Sweden)

    Dozortseva E.G.

    2015-12-01

Full Text Available The article presents a review of the English-language publications concerning the history and current state of differential psychological assessment of the validity of testimonies produced by child and adolescent victims of crimes. The topicality of the problem in Russia is high due to the tendency of Russian specialists to use methodical means and instruments developed abroad in this sphere for forensic assessments of witness testimony veracity. A system of Statement Validity Analysis (SVA by means of Criteria-Based Content Analysis (CBCA and a Validity Checklist is described. The results of laboratory and field studies of the validity of CBCA criteria with child and adult witnesses are discussed. The data display a good differentiating capacity for the method, but also a high probability of error. The researchers recommend implementation of SVA in the criminal investigation process, but not in forensic assessment. New, promising developments in methods for differentiating witness statements based on real experience from fictional ones are noted. The conclusion is drawn that empirical studies and dedicated work on the adaptation and development of new approaches should precede their implementation into Russian criminal investigation and forensic assessment practice.

  6. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  7. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case. Empirical validation is residual in nature, because the conclusions rest on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model; furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis: this can be performed with DSA, differential sensitivity analysis, or with MCSA, Monte-Carlo sensitivity analysis. Search for the optimal domains of the input parameters: a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been carried out in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation of a building thermal simulation model in Spain is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT. (Author) 17 refs
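The Monte-Carlo sensitivity analysis (MCSA) step mentioned above can be sketched as follows. The toy model, the input names, and the parameter ranges are illustrative assumptions, not the CIEMAT building model:

```python
import random

# Hypothetical stand-in for the detailed building-thermal model:
# output depends linearly on two uncertain inputs (illustration only).
def model(u_value, solar_gain):
    return 18.0 + 2.5 * solar_gain - 1.2 * u_value

def mc_sensitivity(n=20000, seed=1):
    """Monte-Carlo sensitivity analysis: sample the inputs, run the model,
    and correlate each input with the output to rank their influence."""
    random.seed(seed)
    us, gs, ys = [], [], []
    for _ in range(n):
        u = random.uniform(0.2, 1.0)   # wall U-value, W/m2K (assumed range)
        g = random.uniform(0.0, 1.0)   # normalised solar gain (assumed range)
        us.append(u)
        gs.append(g)
        ys.append(model(u, g))

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    return corr(us, ys), corr(gs, ys)
```

With these assumed ranges, the solar-gain term dominates the output variance, which the correlation ranking recovers.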

  8. Validity of Management Control Topoi

    DEFF Research Database (Denmark)

    Nørreklit, Lennart; Nørreklit, Hanne; Israelsen, Poul

    2004-01-01

The validity of research and company topoi for constructing/analyzing reality is analyzed as the integration of the four aspects (dimensions): fact, possibility (logic), value and communication. Main stream, agency theory and social constructivism are criticized for reductivism (incomplete integrat...

  9. Verification and validation of DEPOSITION 2.0

    International Nuclear Information System (INIS)

    Eadie, W.J.

    1994-01-01

The purpose of this document is to verify and validate the usage of the computer program DEPOSITION 2.0 for assessing line loss in CAM and fixed head lines throughout certain Hanford Site facilities. The scope of use is limited to this function. DEPOSITION 2.0 is the second version of this code to be used on the Hanford Site; it now incorporates additional user-friendly features beyond those available in an earlier version, DEPOSITION 1.03

  10. Validation of holistic nursing competencies: role-delineation study, 2012.

    Science.gov (United States)

    Erickson, Helen Lorraine; Erickson, Margaret Elizabeth; Campbell, Joan A; Brekke, Mary E; Sandor, M Kay

    2013-12-01

    The American Holistic Nurses Credentialing Corporation (AHNCC), certifying body for nurses practicing within the precepts of holistic nursing, uses a systematic process to guide program development. A previous publication described their early work that distinguished basic and advanced holistic nursing and development of related examinations. A more recent publication described the work of AHNCC from 2004 to 2012, including a role-delineation study (RDS) that was undertaken to identify and validate competencies currently used by holistic nurses. A final report describes the RDS design, methods, and raw data information. This article discusses AHNCC's goals for undertaking the 2012 Holistic Nursing RDS and the implications for the certification programs.

  11. NVN 5694 intra laboratory validation. Feasibility study for interlaboratory- validation

    International Nuclear Information System (INIS)

    Voors, P.I.; Baard, J.H.

    1998-11-01

Within the project NORMSTAR 2 a number of Dutch prenormative protocols have been defined for radioactivity measurements. Some of these protocols, e.g. the Dutch prenormative protocol NVN 5694, titled Methods for radiochemical determination of polonium-210 and lead-210, have not been validated, either by intralaboratory or interlaboratory studies. Validation studies are conducted within the framework of the programme 'Normalisatie en Validatie van Milieumethoden 1993-1997' (Standardization and Validation of test methods for environmental parameters) of the Dutch Ministry of Housing, Physical Planning and the Environment (VROM). The aims of this study were (a) a critical evaluation of the protocol, (b) investigation of the feasibility of an interlaboratory study, and (c) the interlaboratory validation of NVN 5694. The evaluation of the protocol resulted in a list of deficiencies varying from missing references to incorrect formulae. From the survey by interview it appeared that for each type of material, there are 4 to 7 laboratories willing to participate in an interlaboratory validation study. This reflects the situation in 1997. Consequently, if 4 or 6 (the minimal number) laboratories are participating and each laboratory analyses 3 subsamples, the uncertainty in the repeatability standard deviation is 49 or 40 %, respectively. If the ratio of the reproducibility standard deviation to the repeatability standard deviation is equal to 1 or 2, then the uncertainty in the reproducibility standard deviation increases from 42 to 67 % and from 34 to 52 % for 4 or 6 laboratories, respectively. The intralaboratory validation was established on four different types of materials. Three types of materials (milk powder, condensate, and filter) were prepared in the laboratory using the raw material and certified Pb-210 solutions, and one (sediment) was obtained from the IAEA. The ECN-prepared reference materials were used after testing for homogeneity.
The pre-normative protocol can
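The dependence of the repeatability-estimate uncertainty on the number of participating laboratories can be illustrated with a small simulation. The distributions and the between-lab spread below are illustrative assumptions; the 49 %/40 % figures quoted above come from the study's own statistical treatment, which this sketch does not attempt to reproduce:

```python
import random
import statistics

def sr_relative_uncertainty(n_labs, n_rep=3, trials=2000, seed=7):
    """Simulate an interlaboratory study: each of n_labs laboratories
    analyses n_rep subsamples, and the repeatability standard deviation
    is estimated by pooling the within-lab scatter.  Returns the relative
    spread (coefficient of variation) of that estimate over many simulated
    studies.  The true within-lab sd is set to 1; the between-lab sd of 1
    is an arbitrary illustrative choice."""
    random.seed(seed)
    estimates = []
    for _ in range(trials):
        ss, dof = 0.0, 0
        for _ in range(n_labs):
            bias = random.gauss(0.0, 1.0)              # lab effect (assumed)
            obs = [bias + random.gauss(0.0, 1.0) for _ in range(n_rep)]
            m = sum(obs) / n_rep
            ss += sum((x - m) ** 2 for x in obs)       # within-lab scatter
            dof += n_rep - 1
        estimates.append((ss / dof) ** 0.5)            # pooled repeatability sd
    return statistics.stdev(estimates) / statistics.mean(estimates)
```

The qualitative conclusion matches the abstract: adding laboratories narrows the uncertainty of the repeatability estimate.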

  12. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, together with parallel computing algorithms using MPI and OpenMP. It is coded in Fortran 90 and has user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it can potentially be used for best-estimate calculation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar uncertainties but did not account for the nonlinear effects on the
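The Monte-Carlo evaluation of a validation standard uncertainty can be sketched in the style of ASME V&V 20, where input, numerical, and experimental contributions are combined in quadrature. The void-fraction model and all numbers below are illustrative assumptions, not the MATRA model or the UAM data:

```python
import random
import statistics

# Hypothetical stand-in for the subchannel simulation: void fraction as a
# linear response to two uncertain boundary conditions (power, flow).
def void_model(x):
    power, flow = x
    return 0.30 + 0.02 * power - 0.01 * flow

def validation_uncertainty(sim, nominal, input_sd, u_num, u_exp,
                           n=5000, seed=3):
    """Monte-Carlo propagation of input uncertainty through the model,
    combined with numerical and experimental contributions as
    u_val = sqrt(u_num^2 + u_input^2 + u_exp^2)."""
    random.seed(seed)
    outputs = [sim([random.gauss(m, s) for m, s in zip(nominal, input_sd)])
               for _ in range(n)]
    u_input = statistics.stdev(outputs)      # input contribution
    return (u_num ** 2 + u_input ** 2 + u_exp ** 2) ** 0.5
```

The comparison error between simulation and measurement is then judged against u_val: a modeling error is indicated only when the error exceeds the validation uncertainty.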

  13. Physical Education Teacher Change: Initial Validation of the Teacher Change Questionnaire-Physical Education

    Science.gov (United States)

    Kern, Ben D.; Graber, Kim C.

    2017-01-01

    Program satisfaction, self-efficacy to change, and willingness to change, are dispositions that influence physical education teacher change. The study purpose was to validate an instrument measuring program satisfaction, self-efficacy to change, and willingness to change relative to teachers' likelihood to change. A 15-item Teacher Change…

  14. Validation of a Criterion Referenced Test for Young Handicapped Children: PIPER.

    Science.gov (United States)

    Strum, Irene; Shapiro, Madelaine

    The purpose of this study was to validate the Prescriptive Instructional Program for Educational Readiness (PIPER) for utilization as a criterion referenced test (CRT) among learning disabled children. The program consisted of behavioral objectives and diagnostic and/or mastery tasks and activities for each objective in the area of gross motor…

  15. Validation of the GCOM-W SCA and JAXA soil moisture algorithms

    Science.gov (United States)

Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Change Observation Mission - Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  16. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  17. Validity of instruments to assess students' travel and pedestrian safety

    Directory of Open Access Journals (Sweden)

    Baranowski Tom

    2010-05-01

Full Text Available Abstract Background Safe Routes to School (SRTS programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Methods Fourth grade students completed a brief written survey on how they got to school that day with set responses. Test-retest reliability was obtained 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' report. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations between the research assistants' ratings to that of the Principal Investigator (PI and intraclass correlations (ICC across research assistant ratings. Results The SRTS travel survey had high test-retest reliability (κ = 0.97, n = 96. Conclusions These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work.
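A test-retest statistic like the κ = 0.97 reported above is Cohen's kappa, which corrects observed agreement between two ratings for the agreement expected by chance. A minimal sketch (the travel-mode categories in the test are hypothetical):

```python
from collections import Counter

def cohens_kappa(pairs):
    """Cohen's kappa for two categorical ratings of the same subjects,
    e.g. a student's morning and afternoon answers to the travel survey."""
    n = len(pairs)
    p_obs = sum(a == b for a, b in pairs) / n                # observed agreement
    first = Counter(a for a, _ in pairs)
    second = Counter(b for _, b in pairs)
    # chance agreement from the two raters' marginal frequencies
    p_chance = sum(first[k] * second.get(k, 0) for k in first) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```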

  18. Program overview

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The program overview describes the following resources and facilities; laser facilities, main laser room, target room, energy storage, laboratory area, building support systems, general plant project, and the new trailer complex

  19. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  20. Science Programs

    Science.gov (United States)


  1. SPOT Program

    Science.gov (United States)

    Smith, Jason T.; Welsh, Sam J.; Farinetti, Antonio L.; Wegner, Tim; Blakeslee, James; Deboeck, Toni F.; Dyer, Daniel; Corley, Bryan M.; Ollivierre, Jarmaine; Kramer, Leonard

    2010-01-01

A Spacecraft Position Optimal Tracking (SPOT) program was developed to process Global Positioning System (GPS) data, sent via telemetry from a spacecraft, to generate accurate navigation estimates of the vehicle position and velocity (state vector) using a Kalman filter. This program uses the GPS onboard receiver measurements to sequentially calculate the vehicle state vectors and provide this information to ground flight controllers. It is the first real-time ground-based shuttle navigation application using onboard sensors. The program is compact, portable, self-contained, and can run on a variety of UNIX or Linux computers. The program has a modular object-oriented design that supports application-specific plugins such as data corruption remediation pre-processing and remote graphics display. The Kalman filter is extensible to additional sensor types or force models. The Kalman filter design is also robust against data dropouts because it uses physical models for state and covariance propagation in the absence of data. The design of this program separates the functionalities of SPOT into six different executable processes. This allows for the individual processes to be connected in an a la carte manner, making the feature set and executable complexity of SPOT adaptable to the needs of the user. Also, these processes need not be executed on the same workstation, which allows for communication between SPOT processes executing on the same Local Area Network (LAN). Thus, SPOT can be executed in a distributed sense with the capability for a team of flight controllers to efficiently share the same trajectory information currently being computed by the program. SPOT is used in the Mission Control Center (MCC) for Space Shuttle Program (SSP) and International Space Station Program (ISSP) operations, and can also be used as a post-flight analysis tool. It is primarily used for situational awareness, and for contingency situations.
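A minimal sketch of the sequential estimation SPOT performs: one predict/update cycle of a 1-D position/velocity Kalman filter. The constant-velocity dynamics and the noise values are illustrative assumptions, not the actual SSP/ISSP flight models:

```python
def kf_step(x, v, P, z, dt=1.0, q=1e-3, r=0.25):
    """One Kalman predict/update cycle.  State = (position x, velocity v),
    P = 2x2 covariance [[pxx, pxv], [pvx, pvv]], z = a GPS position fix.
    q and r are illustrative process/measurement noise variances."""
    # Predict: constant-velocity propagation (this step alone is what the
    # filter relies on during data dropouts).
    x = x + v * dt
    pxx = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    pxv = P[0][1] + dt * P[1][1]
    pvv = P[1][1] + q
    # Update: blend the GPS measurement with the prediction.
    k0 = pxx / (pxx + r)          # Kalman gain, position
    k1 = pxv / (pxx + r)          # Kalman gain, velocity
    x_new = x + k0 * (z - x)
    v_new = v + k1 * (z - x)
    P_new = [[(1 - k0) * pxx, (1 - k0) * pxv],
             [pxv - k1 * pxx, pvv - k1 * pxv]]
    return x_new, v_new, P_new
```

Fed a stream of position fixes, the filter converges to both position and velocity even though only position is measured, via the position-velocity cross-covariance.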

  2. Sprego programming

    OpenAIRE

    Csernoch, Mária; Biró, Piroska

    2015-01-01

Spreadsheet management is a borderland between office applications and programming; nevertheless, spreadsheets are often portrayed as nothing more than easily handled toys. Consequently, the complexity of spreadsheet handling does not match the preparation, problem-solving abilities, and approaches of end-users. To overcome these problems we have developed and introduced Sprego (Spreadsheet Lego). Sprego is a simplified functional programming language in a spreadsheet environment,...

  3. Recombinant Programming

    OpenAIRE

    Pawlak , Renaud; Cuesta , Carlos; Younessi , Houman

    2004-01-01

    This research report presents a promising new approach to computation called Recombinant Programming. The novelty of our approach is that it separates the program into two layers of computation: the recombination and the interpretation layer. The recombination layer takes sequences as inputs and allows the programmer to recombine these sequences through the definition of cohesive code units called extensions. The output of such recombination is a mesh that can be used by the interpretation la...

  4. Amblyopia prevention screening program in Northwest Iran (Ardabil)

    Directory of Open Access Journals (Sweden)

    Habib Ojaghi

    2016-01-01

Conclusions: The present investigation showed that coverage of the amblyopia screening program was not sufficient in Ardabil Province. To increase screening accuracy, standard instruments and examination rooms must be used, more optometrists must be involved in the program, and the validity of the obtained results must be improved for future programming.

  5. Seismic Category I Structures Program

    International Nuclear Information System (INIS)

    Endebrock, E.G.; Dove, R.C.; Anderson, C.A.

    1984-01-01

The Seismic Category I Structures Program currently being carried out at the Los Alamos National Laboratory is sponsored by the Mechanical/Structural Engineering Branch, Division of Engineering Technology of the Nuclear Regulatory Commission (NRC). This project is part of a program designed to increase confidence in the assessment of Category I nuclear power plant structural behavior beyond the design limit. The program involves the design, construction, and testing of heavily reinforced concrete models of auxiliary buildings, fuel-handling buildings, etc., but does not include the reactor containment building. The overall goal of the program is to supply to the Nuclear Regulatory Commission experimental information and a validated procedure to establish the sensitivity of the dynamic response of these structures to earthquakes of magnitude beyond the design basis earthquake

  6. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    Science.gov (United States)

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
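The consensus rule described above (approve an attribute once its Content Validity Index reaches 0.75) can be sketched as follows. Treating the top two points of the psychometric scale as "relevant" is a common convention and an assumption here, since the paper's exact scale is not given:

```python
def content_validity_index(ratings, scale_points=4):
    """Item-level CVI: fraction of specialists who rated the attribute as
    relevant, assumed here to mean the top two points of the scale."""
    relevant = sum(1 for r in ratings if r >= scale_points - 1)
    return relevant / len(ratings)

def attribute_approved(ratings, threshold=0.75):
    """Consensus rule: approve an attribute when its CVI is at least 0.75."""
    return content_validity_index(ratings) >= threshold
```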

  7. Validation of computer codes used in the safety analysis of Canadian research reactors

    International Nuclear Information System (INIS)

    Bishop, W.E.; Lee, A.G.

    1998-01-01

    AECL has embarked on a validation program for the suite of computer codes that it uses in performing the safety analyses for its research reactors. Current focus is on codes used for the analysis of the two MAPLE reactors under construction at Chalk River but the program will be extended to include additional codes that will be used for the Irradiation Research Facility. The program structure is similar to that used for the validation of codes used in the safety analyses for CANDU power reactors. (author)

  8. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, ⁴He, deuterons, tritons and ³He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it can give correct results for optical model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and has given reasonable results.

  9. An information architecture for validating courseware

    OpenAIRE

    Melia, Mark; Pahl, Claus

    2007-01-01

    Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with the courseware specif...

  10. Construct Validity: Advances in Theory and Methodology

    OpenAIRE

    Strauss, Milton E.; Smith, Gregory T.

    2009-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review f...

  11. User's guide for signal validation software: Final report

    International Nuclear Information System (INIS)

    Swisher, V.I.

    1987-09-01

    Northeast Utilities has implemented a real-time signal validation program into the safety parameter display systems (SPDS) at Millstone Units 2 and 3. Signal validation has been incorporated to improve the reliability of the information being used in the SPDS. Signal validation uses Parity Space Vector Analysis to process SPDS sensor data. The Parity Space algorithm determines consistency among independent, redundant input measurements. This information is then used to calculate a validated estimate of that parameter. Additional logic is incorporated to compare partially redundant measurement data. In both plants the SPDS has been designed to monitor the status of critical safety functions (CSFs) and provide information that can be used with plant-specific emergency operating procedures (EOPs). However the CSF logic, EOPs, and complement of plant sensors vary for these plants due to their different design characteristics (MP2 - 870 MWe Combustion Engineering PWR, MP3 - 1150 MWe Westinghouse PWR). These differences in plant design and information requirements result in a variety of signal validation applications
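A simplified stand-in for the redundancy check that Parity Space Vector Analysis performs: flag redundant measurements that are inconsistent with their peers, then form a validated estimate from the consistent ones. The median-based thresholding rule below is a toy substitute for the actual parity-space algebra, which is more involved:

```python
def validate_redundant(measurements, sigmas, k=3.0):
    """Flag any redundant measurement lying more than k combined standard
    deviations from the median of the set, then return an inverse-variance
    weighted estimate of the consistent measurements plus per-channel flags."""
    med = sorted(measurements)[len(measurements) // 2]
    flags = [abs(m - med) <= k * s for m, s in zip(measurements, sigmas)]
    consistent = [(m, s) for (m, s), ok in
                  zip(zip(measurements, sigmas), flags) if ok]
    weights = [1.0 / s ** 2 for _, s in consistent]
    estimate = (sum(m * w for (m, _), w in zip(consistent, weights))
                / sum(weights))
    return estimate, flags
```

With one failed channel among four redundant sensors, the bad channel is flagged and excluded from the validated estimate presented to the SPDS.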

  12. The Nursing Diagnosis of risk for pressure ulcer: content validation

    Directory of Open Access Journals (Sweden)

    Cássia Teixeira dos Santos

    2016-01-01

Full Text Available Abstract Objective: to validate the content of the new nursing diagnosis, termed risk for pressure ulcer. Method: content validation was performed with a sample of 24 nurses specialized in skin care from six different hospitals in the South and Southeast of Brazil. Data collection took place electronically, through an instrument constructed using the SurveyMonkey program, containing a title, definition, and 19 risk factors for the nursing diagnosis. The data were analyzed using Fehring's method and descriptive statistics. The project was approved by a Research Ethics Committee. Results: the title, definition, and seven risk factors were validated as "very important": physical immobilization, pressure, surface friction, shearing forces, skin moisture, alteration in sensation and malnutrition. Among the other risk factors, 11 were validated as "important": dehydration, obesity, anemia, decrease in serum albumin level, prematurity, aging, smoking, edema, impaired circulation, and decrease in oxygenation and in tissue perfusion. The risk factor of hyperthermia was discarded. Conclusion: the content validation of these components of the nursing diagnosis corroborated their importance and can facilitate nurses' clinical reasoning, guiding clinical practice in preventive care for pressure ulcers.

  13. Assessing students' communication skills: validation of a global rating.

    Science.gov (United States)

    Scheffer, Simone; Muehlinghaus, Isabel; Froehmel, Annette; Ortwein, Heiderose

    2008-12-01

    Communication skills training is an accepted part of undergraduate medical programs nowadays. In addition to learning experiences its importance should be emphasised by performance-based assessment. As detailed checklists have been shown to be not well suited for the assessment of communication skills for different reasons, this study aimed to validate a global rating scale. A Canadian instrument was translated to German and adapted to assess students' communication skills during an end-of-semester-OSCE. Subjects were second and third year medical students at the reformed track of the Charité-Universitaetsmedizin Berlin. Different groups of raters were trained to assess students' communication skills using the global rating scale. Validity testing included concurrent validity and construct validity: Judgements of different groups of raters were compared to expert ratings as a defined gold standard. Furthermore, the amount of agreement between scores obtained with this global rating scale and a different instrument for assessing communication skills was determined. Results show that communication skills can be validly assessed by trained non-expert raters as well as standardised patients using this instrument.

  14. Image quality validation of Sentinel 2 Level-1 products: performance status at the beginning of the constellation routine phase

    Science.gov (United States)

    Francesconi, Benjamin; Neveu-VanMalle, Marion; Espesset, Aude; Alhammoud, Bahjat; Bouzinac, Catherine; Clerc, Sébastien; Gascon, Ferran

    2017-09-01

    Sentinel-2 is an Earth Observation mission developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. The mission is based on a constellation of two satellites: Sentinel-2A, launched in June 2015, and Sentinel-2B, launched in March 2017. It offers an unprecedented combination of systematic global coverage of land and coastal areas, a high revisit of five days at the equator and two days at mid-latitudes under the same viewing conditions, high spatial resolution, and a wide field of view for multispectral observations in 13 bands in the visible, near-infrared and shortwave-infrared range of the electromagnetic spectrum. The mission performances are routinely and closely monitored by the S2 Mission Performance Centre (MPC), including a consortium of Expert Support Laboratories (ESL). This publication focuses on the Sentinel-2 Level-1 product quality validation activities performed by the MPC. It presents an up-to-date status of the Level-1 mission performances at the beginning of the constellation routine phase. Level-1 performance validations routinely performed cover Level-1 Radiometric Validation (Equalisation Validation, Absolute Radiometry Vicarious Validation, Absolute Radiometry Cross-Mission Validation, Multi-temporal Relative Radiometry Vicarious Validation and SNR Validation) and Level-1 Geometric Validation (Geolocation Uncertainty Validation, Multi-spectral Registration Uncertainty Validation and Multi-temporal Registration Uncertainty Validation). Overall, the Sentinel-2 mission is proving very successful in terms of product quality, thereby fulfilling the promises of the Copernicus program.

  15. Validation needs of seismic probabilistic risk assessment (PRA) methods applied to nuclear power plants

    International Nuclear Information System (INIS)

    Kot, C.A.; Srinivasan, M.G.; Hsieh, B.J.

    1985-01-01

    An effort to validate seismic PRA methods is in progress. The work concentrates on the validation of plant response and fragility estimates through the use of test data and information from actual earthquake experience. Validation needs have been identified in the areas of soil-structure interaction, structural response and capacity, and equipment fragility. Of particular concern is the adequacy of linear methodology to predict nonlinear behavior. While many questions can be resolved through the judicious use of dynamic test data, other aspects can only be validated by means of input and response measurements during actual earthquakes. A number of past, ongoing, and planned testing programs which can provide useful validation data have been identified, and validation approaches for specific problems are being formulated

  16. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.
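A minimal two-subchannel sketch of the idea behind the turbulent-diffusion-plus-void-drift closure described above, in the spirit of the Lahey and Moody model: turbulent mixing drives the void distribution not toward uniformity but toward an "equilibrium" distribution, here assumed proportional to the mass-flux distribution. The mixing coefficient, the equilibrium assumption, and all numbers are illustrative assumptions; this is not CTF's actual implementation.

```python
# Illustrative two-subchannel lateral exchange in the spirit of the
# Lahey-Moody void-drift closure. Coefficient and numbers are made up.
def lateral_void_exchange(alpha_i, alpha_j, G_i, G_j, beta=0.005):
    """Net void transfer rate from subchannel j to i (arbitrary units).

    Turbulent diffusion alone would push toward equal void fraction;
    void drift instead pushes toward an assumed equilibrium distribution
    in which the higher-mass-flux subchannel holds more void.
    """
    alpha_bar = 0.5 * (alpha_i + alpha_j)
    G_bar = 0.5 * (G_i + G_j)
    # Assumed equilibrium void distribution, proportional to mass flux:
    alpha_eq_i = alpha_bar * G_i / G_bar
    alpha_eq_j = alpha_bar * G_j / G_bar
    # Mixing acts on the departure from equilibrium, not the raw gradient:
    return beta * ((alpha_j - alpha_i) - (alpha_eq_j - alpha_eq_i))

# Equal void but unequal mass flux: pure diffusion would predict zero
# exchange, while void drift still moves void toward the high-G channel.
print(lateral_void_exchange(alpha_i=0.3, alpha_j=0.3, G_i=3000.0, G_j=2000.0))
```

The key design point the sketch shows is why void drift matters for validation: with identical void fractions the diffusion term vanishes, and only the drift term produces the experimentally observed migration of vapor toward high-velocity subchannels.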

  17. Validation of New Cancer Biomarkers

    DEFF Research Database (Denmark)

    Duffy, Michael J; Sturgeon, Catherine M; Söletormos, Georg

    2015-01-01

    BACKGROUND: Biomarkers are playing increasingly important roles in the detection and management of patients with cancer. Despite an enormous number of publications on cancer biomarkers, few of these biomarkers are in widespread clinical use. CONTENT: In this review, we discuss the key steps in advancing a newly discovered cancer candidate biomarker from pilot studies to clinical application. Four main steps are necessary for a biomarker to reach the clinic: analytical validation of the biomarker assay, clinical validation of the biomarker test, demonstration of clinical value from performance of the biomarker test, and regulatory approval. In addition to these 4 steps, all biomarker studies should be reported in a detailed and transparent manner, using previously published checklists and guidelines. Finally, all biomarker studies relating to demonstration of clinical value should be registered before…

  18. The validated sun exposure questionnaire

    DEFF Research Database (Denmark)

    Køster, B; Søndergaard, J; Nielsen, J B

    2017-01-01

    Few questionnaires used in monitoring sun-related behavior have been tested for validity. We established criteria validity of a developed questionnaire for monitoring population sun-related behavior. During May-August 2013, 664 Danes wore a personal electronic UV-dosimeter for one week that measured the outdoor time and dose of erythemal UVR exposure. In the following week, they answered a questionnaire on their sun-related behavior in the measurement week. Outdoor time measured by dosimetry correlated strongly with both outdoor time and the developed exposure scale measured in the questionnaire. Exposure measured in SED by dosimetry correlated strongly with the exposure scale. In a linear regression model of UVR (SED) received, 41 percent of the variation was explained by skin type, age, week of participation and the exposure scale, with the exposure scale as the main contributor…
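The "41 percent of the variation was explained" figure above is the R-squared of an ordinary-least-squares model. The sketch below shows how that statistic is computed; the predictors mirror two of the study's covariates, but all data are synthetic and the coefficients are invented.

```python
import numpy as np

# Minimal illustration of the R-squared behind "41 percent of the variation
# was explained". Data and effect sizes are synthetic, not the study's.
rng = np.random.default_rng(0)
n = 200
exposure_scale = rng.uniform(0, 10, n)   # questionnaire exposure scale
skin_type = rng.integers(1, 5, n)        # hypothetical covariate
sed = 0.8 * exposure_scale + 0.3 * skin_type + rng.normal(0, 2.0, n)

X = np.column_stack([np.ones(n), exposure_scale, skin_type])
beta, *_ = np.linalg.lstsq(X, sed, rcond=None)   # ordinary least squares
pred = X @ beta
ss_res = np.sum((sed - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((sed - sed.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot                       # fraction of variance explained
print(f"R^2 = {r2:.2f}")
```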

  19. Drive: Theory and Construct Validation.

    Science.gov (United States)

    Siegling, Alex B; Petrides, K V

    2016-01-01

    This article explicates the theory of drive and describes the development and validation of two measures. A representative set of drive facets was derived from an extensive corpus of human attributes (Study 1). Operationalised using an International Personality Item Pool version (the Drive:IPIP), a three-factor model was extracted from the facets in two samples and confirmed on a third sample (Study 2). The multi-item IPIP measure showed congruence with a short form, based on single-item ratings of the facets, and both demonstrated cross-informant reliability. Evidence also supported the measures' convergent, discriminant, concurrent, and incremental validity (Study 3). Based on very promising findings, the authors hope to initiate a stream of research in what is argued to be a rather neglected niche of individual differences and non-cognitive assessment.

  20. Validation of nursing management diagnoses.

    Science.gov (United States)

    Morrison, R S

    1995-01-01

    Nursing management diagnosis, based on nursing and management science, merges "nursing diagnosis" and "organizational diagnosis". Nursing management diagnosis is a judgment about nursing organizational problems. The diagnoses provide a basis for nurse manager interventions to achieve outcomes for which a nurse manager is accountable. A nursing organizational problem is a discrepancy between what should be happening and what is actually happening that prevents the goals of nursing from being accomplished. The purpose of this study was to validate 72 nursing management diagnoses identified previously in 1992: 71 of the 72 diagnoses were considered valid by at least 70% of 136 participants. Diagnoses considered to have high priority for future research and development were identified by summing the mean scores for perceived frequency of occurrence and level of disruption. Further development of nursing management diagnoses and testing of their effectiveness in enhancing decision making is recommended.

  1. Validation and Error Characterization for the Global Precipitation Measurement

    Science.gov (United States)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. 
With the identification of error sources, improvements will be made to the respective calibration.

  2. Validation of radiation sterilization process

    International Nuclear Information System (INIS)

    Kaluska, I.

    2007-01-01

    The standards for quality management systems recognize that, for certain processes used in manufacturing, the effectiveness of the process cannot be fully verified by subsequent inspection and testing of the product. Sterilization is an example of such a process. For this reason, sterilization processes are validated for use, the performance of the sterilization process is monitored routinely, and the equipment is maintained according to ISO 13485. Different aspects of this standard are presented.

  3. Satellite imager calibration and validation

    CSIR Research Space (South Africa)

    Vhengani, L

    2010-10-01

    Lufuno Vhengani*, Minette Lubbe, Derek Griffith and Meena Lysko, Council for Scientific and Industrial Research, Defence Peace Safety and Security, Pretoria, South Africa. E-mail: * lvhengani@csir.co.za. Abstract: The success or failure of any earth observation mission depends on the quality of its data. To achieve optimum levels of reliability, most sensors are calibrated pre-launch. However, … [the full text goes on to describe calibration and validation techniques specific to South Africa].

  4. Microservices Validation: Methodology and Implementation

    OpenAIRE

    Savchenko, D.; Radchenko, G.

    2015-01-01

    Due to the wide spread of cloud computing, the question of how to architect, design, and implement cloud applications has become pressing. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservices…

  5. Integer programming

    CERN Document Server

    Conforti, Michele; Zambelli, Giacomo

    2014-01-01

    This book is an elegant and rigorous presentation of integer programming, exposing the subject’s mathematical depth and broad applicability. Special attention is given to the theory behind the algorithms used in state-of-the-art solvers. An abundance of concrete examples and exercises of both theoretical and real-world interest explore the wide range of applications and ramifications of the theory. Each chapter is accompanied by an expertly informed guide to the literature and special topics, rounding out the reader’s understanding and serving as a gateway to deeper study. Key topics include: formulations; polyhedral theory; cutting planes; decomposition; enumeration; semidefinite relaxations. Written by renowned experts in integer programming and combinatorial optimization, Integer Programming is destined to become an essential text in the field.
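Two of the topics listed above, enumeration and bounding via linear relaxations, can be illustrated on a toy 0-1 knapsack instance. The sketch below is a textbook-style branch-and-bound, not anything from this book: it enumerates take/skip decisions and prunes subtrees whose fractional (LP-style) bound cannot beat the incumbent. The instance is invented.

```python
# Toy branch-and-bound for a 0-1 knapsack (positive weights assumed),
# illustrating enumeration plus bounding by the fractional relaxation.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    best = [0]  # incumbent objective value

    def bound(i, value, room):
        # Optimistic bound: allow fractional items from i onward.
        b = value
        for j in range(i, n):
            take = min(1.0, room / weights[j])
            b += take * values[j]
            room -= take * weights[j]
            if room <= 0:
                break
        return b

    def branch(i, value, room):
        if value > best[0]:
            best[0] = value
        if i == n or bound(i, value, room) <= best[0]:
            return  # prune: relaxation cannot beat the incumbent
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # take item i
        branch(i + 1, value, room)                               # skip item i

    # Sorting by value density tightens the fractional bound.
    order = sorted(range(n), key=lambda j: -values[j] / weights[j])
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    branch(0, 0, capacity)
    return best[0]

print(knapsack_bb([10, 13, 7, 8], [4, 6, 3, 5], 10))  # → 23 (items of value 10 and 13)
```

The density sort is a classic design choice: it makes the greedy fractional bound tight early, so most of the exponential search tree is pruned near the root.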

  6. The NASA automation and robotics technology program

    Science.gov (United States)

    Holcomb, Lee B.; Montemerlo, Melvin D.

    1986-01-01

    The development and objectives of the NASA automation and robotics technology program are reviewed. The objectives of the program are to utilize AI and robotics to increase the probability of mission success; decrease the cost of ground control; and increase the capability and flexibility of space operations. There is a need for real-time computational capability; an effective man-machine interface; and techniques to validate automated systems. Current programs in the areas of sensing and perception, task planning and reasoning, control execution, operator interface, and system architecture and integration are described. Programs aimed at demonstrating the capabilities of telerobotics and system autonomy are discussed.

  7. A validated RP-HPLC method for the determination of Irinotecan hydrochloride residues for cleaning validation in production area

    Directory of Open Access Journals (Sweden)

    Sunil Reddy

    2013-03-01

    Introduction: cleaning validation is an integral part of current good manufacturing practices in the pharmaceutical industry. The main purpose of cleaning validation is to prove the effectiveness and consistency of cleaning in a given piece of pharmaceutical production equipment, to prevent cross-contamination and adulteration of the drug product with other active ingredients. Objective: a rapid, sensitive and specific reverse-phase HPLC method was developed and validated for the quantitative determination of irinotecan hydrochloride in cleaning validation swab samples. Method: the method was validated using a Waters Symmetry Shield RP-18 (250 mm x 4.6 mm, 5 µm) column with an isocratic mobile phase containing a mixture of 0.02 M potassium di-hydrogen ortho-phosphate (pH adjusted to 3.5 with ortho-phosphoric acid), methanol and acetonitrile (60:20:20 v/v/v). The flow rate of the mobile phase was 1.0 mL/min, with a column temperature of 25°C and detection wavelength at 220 nm. The sample injection volume was 100 µL. Results: the calibration curve was linear over a concentration range from 0.024 to 0.143 µg/mL with a correlation coefficient of 0.997. The intra-day and inter-day precision, expressed as relative standard deviation, were below 3.2%. The recoveries obtained from stainless steel, PCGI, epoxy, glass and decron cloth surfaces were more than 85%, and there was no interference from the cotton swab. The detection limit (DL) and quantitation limit (QL) were 0.008 and 0.023 µg/mL, respectively. Conclusion: the developed method was validated with respect to specificity, linearity, limit of detection and quantification, accuracy, precision and solution stability. The overall procedure can be used as part of a cleaning validation program in the pharmaceutical manufacture of irinotecan hydrochloride.
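The figures of merit reported above (calibration slope, DL, QL, recovery) follow the standard ICH-style formulas DL = 3.3·σ/S and QL = 10·σ/S, where σ is the residual standard deviation and S the calibration slope. The sketch below computes them from an invented calibration series spanning the paper's stated 0.024-0.143 µg/mL range; the detector responses and spike levels are made up.

```python
import statistics

# ICH-style figures of merit for a cleaning-validation method:
# calibration slope, DL = 3.3*sigma/S, QL = 10*sigma/S, swab recovery.
# Detector areas and spike values below are invented for illustration.
conc = [0.024, 0.048, 0.072, 0.096, 0.143]   # ug/mL calibration standards
area = [1250, 2480, 3730, 4950, 7360]        # peak areas (made up)

mx, my = statistics.mean(conc), statistics.mean(area)
slope = sum((x - mx) * (y - my) for x, y in zip(conc, area)) / \
        sum((x - mx) ** 2 for x in conc)     # least-squares slope S
intercept = my - slope * mx
residual_sd = statistics.stdev(y - (slope * x + intercept)
                               for x, y in zip(conc, area))  # sigma

dl = 3.3 * residual_sd / slope    # detection limit, ug/mL
ql = 10.0 * residual_sd / slope   # quantitation limit, ug/mL

spiked, recovered = 0.100, 0.088  # ug/mL spiked on a coupon vs. found
recovery_pct = 100.0 * recovered / spiked
print(f"DL={dl:.3f} ug/mL  QL={ql:.3f} ug/mL  recovery={recovery_pct:.0f}%")
```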

  8. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. 

  9. Experimental validation of calculation schemes connected with PWR absorbers and burnable poisons; Validation experimentale des schemas de calcul relatifs aux absorbants et poisons consommables dans les REP

    Energy Technology Data Exchange (ETDEWEB)

    Klenov, P.

    1995-10-01

    In France, 80% of electricity is produced by PWR reactors. For better exploitation of these reactors, the modular computer code Apollo-II has been developed. This code computes flux transport by the discrete-ordinates method or by the collision-probability method on extended configurations such as reactor cells, assemblies, or small cores. To validate this code on mixed-oxide fuel lattices with absorbers, the experimental program Epicure was conducted in the Eole reactor. This thesis is devoted to the validation of the Apollo code against the results of the Epicure program. 43 refs., 65 figs., 1 append.

  10. Programming Algol

    CERN Document Server

    Malcolme-Lawes, D J

    2014-01-01

    Programming - ALGOL describes the basics of computer programming using Algol. Commands that could be added to Algol and could increase its scope are described, including multiplication and division and the use of brackets. The idea of labeling or naming a command is also explained, along with a command allowing two alternative results. Most of the important features of Algol syntax are discussed, and examples of compound statements (that is, sets of commands enclosed by a begin ... end command) are given. Comprised of 11 chapters, this book begins with an introduction to the digital computer…

  11. Programming Interactivity

    CERN Document Server

    Noble, Joshua

    2012-01-01

    Ready to create rich interactive experiences with your artwork, designs, or prototypes? This is the ideal place to start. With this hands-on guide, you'll explore several themes in interactive art and design, including 3D graphics, sound, physical interaction, computer vision, and geolocation, and learn the basic programming and electronics concepts you need to implement them. No previous experience is necessary. You'll get a complete introduction to three free tools created specifically for artists and designers: the Processing programming language, the Arduino microcontroller, and openFrameworks.

  12. Department of Energy: Photovoltaics program - FY 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-31

    The National Photovoltaic Program supports efforts to make PV an important part of the US economy through three main program elements: Research and Development, Technology Development, and Systems Engineering and Applications. (1) Research and Development activities generate new ideas, test the latest scientific theories, and push the limits of PV efficiencies in laboratory and prototype materials and devices. (2) Technology Development activities apply laboratory innovations to products to improve PV technology and the manufacturing techniques used to produce PV systems for the market. (3) Systems Engineering and Applications activities help improve PV systems and validate these improvements through tests, measurements, and deployment of prototypes. In addition, applications research validates sales, maintenance, and financing mechanisms worldwide. (4) Environmental, Health, Safety and Resource Characterization activities help to define environmental, health and safety issues for facilities engaged in the manufacture of PV products and organizations engaged in PV research and development. All PV Program activities are planned and executed in close collaboration and partnership with the U.S. PV industry. The overall PV Program is planned to be a balanced effort of research, manufacturing development, and market development. Critical to the success of this strategy is the National Photovoltaic Program's effort to reduce the cost of electricity generated by photovoltaics. The program is doing this in three primary ways: by making devices more efficient, by making PV systems less expensive, and by validating the technology through measurements, tests, and prototypes.

  13. Validity of instruments to assess students' travel and pedestrian safety.

    Science.gov (United States)

    Mendoza, Jason A; Watson, Kathy; Baranowski, Tom; Nicklas, Theresa A; Uscanga, Doris K; Hanfling, Marcus J

    2010-05-18

    Safe Routes to School (SRTS) programs are designed to make walking and bicycling to school safe and accessible for children. Despite their growing popularity, few validated measures exist for assessing important outcomes such as type of student transport or pedestrian safety behaviors. This research validated the SRTS school travel survey and a pedestrian safety behavior checklist. Fourth-grade students completed a brief written survey, with set responses, on how they got to school that day. Test-retest reliability was assessed 3-4 hours apart. Convergent validity of the SRTS travel survey was assessed by comparison to parents' report. For the measure of pedestrian safety behavior, 10 research assistants observed 29 students at a school intersection for completion of 8 selected pedestrian safety behaviors. Reliability was determined in two ways: correlations of the research assistants' ratings with those of the Principal Investigator (PI), and intraclass correlations (ICC) across research assistant ratings. The SRTS travel survey had high test-retest reliability (kappa = 0.97, n = 96, p < 0.001) and convergent validity (kappa = 0.87, n = 81, p < 0.001). The pedestrian safety behavior checklist had moderate reliability across research assistants' ratings (ICC = 0.48) and moderate correlation with the PI (r = 0.55, p < 0.01). When two raters simultaneously used the instrument, the ICC increased to 0.65. Overall percent agreement (91%), sensitivity (85%) and specificity (83%) were acceptable. These validated instruments can be used to assess SRTS programs. The pedestrian safety behavior checklist may benefit from further formative work.
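The kappa statistics reported above are Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch, with invented travel-mode responses standing in for the survey's test and retest answers:

```python
from collections import Counter

# Sketch of Cohen's kappa for categorical test-retest agreement.
# The two response lists below are invented, not the study's data.
def cohens_kappa(r1, r2):
    n = len(r1)
    cats = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from the two ratings' marginal frequencies:
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

first  = ["walk", "bus", "car", "walk", "bike", "car", "walk", "bus"]
second = ["walk", "bus", "car", "walk", "bike", "walk", "walk", "bus"]
print(f"kappa = {cohens_kappa(first, second):.2f}")  # → kappa = 0.82
```

Note that kappa is deliberately stricter than raw percent agreement (7/8 = 0.875 here): the correction discounts the matches that two raters would produce by chance alone.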

  14. SLED program

    International Nuclear Information System (INIS)

    Farkas, Z.D.

    1977-04-01

    A FORTRAN program is described which, for a given cavity and timing, yields all fields as a (piecewise) function of time, and which, for any mix of SLEDded and non-SLEDded klystrons of any given energy/klystron, yields the SLED operation parameters. The note explains the input and output parameters as they appear in the code output. 3 figures, 19 tables

  15. ORGEL program

    Energy Technology Data Exchange (ETDEWEB)

    none

    1963-09-01

    Parameter optimization studies for an ORGEL power plant are reported, and the ESSOR test reactor used in the program is described. Research at Ispra in reactor physics, technology, metallurgy, heat transfer, chemistry, and physical chemistry associated with ORGEL development is also summarized. (D.C.W.)

  16. Program evaluation

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    This book contains the proceedings from the panel on program evaluation. Some of the papers included are the following: Seattle City Light's Industrial Retrofit Demonstration Project Uses Quasi-Experimental Research Design and Metering to Measure Savings; Evaluation for PUCs; and The Takeback Effect: Low-Income Weatherizations, Fact or Fiction.

  17. Sprego Programming

    Directory of Open Access Journals (Sweden)

    Maria Csernoch

    2015-02-01

    Spreadsheet management is a borderland between office applications and programming; however, spreadsheets are often presented as nothing more than an easily handled fun piece. Consequently, the complexity of spreadsheet handling does not match the preparation, problem-solving abilities, and approaches of end-users. To overcome these problems we have developed and introduced Sprego (Spreadsheet Lego). Sprego is a simplified functional programming language in a spreadsheet environment, and as such can be used both as an introductory language and as the language of end-user programmers. The essence of Sprego is that we use as few and as simple functions as possible, and based on these functions build multilevel formulas. With this approach, similar to high-level programming, we are able to solve advanced problems while developing algorithmic skills and computational thinking. The advantage of Sprego is the simplicity of the language, where the emphasis is not on the coding but on the problem. Beyond that, spreadsheets provide real-life problems with authentic data and tables, which students find more interesting than the artificial environment and semi-authentic problems of high-level programming languages.

  18. Polytypic Programming

    NARCIS (Netherlands)

    Jeuring, J.T.; Jansson, P.

    1996-01-01

    Many functions have to be written over and over again for different datatypes, either because datatypes change during the development of programs, or because functions with similar functionality are needed on different datatypes. Examples of such functions are pretty printers, debuggers, and equality functions.

  19. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program is currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V.

  20. IEEE Validation of the Continuing Education Achievement of Engineers Registry System. Procedures for Use with a CPT 8000 Word Processor and Communications Package.

    Science.gov (United States)

    Institute of Electrical and Electronics Engineers, Inc., New York, NY.

    The Institute of Electrical and Electronics Engineers (IEEE) validation program is designed to motivate persons practicing in electrical and electronics engineering to pursue quality technical continuing education courses offered by any responsible sponsor. The rapid acceptance of the validation program necessitated the additional development of a…