WorldWideScience

Sample records for model validation program

  1. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    A long-term joint development program for the next generation of nuclear reactor simulation tools was launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the thermal-hydraulics part of this comprehensive program. Alongside the ongoing development of this new two-phase flow software platform, the physical validation of the modelling involved is a crucial issue at every modelling scale, and the present paper deals with this issue. After a brief review of the NEPTUNE platform, the general validation strategy to be adopted is first clarified by means of three major features: (i) physical validation in close connection with the industrial applications concerned, (ii) a two-step process (as far as possible) that first focuses on dominant separate models and then assesses the whole modelling capability, (iii) the use of data that are relevant with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the industrial applications concerned, to identify the key physical phenomena involved and the associated dominant basic models; (ii) an assessment of these models against the available validation information, to specify the additional validation needs and define dedicated validation plans; (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task), to identify the actual needs for new validation data; (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high-priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  2. Qualitative Validation of the IMM Model for ISS and STS Programs

    Science.gov (United States)

    Kerstman, E.; Walton, M.; Reyes, D.; Boley, L.; Saile, L.; Young, M.; Arellano, J.; Garcia, Y.; Myers, J. G.

    2016-01-01

    To validate and further improve the Integrated Medical Model (IMM), medical event data were obtained from 32 ISS and 122 STS person-missions. Using the crew characteristics from these observed missions, IMM v4.0 was used to forecast medical events and medical resource utilization. The IMM medical condition incidence values were compared to the actual observed medical event incidence values, and the IMM forecasted medical resource utilization was compared to actual observed medical resource utilization. Qualitative comparisons of these parameters were conducted for both the ISS and STS programs. The results of these analyses will provide validation of IMM v4.0 and reveal areas of the model requiring adjustments to improve the overall accuracy of IMM outputs. This validation effort should result in enhanced credibility of the IMM and improved confidence in the use of IMM as a decision support tool for human space flight.
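
    The record describes a qualitative comparison of forecast and observed incidence per condition. A minimal sketch of such a forecast-versus-observed screen (condition names, rates, and the review threshold are invented for illustration, not actual IMM data or interfaces), in Python:

      # Compare forecast vs. observed incidence per medical condition (illustrative data).
      forecast = {"back pain": 0.24, "skin rash": 0.11, "headache": 0.35}   # events per person-mission
      observed = {"back pain": 0.30, "skin rash": 0.08, "headache": 0.33}

      for condition in sorted(forecast):
          f, o = forecast[condition], observed[condition]
          # Flag conditions where the forecast misses the observation by more than 25%.
          flag = "review model" if abs(f - o) / max(o, 1e-9) > 0.25 else "ok"
          print(f"{condition:10s} forecast={f:.2f} observed={o:.2f} -> {flag}")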

  3. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-01-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction. 43 refs

  4. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-09-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction

  5. Research program to develop and validate conceptual models for flow and transport through unsaturated, fractured rock

    International Nuclear Information System (INIS)

    Glass, R.J.; Tidwell, V.C.

    1991-01-01

    As part of the Yucca Mountain Project, our research program to develop and validate conceptual models for flow and transport through unsaturated fractured rock integrates fundamental physical experimentation with conceptual model formulation and mathematical modeling. Our research is directed toward developing and validating macroscopic, continuum-based models and supporting effective property models because of their widespread utility within the context of this project. Success relative to the development and validation of effective property models is predicated on a firm understanding of the basic physics governing flow through fractured media, specifically in the areas of unsaturated flow and transport in a single fracture and fracture-matrix interaction

  6. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  7. Validation of the TEXSAN thermal-hydraulic analysis program

    International Nuclear Information System (INIS)

    Burns, S.P.; Klein, D.E.

    1992-01-01

    The TEXSAN thermal-hydraulic analysis program has been developed by the University of Texas at Austin (UT) to simulate buoyancy-driven fluid flow and heat transfer in spent fuel and high-level nuclear waste (HLW) shipping applications. As part of the TEXSAN software quality assurance program, the software has been subjected to a series of test cases intended to validate its capabilities. The validation tests include many physical phenomena which arise in spent fuel and HLW shipping applications. This paper describes some of the principal results of the TEXSAN validation tests and compares them to solutions available in the open literature. The TEXSAN validation effort has shown that the TEXSAN program is stable and consistent under a range of operating conditions and provides accuracy comparable with other heat transfer programs and evaluation techniques. The modeling capabilities and the interactive user interface employed by the TEXSAN program should make it a useful tool in HLW transportation analysis.

  8. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    Science.gov (United States)

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and, within it, grants preeminence to internal validity, as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, over-emphasis on internal validity reduces the evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive alternative, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and is therefore a preferable alternative for evaluating health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, help advance external validity, and offer a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  9. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this, the following topics are discussed. The basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.
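
    NAMMU itself is not sketched here; as a hedged illustration of the kind of porous-medium flow calculation such a program performs, the following solves one-dimensional steady-state Darcy flow between two fixed heads with standard finite differences (all parameter values are assumptions for the example):

      import numpy as np

      # 1D steady-state groundwater flow: d/dx (K dh/dx) = 0 on a uniform grid,
      # fixed heads at both ends, homogeneous conductivity.
      n = 51                        # grid points
      L = 100.0                     # domain length (m)
      K = 1e-5                      # hydraulic conductivity (m/s)
      h_left, h_right = 10.0, 2.0   # boundary heads (m)

      A = np.zeros((n, n)); b = np.zeros(n)
      A[0, 0] = A[-1, -1] = 1.0     # Dirichlet boundary rows
      b[0], b[-1] = h_left, h_right
      for i in range(1, n - 1):     # interior rows: h[i-1] - 2h[i] + h[i+1] = 0
          A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

      h = np.linalg.solve(A, b)                # head profile (linear in this case)
      q = -K * (h[1] - h[0]) / (L / (n - 1))   # Darcy flux (m/s)
      print(f"Darcy flux: {q:.3e} m/s")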

  10. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficiently to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  11. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.
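
    The paper's running example is the ER-to-RM transformation with pre- and post-conditions. The original work uses logic programming; the sketch below is a Python analogue with invented model structures, showing the soundness idea: a transformation is accepted only if the source model satisfies its semantic preconditions and the target model satisfies its postconditions.

      # ER -> Relational transformation with pre/post-condition validation (illustrative).
      er_model = {"Entity": ["Person", "Course"],
                  "Attr": {"Person": ["id", "name"], "Course": ["code", "title"]}}

      def precondition(er):
          # Semantic requirement on the source: every entity has at least one attribute.
          return all(er["Attr"].get(e) for e in er["Entity"])

      def transform(er):
          # Each entity becomes a table; its first attribute is taken as the key.
          return {e: {"columns": er["Attr"][e], "key": er["Attr"][e][0]} for e in er["Entity"]}

      def postcondition(rm):
          # Requirement on the target: every table's key is one of its own columns.
          return all(t["key"] in t["columns"] for t in rm.values())

      assert precondition(er_model), "source model violates preconditions"
      rm_model = transform(er_model)
      assert postcondition(rm_model), "target model violates postconditions"
      print(rm_model)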

  12. Making Validated Educational Models Central in Preschool Standards.

    Science.gov (United States)

    Schweinhart, Lawrence J.

    This paper presents some ideas to preschool educators and policy makers about how to make validated educational models central in standards for preschool education and care programs that are available to all 3- and 4-year-olds. Defining an educational model as a coherent body of program practices, curriculum content, program and child, and teacher…

  13. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    Energy Technology Data Exchange (ETDEWEB)

    Wingefors, S.; Andersson, J.; Norrby, S. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden). Office of Nuclear Waste Safety; Eisenberg, N.A.; Lee, M.P.; Federline, M.V. [U.S. Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Material Safety and Safeguards; Sagar, B.; Wittmeyer, G.W. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and the long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  14. Regulatory perspectives on model validation in high-level radioactive waste management programs: A joint NRC/SKI white paper

    International Nuclear Information System (INIS)

    Wingefors, S.; Andersson, J.; Norrby, S.

    1999-03-01

    Validation (or confidence building) should be an important aspect of the regulatory uses of mathematical models in the safety assessments of geologic repositories for the disposal of spent nuclear fuel and other high-level radioactive wastes (HLW). A substantial body of literature exists indicating the manner in which scientific validation of models is usually pursued. Because models for a geologic repository performance assessment cannot be tested over the spatial scales of interest and the long time periods for which the models will make estimates of performance, the usual avenue for model validation, that is, comparison of model estimates with actual data at the space-time scales of interest, is precluded. Further complicating the model validation process in HLW programs are the uncertainties inherent in describing the geologic complexities of potential disposal sites, and their interactions with the engineered system, with a limited set of generally imprecise data, making it difficult to discriminate between model discrepancy and inadequacy of input data. A successful strategy for model validation, therefore, should attempt to recognize these difficulties, address their resolution, and document the resolution in a careful manner. The end result of validation efforts should be a documented enhancement of confidence in the model to an extent that the model's results can aid in regulatory decision-making. The level of validation needed should be determined by the intended uses of these models, rather than by the ideal of validation of a scientific theory. This white paper presents a model validation strategy that can be implemented in a regulatory environment. It was prepared jointly by staff members of the U.S. Nuclear Regulatory Commission and the Swedish Nuclear Power Inspectorate (SKI). This document should not be viewed as, and is not intended to be, formal guidance or a staff position on this matter. Rather, based on a review of the literature and previous

  15. Intelligent Testing of Traffic Light Programs: Validation in Smart Mobility Scenarios

    OpenAIRE

    Javier Ferrer; José García-Nieto; Enrique Alba; Francisco Chicano

    2016-01-01

    In smart cities, the use of intelligent automatic techniques to find efficient cycle programs of traffic lights is becoming an innovative front for traffic flow management. However, this automatic programming of traffic lights requires a validation process of the generated solutions, since they can affect the mobility (and security) of millions of citizens. In this paper, we propose a validation strategy based on genetic algorithms and feature models for the automatic generation of different ...

  16. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  17. Assessment model validity document - HYDRASTAR. A stochastic continuum program for groundwater flow

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Eriksson, Lars [Equa Simulation AB, Sundbyberg (Sweden)

    2001-12-01

    This document addresses validation of the stochastic continuum model HYDRASTAR, designed for Monte Carlo simulations of groundwater flow in fractured rocks. Here, validation is defined as a process to demonstrate that a model concept is fit for its purpose. Preferably, the validation is carried out by comparison of model predictions with independent field observations and experimental measurements. In addition, other sources can also be used to confirm that the model concept gives acceptable results. One method is to compare results with those achieved using other model concepts for the same set of input data. Another method is to compare model results with analytical solutions. The model concept HYDRASTAR has been used in several studies, including performance assessments of hypothetical repositories for spent nuclear fuel. In the performance assessments, the main tasks for HYDRASTAR have been to calculate groundwater travel time distributions, repository flux distributions, path lines and their exit locations. The results have then been used by other model concepts to calculate the near-field release and far-field transport. The aim and framework for the validation process include describing the applicability of the model concept for its purpose in order to build confidence in the concept. Preferably, this is done by comparison of simulation results with the corresponding field experiments or field measurements. Here, two comparisons with experimental results are reported. In both cases the agreement was reasonable. In the broader and more general context of the validation process, HYDRASTAR results have been compared with other models and analytical solutions. Commonly, the approximate calculations agree well with the medians of model ensemble results. Additional indications that HYDRASTAR is suitable for its purpose were obtained from the comparisons with results from other model concepts. Several verification studies have been made for
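
    A hedged sketch of the kind of ensemble-versus-analytical comparison the report describes (the lognormal conductivity assumption and every number below are illustrative, not HYDRASTAR's actual models): compare the median of a Monte Carlo travel-time ensemble with an approximate analytical value.

      import numpy as np

      rng = np.random.default_rng(42)
      path_length = 500.0               # m, illustrative
      porosity = 0.01
      mean_logK, sd_logK = -8.0, 0.8    # log10 conductivity (m/s), illustrative
      gradient = 0.005

      # Ensemble of travel times; each realization draws an effective conductivity.
      K = 10.0 ** rng.normal(mean_logK, sd_logK, size=1000)
      velocity = K * gradient / porosity          # pore velocity (m/s)
      travel_time = path_length / velocity / 3.15e7   # years

      # Analytical approximation using the median conductivity.
      analytical = path_length * porosity / (10.0**mean_logK * gradient) / 3.15e7
      print(f"ensemble median travel time: {np.median(travel_time):.3e} years")
      print(f"analytical approximation:    {analytical:.3e} years")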

  18. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. Guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model

  19. Some guidance on preparing validation plans for the DART Full System Models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy (Sandia National Laboratories, Albuquerque, NM)

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  20. An experimental program for testing the validity of flow and transport models in unsaturated tuff: The Yucca Mountain Project

    International Nuclear Information System (INIS)

    Shephard, L.E.; Glass, R.J.; Siegel, M.D.; Tidwell, V.C.

    1990-01-01

    Groundwater flow and contaminant transport through the unsaturated zone are receiving increased attention as options for waste disposal in saturated media continue to be considered as a potential means for resolving the nation's waste management concerns. An experimental program is being developed to test the validity of conceptual flow and transport models that are being formulated to predict the long-term performance at Yucca Mountain. This program is in the developmental stage and will continue to evolve as information is acquired and knowledge is improved with reference to flow and transport in unsaturated fractured media. The general approach for directing the validation effort entails identifying those processes which may cause the site to fail relative to imposed regulatory requirements, evaluating the key assumptions underlying the conceptual models used or developed to describe these processes, and developing new conceptual models as needed. Emphasis is currently being placed in four general areas: flow and transport in unsaturated fractures; fracture-matrix interactions; infiltration flow instability; and evaluation of scale effects in heterogeneous fractured media. Preliminary results and plans for each of these areas for both the laboratory and field investigation components will be presented in the manuscript. 1 ref

  1. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions requires developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building, long-term, iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
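
    A minimal sketch of the "sufficient number of acceptable realizations" idea (the RMSE measure, thresholds, and data below are invented for illustration, not Hassan's actual five metrics): score each stochastic realization against validation data, count how many conform, then decide.

      import numpy as np

      rng = np.random.default_rng(0)
      validation_data = rng.normal(5.0, 1.0, size=20)       # field observations (illustrative)
      realizations = rng.normal(5.2, 1.0, size=(200, 20))   # model ensemble (illustrative)

      # Measure: root-mean-square deviation of each realization from the data.
      rmse = np.sqrt(((realizations - validation_data) ** 2).mean(axis=1))
      acceptable = rmse < 1.5                               # acceptance criterion (illustrative)

      p = acceptable.mean()
      print(f"{acceptable.sum()} of {len(rmse)} realizations acceptable ({p:.0%})")
      # Decision step (illustrative): demand that most realizations conform before
      # declaring the stochastic model adequate for this data set.
      print("model adequate" if p >= 0.8 else "model needs refinement")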

  2. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near-field environment. This involves several different submodels, such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, to validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs
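
    As an illustration of statistically validating a submodel against laboratory data (the choice of test and all data below are assumptions, not the report's methods): compare measured corrosion depths with submodel predictions and test whether the residuals are centered on zero.

      import numpy as np
      from scipy import stats

      # Illustrative lab data: measured vs. model-predicted corrosion depth (mm).
      measured  = np.array([0.21, 0.34, 0.47, 0.62, 0.74, 0.90])
      predicted = np.array([0.20, 0.31, 0.50, 0.60, 0.78, 0.88])

      residuals = measured - predicted
      t, p = stats.ttest_1samp(residuals, 0.0)   # H0: the submodel is unbiased
      print(f"mean residual = {residuals.mean():+.3f} mm, p = {p:.2f}")
      print("no evidence of bias" if p > 0.05 else "model systematically biased")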

  3. Validation studies of the DOE-2 Building Energy Simulation Program. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, R.; Winkelmann, F.

    1998-06-01

    This report documents many of the validation studies (Table 1) of the DOE-2 building energy analysis simulation program that have taken place since 1981. Results for several versions of the program are presented, with the most recent study conducted in 1996 on version DOE-2.1E and the most distant study conducted in 1981 on version DOE-1.3. This work is part of an effort related to continued development of DOE-2, particularly in its use as a simulation engine for new specialized versions of the program such as the recently released RESFEN 3.1. RESFEN 3.1 is a program specifically for analyzing the energy performance of windows in residential buildings. The intent in providing the results of these validation studies is to give potential users of the program a high degree of confidence in the calculated results. Validation studies in which calculated simulation data are compared to measured data have been conducted throughout the development of the DOE-2 program. Discrepancies discovered during the course of such work have resulted in improvements in the simulation algorithms. Table 2 provides a listing of additions and modifications that have been made to various versions of the program since version DOE-2.1A. One of the most significant recent changes in the program occurred with version DOE-2.1E. An improved algorithm for calculating the outside surface film coefficient was implemented. In addition, integration of the WINDOW 4 program was accomplished, resulting in improved ability to analyze window energy performance. Validation and verification of a program as sophisticated as DOE-2 must necessarily be limited because of the approximations inherent in the program. For example, the most accurate model of the heat transfer processes in a building would include a three-dimensional analysis. To justify such detailed algorithmic procedures would correspondingly require detailed information describing the building and/or HVAC system and energy plant parameters

  4. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Criticite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980s a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat a l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Surete Nucleaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4.5 wt% {sup 235}U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The HTC data applicability to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  5. Intelligent Testing of Traffic Light Programs: Validation in Smart Mobility Scenarios

    Directory of Open Access Journals (Sweden)

    Javier Ferrer

    2016-01-01

    In smart cities, the use of intelligent automatic techniques to find efficient cycle programs of traffic lights is becoming an innovative front for traffic flow management. However, this automatic programming of traffic lights requires a validation process of the generated solutions, since they can affect the mobility (and security) of millions of citizens. In this paper, we propose a validation strategy based on genetic algorithms and feature models for the automatic generation of different traffic scenarios, checking the robustness of traffic light cycle programs. We have concentrated on an extensive urban area in the city of Malaga (in Spain), in which we validate a set of candidate cycle programs generated by means of four optimization algorithms: Particle Swarm Optimization for Traffic Lights, Differential Evolution for Traffic Lights, random search, and the Sumo Cycle Program Generator. We can test the cycles of traffic lights considering the different states of the city, weather, congestion, driver expertise, vehicle's features, and so forth, but prioritizing the most relevant scenarios among a large and varied set of them. The improvement achieved in solution quality is remarkable, especially for CO2 emissions, in which we have obtained a reduction of 126.99% compared with the experts' solutions.
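
    A hedged sketch of the scenario-prioritization idea (the feature dimensions and relevance weights are invented, and the actual paper derives scenarios from feature models with genetic algorithms): enumerate scenario combinations from a tiny feature model and test a candidate cycle program against the most demanding ones first.

      import itertools

      # Tiny "feature model" of scenario dimensions (illustrative).
      features = {
          "weather": ["dry", "rain"],
          "congestion": ["low", "peak"],
          "driver": ["expert", "novice"],
      }
      priority = {"rain": 2, "peak": 3, "novice": 1}   # relevance weights (illustrative)

      scenarios = [dict(zip(features, combo))
                   for combo in itertools.product(*features.values())]
      # Rank scenarios so the most demanding ones are simulated first.
      scenarios.sort(key=lambda s: -sum(priority.get(v, 0) for v in s.values()))

      for s in scenarios[:3]:
          print("simulate cycle program under:", s)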

  6. Development of an Auto-Validation Program for MARS Code Assessments

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2006-01-01

    MARS (Multi-dimensional Analysis of Reactor Safety) code is a best-estimate thermal-hydraulic system analysis code developed at KAERI. It is important for a thermal-hydraulic computer code to be assessed against theoretical and experimental data to verify and validate the performance and the integrity of the structure, models and correlations of the code. The code assessment efforts for a complex thermal-hydraulics code such as MARS can be tedious and time-consuming, and they require a large amount of human intervention in data transfer to see the results in graphic form. Code developers produce many versions of a code during development, and each version needs to be verified for integrity. Thus, for MARS code developers, it is desirable to have an automatic way of carrying out the code assessment calculations. In the present work, an Auto-Validation program that carries out the code assessment efforts has been developed. The program uses a user-supplied configuration file (with the '.vv' extension) which contains commands to read the input file, to execute the user-selected MARS program, and to generate result graphs. The program can be useful if the same set of code assessments is repeated with different versions of the code. The program is written in the Delphi programming language and runs under the Microsoft Windows environment.
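
    The record says the tool is driven by a '.vv' configuration file naming inputs, the code version to run, and the graphs to produce. A Python analogue of such a driver loop (the file format shown is invented for illustration; the real tool is written in Delphi and its format is not documented here):

      import pathlib
      import subprocess

      # Illustrative '.vv' format: one "input_file, executable, plot_name" triple per line.
      for line in pathlib.Path("assessments.vv").read_text().splitlines():
          if not line.strip() or line.startswith("#"):
              continue
          input_file, executable, plot_name = (s.strip() for s in line.split(","))
          # Run the selected code version on the assessment input ...
          subprocess.run([executable, "-i", input_file], check=True)
          # ... then hand the results to a plotting step (not shown).
          print(f"finished {input_file}; plotting {plot_name}")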

  7. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  8. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  9. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  10. INL Experimental Program Roadmap for Thermal Hydraulic Code Validation

    Energy Technology Data Exchange (ETDEWEB)

    Glenn McCreery; Hugh McIlroy

    2007-09-01

    Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related

  11. An integrated approach to validation of safeguards and security program performance

    International Nuclear Information System (INIS)

    Altman, W.D.; Hunt, J.S.; Hockert, J.W.

    1988-01-01

    Department of Energy (DOE) requirements for safeguards and security programs are becoming increasingly performance oriented. Master Safeguards and Security Agreements specify performance levels for systems protecting DOE security interests. In order to measure and validate security system performance, Lawrence Livermore National Laboratory (LLNL) has developed cost-effective validation tools and a comprehensive validation approach that synthesizes information gained from different activities, such as force-on-force exercises, limited-scope performance tests, equipment testing, vulnerability analyses, and computer modeling, into an overall assessment of the performance of the protection system. The analytic approach employs logic diagrams adapted from the fault and event trees used in probabilistic risk assessment. The synthesis of the results from the various validation activities is accomplished using a method developed by LLNL, based upon Bayes' theorem.
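
    A minimal sketch of combining heterogeneous test evidence with Bayes' theorem, as the abstract describes (all priors and likelihoods below are invented, not LLNL's method details): update the probability that the protection system meets its performance level after each validation activity.

      # P(effective) prior, updated by independent validation activities (illustrative numbers).
      p = 0.50                                    # prior belief system meets performance level
      evidence = [
          # (P(result | effective), P(result | not effective))
          (0.90, 0.40),   # passed force-on-force exercise
          (0.85, 0.50),   # passed limited-scope performance test
          (0.95, 0.60),   # equipment test within spec
      ]
      for like_e, like_not_e in evidence:
          p = like_e * p / (like_e * p + like_not_e * (1 - p))   # Bayes' rule
      print(f"posterior P(system effective) = {p:.2f}")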

  12. Supplementary investigations on the validation of the atmospheric radionuclide transport model (ARTM)

    International Nuclear Information System (INIS)

    Richter, Cornelia; Thielen, Harald; Sogalla, Martin

    2015-09-01

    In the medium term, the Gaussian plume model used so far for atmospheric dispersion calculations in the General Administrative Provision (AVV) relating to Section 47 of the Radiation Protection Ordinance (StrlSchV), as well as in the Incident Calculation Bases (SBG) relating to Section 49 StrlSchV, is to be replaced by a Lagrangian particle model. Meanwhile, the Atmospheric Radionuclide Transport Model (ARTM) is available, which allows the simulation of the atmospheric dispersion of operational releases from nuclear installations. ARTM is based on the program package AUSTAL2000, which is designed for the simulation of atmospheric dispersion of non-radioactive operational releases from industrial plants, and was adapted to the application of airborne radioactive releases. The research project 3612S50007 serves, on the one hand, to validate ARTM systematically. On the other hand, developments in science and technology were investigated and, where reasonable and possible, implemented in the program system. The dispersion model and the user interface were advanced and optimized. The program package was provided to the users as a free download. Notably, the work program comprises the validation of the approach used in ARTM to model short emission periods, which are of interest in view of the SBG. The simulation results of the diagnostic wind and turbulence model TALdia, which is part of the GO-ARTM program package, were evaluated with focus on the influence of buildings on the flow field. The user interface was upgraded with a wind field viewer. To simplify the comparison with the model still in use, a Gaussian plume model was implemented in the graphical user interface. The ARTM web page was maintained, and user questions and feedback were answered and analysed concerning possible improvements and further developments of the program package. Numerous improvements were implemented. An ARTM user workshop was hosted by the Federal Office for Radiation

  13. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  14. Fission Product Experimental Program: Validation and Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Leclaire, N.; Ivanova, T.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Girault, E. [CEA Valduc, Serv Rech Neutron and Criticite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-02-15

    From 1998 to 2004, a series of critical experiments referred to as the fission product (FP) experimental program was performed at the Commissariat a l'Energie Atomique Valduc research facility. The experiments were designed by the Institut de Radioprotection et de Surete Nucleaire (IRSN) and funded by AREVA NC and IRSN within the French program supporting development of a technical basis for burnup credit validation. The experiments were performed with the following six key fission products encountered in solution, either individually or as mixtures: {sup 103}Rh, {sup 133}Cs, {sup nat}Nd, {sup 149}Sm, {sup 152}Sm, and {sup 155}Gd. The program aimed at compensating for the lack of information on critical experiments involving FPs and at establishing a basis for FP credit validation. One hundred forty-five critical experiments were performed, evaluated, and analyzed with the French CRISTAL criticality safety package and the American SCALE 5.1 code system employing different cross-section libraries. The aim of the paper is to show the potential of the experimental data to improve the ability to perform validation of full burnup credit calculations. The paper describes the three phases of the experimental program: the results of the preliminary evaluation, the calculation, and the sensitivity/uncertainty study of the FP experiments used to validate the APOLLO2-MORET 4 route in the CRISTAL criticality package for burnup credit applications. (authors)

  15. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended periods of time. This report describes the HCM, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
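
    A hedged sketch of the basic hydrostatic-column balance such a model rests on (fluid densities and segment heights below are invented, not Big Hill values): the pressure at the column bottom is the wellhead pressure plus the sum of rho*g*h over each fluid segment, and moving interfaces change these segment heights.

      G = 9.81  # m/s^2

      # Fluid column from wellhead down: (name, density kg/m^3, segment height m) - illustrative.
      column = [
          ("nitrogen",  150.0, 300.0),   # compressed gas cap
          ("crude oil", 850.0, 500.0),
          ("brine",    1200.0, 200.0),
      ]

      def bottom_pressure(wellhead_pa, segments):
          """Hydrostatic pressure at the column bottom: wellhead + sum(rho * g * h)."""
          return wellhead_pa + sum(rho * G * h for _, rho, h in segments)

      p = bottom_pressure(8.0e6, column)
      print(f"bottom-hole pressure: {p/1e6:.2f} MPa")
      # A slow leak shows up as a wellhead-pressure trend that this balance cannot
      # reproduce with fixed interface positions.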

  16. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design programs are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  17. Building Technologies Program Multi-Year Program Plan Technology Validation and Market Introduction 2008

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2008-01-01

    Building Technologies Program Multi-Year Program Plan 2008 for technology validation and market introduction, including ENERGY STAR, building energy codes, technology transfer application centers, commercial lighting initiative, EnergySmart Schools, EnergySmar

  18. Content Validity of National Post Marriage Educational Program Using Mixed Methods

    Science.gov (United States)

    MOHAJER RAHBARI, Masoumeh; SHARIATI, Mohammad; KERAMAT, Afsaneh; YUNESIAN, Masoud; ESLAMI, Mohammad; MOUSAVI, Seyed Abbas; MONTAZERI, Ali

    2015-01-01

    Background: Although content validation of programs is mostly conducted with qualitative methods, this study used both qualitative and quantitative methods to validate the content of the post-marriage training program provided for newly married couples. Content validation is a preliminary step in obtaining the authorization required to install the program in the country's health care system. Methods: This mixed-methods content validation study was carried out in four steps, with three expert panels. Altogether, 24 expert panelists were involved in the three qualitative and quantitative panels: 6 in the first (item development) panel; 12 in the reduction panel, 4 of whom also served on the first panel; and 10 executive experts in the last panel, organized to evaluate the psychometric properties (CVR, CVI, and face validity) of 57 educational objectives. Results: The raw content of the post-marriage program had been written by professional experts of the Ministry of Health; using the qualitative expert panel, the content was further developed by generating three topics and refining one topic and its respective content. In the second panel, a total of six objectives were deleted: three for falling below the agreement cut-off point and three by experts' consensus. The validity of all items was above 0.8, and their content validity indices (0.8–1) were completely appropriate in the quantitative assessment. Conclusion: This study provided good evidence for validation and accreditation of the national post-marriage program planned for newly married couples in health centers of the country in the near future. PMID:26056672
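
    The content validity ratio mentioned in the abstract is conventionally computed with Lawshe's formula, CVR = (n_e - N/2)/(N/2), where n_e of N panelists rate an item essential. A minimal sketch (the panel votes and retention threshold are invented for illustration):

      def cvr(n_essential: int, n_experts: int) -> float:
          """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
          return (n_essential - n_experts / 2) / (n_experts / 2)

      # Illustrative panel of 10 executive experts rating 3 educational objectives.
      votes = {"objective 1": 10, "objective 2": 9, "objective 3": 6}
      for item, n_e in votes.items():
          r = cvr(n_e, 10)
          print(f"{item}: CVR = {r:+.2f} -> {'retain' if r >= 0.8 else 'review'}")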

  19. Thermal hydraulic model validation for HOR mixed core fuel management

    International Nuclear Information System (INIS)

    Gibcus, H.P.M.; Vries, J.W. de; Leege, P.F.A. de

    1997-01-01

    A thermal-hydraulic core management model has been developed for the Hoger Onderwijsreactor (HOR), a 2 MW pool-type university research reactor. The model was adopted for safety analysis purposes in the framework of HEU/LEU core conversion studies. It is applied in the thermal-hydraulic computer code SHORT (Steady-state HOR Thermal-hydraulics), which is presently in use for designing core configurations and for in-core fuel management. An elaborate measurement program was performed to establish the core hydraulic characteristics for a variety of conditions. The hydraulic data were obtained with a dummy fuel element with special equipment allowing, among other things, direct measurement of the true core flow rate. Using these data, the thermal-hydraulic model was validated experimentally. The model, experimental tests, and model validation are discussed. (author)

  20. Validating a Finite Element Model of a Structure Subjected to Mine Blast with Experimental Modal Analysis

    Science.gov (United States)

    2017-11-01

    Howle, Dmitriy Krayterman, Justin E. Pritchett, and Ryan Sorenson. The Under-body Blast Methodology (UBM) for the Test and Evaluation (T&E) program was established to provide a capability for the US Army Test and ... and must be validated. The UBM for the T&E program has completed efforts to validate soil models but not structural dynamics models. Modal testing...

  1. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In the philosophy of science, interest in computational models and simulations has increased considerably during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety...

  2. Validation of nuclear models used in space radiation shielding applications

    International Nuclear Information System (INIS)

    Norman, Ryan B.; Blattnig, Steve R.

    2013-01-01

    A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
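
    The paper's exact metric definitions are in its full text; as a rough sketch of the idea of treating experimental uncertainty as an interval, assuming a miss distance of zero when the prediction falls inside the measured interval, the Python below aggregates such distances with a cumulative and a median summary. The function names and aggregation details are assumptions, not the authors' definitions.

```python
import statistics

def interval_miss(prediction, lower, upper):
    """Distance from a model prediction to an experimental uncertainty
    interval; zero when the prediction is consistent with the data."""
    if lower <= prediction <= upper:
        return 0.0
    return min(abs(prediction - lower), abs(prediction - upper))

def summarize(predictions, intervals):
    misses = [interval_miss(p, lo, hi)
              for p, (lo, hi) in zip(predictions, intervals)]
    return {"cumulative": sum(misses),            # overall-accuracy proxy
            "median": statistics.median(misses)}  # typical-case proxy

# Hypothetical cross sections (mb): model values vs. measured intervals.
print(summarize([120.0, 95.0, 60.0],
                [(115.0, 125.0), (100.0, 110.0), (55.0, 70.0)]))
```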

  3. Validation of the HDM models for crack initiation and development, rutting and roughness of the pavement

    Directory of Open Access Journals (Sweden)

    Ognjenović Slobodan

    2017-01-01

    Worldwide practice recommends validating the HDM models against other software that can be used to compare forecasting results. The program package MATLAB is used in this case, as it enables modelling of all the HDM models. A statistical validation of the forecasts of pavement condition in HDM against field measurement results was also performed. This paper presents the results of the validation of the calibration coefficients of the deterioration models in HDM-4 on the Macedonian highways.

  4. Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model

    Energy Technology Data Exchange (ETDEWEB)

    Berk, Alexander [Spectral Sciences, Inc., Burlington, MA (United States); Hawes, Frederick [Spectral Sciences, Inc., Burlington, MA (United States); Fox, Marsha [Spectral Sciences, Inc., Burlington, MA (United States)

    2016-03-15

    Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single-band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, no models exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished (1) by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; (2) by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; (3) by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single-scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; (4) by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to-be-developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and (5) by planning a validation field

  5. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  6. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    Science.gov (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data taken aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  7. Validation of a proposal for evaluating hospital infection control programs.

    Science.gov (United States)

    Silva, Cristiane Pavanello Rodrigues; Lacerda, Rúbia Aparecida

    2011-02-01

    To validate the construct and discriminant properties of a hospital infection prevention and control program. The program consisted of four indicators: technical-operational structure; operational prevention and control guidelines; epidemiological surveillance system; and prevention and control activities. These indicators, with previously validated content, were applied to 50 healthcare institutions in the city of São Paulo, Southeastern Brazil, in 2009. Descriptive statistics were used to characterize the hospitals and indicator scores, and Cronbach's α coefficient was used to evaluate the internal consistency. The discriminant validity was analyzed by comparing indicator scores between groups of hospitals with versus without quality certification. The construct validity analysis was based on exploratory factor analysis with a tetrachoric correlation matrix. The indicators for the technical-operational structure and epidemiological surveillance presented almost 100% conformity across the whole sample. The indicators for the operational prevention and control guidelines and the prevention and control activities presented internal consistency ranging from 0.67 to 0.80. The discriminant validity of these indicators showed higher and statistically significant mean conformity scores in the group of institutions with healthcare certification or accreditation processes. In the construct validation, two dimensions were identified for the operational prevention and control guidelines: recommendations for preventing hospital infection and recommendations for standardizing prophylaxis procedures, with good correlation between the analysis units that formed the guidelines. The same was found for the prevention and control activities: interfaces with treatment units and support units were identified. Validation of the measurement properties of the hospital infection prevention and control program indicators made it possible to develop a tool for evaluating these programs.
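
    As an aside on the internal-consistency statistic used above, the sketch below computes Cronbach's α from item-score columns. It is the generic textbook formula, not code from the study, and the data are invented.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by n respondents.

    items: list of k lists, each holding the n respondents' scores
    for one item of the indicator.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Three items scored by five respondents (illustrative data only).
print(cronbach_alpha([[3, 4, 4, 5, 2],
                      [3, 5, 4, 4, 2],
                      [2, 4, 5, 5, 3]]))
```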

  8. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

    This work is carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1, PMATCH, A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions, and their use in the program is discussed. The problems arising when the available thermodynamic data are insufficient to describe a species' behaviour under all conceivable conditions are thoroughly discussed, and such data are handled by approximating expressions. Part 2, The Experimental Validation of Geochemical Computer Models, presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well and that its database was of sufficient validity. However, it was observed that experimental difficulties could hardly be avoided when, as here, a gaseous component took part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, abnormal effects appeared when manganese and calcium carbonates were mixed, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months, whereby the tracer became more strongly adsorbed onto the calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the 'species' NaEu(CO3)2.

  9. Validation of FORTRAN emulators for the G2 varian control programs

    International Nuclear Information System (INIS)

    Delorme, G.

    1996-01-01

    The extensive use of the Gentilly full scope simulator for training and verification of plant procedures forced the development of a reliable desktop simulator for software maintenance purposes. For that, we needed emulators for the control programs which run on the DCC Varian computers in the full scope simulator. This paper presents the validation results for the Reactor Regulating System (RRS) program. This emulator was programmed in a modular fashion, providing ease of maintenance and of porting to another environment. The results obtained with specific tests, or with integrated testing involving complex control rule interactions, compared favorably with the ones obtained using the actual plant control programs running on the full scope simulator, which constitutes an irrefutable validation procedure. This RRS package, along with the other emulators being validated in this manner, could be used in safety codes with confidence. (author)

  10. Generalizability of GMAT[R] Validity to Programs outside the U.S.

    Science.gov (United States)

    Talento-Miller, Eileen

    2008-01-01

    This study explores the predictive validity of GMAT[R] scores for predicting performance in graduate management programs outside the United States. Results suggest that the validity estimates based on the combination of GMAT[R] scores were about a third of a standard deviation higher for non-U.S. programs compared with existing data on U.S.…

  11. Validated TRNSYS Model for Solar Assisted Space Heating System

    International Nuclear Information System (INIS)

    Abdalla, Nedal

    2014-01-01

    The present study involves a validated TRNSYS model for a solar assisted space heating system as applied to a residential building in Jordan, using the new detailed radiation models of TRNSYS 17.1 and the geometric building model Trnsys3d for the Google SketchUp 3D drawing program. The annual heating load for a building (Solar House) located at the Royal Scientific Society (RSS) in Jordan is estimated under the climatological conditions of Amman. The aim of this paper is to compare the measured thermal performance of the Solar House with that modeled using TRNSYS. The results showed that the annual measured space heating load for the building was 6,188 kWh while the heating load for the modeled building was 6,391 kWh. Moreover, the measured solar fraction for the solar system was 50% while the modeled solar fraction was 55%. A comparison of modeled and measured data resulted in percentage mean absolute errors for solar energy for space heating, auxiliary heating and solar fraction of 13%, 7% and 10%, respectively. The validated model will be useful for long-term performance simulation under different weather and operating conditions. (author)
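
    As a quick illustration of the percentage mean absolute error statistic quoted above, the sketch below computes it for paired measured/modeled series. This is the generic formula, not the study's code, and the monthly loads are invented.

```python
def pct_mean_abs_error(measured, modeled):
    """Percentage mean absolute error between paired series."""
    errors = [abs(sim - obs) / abs(obs)
              for obs, sim in zip(measured, modeled)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical monthly space-heating loads in kWh.
measured = [1200.0, 980.0, 760.0, 400.0, 150.0]
modeled  = [1100.0, 1050.0, 800.0, 430.0, 170.0]
print(round(pct_mean_abs_error(measured, modeled), 1))  # percent
```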

  12. Sustained Implementation Support Scale: Validation of a Measure of Program Characteristics and Workplace Functioning for Sustained Program Implementation.

    Science.gov (United States)

    Hodge, Lauren M; Turner, Karen M T; Sanders, Matthew R; Filus, Ania

    2017-07-01

    An evaluation measure of enablers and inhibitors to sustained evidence-based program (EBP) implementation may provide a useful tool to enhance organizations' capacity. This paper outlines preliminary validation of such a measure. An expert informant and consumer feedback approach was used to tailor constructs from two existing measures assessing key domains associated with sustained implementation. Validity and reliability were evaluated for an inventory composed of five subscales: Program benefits, Program burden, Workplace support, Workplace cohesion, and Leadership style. Exploratory and confirmatory factor analysis with a sample of 593 Triple P-Positive Parenting Program-practitioners led to a 28-item scale with good reliability and good convergent, discriminant, and predictive validity. Practitioners sustaining implementation at least 3 years post-training were more likely to have supervision/peer support, reported higher levels of program benefit, workplace support, and positive leadership style, and lower program burden compared to practitioners who were non-sustainers.

  13. Validation study of safety assessment model for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Munakata, Masahiro; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-12-01

    The JAERI-AECL collaboration research program has been conducted to validate groundwater flow and radionuclide transport models for safety assessment. JAERI has developed a geostatistical model for radionuclide transport through heterogeneous geological media and verified it using the results of field tracer tests. The simulated tracer plumes reproduce the experimental tracer plumes favorably. A regional groundwater flow and transport model, using site-scale parameters obtained from the tracer tests, has been verified by comparing simulation results with observations of natural environmental tracers. (author)

  14. Precision Glass Molding: Validation of an FE Model for Thermo-Mechanical Simulation

    DEFF Research Database (Denmark)

    Sarhadi, Ali; Hattel, Jesper Henri; Hansen, Hans Nørgaard

    2014-01-01

    glass molding process including heating, pressing, and cooling stages. Temperature-dependent viscoelastic and structural relaxation behavior of the glass material are implemented through a FORTRAN material subroutine (UMAT) into the commercial FEM program ABAQUS, and the FE model is validated...

  15. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observations and a computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology for both single and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • The validation and calibration process is applied at single and multiple sites. • The validation and calibration process shows superiority over existing methods
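
    As a minimal sketch of the area validation metric named above, the code below integrates the absolute difference between the empirical CDFs of model samples and observations. This is the common form of the metric from the validation literature, not necessarily the authors' exact implementation.

```python
def area_metric(model_samples, observations):
    """Area between the empirical CDFs of model output and observations.

    Integrates |F_model(x) - F_obs(x)| by stepping through the merged,
    sorted sample values (both ECDFs are constant between them).
    """
    def ecdf(samples, x):
        return sum(s <= x for s in samples) / len(samples)

    grid = sorted(set(model_samples) | set(observations))
    area = 0.0
    for left, right in zip(grid[:-1], grid[1:]):
        area += abs(ecdf(model_samples, left)
                    - ecdf(observations, left)) * (right - left)
    return area

# Illustrative samples only.
print(area_metric([1.0, 1.2, 1.4, 1.6], [1.1, 1.3, 1.7, 1.9]))
```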

  16. NPOESS Preparatory Project Validation Program for Atmosphere Data Products from VIIRS

    Science.gov (United States)

    Starr, D.; Wong, E.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP), a risk reduction and data continuity mission, prior to the first operational NPOESS launch. The NPOESS Program, in partnership with Northrop Grumman Aerospace Systems (NGAS), will execute the NPP Validation program to ensure the data products comply with the requirements of the sponsoring agencies. Data from the NPP Visible/Infrared Imager/Radiometer Suite (VIIRS) will be used to produce Environmental Data Records (EDRs) for aerosols and clouds, specifically Aerosol Optical Thickness (AOT), Aerosol Particle Size Parameter (APSP), and Suspended Matter (SM); and Cloud Optical Thickness (COT), Cloud Effective Particle Size (CEPS), Cloud Top Temperature (CTT), Height (CTH) and Pressure (CTP), and Cloud Base Height (CBH). The Aerosol and Cloud EDR Validation Program is a multifaceted effort to characterize and validate these data products. The program involves systematic comparison to heritage data products, e.g., MODIS, and ground-based correlative data, such as AERONET and ARM data products, and potentially airborne field measurements. To the extent possible, the domain is global. The program leverages various investments that have been and continue to be made by national funding agencies in such resources, as well as the operational user community and the broad Earth science user community. This presentation will provide an overview of the approaches, data and schedule for the validation of the NPP VIIRS Aerosol and Cloud environmental data products.

  17. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  18. Supplementary investigations on the validation of the atmospheric radionuclide transport model (ARTM); Ergaenzende Untersuchungen zur Validierung des Atmosphaerischen Radionuklid-Transport-Modells (ARTM)

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Thielen, Harald; Sogalla, Martin

    2015-09-15

    On the medium-term time scale, the Gaussian plume model used so far for atmospheric dispersion calculations in the General Administrative Provision (AVV) relating to Section 47 of the Radiation Protection Ordinance (StrlSchV), as well as in the Incident Calculation Bases (SBG) relating to Section 49 StrlSchV, is to be replaced by a Lagrangian particle model. Meanwhile, the Atmospheric Radionuclide Transport Model (ARTM) is available, which allows the simulation of the atmospheric dispersion of operational releases from nuclear installations. ARTM is based on the program package AUSTAL2000, which is designed for the simulation of atmospheric dispersion of non-radioactive operational releases from industrial plants and was adapted to the application to airborne radioactive releases. The research project 3612S50007 serves, on the one hand, to validate ARTM systematically. On the other hand, developments in science and technology were investigated and, if reasonable and possible, implemented in the program system. The dispersion model and the user interface were advanced and optimized. The program package was provided to the users as a free download. Notably, the work program comprises the validation of the approach used in ARTM to model short emission periods, which are of interest in view of the SBG. The simulation results of the diagnostic wind and turbulence model TALdia, which is part of the GO-ARTM program package, were evaluated with a focus on the influence of buildings on the flow field. The user interface was upgraded with a wind field viewer. To simplify the comparison with the model still in use, a Gaussian plume model was implemented in the graphical user interface. The ARTM web page was maintained, and user questions and feedback were answered and analysed with regard to possible improvements and further developments of the program package. Numerous improvements were implemented. An ARTM user workshop was hosted by the Federal Office for Radiation Protection.

  19. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

    This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated for the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
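
    As a toy illustration of aggregating correlated response quantities instead of making multiple univariate comparisons, the sketch below computes a Mahalanobis distance between predicted and observed response vectors. This is one standard multivariate statistic, not necessarily the aggregate metric developed in the paper; the covariance and values are invented.

```python
import numpy as np

def mahalanobis(predicted, observed, covariance):
    """Aggregate distance between predicted and observed response vectors,
    accounting for correlation among the response quantities."""
    diff = np.asarray(predicted) - np.asarray(observed)
    return float(np.sqrt(diff @ np.linalg.inv(covariance) @ diff))

# Two correlated response quantities (illustrative numbers only).
cov = np.array([[0.25, 0.10],
                [0.10, 0.40]])
print(mahalanobis([10.0, 5.0], [10.4, 4.6], cov))
```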

  20. Application of a general fluid mechanics program to NTP system modeling

    International Nuclear Information System (INIS)

    Lee, S.K.

    1993-01-01

    An effort is currently underway at NASA and the Department of Energy (DOE) to develop an accurate model for predicting nuclear thermal propulsion (NTP) system performance. The objective of the effort is to develop several levels of computer programs which vary in detail and complexity according to users' needs. The current focus is on the Level 1 steady-state, parametric system model. This system model will combine a general fluid mechanics program, SAFSIM, with the ability to analyze turbines, pumps, nozzles, and reactor physics. SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program that simulates the integrated performance of systems involving fluid mechanics, heat transfer, and reactor dynamics. SAFSIM has the versatility to allow simulation of almost any system, including a nuclear reactor system. The focus of this paper is the validation of SAFSIM's capabilities as a base computational engine for a nuclear thermal propulsion system model. Validation is being accomplished by modeling a nuclear engine test using SAFSIM and comparing the results to known experimental data.

  1. Design and validation of general biology learning program based on scientific inquiry skills

    Science.gov (United States)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. This study aims to 1) analyze students' difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary schools/Islamic elementary schools. The workflow of the program design includes identifying learning difficulties in General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) several problems were identified in the General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  2. Laboratory research program to aid in developing and testing the validity of conceptual models for flow and transport through unsaturated porous media

    International Nuclear Information System (INIS)

    Glass, R.J.

    1991-01-01

    As part of the Yucca Mountain Project, a laboratory research program is being developed at Sandia National Laboratories that will integrate fundamental physical experimentation with conceptual model formulation and mathematical modeling and aid in subsequent model validation for unsaturated zone water and contaminant transport. Experimental systems are being developed to explore flow and transport processes and assumptions of fundamental importance to various conceptual models. Experimentation will run concurrently in two types of systems: fractured and nonfractured tuffaceous systems; and analogue systems having specific characteristics of the tuff systems but designed to maximize experimental control and resolution of data measurement. Areas in which experimentation currently is directed include infiltration flow instability, water and solute movement in unsaturated fractures, fracture-matrix interaction, and scaling laws to define effective large-scale properties for heterogeneous, fractured media. 16 refs

  3. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    Science.gov (United States)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System (SLS), NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and of natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to develop more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation
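
    As a toy illustration of the limit-state idea described above, the sketch below estimates, by Monte Carlo sampling, the probability that a dispersed modal frequency falls outside a required band. The distribution, dispersion, and band are invented for illustration, not SLS values.

```python
import random

def limit_state_probability(n_samples=100_000, seed=1):
    """Monte Carlo estimate of P(frequency outside the required band)."""
    rng = random.Random(seed)
    lo, hi = 2.8, 3.2       # hypothetical required frequency band, Hz
    failures = 0
    for _ in range(n_samples):
        freq = rng.gauss(3.0, 0.08)  # dispersed modal frequency, Hz
        if not lo <= freq <= hi:
            failures += 1
    return failures / n_samples

print(limit_state_probability())
```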

  4. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S

    2014-01-01

    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers, as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. The paper defines the translation procedure and rules from UNICOS to BIP, which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.

  5. Theory and Validation for the Collision Module

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1999-01-01

    This report describes basic modelling principles, the theoretical background, and validation examples for the Collision Module of the computer program DAMAGE.

  6. Initialization of the Euler model MODIS with field data from the 'EPRI plume model validation project'

    International Nuclear Information System (INIS)

    Petersen, G.; Eppel, D.; Lautenschlager, M.; Mueller, A.

    1985-01-01

    The program deck MODIS ("MOment DIStribution") is designed to be used as an operational tool for modelling the dispersion of a point source under general atmospheric conditions. The concentration distribution is determined by calculating its cross-wind moments on a vertical grid oriented in the main wind direction. The model contains a parametrization for horizontal and vertical coefficients based on a second-order closure model. The Eulerian time scales, preliminarily determined by fitting measured plume cross sections, are confirmed by comparison with data from the EPRI plume model validation project. (orig.)

  7. Validation of coastal oceanographic models at Laxemar-Simpevarp. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Engqvist, Anders; Andrejev, Oleg

    2008-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is undertaking site characterization at two different locations, the Forsmark and the Laxemar-Simpevarp areas, with the objective of siting a geological repository for spent nuclear fuel. The characterization work is divided into an initial site investigation phase and a complete site investigation phase. In this context, the water exchange of the coastal zone is one link in the chain of possible nuclide transport mechanisms that must be assessed in the site description of potential repository areas. For the purpose of validating the pair of nested 3D-models and the coupled discrete basin (CDB-) model employed to simulate the water exchange in the near-shore coastal zone in the Laxemar-Simpevarp area, an encompassing measurement program entailing data from six stations (of which two are close together) has been performed. The design of this program was to first assess to what degree the forcing of the fine-resolution (FR-) model of the Laxemar-Simpevarp study area at its interfacial boundary to the coarse-resolution (CR-) model of the entire Baltic was reproduced. In addition, it is of particular interest how the time-varying density-determining properties, salinity and temperature, at the borders are propagated into the FR-domain and further influence the water exchange with the interior, more secluded, basins. An important part of the validation process has been to carefully evaluate which measurement data can be considered reliable. The result was that some periods of foremost near-surface salinity data had to be discarded due to growth of algae on the conductivity sensors. Interference with ship traffic and lack of absolute calibration of the salinity meters necessitated dismissal of some measurement data too. In this study, so-called Mesan data have been consistently used for the meteorological forcing of the 3D-models. Relative to the assessed data that can be accepted as adequate, the outcome of the

  8. Validation of coastal oceanographic models at Laxemar-Simpevarp. Site descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-12-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is undertaking site characterization at two different locations, the Forsmark and the Laxemar-Simpevarp areas, with the objective of siting a geological repository for spent nuclear fuel. The characterization work is divided into an initial site investigation phase and a complete site investigation phase. In this context, the water exchange of the coastal zone is one link in the chain of possible nuclide transport mechanisms that must be assessed in the site description of potential repository areas. For the purpose of validating the pair of nested 3D-models and the coupled discrete basin (CDB-) model employed to simulate the water exchange in the near-shore coastal zone in the Laxemar-Simpevarp area, an encompassing measurement program entailing data from six stations (of which two are close together) has been performed. The design of this program was to first assess to what degree the forcing of the fine-resolution (FR-) model of the Laxemar-Simpevarp study area at its interfacial boundary to the coarse-resolution (CR-) model of the entire Baltic was reproduced. In addition, it is of particular interest how the time-varying density-determining properties, salinity and temperature, at the borders are propagated into the FR-domain and further influence the water exchange with the interior, more secluded, basins. An important part of the validation process has been to carefully evaluate which measurement data can be considered reliable. The result was that some periods of foremost near-surface salinity data had to be discarded due to growth of algae on the conductivity sensors. Interference with ship traffic and lack of absolute calibration of the salinity meters necessitated dismissal of some measurement data too. In this study, so-called Mesan data have been consistently used for the meteorological forcing of the 3D-models. Relative to the assessed data that can be accepted as adequate, the outcome of the

  9. Towards program theory validation: Crowdsourcing the qualitative analysis of participant experiences.

    Science.gov (United States)

    Harman, Elena; Azzam, Tarek

    2018-02-01

    This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants is asked to read an interview transcript, identify whether program theory components (Activities and Outcomes) are discussed, and highlight the most relevant passage about each component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  11. Modelling the work flow of a nuclear waste management program

    Energy Technology Data Exchange (ETDEWEB)

    Hoeyer Mortensen, K. [Aarhus Univ., Computer Science Dept. (Denmark); Pinci, V. [Meta Software Corporation, Cambridge, MA (United States)

    1997-03-01

    In this paper we describe a modelling project to improve a nuclear waste management program in charge of the creation of a new system for the permanent disposal of nuclear waste. SADT (Structural Analysis and Design Technique) is used in order to provide a work-flow description of the functions to be performed by the waste management program. This description is then translated into a number of Coloured Petri Nets (CPN or CP-nets) corresponding to different program functions where additional behavioural inscriptions provide basis for simulation. Each of these CP-nets is simulated to produce timed event charts that are useful for understanding the behaviour of the program functions under different scenarios. Then all the CPN models are linked together to form a single stand-alone application that is useful for validating the interaction and cooperation between the different program functions. A technique for linking executable CPN models is developed for supporting large modelling projects and parallel development of independent CPN models. (au) 11 refs.

  12. Modelling the work flow of a nuclear waste management program

    International Nuclear Information System (INIS)

    Hoeyer Mortensen, K.; Pinci, V.

    1997-03-01

    In this paper we describe a modelling project to improve a nuclear waste management program in charge of the creation of a new system for the permanent disposal of nuclear waste. SADT (Structural Analysis and Design Technique) is used in order to provide a work-flow description of the functions to be performed by the waste management program. This description is then translated into a number of Coloured Petri Nets (CPN or CP-nets) corresponding to different program functions where additional behavioural inscriptions provide basis for simulation. Each of these CP-nets is simulated to produce timed event charts that are useful for understanding the behaviour of the program functions under different scenarios. Then all the CPN models are linked together to form a single stand-alone application that is useful for validating the interaction and cooperation between the different program functions. A technique for linking executable CPN models is developed for supporting large modelling projects and parallel development of independent CPN models. (au) 11 refs

  13. Laboratory research program to aid in developing and testing the validity of conceptual models for flow and transport through unsaturated porous media

    International Nuclear Information System (INIS)

    Glass, R.J.

    1990-01-01

    As part of the Yucca Mountain Project, a laboratory research program is being developed at Sandia National Laboratories that will integrate fundamental physical experimentation with conceptual formulation and mathematical modeling and aid in subsequent model validation for unsaturated zone water and contaminant transport. Experimental systems are being developed to explore flow and transport processes and assumptions of fundamental importance to various conceptual models. Experimentation will run concurrently in two types of systems: fractured and nonfractured tuffaceous systems; and analogue systems having specific characteristics of the tuff systems but designed to maximize experimental control and resolution of data measurement. Questions to which experimentation currently is directed include infiltration flow instability, water and solute movement in unsaturated fractures, fracture-matrix interaction, and the definition of effective large-scale properties for heterogeneous, fractured media. 16 refs

  14. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing the modeling process step by step and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual and calculational models, as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in the open literature for public scrutiny is also emphasized. (24 refs., 6 figs.)

  15. An introduction to use of the USACE HTRW program's data validation guidelines engineering manual

    International Nuclear Information System (INIS)

    Becker, L.D.; Coats, K.H.

    1994-01-01

    Data validation has been defined by regulatory agencies as a systematic process (consisting of data editing, screening, checking, auditing, verification, certification, and review) for comparing data to established criteria in order to provide assurance that data are adequate for their intended use. A problem for the USACE HTRW Program was that clearly defined data validation guidelines were available only for analytical data quality level IV. These functional data validation guidelines were designed for validation of data produced using protocols from the US E.P.A.'s Contract Laboratory Program (CLP). Unfortunately, USACE experience demonstrated that these level IV functional data validation guidelines were being used to validate data not produced under the CLP. The resulting data validation product was less than satisfactory for USACE HTRW needs. Therefore, the HTRW-MCX initiated an Engineering Manual (EM) for validation of analytical data quality levels other than IV. This EM is entitled "USACE HTRW Data Validation Guidelines." Use of the EM is required for validation of analytical data relating to projects under the jurisdiction of the Department of the Army, Corps of Engineers, Hazardous, Toxic, and Radioactive Waste Program. These data validation guidelines include procedures and checklists for technical review of analytical data at quality levels I, II, III, and V.

  16. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  17. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
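
    The paper's specific procedure is in its full text; as a generic sketch of one quantitative performance measure for a logistic regression model, the code below computes a cross-validated AUC with scikit-learn. The synthetic dataset and settings are placeholders, not the management-study data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the study's observations.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

model = LogisticRegression(max_iter=1000)
# 5-fold cross-validated AUC as one quantitative performance measure.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(scores.mean(), scores.std())
```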

  18. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is the quantitative testing of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the testing of the assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  19. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  20. Photovoltaic subsystem marketing and distribution model: programming manual. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1982-07-01

    Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.

  1. Coloured Petri Nets and CPN Tools for Modelling and Validation of Concurrent Systems

    DEFF Research Database (Denmark)

    Jensen, Kurt; Kristensen, Lars Michael; Wells, Lisa Marie

    2007-01-01

    Coloured Petri Nets (CPNs) is a language for the modeling and validation of systems in which concurrency, communication, and synchronisation play a major role. Coloured Petri Nets is a discrete-event modeling language combining Petri Nets with the functional programming language Standard ML. Petr... with user-defined Standard ML functions. A license for CPN Tools can be obtained free of charge, also for commercial use.

  2. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
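
    As a rough sketch of the permutation-testing idea recommended above, the code below estimates a p-value for a model's apparent performance by refitting on shuffled outcome labels and counting how often the null fits match or beat the observed score. The model, metric, and data are placeholders, not the study's LASSO/NTCP pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder dose-feature matrix and binary complication outcomes.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

def fit_auc(X, y):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

observed = fit_auc(X, y)
# Null distribution: AUCs obtained after shuffling the outcomes.
null = [fit_auc(X, rng.permutation(y)) for _ in range(200)]
p_value = float(np.mean([a >= observed for a in null]))
print(observed, p_value)
```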

  3. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  4. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) and with MCSA (Monte-Carlo sensitivity analysis). Finding the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis, carried out in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a building thermal simulation model is presented, studying the behavior of building components in a Test Cell of the LECE of CIEMAT (Spain). (Author) 17 refs
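    The Monte-Carlo sensitivity analysis (MCSA) step might be sketched as follows; the simulation function, parameter names, and ranges are placeholders for illustration, not the CIEMAT test-cell model.

```python
# Monte-Carlo sensitivity analysis sketch: sample the inputs, run the
# model, and rank inputs by their correlation with the output.
import numpy as np

rng = np.random.default_rng(0)

def simulation_model(u_wall, solar_gain, infiltration):
    """Stand-in for a detailed thermal simulation (returns an indoor temp)."""
    return 20.0 - 4.0 * u_wall + 2.5 * solar_gain - 1.0 * infiltration ** 2

n = 5000
samples = {
    "u_wall": rng.uniform(0.2, 2.0, n),        # W/(m2 K)
    "solar_gain": rng.uniform(0.0, 1.0, n),    # dimensionless aperture
    "infiltration": rng.uniform(0.1, 1.5, n),  # air changes per hour
}
output = simulation_model(**samples)

# Inputs with the largest |correlation| dominate the output uncertainty.
for name, values in samples.items():
    r = np.corrcoef(values, output)[0, 1]
    print(f"{name:>12}: r = {r:+.3f}")
```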

  5. Validation of the WATEQ4 geochemical model for uranium

    International Nuclear Information System (INIS)

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite [UO2(OH)2·H2O], UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  6. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  7. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    Science.gov (United States)

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield production and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.
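    As an illustration of the kind of data-driven symbolic model the abstract describes, the sketch below uses the gplearn library's single-gene SymbolicRegressor on synthetic data; the published multi-gene variant combines several evolved trees in a linear model, which gplearn does not provide out of the box, and the features here are placeholders for the real gasification inputs.

```python
# Genetic-programming symbolic regression sketch (single-gene analogue of
# the paper's multi-gene approach), fitted and checked on held-out data.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (300, 3))   # stand-ins for e.g. temperature, ER, moisture
y = 2.0 * X[:, 0] ** 2 - X[:, 1] * X[:, 2] + rng.normal(0.0, 0.05, 300)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X_train, y_train)

print("evolved expression:", gp._program)
print("validation R^2:", gp.score(X_val, y_val))  # generalisation check
```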

  8. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band based, multivariate, sum of squares of residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made.
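    The propagation-of-uncertainty portion can be illustrated with a short Monte Carlo sketch for the damped spring-mass example; the parameter distributions below are assumptions for illustration, not the tutorial's values.

```python
# Monte Carlo propagation of parameter uncertainty through a damped
# spring-mass model: sample (m, c, k), compute the damped natural
# frequency, and summarize the resulting output distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

m = rng.normal(1.0, 0.05, n)    # mass [kg] (assumed distribution)
c = rng.normal(0.3, 0.03, n)    # damping [N*s/m]
k = rng.normal(40.0, 2.0, n)    # stiffness [N/m]

# Damped natural frequency of m x'' + c x' + k x = 0.
wd = np.sqrt(k / m - (c / (2.0 * m)) ** 2)

print(f"mean wd = {wd.mean():.3f} rad/s, std = {wd.std():.3f} rad/s")
lo, hi = np.percentile(wd, [2.5, 97.5])
print(f"95% interval: [{lo:.3f}, {hi:.3f}] rad/s")
```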

  9. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed-methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive online program consists of a user introduction; an interactive assessment of 15 acute and chronic wound photos; user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices; and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm choices and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than comparable, previously conducted paper-and-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  10. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
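    A minimal numerical sketch of the idea, assuming a synthetic ensemble of identified parameter changes in place of the beam-structure data: SVD extracts the principal directions, the ensemble's projections define an interval model, and a validation system is judged by its coordinate distance to that interval.

```python
# SVD-based interval model and coordinate-distance metric, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
scales = np.array([2.0, 1.0, 0.5, 0.2, 0.1, 0.05])

# Rows = identified models, columns = parameter changes from nominal.
P = rng.normal(0.0, 1.0, (50, 6)) * scales

# Principal directions of parameter change via SVD of the centered ensemble.
Pc = P - P.mean(axis=0)
U, s, Vt = np.linalg.svd(Pc, full_matrices=False)
k = 2                                  # keep the dominant principal components
coords = Pc @ Vt[:k].T                 # ensemble coordinates in principal axes
lo, hi = coords.min(axis=0), coords.max(axis=0)  # interval model bounds

# Coordinate vector of a validation system, projected onto the same axes.
p_val = rng.normal(0.0, 1.0, 6) * scales
c_val = (p_val - P.mean(axis=0)) @ Vt[:k].T

# Distance outside the interval (zero if inside) in each principal direction.
dist = np.maximum(0.0, np.maximum(lo - c_val, c_val - hi))
print("coordinate distance:", np.linalg.norm(dist))
```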

  11. Validation of the Positive Parenting Scale (PPS) for evaluating face-to-face and online parenting support programs

    Directory of Open Access Journals (Sweden)

    Arminda Suárez

    2016-11-01

    Following the study presenting the Online Parental Support Scale, as part of the evaluation of the ‘Positive Parent’ online program (http://educarenpositivo.es), this article describes the validation of a new scale that evaluates the principles of positive parenting in users of face-to-face and online parenting support programs. To validate the Positive Parenting Scale (PPS), 323 Spanish and Latin American parents enrolled in the online program participated. To obtain the factor structure, we used exploratory structural equation modeling (ESEM) with oblimin rotation, and for confirmatory purposes we used as the estimation method the Weighted Least Squares Mean and Variance Adjusted estimator with a moving measurement window (WLSMW). We also performed a ROC analysis of rating and continuous diagnostic test results by means of the area under the curve (AUC), and tested it by multivariate analysis of covariance (MANCOVA). The main results showed an optimal factorization of the construct involving a four-factor model with adequate reliability: family involvement, affection and recognition, communication and stress management, and shared activities. Furthermore, the discriminative capacity of the scale was demonstrated across levels of Internet experience and educational use of the Internet. The scale shows adequate psychometric properties, and its content includes the key aspects of the exercise of positive parenting, which is very useful for evaluating the effectiveness of programs based on this approach.

  12. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness, are carried out at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by carrying out validation for a reactor model.

  13. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
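    A minimal sketch of SHACL-based validation with rdflib and pySHACL; the shape and data below are toy stand-ins, not actual CIMI models or ontology design patterns.

```python
# Validate a toy RDF instance against a SHACL shape with pySHACL.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:ObservationShape a sh:NodeShape ;
    sh:targetClass ex:Observation ;
    sh:property [
        sh:path ex:hasCode ;
        sh:minCount 1 ;            # every observation needs a code binding
        sh:datatype xsd:string ;
    ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:obs1 a ex:Observation .         # missing ex:hasCode -> violation
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print("conforms:", conforms)
print(report)
```

    Because SHACL operates under the Closed World Assumption, the missing ex:hasCode property is reported as a violation, which is exactly the behaviour the paper exploits for detecting modeling and binding errors.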

  14. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    Science.gov (United States)

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  15. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs based on the apparent heat capacity method was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based both on numerical and experimental studies. A parametric sensitivity analysis was performed, and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt® coupled to the building simulation code enabled the determination of the set of adequate parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
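    GenOpt couples a generic optimizer to a building simulation code. The sketch below mimics that calibration loop in Python with scipy, fitting two assumed parameters of a toy one-node roof model to synthetic "measurements"; all names and values are illustrative.

```python
# GenOpt-style calibration sketch: adjust model parameters so simulated
# temperatures match (here synthetic) measurements.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 48.0, 97)                        # hours
T_out = 25.0 + 5.0 * np.sin(2.0 * np.pi * t / 24.0)   # outdoor temperature
q_sun = np.clip(np.sin(2.0 * np.pi * (t - 6.0) / 24.0), 0.0, None)  # solar

def simulate(params):
    """One-node roof model: dT/dt = (T_out - T)/tau + g*q_sun (explicit Euler)."""
    tau, g = params
    T = np.empty_like(t)
    T[0] = 26.0
    dt = t[1] - t[0]
    for i in range(1, t.size):
        T[i] = T[i - 1] + dt * ((T_out[i - 1] - T[i - 1]) / tau + g * q_sun[i - 1])
    return T

# Synthetic "measurements" from known parameters plus sensor noise.
measured = simulate((8.0, 1.5)) + np.random.default_rng(0).normal(0.0, 0.05, t.size)

def cost(params):
    return np.mean((simulate(params) - measured) ** 2)

result = minimize(cost, x0=(4.0, 0.5), method="Nelder-Mead")
print("identified (tau, g):", result.x)
```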

  16. Validation of coastal oceanographic models at Forsmark. Site descriptive modelling SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, Anders (A och I Engqvist Konsult HB, Vaxholm (SE)); Andrejev, Oleg (Finnish Inst. of Marine Research, Helsinki (FI))

    2008-01-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is undertaking site characterisation at two different locations, the Forsmark and the Simpevarp areas, with the objective of siting a geological repository for spent nuclear fuel. The characterisation work is divided into an initial site investigation phase and a complete site investigation phase. In this context, the water exchange of the coastal zone is one link of the chain of possible nuclide transport mechanisms that must be assessed in the site description of potential repository areas. For the purpose of validating the pair of nested 3D-models employed to simulate the water exchange in the near-shore coastal zone in the Forsmark area, an encompassing measurement program entailing six stations has been performed. The design of this program was to first assess to what degree the forcing of the fine resolution (FR) model of the Forsmark study area at its interfacial boundary to the coarse resolution (CR) model of the entire Baltic was reproduced. In addition to this scrutiny, it is of particular interest how the time-varying density-determining properties, salinity and temperature, at the borders are propagated into the FR-domain, since this corresponds to the most efficient mode of water exchange. An important part of the validation process has been to carefully evaluate which measurement data can be considered reliable. The result was that several periods of foremost near-surface salinity data had to be discarded due to growth of algae on the conductivity sensors. Lack of thorough absolute calibration of the salinity meters also necessitates dismissal of measurement data. Relative to the assessed data that can be accepted as adequate, the outcome of the validation can be summarized in five points: (i) The surface-most salinity of the CR-model drifts downward a little less than one practical salinity unit (psu) per year, requiring that the ensuing correlation analysis be subdivided into periods of a

  17. Validation of coastal oceanographic models at Forsmark. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Engqvist, Anders; Andrejev, Oleg

    2008-01-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is undertaking site characterisation at two different locations, the Forsmark and the Simpevarp areas, with the objective of siting a geological repository for spent nuclear fuel. The characterisation work is divided into an initial site investigation phase and a complete site investigation phase. In this context, the water exchange of the coastal zone is one link of the chain of possible nuclide transport mechanisms that must be assessed in the site description of potential repository areas. For the purpose of validating the pair of nested 3D-models employed to simulate the water exchange in the near-shore coastal zone in the Forsmark area, an encompassing measurement program entailing six stations has been performed. The design of this program was to first assess to what degree the forcing of the fine resolution (FR) model of the Forsmark study area at its interfacial boundary to the coarse resolution (CR) model of the entire Baltic was reproduced. In addition to this scrutiny, it is of particular interest how the time-varying density-determining properties, salinity and temperature, at the borders are propagated into the FR-domain, since this corresponds to the most efficient mode of water exchange. An important part of the validation process has been to carefully evaluate which measurement data can be considered reliable. The result was that several periods of foremost near-surface salinity data had to be discarded due to growth of algae on the conductivity sensors. Lack of thorough absolute calibration of the salinity meters also necessitates dismissal of measurement data. Relative to the assessed data that can be accepted as adequate, the outcome of the validation can be summarized in five points: (i) The surface-most salinity of the CR-model drifts downward a little less than one practical salinity unit (psu) per year, requiring that the ensuing correlation analysis be subdivided into periods of a

  18. Development and validation of the computer program TNHXY

    International Nuclear Information System (INIS)

    Xolocostli M, V.; Valle G, E. del; Alonso V, G.

    2003-01-01

    This work describes the development and validation of the computer program TNHXY (Neutron Transport with Nodal Hybrid schemes in X-Y geometry), which solves the discrete-ordinates neutron transport equations using a discontinuous Bi-Linear (DBiL) nodal hybrid method. One of the immediate applications of TNHXY is the analysis of nuclear fuel assemblies, in particular those of BWRs. Its validation was carried out by reproducing results for test or benchmark problems that other authors have solved using other numerical techniques, which ensures that the program provides results of similar accuracy for other problems of the same type. To this end, two benchmark problems were solved. The first consists of a BWR fuel assembly in a 7x7 array, without and with a control rod. The results obtained with TNHXY are consistent with those reported for the TWOTRAN code. The second benchmark problem is a Mixed Oxide (MOX) fuel assembly in a 10x10 array. This problem is known as the WPPR benchmark problem of the NEA Data Bank, and the results are compared with those obtained with commercial codes such as HELIOS, MCNP-4B and CPM-3. (Author)

  19. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves estimating a low probability (low concentration) of radionuclide transport extrapolated thousands of years into the future. Thus, the models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation with regard to radionuclide transport. 5 refs

  20. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  1. Determination and validation of mTOR kinase-domain 3D structure by homology modeling

    Directory of Open Access Journals (Sweden)

    Lakhlili W

    2015-07-01

    The AKT/mammalian target of rapamycin (mTOR) pathway is considered one of the commonly activated and deregulated signaling pathways in human cancer. mTOR is associated with other proteins in two molecular complexes: mTOR complex 1/Raptor and mTOR complex 2/Rictor. Using the crystal structure of the related lipid kinase PI3Kγ, we built a model of the catalytic region of mTOR. The modeling of the three-dimensional (3D) structure of mTOR was performed with the homology modeling program SWISS-MODEL. The quality and validation of the obtained model were assessed using the PROCHECK and PROVE software. The overall stereochemical property of the protein was assessed by the Ramachandran plot. The model was also validated by docking of known inhibitors. In this paper, we describe and validate a 3D model for the mTOR catalytic site. Keywords: mTOR, homology modeling, mTOR kinase-domain, docking

  2. Impact of Cross-Axis Structural Dynamics on Validation of Linear Models for Space Launch System

    Science.gov (United States)

    Pei, Jing; Derry, Stephen D.; Zhou, Zhiqiang; Newsom, Jerry R.

    2014-01-01

    A feasibility study was performed to examine the advisability of incorporating a set of Programmed Test Inputs (PTIs) during the Space Launch System (SLS) vehicle flight. The intent of these inputs is to provide validation of the preflight models for control system stability margins, aerodynamics, and structural dynamics. In October 2009, the Ares I-X program successfully carried out a series of PTI maneuvers which provided a significant amount of valuable data for post-flight analysis. The resulting data comparisons showed excellent agreement with the preflight linear models across the frequency spectrum of interest. However, unlike Ares I-X, the structural dynamics associated with the SLS boost-phase configuration are far more complex and highly coupled in all three axes. This presents a challenge when applying this system identification technique to SLS. Preliminary simulation results show noticeable mismatches between PTI validation and analytical linear models in the frequency range of the structural dynamics. An alternate approach was examined which demonstrates the potential for better overall characterization of the system frequency response as well as robustness of the control design.

  3. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available threeprocessesfortransformingtheinformationbetweentheentities. Reality/ Problem Entity Conceptual Model Computerized Model Model Validation ModelVerification Model Qualification Computer Implementation Analysisand Modelling Simulationand Experimentation “Substantiationthata....C.Refsgaard ,ModellingGuidelines-terminology andguidingprinciples, AdvancesinWaterResources, Vol27,No1,January2004,?pp.71-82(12),Elsevier. et.al. [5]N.Oreskes,et.al.,Verification,Validation,andConfirmationof NumericalModelsintheEarthSciences,Science,Vol263, Number...

  4. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation

    DEFF Research Database (Denmark)

    Wendt, Fabian F.; Yu, Yi-Hsiang; Nielsen, Kim

    2017-01-01

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001 and Task 10 ...

  5. Validation of Ulchin Units 1, 2 CONTEMPT Model Prior to the Production of EQ Envelope Curve

    International Nuclear Information System (INIS)

    Hwang, Su Hyun; Kim, Min Ki; Hong, Soon Joon; Lee, Byung Chul; Suh, Jeong Kwan; Lee, Jae Yong; Song, Dong Soo

    2010-01-01

    The Ulchin Units 1, 2 will be refurbished with RSG (Replacement of Steam Generator) and PU (Power Uprate). The current EQ (Environmental Qualification) envelope curve should be modified according to the RSG and PU. The containment P/T (Pressure/Temperature) analysis in the Ulchin Units 1, 2 FSAR was done using the EDF computer program PAREO6. PAREO6 uses the same assumptions as the US NRC CONTEMPT program, and the results given by both programs are in good agreement. It is utilized to determine pressure and temperature variations in a PWR containment subsequent to a reactor coolant or secondary system pipe break. However, PAREO6 is not available for the production of the EQ envelope curve, so the CONTEMPT code should be used instead. It is essential to validate the CONTEMPT OSG (Original Steam Generator) model prior to the production of the EQ envelope curve considering RSG and PU. This study has been performed to validate the CONTEMPT model of Ulchin Units 1, 2 by comparing the CONTEMPT results with the PAREO6 results in the Ulchin Units 1, 2 FSAR.

  6. Validation of Ulchin Units 1, 2 CONTEMPT Model Prior to the Production of EQ Envelope Curve

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Su Hyun; Kim, Min Ki; Hong, Soon Joon; Lee, Byung Chul [FNC Technology Co., SNU, Seoul (Korea, Republic of); Suh, Jeong Kwan; Lee, Jae Yong; Song, Dong Soo [KEPCO Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    The Ulchin Units 1, 2 will be refurbished with RSG (Replacement of Steam Generator) and PU (Power Uprate). The current EQ (Environmental Qualification) envelope curve should be modified according to the RSG and PU. The containment P/T (Pressure/Temperature) analysis in the Ulchin Units 1, 2 FSAR was done using the EDF computer program PAREO6. PAREO6 uses the same assumptions as the US NRC CONTEMPT program, and the results given by both programs are in good agreement. It is utilized to determine pressure and temperature variations in a PWR containment subsequent to a reactor coolant or secondary system pipe break. However, PAREO6 is not available for the production of the EQ envelope curve, so the CONTEMPT code should be used instead. It is essential to validate the CONTEMPT OSG (Original Steam Generator) model prior to the production of the EQ envelope curve considering RSG and PU. This study has been performed to validate the CONTEMPT model of Ulchin Units 1, 2 by comparing the CONTEMPT results with the PAREO6 results in the Ulchin Units 1, 2 FSAR.

  7. VALIDATION OF FULL CORE GEOMETRY MODEL OF THE NODAL3 CODE IN THE PWR TRANSIENT BENCHMARK PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2015-10-01

    The coupled neutronic and thermal-hydraulic (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code had not yet been validated against the transient benchmark cases of a control rod (CR) assembly ejection at the core periphery using a full core geometry model, the C1 and C2 cases. This research work validates the accuracy of the NODAL3 code for one CR ejection, or for the ejection of an unsymmetrical group of CRs. The calculations with the NODAL3 code were carried out using the adiabatic method (AM) and the improved quasi-static method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results of the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power with the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results of the NODAL3 code show no systematic differences, indicating that the neutronic and T/H modules adopted in the code are correct. Therefore, all calculation results obtained with the NODAL3 code are in very good agreement with the reference results. Keywords: nodal method, coupled neutronic and thermal-hydraulic code, PWR, transient case, control rod ejection.

  8. Alteration of 'R7T7' type nuclear glasses: statistical approach, experimental validation, local evolution model

    International Nuclear Information System (INIS)

    Thierry, F.

    2003-02-01

    The aim of this work is to propose an evolution of the modeling of nuclear (R7T7-type) glass alteration. The first part of this thesis concerns the development and validation of the 'r(t)' model. This model, which predicts the decrease of alteration rates in confined conditions, is based upon a coupling between a first-order dissolution law and a diffusion-barrier effect of the alteration gel layer. The values and uncertainties of the main adjustable parameters of the model (α, Dg and C*) have been determined from a systematic study of the available experimental data; a program called INVERSION has been written for this purpose. This work led to characterizing the validity domain of the 'r(t)' model and to parametrizing it. Validation experiments have been undertaken, confirming the validity of the parametrization over 200 days. A new model is proposed in the second part of this thesis. It is based on an inhibition of the glass dissolution reaction by silicon, coupled with a local description of silicon retention in the alteration gel layer. This model predicts the evolution of boron and silicon concentrations in solution as well as the concentration and retention profiles in the gel layer. These predictions have been compared to measurements of retention profiles by the secondary ion mass spectrometry (SIMS) method. The model has been validated on fractions of the gel layer whose reactivity presents low or moderate disparities. (author)

  9. Programming Models in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-13

    These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
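    As a concrete taste of one of the listed models, here is a minimal SPMD sketch using mpi4py (MPI): every rank runs the same program on its own slice of the data, and the partial results are reduced with message passing. The script name and problem size are arbitrary.

```python
# SPMD sum with MPI: run with e.g.  mpirun -n 4 python spmd_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank computes a partial sum over its strided slice of the problem.
n = 1_000_000
local = np.arange(rank, n, size, dtype=np.float64).sum()

# Reduce the partial results to rank 0 (message passing under the hood).
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total:.0f}")
```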

  10. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  11. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

    In a German-funded project, Ruhr-Universitaet Bochum performed validation of the in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the QUENCH experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data.

  12. Validation of Linguistic and Communicative Oral Language Tests for Spanish-English Bilingual Programs.

    Science.gov (United States)

    Politzer, Robert L.; And Others

    1983-01-01

    The development, administration, and scoring of a communicative test and its validation with tests of linguistic and sociolinguistic competence in English and Spanish are reported. Correlation with measures of home language use and school achievement are also presented, and issues of test validation for bilingual programs are discussed. (MSE)

  13. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
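    The proposed test can be sketched as follows, with a stochastic logistic birth-death process standing in for the individual-based vector model and its deterministic equilibrium (the carrying capacity) standing in for the analytical approximation; the approximation error is then expressed in units of the stochastic model's own standard deviation. All rates and sizes are illustrative.

```python
# Compare an analytical approximation with the original stochastic model,
# using the stochastic variability as the reference scale for the error.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_logistic(b=1.0, d=0.1, K=100, n0=10, t_end=15.0):
    """Exact stochastic simulation of logistic birth-death; final population."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        birth = b * n
        death = d * n + (b - d) * n * n / K   # density-dependent mortality
        rate = birth + death
        t += rng.exponential(1.0 / rate)
        n += 1 if rng.random() < birth / rate else -1
    return n

runs = np.array([gillespie_logistic() for _ in range(500)])

approx = 100.0   # deterministic logistic equilibrium (carrying capacity K)
error = abs(runs.mean() - approx)
print(f"stochastic mean {runs.mean():.1f}, sd {runs.std():.1f}")
print(f"approximation error {error:.2f} ({error / runs.std():.2f} stochastic SDs)")
```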

  14. The OECD validation program of the H295R steroidogenesis assay: Phase 3. Final inter-laboratory validation study

    DEFF Research Database (Denmark)

    Hecker, Markus; Hollert, Henner; Cooper, Ralph

    2011-01-01

    In response to increasing concerns regarding the potential of chemicals to interact with the endocrine system of humans and wildlife, various national and international programs have been initiated with the aim to develop new guidelines for the screening and testing of these chemicals in vertebrates. Here, we report on the validation of an in vitro assay, the H295R steroidogenesis assay, to detect chemicals with the potential to inhibit or induce the production of the sex steroid hormones testosterone (T) and 17β-estradiol (E2), in preparation for the development of an Organization for Economic Cooperation and Development (OECD) test guideline. A previously optimized and pre-validated protocol was used to assess the potential of 28 chemicals of diverse structures and properties to validate the H295R steroidogenesis assay. These chemicals are comprised of known endocrine-active chemicals...

  15. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  16. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made

  17. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  18. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  19. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
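    A minimal numerical sketch of the likelihood-ratio validation metric under Gaussian assumptions; the prediction, uncertainty, measurements, and unit threshold below are illustrative, not taken from the paper's tension-bar or engine-blade examples.

```python
# Likelihood-ratio validation metric: compare the likelihood of the data
# under the model prediction (H0) against an alternative centered on the
# data, then accept/reject via a decision threshold.
import numpy as np
from scipy.stats import norm

y_pred, sigma = 10.0, 0.5                   # model prediction and uncertainty
y_obs = np.array([10.3, 9.8, 10.6, 10.1])   # validation measurements

# Likelihood under H0 (model correct) vs H1 (mean at the data mean).
l0 = norm.pdf(y_obs, loc=y_pred, scale=sigma).prod()
l1 = norm.pdf(y_obs, loc=y_obs.mean(), scale=sigma).prod()
ratio = l0 / l1

# The threshold follows from the decision costs and priors; with equal
# costs and priors it is 1.
threshold = 1.0
print(f"likelihood ratio = {ratio:.3f} -> "
      f"{'accept' if ratio > threshold else 'reject'} the model")
```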

  20. Stakeholder validation of a model of readiness for transition to adult care.

    Science.gov (United States)

    Schwartz, Lisa A; Brumley, Lauren D; Tuchman, Lisa K; Barakat, Lamia P; Hobbie, Wendy L; Ginsberg, Jill P; Daniel, Lauren C; Kazak, Anne E; Bevans, Katherine; Deatrick, Janet A

    2013-10-01

    That too few youth with special health care needs make the transition to adult-oriented health care successfully may be due, in part, to lack of readiness to transfer care. There is a lack of theoretical models to guide development and implementation of evidence-based guidelines, assessments, and interventions to improve transition readiness. To further validate the Social-ecological Model of Adolescent and Young Adult Readiness to Transition (SMART) via feedback from stakeholders (patients, parents, and providers) from a medically diverse population in need of life-long follow-up care, survivors of childhood cancer. Mixed-methods participatory research design. A large Mid-Atlantic children's hospital. Adolescent and young adult survivors of childhood cancer (n = 14), parents (n = 18), and pediatric providers (n = 10). Patients and parents participated in focus groups; providers participated in individual semi-structured interviews. Validity of SMART was assessed 3 ways: (1) ratings on importance of SMART components for transition readiness using a 5-point scale (0-4; ratings >2 support validity), (2) nominations of 3 "most important" components, and (3) directed content analysis of focus group/interview transcripts. Qualitative data supported the validity of SMART, with minor modifications to definitions of components. Quantitative ratings met criteria for validity; stakeholders endorsed all components of SMART as important for transition. No additional SMART variables were suggested by stakeholders and the "most important" components varied by stakeholders, thus supporting the comprehensiveness of SMART and need to involve multiple perspectives. SMART represents a comprehensive and empirically validated framework for transition research and program planning, supported by survivors of childhood cancer, parents, and pediatric providers. Future research should validate SMART among other populations with special health care needs.

  1. Alteration of 'R7T7' type nuclear glasses: statistical approach, experimental validation, local evolution model; Alteration des verres nucleaires de type 'R7T7': demarche statistique, validation experimentale, modele local d'evolution

    Energy Technology Data Exchange (ETDEWEB)

    Thierry, F

    2003-02-01

    The aim of this work is to propose an evolution of the modeling of nuclear (R7T7-type) glass alteration. The first part of this thesis concerns the development and validation of the 'r(t)' model. This model, which predicts the decrease of alteration rates in confined conditions, is based upon a coupling between a first-order dissolution law and a diffusion-barrier effect of the alteration gel layer. The values and uncertainties of the main adjustable parameters of the model (α, Dg and C*) have been determined from a systematic study of the available experimental data; a program called INVERSION has been written for this purpose. This work led to characterizing the validity domain of the 'r(t)' model and to parametrizing it. Validation experiments have been undertaken, confirming the validity of the parametrization over 200 days. A new model is proposed in the second part of this thesis. It is based on an inhibition of the glass dissolution reaction by silicon, coupled with a local description of silicon retention in the alteration gel layer. This model predicts the evolution of boron and silicon concentrations in solution as well as the concentration and retention profiles in the gel layer. These predictions have been compared to measurements of retention profiles by the secondary ion mass spectrometry (SIMS) method. The model has been validated on fractions of the gel layer whose reactivity presents low or moderate disparities. (author)

  2. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require the use of a validated model to practice independently. Validation was done to adapt the model and to assess whether it is understood and can be implemented by NQPNs and mentors employed in community health care services.

  3. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into the scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  4. Verification and Validation of a Three-Dimensional Orthotropic Plasticity Constitutive Model Using a Unidirectional Composite

    Directory of Open Access Journals (Sweden)

    Canio Hoffarth

    2017-03-01

    A three-dimensional constitutive model has been developed for modeling orthotropic composites subject to impact loads. It has three distinct components: a deformation model involving elastic and plastic deformations, a damage model, and a failure model. The model is driven by tabular data that are generated either using laboratory tests or via virtual testing. A unidirectional composite, T800/F3900, commonly used in the aerospace industry, is used in the verification and validation tests. While the failure model is under development, these tests indicate that the implementation of the deformation and damage models in a commercial finite element program, LS-DYNA, is efficient, robust and accurate.

  5. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    [PDF extraction residue; recoverable fragments:] CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA, paper DETC2015-46982 [DRAFT], "Development of a Conservative Model Validation Approach for Reliable..." ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... Section 3: the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  6. The generalized algebraic modal combination (GAC) rule validation program

    International Nuclear Information System (INIS)

    Mertens, P.G.; Culot, M.V.; Sahgal, S.; Tinic, S.

    1991-01-01

    With R.G. 1.92, the NRC requires the use of the absolute values of the modal responses when performing Response Spectra modal combination with coupling factors derived from the current heuristic, stationary or pseudo-stationary random vibration models. This results in overly conservative calculations in the case of closely spaced modes of opposite signs, a case frequently encountered in dynamic analyses, in particular when systems with close modal frequencies have a small mass ratio. A new generalised algebraic combination (GAC) formula and its associated coupling factor have been theoretically derived by the first author. It is based on a non-stationary, non-white-noise random vibration model which fully accounts for all the time- and frequency-dependent aspects of the time histories. This should allow the conservative use of algebraic signs in the modal combination over the whole frequency range, and allow a derogation from the current NRC R.G. 1.92 practice of using absolute signs. The use of the industry-wide accepted RS method with the GAC rule will result in more economical and safer NPPs through the reduction of an excessive and unrealistic number of seismic restraints and the avoidance of prematurely fatigued plants. It is envisaged to use the GAC seismic response combination method for the evaluation of the seismic response of auxiliary class one lines attached to the primary coolant loop piping of the Beznau 1 and 2 nuclear power plants. Since the plant is in operation, it is imperative to use a methodology which is conservative but still as realistic as possible. The paper presents an introduction to the GAC rule and some aspects of the validation program, which will be jointly undertaken by WESI and NOK to obtain acceptance by the Swiss Safety Authorities for a seismic qualification program. (author)

  7. Validation of the kinetic model for predicting the composition of chlorinated water discharged from power plant cooling systems

    International Nuclear Information System (INIS)

    Lietzke, M.H.

    1977-01-01

    The purpose of this report is to present a validation of a previously described kinetic model which was developed to predict the composition of chlorinated fresh water discharged from power plant cooling systems. The model was programmed in two versions: as a stand-alone program and as part of a unified transport model developed from consistent mathematical models to simulate the dispersion of heated water and of radioisotopic and chemical effluents from power plant discharges. The results of testing the model using analytical data taken during operation of the once-through cooling system of the Quad Cities Nuclear Station are described. Calculations are also presented for the Three Mile Island Nuclear Station, which uses cooling towers.

  8. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to a lack of available algorithms and software. Model validation is often based on a naive ... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations ... useful directions in which the model could be improved.
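
    The validation device alluded to here is the one-step-ahead prediction residual: under a correctly specified model, standardized one-step residuals are independent standard normals. A minimal sketch for a scalar linear-Gaussian state space model follows; it is a deliberately simple stand-in for the TMB/Laplace-approximation machinery, and all parameter values are invented.

      import numpy as np

      # One-step-ahead residuals for x_t = a*x_{t-1} + e_t, y_t = x_t + v_t,
      # computed with a Kalman filter; q and r are the e and v variances.
      def one_step_residuals(y, a, q, r):
          x, p = 0.0, q / (1 - a**2)          # stationary initial state
          u = np.empty_like(y)
          for t, yt in enumerate(y):
              xp, pp = a * x, a**2 * p + q    # predict
              s = pp + r                      # innovation variance
              u[t] = (yt - xp) / np.sqrt(s)   # standardized residual
              k = pp / s
              x, p = xp + k * (yt - xp), (1 - k) * pp
          return u                            # ~ iid N(0, 1) if the model is right

      rng = np.random.default_rng(1)
      x, y = 0.0, np.empty(500)
      for t in range(500):                    # simulate from the true model
          x = 0.8 * x + rng.normal(0, 1.0)
          y[t] = x + rng.normal(0, 0.5)
      u = one_step_residuals(y, a=0.8, q=1.0, r=0.25)
      print(u.mean(), u.std())                # near 0 and 1 under a correct model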

  9. Development, validation, and utility of an instrument to assess core competencies in the Leadership Education in Neurodevelopmental and Related Disabilities (LEND) program.

    Science.gov (United States)

    Leff, Stephen S; Baum, Katherine T; Bevans, Katherine B; Blum, Nathan J

    2015-02-01

    To describe the development and psychometric evaluation of the Core Competency Measure (CCM), an instrument designed to assess professional competencies as defined by the Maternal Child Health Bureau (MCHB) and targeted by Leadership Education in Neurodevelopmental and Related Disabilities (LEND) programs. The CCM is a 44-item self-report measure composed of six subscales assessing clinical, interdisciplinary, family-centered/cultural, community, research, and advocacy/policy competencies. The CCM was developed in an iterative fashion through participatory action research, and then nine cohorts of LEND trainees (N = 144) from 14 different disciplines completed the CCM during the first week of the training program. A 6-factor confirmatory factor analysis model was fit to data from the 44 original items. After three items were removed, the model adequately fit the data (comparative fit index = .93, root mean square error of approximation = .06) with all factor loadings exceeding .55. The measure was determined to be quite reliable, as adequate internal consistency and test-retest reliability were found for each subscale. The instrument's construct validity was supported by expected differences in self-rated competencies among fellows representing various disciplines, and the convergent validity was supported by the pattern of inter-correlations between subscale scores. The CCM appears to be a reliable and valid measure of MCHB core competencies for our sample of LEND trainees. It provides an assessment of key training areas addressed by the LEND program. Although the measure was developed within only one LEND program, with additional research it has the potential to serve as a standardized tool to evaluate the strengths and limitations of MCHB training, both within and between programs.

  10. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method together with cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  11. Derivation and external validation of a case mix model for the standardized reporting of 30-day stroke mortality rates.

    Science.gov (United States)

    Bray, Benjamin D; Campbell, James; Cloud, Geoffrey C; Hoffman, Alex; James, Martin; Tyrrell, Pippa J; Wolfe, Charles D A; Rudd, Anthony G

    2014-11-01

    Case mix adjustment is required to allow valid comparison of outcomes across care providers. However, there is a lack of externally validated models suitable for use in unselected stroke admissions. We therefore aimed to develop and externally validate prediction models to enable comparison of 30-day post-stroke mortality outcomes using routine clinical data. Models were derived (n=9000 patients) and internally validated (n=18,169 patients) using data from the Sentinel Stroke National Audit Program, the national register of acute stroke in England and Wales. External validation (n=1470 patients) was performed in the South London Stroke Register, a population-based longitudinal study. Models were fitted using general estimating equations. Discrimination and calibration were assessed using receiver operating characteristic curve analysis and correlation plots. Two final models were derived. Model A included age (<60, 60-69, 70-79, 80-89, and ≥90 years), National Institutes of Health Stroke Scale (NIHSS) score on admission, presence of atrial fibrillation on admission, and stroke type (ischemic versus primary intracerebral hemorrhage). Model B was similar but included only the consciousness component of the NIHSS in place of the full NIHSS. Both models showed excellent discrimination and calibration in internal and external validation. The c-statistics in external validation were 0.87 (95% confidence interval, 0.84-0.89) and 0.86 (95% confidence interval, 0.83-0.89) for models A and B, respectively. We have derived and externally validated 2 models to predict mortality in unselected patients with acute stroke using commonly collected clinical variables. In settings where the ability to record the full NIHSS on admission is limited, the level of consciousness component of the NIHSS provides a good approximation of the full NIHSS for mortality prediction. © 2014 American Heart Association, Inc.
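
    The external-validation workflow described here (fit on a derivation cohort, then check discrimination and calibration elsewhere) is easy to sketch. The snippet below is a toy version with simulated cohorts and plain logistic regression rather than the paper's general estimating equations; the predictor set and coefficients merely mimic the variables named above and are entirely invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)

      # Hypothetical stand-ins for the registry variables: age band, NIHSS,
      # atrial fibrillation and stroke type predicting 30-day death.
      def make_cohort(n):
          X = np.column_stack([
              rng.integers(0, 5, n),      # age band (<60 ... >=90)
              rng.integers(0, 43, n),     # NIHSS on admission
              rng.integers(0, 2, n),      # atrial fibrillation
              rng.integers(0, 2, n),      # stroke type
          ])
          logit = -5 + 0.5*X[:, 0] + 0.12*X[:, 1] + 0.4*X[:, 2] + 0.8*X[:, 3]
          return X, (rng.random(n) < 1/(1 + np.exp(-logit))).astype(int)

      X_dev, y_dev = make_cohort(9000)    # derivation cohort
      X_ext, y_ext = make_cohort(1470)    # external validation cohort

      model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
      p_ext = model.predict_proba(X_ext)[:, 1]
      print("external c-statistic:", roc_auc_score(y_ext, p_ext))

      # Calibration: mean predicted vs observed risk per risk decile.
      edges = np.quantile(p_ext, np.linspace(0.1, 0.9, 9))
      for d in range(10):
          m = np.digitize(p_ext, edges) == d
          print(d, round(p_ext[m].mean(), 3), round(y_ext[m].mean(), 3))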

  12. Software verification, model validation, and hydrogeologic modelling aspects in nuclear waste disposal system simulations. A paradigm shift

    International Nuclear Information System (INIS)

    Sheng, G.M.

    1994-01-01

    This work reviewed the current concept of nuclear waste disposal in stable, terrestrial geologic media with a system of natural and man-made multi-barriers. Various aspects of this concept and the supporting research were examined, with the emphasis on the Canadian Nuclear Fuel Waste Management Program. Several of the crucial issues and challenges facing the current concept were discussed. These include: the difficulties inherent in a concept that centres around lithologic studies; the unsatisfactory state of software quality assurance in the present computer simulation programs; and the lack of a standardized, comprehensive, and systematic procedure to carry out a rigorous process of model validation and assessment of simulation studies. An outline of such an approach was presented and some of the principles, tools and techniques for software verification were introduced and described. A case study involving an evaluation of the Canadian performance assessment computer program is presented. A new paradigm for nuclear waste disposal was advocated to address the challenges facing the existing concept. The RRC (Regional Recharge Concept) was introduced and its many advantages were described and shown through a modelling exercise. (orig./HP)

  13. Validation experiments of the chimney model for the operational simulation of hydrogen recombiners

    International Nuclear Information System (INIS)

    Simon, Berno

    2013-01-01

    The calculation program REKO-DIREKT allows the simulation of the operational behavior of a hydrogen recombiner during accidents with hydrogen release. The interest focuses on the interaction between the catalyst insert and the chimney, which significantly influences the natural convection and thus the throughput through the recombiner. For validation, experiments were performed with a small-scale recombiner model in the test facility REKO-4. The results show the correlation between the hydrogen concentration at the recombiner entrance, the temperature of the catalyst sheets and the entrance velocity for different chimney heights. The entrance velocity increases with the height of the installed chimney, which significantly influences the natural convection. The results provide a broad database for the validation of the computer code REKO-DIREKT.
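
    The reported trend (entrance velocity rising with chimney height) is what a textbook stack-effect estimate predicts. The sketch below computes that estimate, ignoring friction and the flow resistance of the catalyst section; it is not the REKO-DIREKT model, and all temperatures and heights are invented.

      import numpy as np

      # Stack-effect estimate of recombiner entrance velocity vs chimney
      # height: buoyancy head dp = (rho_cold - rho_hot) * g * H converted
      # to a velocity via Bernoulli, losses ignored.
      g = 9.81
      def rho(T_kelvin, p=1.0e5, R=287.0):
          return p / (R * T_kelvin)           # ideal gas, air-like mixture

      T_in, T_out = 300.0, 600.0              # gas heated at the catalyst sheets
      for H in (0.5, 1.0, 2.0):               # chimney height in metres
          dp = (rho(T_in) - rho(T_out)) * g * H
          v = np.sqrt(2 * dp / rho(T_in))     # entrance (cold side) velocity
          print(f"H = {H:3.1f} m -> v = {v:4.2f} m/s")  # grows with sqrt(H)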

  14. Evaluating Domestic Hot Water Distribution System Options with Validated Analysis Models

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, E. [Alliance for Residential Building Innovation, Davis, CA (United States); Hoeschele, E. [Alliance for Residential Building Innovation, Davis, CA (United States)

    2014-09-01

    A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system model, developed in the Transient System Simulation Tool (TRNSYS), has been validated using field monitoring data and then exercised in a number of climates to understand the climate impact on performance. In this study, the Building America team built upon previous analysis modeling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations in climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole-house simulation programs.

  15. Applicability of U.S. Army tracer test data to model validation needs of ERDA

    International Nuclear Information System (INIS)

    Shearer, D.L.; Minott, D.H.

    1976-06-01

    This report covers the first phase of an atmospheric dispersion model validation project sponsored by the Energy Research and Development Administration (ERDA). The project will employ dispersion data generated during an extensive series of field tracer experiments that were part of a meteorological research program conducted by the U.S. Army Dugway Proving Ground, Utah, from the late 1950s to the early 1970s. The tests were conducted at several locations in the U.S., South America, Germany, and Norway chosen to typify the effects of certain environmental factors on atmospheric dispersion. The purpose of the Phase I work of this project was to identify applicable portions of the Army data, obtain and review that data, and make recommendations for its use in atmospheric dispersion model validation. This report presents key information in three formats. The first is a tabular listing of the Army dispersion test reports summarizing the test data contained in each report. This listing is presented in six separate tables, with each tabular list representing a different topical area based on model validation requirements and the nature of the Army data base. The second format for presenting key information is a series of discussions of the Army test information assigned to each of the six topical areas. These discussions relate the extent and quality of the available data, as well as its prospective use for model validation. The third format is a series of synopses for each Army test report.

  16. Validation of new 3-D neutronics model in APROS for hexagonal geometry

    International Nuclear Information System (INIS)

    Rintala, J.

    2010-01-01

    APROS (Advanced PROcess Simulation environment) is a widely used simulation tool for nuclear power plant modelling. Earlier, the three-dimensional neutronics calculation was based on a model using the finite difference method. The original three-dimensional core model is mainly used in power plant simulator applications, where it fits well because of its speed. For safety analysis purposes, however, a new model was considered an important improvement to meet the accuracy requirements. A sophisticated nodal model already used in HEXTRAN and TRAB-3D was therefore implemented into APROS. The hexagonal part of the model has now been implemented and tested. For practical reasons, the model was programmed from scratch in APROS and some small improvements were added; thus, an extensive validation program was necessary to prove the correct behaviour of the model. In this paper, the most important results from the AER kinetic benchmarks 2 and 3 calculations are shown, as well as the calculation results against data obtained from the LR-0 test reactor space-time kinetic experiments. Since the model is similar to the one in HEXTRAN, the benchmark results are compared to results obtained with it. In the LR-0 calculations, results from both the original and the new model are presented and compared to the measurements. The results show that the implementation of the model has been successful and that the new model improves the accuracy of the three-dimensional neutronics calculation in APROS to the level required in safety analyses. (Author)

  17. Social Validity of a Positive Behavior Interventions and Support Model

    Science.gov (United States)

    Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane

    2011-01-01

    As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were…

  18. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    Science.gov (United States)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time series at stations, vertical profiles, surface fields or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process, ranging from read-in procedures for datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. the read-in procedures, forms a module in which all available functions of this particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks; new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned to each validation task in user-specific settings, which are stored externally in so-called namelists and gather all information about the used datasets as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre. Hereby the performance of any new product
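
    The architecture described (registries of interchangeable functions driven by external namelists) translates naturally into code. Below is a minimal Python sketch of the same idea; all names, the CSV format and the dataset path are invented, and the original toolbox is MATLAB, not Python.

      import numpy as np

      # Read-in and metric functions live in module-level registries; a
      # "namelist" (here a plain dict) selects them, and one call runs
      # the whole validation schedule.
      READERS, METRICS = {}, {}

      def reader(name):
          def deco(f): READERS[name] = f; return f
          return deco

      def metric(name):
          def deco(f): METRICS[name] = f; return f
          return deco

      @reader("station_timeseries")
      def read_station(path):
          return np.loadtxt(path, delimiter=",")   # columns: model, obs

      @metric("bias")
      def bias(m, o): return float(np.mean(m - o))

      @metric("rmse")
      def rmse(m, o): return float(np.sqrt(np.mean((m - o) ** 2)))

      def validate(namelist):
          data = READERS[namelist["reader"]](namelist["path"])
          m, o = data[:, 0], data[:, 1]
          return {k: METRICS[k](m, o) for k in namelist["metrics"]}

      namelist = {"reader": "station_timeseries",
                  "path": "sst_station_BY15.csv",   # hypothetical dataset
                  "metrics": ["bias", "rmse"]}
      # print(validate(namelist))   # run once the dataset file exists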

  19. R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report).

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy

    2008-09-01

    Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model

  20. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC in validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
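
    The cohort differences model reduces to a familiar exercise: label each patient by cohort and see how well the case-mix variables predict that label. A minimal sketch on simulated data follows; the feature matrices, shift size and classifier choice are all illustrative, not the study's data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      # Predict cohort membership (0 = training, 1 = validation) from the
      # case-mix variables. A cross-validated AUC near 0.5 suggests similar
      # cohorts (validation targets reproducibility); a high AUC, like the
      # reported 0.85, suggests the validation targets transferability.
      def cohort_differences_auc(X_train, X_valid):
          X = np.vstack([X_train, X_valid])
          z = np.r_[np.zeros(len(X_train)), np.ones(len(X_valid))]
          p = cross_val_predict(LogisticRegression(max_iter=1000), X, z,
                                cv=5, method="predict_proba")[:, 1]
          return roc_auc_score(z, p)

      rng = np.random.default_rng(2)
      X_train = rng.normal(0.0, 1.0, size=(300, 4))   # hypothetical case-mix
      X_valid = rng.normal(0.8, 1.0, size=(154, 4))   # shifted cohort
      print(cohort_differences_auc(X_train, X_valid))  # well above 0.5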

  1. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
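
    One concrete way to realize the permutation idea under evaluation is sketched below: patients are repeatedly reshuffled between the development and validation sets, the model is refit and revalidated each time, and the originally observed validation c-statistic is compared with the resulting null distribution. This is an illustrative reading of the test on simulated data, not the authors' exact procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def permutation_test_cstat(Xd, yd, Xv, yv, n_perm=200, seed=0):
          rng = np.random.default_rng(seed)
          fit = lambda X, y: LogisticRegression(max_iter=1000).fit(X, y)
          c_obs = roc_auc_score(yv, fit(Xd, yd).predict_proba(Xv)[:, 1])
          X, y, nd = np.vstack([Xd, Xv]), np.r_[yd, yv], len(yd)
          c_null = []
          for _ in range(n_perm):               # reshuffle set membership
              idx = rng.permutation(len(y))
              d, v = idx[:nd], idx[nd:]
              c_null.append(roc_auc_score(
                  y[v], fit(X[d], y[d]).predict_proba(X[v])[:, 1]))
          p = float(np.mean(np.array(c_null) <= c_obs))
          return c_obs, p   # small p: observed c lower than reshuffling explains

      rng = np.random.default_rng(3)
      Xd = rng.normal(size=(500, 3))
      yd = (Xd.sum(axis=1) + rng.normal(size=500) > 0).astype(int)
      Xv = rng.normal(0.5, 1.5, size=(200, 3))  # different case-mix
      yv = (Xv.sum(axis=1) + rng.normal(size=200) > 0).astype(int)
      print(permutation_test_cstat(Xd, yd, Xv, yv))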

  3. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.

    Science.gov (United States)

    Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher

    2017-08-01

    The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
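
    A flavour of the workload the four programs were compared on: a three-state Markov cohort model takes only a few lines in a general-purpose language. The transition probabilities, costs, utilities and discount rate below are illustrative, not taken from the paper.

      import numpy as np

      # Minimal three-state Markov cohort model (Well -> Sick -> Dead).
      P = np.array([[0.85, 0.10, 0.05],       # annual transition matrix
                    [0.00, 0.70, 0.30],
                    [0.00, 0.00, 1.00]])
      cost = np.array([100.0, 2000.0, 0.0])   # cost per state-year
      qaly = np.array([0.95, 0.60, 0.0])      # utility per state-year
      disc = 0.035                            # annual discount rate

      state = np.array([1.0, 0.0, 0.0])       # whole cohort starts Well
      total_cost = total_qaly = 0.0
      for year in range(40):                  # cohort trace over 40 cycles
          df = 1 / (1 + disc) ** year
          total_cost += df * state @ cost
          total_qaly += df * state @ qaly
          state = state @ P                   # advance one cycle
      print(f"discounted cost {total_cost:.0f}, QALYs {total_qaly:.2f}")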

  4. Validity of scale modeling for large deformations in shipping containers

    International Nuclear Information System (INIS)

    Burian, R.J.; Black, W.E.; Lawrence, A.A.; Balmert, M.E.

    1979-01-01

    The principal overall objective of this phase of the continuing program for DOE/ECT is to evaluate the validity of applying scaling relationships to accurately assess the response of unprotected model shipping containers to severe impact conditions -- specifically free fall from heights up to 140 ft onto a hard surface in several orientations considered most likely to produce severe damage to the containers. The objective was achieved by studying the following with three sizes of model casks subjected to the various impact conditions: (1) impact rebound response of the containers; (2) structural damage and deformation modes; (3) effect on the containment; (4) changes in shielding effectiveness; (5) approximate free-fall threshold height for various orientations at which excessive damage occurs; (6) the impact orientation(s) that tend to produce the most severe damage; and (7) vulnerable aspects of the casks which should be examined. To meet the objective, the tests were intentionally designed to produce extreme structural damage to the cask models. In addition to the principal objective, this phase of the program had the secondary objectives of establishing a scientific data base for assessing the safety and environmental control provided by DOE nuclear shipping containers under impact conditions, and providing experimental data for verification of and correlation with dynamic-structural-analysis computer codes being developed by the Los Alamos Scientific Laboratory for DOE/ECT

  5. Validation and comparison of dispersion models of RTARC DSS

    International Nuclear Information System (INIS)

    Duran, J.; Pospisil, M.

    2004-01-01

    RTARC DSS (Real Time Accident Release Consequences - Decision Support System) is a computer code developed at VUJE Trnava, Inc. (Stubna, M. et al., 1993). The code calculations include atmospheric transport and diffusion, dose assessment, evaluation and display of the affected zones, evaluation of the early health effects, and the time dependence of concentration and dose rate at selected sites. The simulation of protective measures (sheltering, iodine administration) is included. The aim of this paper is to present the validation process of the RTARC dispersion models. RTARC includes models for calculating releases over very short (Monte Carlo method - MEMOC), short (Gaussian straight-line model) and long distances (puff trajectory model - PTM). Validation of the RTARC code was performed using the results of the comparisons and experiments summarized in Table 1 (experiment or comparison - distance - model): wind tunnel experiments (Universitaet der Bundeswehr, Muenchen) - NPP area - Monte Carlo method; INEL (Idaho National Engineering Laboratory) multi-tracer atmospheric experiment - short/medium distances - Gaussian model and PTM; Model Validation Kit - short distances - Gaussian model; STEP II.b 'Realistic Case Studies' - long distances - PTM; ENSEMBLE comparison - long distances - PTM. (orig.)
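
    For the short-distance regime, the Gaussian straight-line model named above has a standard closed form. The sketch below implements that textbook formula with crude power-law dispersion coefficients standing in for RTARC's actual parametrisation; the stack height, wind speed and source strength are invented.

      import numpy as np

      # Textbook Gaussian straight-line plume with ground reflection.
      def plume(Q, u, x, y, z, H):
          sy, sz = 0.08 * x**0.9, 0.06 * x**0.85   # sigma_y, sigma_z in m (stand-in)
          return (Q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - H)**2 / (2 * sz**2))
                     + np.exp(-(z + H)**2 / (2 * sz**2))))

      # Ground-level centreline concentration 1 km downwind of a 50 m stack
      # releasing 1 unit/s into a 5 m/s wind.
      print(plume(Q=1.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0))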

  6. The GPM Ground Validation Program: Pre to Post-Launch

    Science.gov (United States)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight (one) field campaigns have been conducted in the pre- (post-) launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi

  7. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
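
    The two algorithms described (repeated grid-search cross-validation for tuning, repeated nested cross-validation for assessment) compose as in the sketch below; the dataset, model family and parameter grid are arbitrary illustrations, not the paper's QSAR setup.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

      # Inner grid-search CV tunes the penalty; outer CV estimates prediction
      # error; the whole procedure is repeated over different random splits
      # to expose the split-to-split variance the paper warns about.
      X, y = make_regression(n_samples=200, n_features=50, noise=10.0,
                             random_state=0)
      scores = []
      for rep in range(10):                          # repeats with new splits
          inner = KFold(5, shuffle=True, random_state=rep)
          outer = KFold(5, shuffle=True, random_state=100 + rep)
          model = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0, 100.0]},
                               cv=inner)
          s = cross_val_score(model, X, y, cv=outer,
                              scoring="neg_mean_squared_error")
          scores.append(-s.mean())
      scores = np.array(scores)
      print(scores.mean(), scores.std())             # spread shows why repeats matter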

  8. [Design and validation of a training model on paediatric and neonatal surgery].

    Science.gov (United States)

    Pérez-Duarte, F J; Díaz-Güemes, I; Sánchez-Hurtado, M A; Cano Novillo, I; Berchi García, F J; García Vázquez, A; Sánchez-Margallo, F M

    2012-07-01

    We present our experience in the design and development of a training program in paediatric and neonatal laparoscopic surgery, and the determination of its face validity by the attendees. The data included in the present study were obtained from five consecutive editions of our Neonatal and Paediatric Laparoscopic Surgery Course. Our training model, with a total duration of 21 hours, begins with the acquisition of knowledge of ergonomics and instrument concepts, after which the attendees develop basic laparoscopic dexterity through the performance of hands-on physical simulator tasks. During the second and third days of the course, surgeons performed various surgical techniques hands-on in an animal model. At the end of the training program, a subjective evaluation questionnaire was handed out to the attendees, in which different didactic and organizational aspects were considered. We obtained a highly positive score on all questions concerning the different topics and techniques included in the training program (≥ 9 points out of 10). Of the 54 attendees, 78.5% agreed with the total course duration, whilst 21.5% considered that it should be longer. Regarding self-assessment of abilities, 79.1% considered themselves capable of performing the trained procedures on live patients. The presented training model has obtained a very positive evaluation score, leading to an increase in the attendees' self-confidence in applying the learned techniques in their clinical practice.

  9. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk of banks. The main approaches of the bank to reduce credit risk are correct validation using the final status and the validation of model parameters. A high level of bank reserves and lost or outstanding facilities of banks indicate the lack of appropriate validation models in the banking network.

  10. Validation of heat transfer models for gap cooling

    International Nuclear Information System (INIS)

    Okano, Yukimitsu; Nagae, Takashi; Murase, Michio

    2004-01-01

    For the severe accident assessment of a light water reactor, models of heat transfer in a narrow annular gap between overheated core debris and the reactor pressure vessel are important for evaluating vessel integrity and accident management. The authors developed and improved the models of heat transfer. However, validation was not sufficient regarding the applicability of the gap heat flux correlation to debris cooling in the vessel lower head and the applicability of the local boiling heat flux correlations to high-pressure conditions. Therefore, in this paper, we evaluated the validity of the heat transfer models and correlations by analyses of the ALPHA and LAVA experiments, in which molten aluminum oxide (Al2O3) at about 2700 K was poured into a high-pressure water pool in a small-scale simulated vessel lower head. In the heating process of the vessel wall, the calculated heating rate and peak temperature agreed well with the measured values, and the validity of the heat transfer models and the gap heat flux correlation was confirmed. In the cooling process of the vessel wall, the calculated cooling rate was compared with the measured value, and the validity of the nucleate boiling heat flux correlation was confirmed. The peak temperatures of the vessel wall in the ALPHA and LAVA experiments were lower than the temperature at the minimum heat flux point between film boiling and transition boiling, so the minimum heat flux correlation could not be validated. (author)

  11. Partnership model of vocational education with the business sector in civil engineering expertise program of Vocational Secondary Schools

    Directory of Open Access Journals (Sweden)

    I Kadek Budi Sandika

    2018-01-01

    Full Text Available This study aims to: (1) develop a partnership model of vocational education with business sectors in the civil engineering expertise program of vocational secondary schools in Bali, and (2) test the effectiveness and efficiency of the partnership model developed. The study used the design and development model of Richey & Klein (2009), which consists of three main phases, namely model development, model validation, and model testing. The model development phase used a qualitative approach, through literature review, observation and interviews. Expert review techniques were used in the model validation phase. The model testing used a pre-experimental design with a one-shot case study. The study found that the partnership model of vocational education with the business sector in the civil engineering expertise program of vocational secondary schools in Bali involves several components, such as key stakeholders, the underlying principles of partnership, orientation/common goals, the management of educational resources (teachers and facilities), curriculum development, implementation of learning/training and work practices, competency testing of graduates, distribution of learning outcomes/output, as well as monitoring, evaluation and feedback of the partnership program. Experimental results show that the partnership model developed has met all of the criteria (effective, practical and efficient).

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the 'philosophy' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  13. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

    This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid to be used for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding fluid ... to compare and validate different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data is compared with simulation data relying solely on analytic formulations. The general dynamics of the plunger is validated ...

  14. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
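
    Though truncated, the abstract names LASSO-penalized regression as the NTCP model-building engine, with double cross-validation and permutation testing for its statistical validation. A small sketch of the inner tuning loop on simulated dose-volume style predictors follows; everything here is illustrative rather than the study's data or code.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import GridSearchCV, StratifiedKFold

      # LASSO-penalized logistic regression as an NTCP model; the inner loop
      # of a double cross-validation chooses the penalty strength.
      rng = np.random.default_rng(4)
      X = rng.normal(size=(120, 30))               # candidate dosimetric features
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=120) > 1).astype(int)

      lasso = LogisticRegression(penalty="l1", solver="liblinear")
      search = GridSearchCV(lasso, {"C": np.logspace(-2, 2, 9)},
                            cv=StratifiedKFold(5, shuffle=True, random_state=0),
                            scoring="roc_auc").fit(X, y)
      coefs = search.best_estimator_.coef_.ravel()
      print("selected penalty C:", search.best_params_["C"])
      print("non-zero predictors:", np.flatnonzero(coefs))  # sparse selection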

  15. Validation of a numerical 3-D fluid-structure interaction model for a prosthetic valve based on experimental PIV measurements.

    Science.gov (United States)

    Guivier-Curien, Carine; Deplano, Valérie; Bertrand, Eric

    2009-10-01

    A numerical 3-D fluid-structure interaction (FSI) model of a prosthetic aortic valve was developed, based on a commercial computational fluid dynamics (CFD) software program using an Arbitrary Lagrangian-Eulerian (ALE) formulation. To verify the validity of this numerical model, an equivalent experimental model accounting for both the geometrical features and the hydrodynamic conditions was also developed. The leaflet and the flow behaviours around the bileaflet valve were investigated numerically and experimentally by performing particle image velocimetry (PIV) measurements. Through quantitative and qualitative comparisons, it was shown that the leaflet behaviour and the velocity fields were similar in both models. The present study allows the validation of a fully coupled 3-D FSI numerical model. This promising numerical tool could therefore be used to investigate clinical issues involving the aortic valve.

  16. Program of neuropsychological stimulation of cognition in students: Emphasis on executive functions - development and evidence of content validity

    Directory of Open Access Journals (Sweden)

    Caroline de Oliveira Cardoso

    Full Text Available ABSTRACT Objective: The goal of this study was to describe the construction process and content validity evidence of an early and preventive intervention program for stimulating executive functions (EF) in elementary school children within the school environment. Methods: The process followed the recommended steps for creating neuropsychological instruments: an internal phase of program organization, with a literature search and analyses of the materials available in the classroom; program construction; analysis by expert judges; and data integration and program finalization. To determine the level of agreement among the judges, a Content Validity Index (CVI) was calculated. Results: Content validity was evidenced by the agreement among the experts with regard to the program, both in general and for each activity. All steps taken were deemed necessary because they contributed to the identification of positive aspects and possible flaws in the process. Conclusion: The steps also helped to adapt stimuli and improve program tasks and activities. The methodological procedures implemented in this study can be adopted by other researchers to create or adapt neuropsychological stimulation and rehabilitation programs. Furthermore, the methodological approach allows the reader to understand, in detail, the technical and scientific rigor adopted in devising this program.

  17. Preliminary validation of a Monte Carlo model for IMRT fields

    International Nuclear Information System (INIS)

    Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad

    2011-01-01

    Full text: A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions of approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. Areas of largest deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.

  18. Los Alamos Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Bergen, Benjamin Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-07

    This is the PDF of a PowerPoint presentation from a teleconference on Los Alamos programming models. It starts by listing their assumptions for the programming models and then details a hierarchical programming model at the System Level and Node Level. It then details how to map this to their internal nomenclature. Finally, it lists what they are currently doing in this regard.

  19. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and are applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of its validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  20. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. They have been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and are applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of its validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  1. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the inhibition of a target protein by a small molecule (ligand) is determined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For the effective application of docking programs to new inhibitor development, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of a supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
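
    The quasi-docking step (build a spectrum of low-energy ligand minima in MMFF94, then rescore with higher-level methods) can be approximated with open-source tools. The sketch below uses RDKit conformer generation and MMFF94 optimization as a stand-in for the authors' exhaustive minima search; the molecule and settings are arbitrary, and the CHARMM/PM6-D3H4X/PM7 rescoring stage is not shown.

      from rdkit import Chem
      from rdkit.Chem import AllChem

      # Enumerate conformers and rank them by MMFF94 energy, giving a crude
      # low-energy minima spectrum for a single isolated ligand.
      mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
      ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, randomSeed=7)
      results = AllChem.MMFFOptimizeMoleculeConfs(mol, maxIters=2000)
      energies = sorted((e, cid) for cid, (converged, e) in zip(ids, results)
                        if converged == 0)        # keep converged minima only
      for e, cid in energies[:5]:                 # lowest five minima
          print(f"conformer {cid}: {e:8.2f} kcal/mol")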

  2. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  3. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  4. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  5. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  6. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  7. EOS Terra Validation Program

    Science.gov (United States)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  8. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This workshop on turbulent viscosity models and their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbo-machineries or in other energy-related applications, and have been selected for ETDE. (J.S.)

  9. Validation experience with the core calculation program karate

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Makai, M.; Maraczy, Cs.

    1995-01-01

    A relatively fast and easy-to-handle modular code system named KARATE-440 has been elaborated for steady-state operational calculations of VVER-440 type reactors. It is built up from cell, assembly and global calculations. Within the framework of the program, the neutron-physical and thermohydraulic processes of the core at normal startup, steady state, and slow transients can be simulated. The verification and validation of the global code have been performed recently. The test cases include mathematical benchmarks and measurements on operating VVER-440 units. A summary of the results, such as startup parameters, boron letdown curves, and radial and axial power distributions for some cycles of the Paks NPP, is presented. (author)

  10. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    Science.gov (United States)

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

    Nutrient profiling ranks foods based on their nutrient content, and such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between the food's NDS:LIM and its energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005), and the proportion of foods with a good nutritional-quality-for-price indicator was also higher. This congruence between the linear programming and nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
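
    The diet-design step is a textbook linear program: minimize diet cost subject to nutritional constraints. A toy sketch with scipy, using a handful of invented foods and two invented nutrient constraints in place of the INCA food, composition, and price database:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 4 foods, cost in euro per 100 kcal, and content of two
    # nutrients per 100 kcal. The real model covers the full French database.
    cost = np.array([0.05, 0.12, 0.30, 0.45])       # objective: minimize diet cost
    nutrients = np.array([[2.0, 1.0, 4.0, 6.0],     # e.g., protein per 100 kcal
                          [0.1, 0.8, 0.5, 1.2]])    # e.g., fiber per 100 kcal
    minimums = np.array([50.0, 15.0])               # daily intake constraints

    # linprog solves min c.x subject to A_ub.x <= b_ub, so each ">= minimum"
    # nutritional constraint is expressed with negated coefficients.
    result = linprog(c=cost, A_ub=-nutrients, b_ub=-minimums,
                     bounds=[(0, None)] * len(cost))
    print(result.x)    # food quantities (in 100 kcal units)
    print(result.fun)  # total diet cost in euro
    ```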

  11. Validation of an age-modified caries risk assessment program (Cariogram) in preschool children

    DEFF Research Database (Denmark)

    Holgerson, Pernilla Lif; Twetman, Svante; Stecksèn-Blicks, Christina

    2009-01-01

    OBJECTIVES: (i) To validate caries risk profiles assessed with a computer program against actual caries development in preschool children, (ii) to study the possible impact of a preventive program on the risk profiles, and (iii) to compare the individual risk profiles longitudinally. MATERIAL...... of sugar. The majority of the children who changed category displayed a lowered risk at 7 years. The intervention program seemed to impair the predictive abilities of Cariogram. CONCLUSION: A modified Cariogram applied on preschool children was not particularly useful in identifying high caries risk...

  12. Developing rural palliative care: validating a conceptual model.

    Science.gov (United States)

    Kelley, Mary Lou; Williams, Allison; DeMiglio, Lily; Mettam, Hilary

    2011-01-01

    The purpose of this research was to validate a conceptual model for developing palliative care in rural communities. This model articulates how local rural healthcare providers develop palliative care services according to four sequential phases. The model has roots in concepts of community capacity development, evolves from collaborative, generalist rural practice, and utilizes existing health services infrastructure. It addresses how rural providers manage challenges, specifically those related to: lack of resources, minimal community understanding of palliative care, health professionals' resistance, the bureaucracy of the health system, and the obstacles of providing services in rural environments. Seven semi-structured focus groups were conducted with interdisciplinary health providers in 7 rural communities in two Canadian provinces. Using a constant comparative analysis approach, focus group data were analyzed by examining participants' statements in relation to the model and comparing emerging themes in the development of rural palliative care to the elements of the model. The data validated the conceptual model as the model was able to theoretically predict and explain the experiences of the 7 rural communities that participated in the study. New emerging themes from the data elaborated existing elements in the model and informed the requirement for minor revisions. The model was validated and slightly revised, as suggested by the data. The model was confirmed as being a useful theoretical tool for conceptualizing the development of rural palliative care that is applicable in diverse rural communities.

  13. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  14. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    International Nuclear Information System (INIS)

    UYTIOCO EM

    2007-01-01

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates

  15. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    Science.gov (United States)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for Quality of e-Government Services (QeGS). We build upon our previous work, in which a conceptual model was identified, and focus on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which benchmarked very positively against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  16. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  17. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  18. The CRAFT Fortran Programming Model

    Directory of Open Access Journals (Sweden)

    Douglas M. Pase

    1994-01-01

    Full Text Available Many programming models for massively parallel machines exist, and each has its advantages and disadvantages. In this article we present a programming model that combines features from other programming models that (1) can be efficiently implemented on present and future Cray Research massively parallel processor (MPP) systems and (2) are useful in constructing highly parallel programs. The model supports several styles of programming: message-passing, data parallel, global address (shared data), and work-sharing. These styles may be combined within the same program. The model includes features that allow a user to define a program in terms of the behavior of the system as a whole, where the behavior of individual tasks is implicit from this systemic definition. (In general, features marked as shared are designed to support this perspective.) It also supports an opposite perspective, where a program may be defined in terms of the behaviors of individual tasks, and a program is implicitly the sum of the behaviors of all tasks. (Features marked as private are designed to support this perspective.) Users can exploit any combination of either set of features without ambiguity and thus are free to define a program from whatever perspective is most appropriate to the problem at hand.

  19. Development and validation of a mass casualty conceptual model.

    Science.gov (United States)

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
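
    The three stopping criteria quoted above (interquartile range, stability, percent agreement) are straightforward to operationalize. A sketch for one 7-point Likert item across two Delphi rounds; the total-variation measure of distribution change is one plausible reading of the stability criterion, not necessarily the authors' exact formula:

    ```python
    import numpy as np

    def delphi_consensus(round1, round2, agree_at=5, iqr_max=1.0,
                         stability_max=0.15, agreement_min=0.70):
        """Check the three consensus/stability criteria for one 7-point item."""
        r1, r2 = np.asarray(round1), np.asarray(round2)

        # Interquartile range of no more than 1 scale point in the latest round.
        q75, q25 = np.percentile(r2, [75, 25])
        iqr_ok = (q75 - q25) <= iqr_max

        # Stability: change in the distribution of responses between rounds.
        hist1 = np.bincount(r1, minlength=8)[1:] / len(r1)
        hist2 = np.bincount(r2, minlength=8)[1:] / len(r2)
        stable = 0.5 * np.abs(hist1 - hist2).sum() <= stability_max

        # Percent agreement: share of panelists rating at or above `agree_at`.
        agreement_ok = (r2 >= agree_at).mean() >= agreement_min
        return iqr_ok, stable, agreement_ok

    print(delphi_consensus([5, 6, 6, 7, 5, 6, 4, 6], [6, 6, 6, 7, 5, 6, 5, 6]))
    ```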

  20. Validation of an Information-Motivation-Behavioral Skills model of diabetes self-care (IMB-DSC).

    Science.gov (United States)

    Osborn, Chandra Y; Egede, Leonard E

    2010-04-01

    Comprehensive behavior change frameworks are needed to provide guidance for the design, implementation, and evaluation of diabetes self-care programs in diverse populations. We applied the Information-Motivation-Behavioral Skills (IMB) model, a well-validated, comprehensive health behavior change framework, to diabetes self-care. Patients with diabetes were recruited from an outpatient clinic. Information gathered pertained to demographics, diabetes knowledge (information); diabetes fatalism (personal motivation); social support (social motivation); and diabetes self-care (behavior). Hemoglobin A1C values were extracted from the patient medical record. Structural equation models tested the IMB framework. More diabetes knowledge (r=0.22), less fatalistic attitudes, and more social support were associated with self-care behavior and, through behavior, were related to glycemic control (r=-0.20). Information (knowledge), personal motivation (less fatalistic attitudes), and social motivation (more social support) were each associated with behavior, and behavior was the sole predictor of glycemic control. The IMB model is an appropriate, comprehensive health behavior change framework for diabetes self-care. The findings indicate that in addition to knowledge, diabetes education programs should target personal and social motivation to effect behavior change. 2009 Elsevier Ireland Ltd. All rights reserved.
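
    The IMB structure, in which information and motivation influence outcomes only through behavior, can be illustrated with two regressions on synthetic data. The study itself used structural equation modeling, so the two-step OLS below is a simplified stand-in, and every variable name and coefficient is invented:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data for the clinic sample.
    rng = np.random.default_rng(42)
    n = 200
    df = pd.DataFrame({
        "knowledge": rng.normal(size=n),   # information
        "fatalism": rng.normal(size=n),    # personal motivation (reverse-scored)
        "support": rng.normal(size=n),     # social motivation
    })
    df["selfcare"] = (0.2 * df["knowledge"] - 0.2 * df["fatalism"]
                      + 0.25 * df["support"] + rng.normal(size=n))
    df["a1c"] = 8.0 - 0.3 * df["selfcare"] + rng.normal(scale=0.5, size=n)

    # IMB pathways: information and motivation -> behavior -> glycemic control.
    behavior_model = smf.ols("selfcare ~ knowledge + fatalism + support", df).fit()
    outcome_model = smf.ols("a1c ~ selfcare", df).fit()
    print(behavior_model.params)
    print(outcome_model.params)
    ```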

  1. Modeling EERE deployment programs

    Energy Technology Data Exchange (ETDEWEB)

    Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2007-11-01

    The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.

  2. Using program logic model analysis to evaluate and better deliver what works

    International Nuclear Information System (INIS)

    Megdal, Lori; Engle, Victoria; Pakenas, Larry; Albert, Scott; Peters, Jane; Jordan, Gretchen

    2005-01-01

    There is a rich history of using program theories and logic models (PT/LM) for evaluation, monitoring, and program refinement in a variety of fields, such as health care, social and education programs. The use of these tools to evaluate and improve energy efficiency programs has been growing over the last 5-7 years. This paper provides an overview of state-of-the-art methods of logic model development, with analysis that significantly contributed to: assessing the logic behind how the program expects to meet its ultimate goals, including the 'who', the 'how', and through what mechanism, so that gaps and open questions can be identified; identifying and prioritizing the indicators that should be measured to evaluate the program and program theory; determining key researchable questions that need to be answered by evaluation/research to assess whether the mechanism assumed to cause the changes in actions, attitudes, behaviours, and business practices is workable and efficient, whether the program logic is valid, and whether the program is likely to accomplish its ultimate goals; and incorporating analysis of prior like programs and social science theories in a framework to identify opportunities for potential program refinements. The paper provides an overview of the tools, techniques and references, and uses as an example the energy efficiency program analysis conducted for the New York State Energy Research and Development Authority's (NYSERDA) New York ENERGY $MART(SM) programs.

  3. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  4. Model-based verification and validation of the SMAP uplink processes

    Science.gov (United States)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increase geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes allow by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  5. Modeling EERE Deployment Programs

    Energy Technology Data Exchange (ETDEWEB)

    Cort, Katherine A.; Hostick, Donna J.; Belzer, David B.; Livingston, Olga V.

    2007-11-08

    The purpose of this report is to compile information and conclusions gathered as part of three separate tasks undertaken as part of the overall project, “Modeling EERE Deployment Programs,” sponsored by the Planning, Analysis, and Evaluation office within the Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE). The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address improvements to modeling in the near term, and note gaps in knowledge where future research is needed.

  6. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  7. In-Drift Microbial Communities Model Validation Calculations

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  8. In-Drift Microbial Communities Model Validation Calculation

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  9. In-Drift Microbial Communities Model Validation Calculations

    International Nuclear Information System (INIS)

    Jolley, D.M.

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  10. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    International Nuclear Information System (INIS)

    D.M. Jolley

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  11. Validation of Computer Models for Homeland Security Purposes

    International Nuclear Information System (INIS)

    Schweppe, John E.; Ely, James; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-01-01

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources

  12. ATHLET. Mod 3.0 Cycle A. Validation

    Energy Technology Data Exchange (ETDEWEB)

    Lerchl, G.; Austregesilo, H.; Glaeser, H.; Hrubisko, M.; Luther, W.

    2012-09-15

    ATHLET is an advanced best-estimate code initially developed for the simulation of design basis and beyond-design-basis accidents (without core degradation) in light water reactors, including VVER and RBMK reactors. Furthermore, this program version enables the simulation of additional working fluids such as helium and liquid metals. The one-dimensional, two-phase fluid-dynamic models are based on a five-equation model supplemented by a full-range drift-flux model, including a dynamic mixture-level tracking capability. Moreover, a two-fluid model based on six conservation equations is provided. The heat conduction and heat transfer module allows a flexible simulation of fuel rods and structures. The nuclear heat generation is calculated by a point-kinetics or by a one-dimensional kinetics model. A general control simulation module is provided for flexible modelling of BOP and auxiliary plant systems. Systematic code validation is performed by GRS and independent organizations. This Validation Manual is the fourth volume of the four-volume ATHLET Code Documentation. It presents an overview of the complete ATHLET validation effort to date. In addition, the results of five test cases simulated with the present ATHLET program version are compared with the experimental data.

  13. Modeling EERE Deployment Programs

    Energy Technology Data Exchange (ETDEWEB)

    Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2007-11-01

    This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.

  14. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    International Nuclear Information System (INIS)

    Kirk Nordstrom, D.

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  15. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses, including detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater-level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  16. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  17. Validation of the Colorado Retinopathy of Prematurity Screening Model.

    Science.gov (United States)

    McCourt, Emily A; Ying, Gui-Shuang; Lynch, Anne M; Palestine, Alan G; Wagner, Brandie D; Wymore, Erica; Tomlinson, Lauren A; Binenbaum, Gil

    2018-04-01

    The Colorado Retinopathy of Prematurity (CO-ROP) model uses birth weight, gestational age, and weight gain at the first month of life (WG-28) to predict risk of severe retinopathy of prematurity (ROP). In previous validation studies, the model performed very well, predicting virtually all cases of severe ROP and potentially reducing the number of infants who need ROP examinations, warranting validation in a larger, more diverse population. To validate the performance of the CO-ROP model in a large multicenter cohort. This study is a secondary analysis of data from the Postnatal Growth and Retinopathy of Prematurity (G-ROP) Study, a retrospective multicenter cohort study conducted in 29 hospitals in the United States and Canada between January 2006 and June 2012 of 6351 premature infants who received ROP examinations. Sensitivity and specificity for severe (early treatment of ROP [ETROP] type 1 or 2) ROP, and reduction in infants receiving examinations. The CO-ROP model was applied to the infants in the G-ROP data set with all 3 data points (infants would have received examinations if they met all 3 criteria for birth weight, gestational age, and weight gain), and its performance was assessed in this large validation cohort. The model requires all 3 criteria to be met to signal a need for examinations, but some infants with a birth weight or gestational age above the thresholds developed severe ROP. Most of these infants who were not detected by the CO-ROP model had obvious deviation in expected weight trajectories or nonphysiologic weight gain. These findings suggest that the CO-ROP model needs to be revised before considering implementation into clinical practice.
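
    The all-three-criteria screening logic is simple to express in code. The cut-points below are illustrative placeholders only (the published CO-ROP thresholds are garbled out of this record and should be taken from the original paper):

    ```python
    def needs_rop_exam(birth_weight_g, gestational_age_wk, weight_gain_g,
                       bw_max=1500, ga_max=30.0, wg28_max=650):
        """CO-ROP-style rule: examinations are indicated only when ALL three
        criteria are met. Threshold defaults are hypothetical placeholders."""
        return (birth_weight_g <= bw_max
                and gestational_age_wk <= ga_max
                and weight_gain_g <= wg28_max)

    print(needs_rop_exam(1200, 28, 400))   # True: all three criteria met
    print(needs_rop_exam(1800, 28, 400))   # False: birth weight above cut-point
    ```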

  18. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

    Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation

  19. Mixed integer linear programming model for dynamic supplier selection problem considering discounts

    Directory of Open Access Journals (Sweden)

    Adi Wicaksono Purnawan

    2018-01-01

    Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity, and others. Each of these factors varies with time; therefore, the supplier identified for one period will not necessarily be the same for the next period to supply the same product. So, a mixed integer linear programming (MILP) model was developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved with Lingo 16.
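
    A minimal sketch of this kind of lot-sizing MILP in Python with PuLP (the paper itself solves its model in Lingo 16): binary variables select suppliers per period, continuous variables carry order quantities. All data are invented, and the quantity-discount tiers are omitted; modeling them would add one binary variable per price break.

    ```python
    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

    suppliers, periods = ["S1", "S2"], [1, 2]
    price = {"S1": 10, "S2": 9}        # unit price (discount tiers omitted)
    fixed = {"S1": 50, "S2": 80}       # fixed ordering cost per period
    capacity = {"S1": 120, "S2": 200}
    demand = {1: 100, 2: 150}

    pairs = [(s, t) for s in suppliers for t in periods]
    m = LpProblem("dynamic_supplier_selection", LpMinimize)
    qty = LpVariable.dicts("qty", pairs, lowBound=0)
    use = LpVariable.dicts("use", pairs, cat=LpBinary)

    m += lpSum(price[s] * qty[s, t] + fixed[s] * use[s, t] for s, t in pairs)
    for t in periods:
        m += lpSum(qty[s, t] for s in suppliers) == demand[t]  # meet demand
    for s, t in pairs:
        m += qty[s, t] <= capacity[s] * use[s, t]  # order only from selected

    m.solve()
    for s, t in pairs:
        print(s, t, qty[s, t].value())
    ```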

  20. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg/kg

  1. Traffic modelling validation of advanced driver assistance systems

    NARCIS (Netherlands)

    Tongeren, R. van; Gietelink, O.J.; Schutter, B. de; Verhaegen, M.

    2007-01-01

    This paper presents a microscopic traffic model for the validation of advanced driver assistance systems. This model describes single-lane traffic and is calibrated with data from a field operational test. To illustrate the use of the model, a Monte Carlo simulation of single-lane traffic scenarios

  2. Valid Competency Assessment in Higher Education

    Directory of Open Access Journals (Sweden)

    Olga Zlatkin-Troitschanskaia

    2017-01-01

    Full Text Available The aim of the 15 collaborative projects conducted during the new funding phase of the German research program Modeling and Measuring Competencies in Higher Education—Validation and Methodological Innovations (KoKoHs) is to make a significant contribution to advancing the field of modeling and valid measurement of competencies acquired in higher education. The KoKoHs research teams assess generic competencies and domain-specific competencies in teacher education, social and economic sciences, and medicine, based on findings from and using competency models and assessment instruments developed during the first KoKoHs funding phase. Further, they enhance, validate, and test measurement approaches for use in higher education in Germany. Results and findings are transferred at various levels to national and international research, higher education practice, and education policy.

  3. Verification and Validation of Tropospheric Model/Database

    National Research Council Canada - National Science Library

    Junho, choi

    1998-01-01

    A verification and validation of tropospheric models and databases has been performed based on a ray-tracing algorithm, statistical analysis, tests on real-time system operation, and other technical evaluation processes...

  4. 1:50 Scale Testing of Three Floating Wind Turbines at MARIN and Numerical Model Validation Against Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Dagher, Habib [Univ. of Maine, Orno, ME (United States); Viselli, Anthony [Univ. of Maine, Orno, ME (United States); Goupee, Andrew [Univ. of Maine, Orno, ME (United States); Allen, Christopher [Univ. of Maine, Orno, ME (United States)

    2017-08-15

    The primary goal of the basin model test program discussed herein is to properly scale and accurately capture physical data of the rigid body motions, accelerations and loads for different floating wind turbine platform technologies. The intended use for this data is for performing comparisons with predictions from various aero-hydro-servo-elastic floating wind turbine simulators for calibration and validation. Of particular interest is validating the floating offshore wind turbine simulation capabilities of NREL’s FAST open-source simulation tool. Once the validation process is complete, coupled simulators such as FAST can be used with a much greater degree of confidence in design processes for commercial development of floating offshore wind turbines. The test program subsequently described in this report was performed at MARIN (Maritime Research Institute Netherlands) in Wageningen, the Netherlands. The models considered consisted of the horizontal axis, NREL 5 MW Reference Wind Turbine (Jonkman et al., 2009) with a flexible tower affixed atop three distinct platforms: a tension leg platform (TLP), a spar-buoy modeled after the OC3 Hywind (Jonkman, 2010) and a semi-submersible. The three generic platform designs were intended to cover the spectrum of currently investigated concepts, each based on proven floating offshore structure technology. The models were tested under Froude scale wind and wave loads. The high-quality wind environments, unique to these tests, were realized in the offshore basin via a novel wind machine which exhibits negligible swirl and low turbulence intensity in the flow field. Recorded data from the floating wind turbine models included rotor torque and position, tower top and base forces and moments, mooring line tensions, six-axis platform motions and accelerations at key locations on the nacelle, tower, and platform. A large number of tests were performed ranging from simple free-decay tests to complex operating conditions with

  5. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, it revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
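
    The unit-test layer described above pins physics calculations to known limiting cases. A generic illustration in the same spirit; this is not EXOSIMS's actual API, and the simplified photon-counting relation is invented for the example:

    ```python
    import unittest

    def integration_time(signal_rate, noise_rate, snr_target):
        """Toy photon-counting relation: SNR = S*t / sqrt(N*t), hence
        t = snr**2 * N / S**2. EXOSIMS's real calculation also includes
        coronagraph contrast, zodiacal light, and detector noise terms."""
        return snr_target ** 2 * noise_rate / signal_rate ** 2

    class TestIntegrationTime(unittest.TestCase):
        def test_quadrupling_snr_needs_16x_time(self):
            t1 = integration_time(10.0, 100.0, snr_target=5.0)
            t2 = integration_time(10.0, 100.0, snr_target=20.0)
            self.assertAlmostEqual(t2 / t1, 16.0)

    if __name__ == "__main__":
        unittest.main()
    ```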

  6. Enhancement of weld failure and tube ejection model in PENTAP program

    International Nuclear Information System (INIS)

    Jung, Jaehoon; An, Sang Mo; Ha, Kwang Soon; Kim, Hwan Yeol

    2014-01-01

    The reactor vessel pressure, the debris mass, the debris temperature, and the material composition can all affect the penetration tube failure modes, and these parameters are interrelated. There are several representative severe accident codes, such as MELCOR, MAAP, and the PENTAP program. MELCOR simply decides on penetration tube failure by a fixed failure temperature, such as 1273 K. MAAP considers all penetration failure modes and has the most advanced model for penetration tube failure; however, its validation against experimental data is very limited. The PENTAP program, which evaluates the possible penetration tube failure modes such as creep failure, weld failure, tube ejection, and long-term tube failure under a given accident condition, was developed by KAERI. An experiment on tube ejection is being performed by KAERI, from which the temperature distribution and the ablation rate of both the weld and the lower vessel wall can be obtained. This paper presents the updated calculation steps for the weld failure and tube ejection modes of the PENTAP program that apply these experimental results. The PENTAP program can evaluate the possible penetration tube failure modes, but it still requires a large amount of effort to improve the prediction of failure modes. In this study, new calculation steps are added to the PENTAP program to enhance the weld failure and tube ejection models using KAERI's experimental data, namely the ablation rate and temperature distribution of the weld and lower vessel wall.

  7. Development of a programming model for radiation-resistant software

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1984-01-01

    The adverse effects of ionizing radiation on microelectronic systems include cumulative dosage effects, single-event upsets (SEUs), and latch-up. Most frequent, especially when the radiation environment includes heavy ions, are SEUs. Unfortunately, SEUs are difficult to detect since they can be read (in RAM or ROM) as valid addresses. They can, however, be handled in software by proper techniques. The authors refer to their method as MRS - Maximally Redundant Software. The MRS programming model the authors are developing uses multiply redundant boot blocks, majority voting, periodic refresh, and error recovery techniques to minimize the deleterious effects of SEUs. 1 figure
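
    Majority voting over redundant copies, one of the MRS techniques listed above, fits in a few lines. The bitwise formulation is a standard triple-modular-redundancy idiom, shown here as a sketch rather than code from the paper:

    ```python
    def majority_vote(a, b, c):
        """Bitwise majority of three redundant copies: each result bit takes
        the value on which at least two copies agree, masking a single-event
        upset that flips bits in any one copy."""
        return (a & b) | (a & c) | (b & c)

    # One stored word kept in three locations; an SEU flips a bit in copy b.
    a, b, c = 0b1011, 0b1011 ^ 0b0100, 0b1011
    corrected = majority_vote(a, b, c)
    assert corrected == 0b1011

    # Periodic refresh: rewrite all copies from the voted value so that
    # independent upsets do not accumulate between votes.
    a = b = c = corrected
    ```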

  8. Cost model validation: a technical and cultural approach

    Science.gov (United States)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  9. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
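
    For readers unfamiliar with the validation metrics used here, the following sketch shows how Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) are typically computed from predicted and observed serum levels; the numbers are invented and are not from either cohort.

        import numpy as np

        def mdpe_mdape(predicted, observed):
            # Prediction error as a percentage of the observed value.
            pe = 100.0 * (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
            return np.median(pe), np.median(np.abs(pe))

        pred = [4.1, 7.8, 2.2, 5.9]   # mg/L, hypothetical model predictions
        obs = [4.0, 8.2, 2.3, 5.7]    # mg/L, hypothetical measured levels
        mdpe, mdape = mdpe_mdape(pred, obs)
        print("MDPE = %+.2f%%, MDAPE = %.2f%%" % (mdpe, mdape))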

  10. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    Directory of Open Access Journals (Sweden)

    Anna Gomes

    Full Text Available Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to

  11. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  12. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of the Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypothesis in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  13. Validation analysis of probabilistic models of dietary exposure to food additives.

    Science.gov (United States)

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
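
    A minimal Monte Carlo sketch of one such model combination (lognormal food intake, per-cent-brands probability of presence, lognormal concentration) is given below; all distribution parameters are invented for illustration and do not come from the reference database.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        food_intake_g = rng.lognormal(mean=np.log(150), sigma=0.5, size=n)   # g/day, assumed distribution
        present = rng.random(n) < 0.30                                       # assumed 30% of brands contain the additive
        concentration = rng.lognormal(mean=np.log(50), sigma=0.4, size=n)    # mg/kg food, assumed distribution

        intake_mg = food_intake_g / 1000.0 * present * concentration         # mg/day per simulated eating occasion
        print("97.5th percentile intake: %.2f mg/day" % np.percentile(intake_mg, 97.5))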

  14. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results are still valid with succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments also a comparison to experimental data is carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results show also a very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing
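
    The comparison step of such a regression testing procedure can be pictured as below: characteristic parameters from the up-to-date code version are checked against the preceding version for the same input deck. The workflow, parameter names and tolerance are assumptions for illustration, not GRS tooling.

        def regression_check(reference_run, current_run, rel_tol=0.02):
            # Both arguments map a characteristic parameter name to its value;
            # returns the parameters whose relative change exceeds the tolerance.
            failures = []
            for name, ref in reference_run.items():
                cur = current_run[name]
                if abs(cur - ref) > rel_tol * abs(ref):
                    failures.append((name, ref, cur))
            return failures

        ref = {"peak_pressure_bar": 1.45, "max_gas_temperature_K": 412.0}   # preceding version (hypothetical values)
        cur = {"peak_pressure_bar": 1.46, "max_gas_temperature_K": 455.0}   # up-to-date version (hypothetical values)
        for name, r, c in regression_check(ref, cur):
            print("REGRESSION: %s changed from %s to %s" % (name, r, c))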

  15. Ensuring the Validity of the Micro Foundation in DSGE Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    The presence of i) stochastic trends, ii) deterministic trends, and/or iii) stochastic volatility in DSGE models may imply that the agents' objective functions attain infinite values. We say that such models do not have a valid micro foundation. The paper derives sufficient conditions which ensure that the objective functions of the households and the firms are finite even when various trends and stochastic volatility are included in a standard DSGE model. Based on these conditions we test the validity of the micro foundation in six DSGE models from the literature. The models of Justiniano & Primiceri (American Economic Review, forthcoming) and Fernández-Villaverde & Rubio-Ramírez (Review of Economic Studies, 2007) do not satisfy these sufficient conditions, or any other known set of conditions ensuring finite values for the objective functions. Thus, the validity of the micro foundation...

  16. Experimental validation of the twins prediction program for rolling noise. Pt.2: results

    NARCIS (Netherlands)

    Thompson, D.J.; Fodiman, P.; Mahé, H.

    1996-01-01

    Two extensive measurement campaigns have been carried out to validate the TWINS prediction program for rolling noise, as described in part 1 of this paper. This second part presents the experimental results of vibration and noise during train pass-bys and compares them with predictions from the

  17. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
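
    As a rough illustration of the two kinds of metric described, the sketch below computes a cumulative relative-difference measure and a median relative-difference measure over a handful of invented cross sections; the exact definitions used in the paper may differ.

        import numpy as np

        model = np.array([120.0, 85.0, 40.0, 9.5])    # mb, hypothetical model cross sections
        exper = np.array([115.0, 90.0, 42.0, 10.0])   # mb, hypothetical measured cross sections

        rel_diff = np.abs(model - exper) / exper
        cumulative_metric = np.sum(np.abs(model - exper)) / np.sum(exper)   # overall-accuracy view
        median_metric = np.median(rel_diff)                                  # robust, subset-level view
        print(cumulative_metric, median_metric)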

  18. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    Science.gov (United States)

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
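
    The headline output of such cost-effectiveness simulations is the incremental cost-effectiveness ratio (ICER); a minimal computation is sketched below with invented costs and quality-adjusted life-years, not values produced by the model.

        def icer(cost_new, qaly_new, cost_ref, qaly_ref):
            # Incremental cost per additional quality-adjusted life-year.
            return (cost_new - cost_ref) / (qaly_new - qaly_ref)

        print("ICER: $%.0f per QALY" % icer(54_000, 4.9, 48_000, 4.7))   # hypothetical program vs. usual care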

  20. Cross-validation of an employee safety climate model in Malaysia.

    Science.gov (United States)

    Bahari, Siti Fatimah; Clarke, Sharon

    2013-06-01

    Whilst substantial research has investigated the nature of safety climate, and its importance as a leading indicator of organisational safety, much of this research has been conducted with Western industrial samples. The current study focuses on the cross-validation of a safety climate model in the non-Western industrial context of Malaysian manufacturing. The first-order factorial validity of Cheyne et al.'s (1998) [Cheyne, A., Cox, S., Oliver, A., Tomas, J.M., 1998. Modelling safety climate in the prediction of levels of safety activity. Work and Stress, 12(3), 255-271] model was tested, using confirmatory factor analysis, in a Malaysian sample. Results showed that the model fit indices were below accepted levels, indicating that the original Cheyne et al. (1998) safety climate model was not supported. An alternative three-factor model was developed using exploratory factor analysis. Although these findings are not consistent with previously reported cross-validation studies, we argue that previous studies have focused on validation across Western samples, and that the current study demonstrates the need to take account of cultural factors in the development of safety climate models intended for use in non-Western contexts. The results have important implications for the transferability of existing safety climate models across cultures (for example, in global organisations) and highlight the need for future research to examine cross-cultural issues in relation to safety climate. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  1. Modeling and Simulation Behavior Validation Methodology and Extension Model Validation for the Individual Soldier

    Science.gov (United States)

    2015-03-01

    domains. Major model functions include: • Ground combat: light and heavy forces. • Air mobile forces. • Future forces. • Fixed-wing and rotary-wing... Constraints: • Study must be completed no later than 31 December 2014. • Entity behavior limited to select COMBATXXI Mobility, Unmanned Aerial System... and SQL backend, as well as any open application programming interface (API). • Allows data transparency and data-driven navigation through the model

  2. Predicting the ungauged basin: Model validation and realism assessment

    Directory of Open Access Journals (Sweden)

    Tim van Emmerik

    2015-10-01

    Full Text Available The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.
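
    The parsimonious model structure described (a lumped rainfall-runoff store routed into an irrigation reservoir) can be sketched as below; the parameters and forcing values are invented and are not those of the Cambodian case study.

        def simulate(rain, pet, k=0.05, smax=200.0, s0=50.0, r0=0.0):
            # Toy bucket model: soil storage fed by rain, depleted by evaporation,
            # drained by a linear store whose outflow fills the reservoir.
            s, reservoir, runoff = s0, r0, []
            for p, e in zip(rain, pet):
                s = min(smax, max(0.0, s + p - e))   # soil storage (mm)
                q = k * s                            # runoff (mm/day)
                s -= q
                reservoir += q
                runoff.append(q)
            return runoff, reservoir

        q, res = simulate(rain=[10.0, 0.0, 25.0, 5.0], pet=[4.0, 4.0, 3.0, 4.0])
        print(q, res)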

  3. Validating firn compaction model with remote sensing data

    DEFF Research Database (Denmark)

    Simonsen, S. B.; Stenseng, Lars; Sørensen, Louise Sandberg

    A comprehensive understanding of firn processes is of utmost importance when estimating present and future changes of the Greenland Ice Sheet. Especially when remote sensing altimetry is used to assess the state of ice sheets and their contribution to global sea level rise, firn compaction ... models have been shown to be a key component. Now, remote sensing data can also be used to validate the firn models. Radar penetrating the upper part of the firn column in the interior part of Greenland shows a clear layering. The observed layers from the radar data can be used as an in-situ validation ... correction relative to the changes in the elevation of the surface observed with remote sensing altimetry? What model time resolution is necessary to resolve the observed layering? What model refinements are necessary to give better estimates of the surface mass balance of the Greenland ice sheet from...

  4. Quantitative Validation of the Integrated Medical Model (IMM) for ISS Missions

    Science.gov (United States)

    Young, Millennia; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    Lifetime Surveillance of Astronaut Health (LSAH) provided observed medical event data on 33 ISS and 111 STS person-missions for use in further improving and validating the Integrated Medical Model (IMM). Using only the crew characteristics from these observed missions, the newest development version, IMM v4.0, will simulate these missions to predict medical events and outcomes. Comparing IMM predictions to the actual observed medical event counts will provide external validation and identify areas of possible improvement. In an effort to improve the power of detecting differences in this validation study, the total over each program ISS and STS will serve as the main quantitative comparison objective, specifically the following parameters: total medical events (TME), probability of loss of crew life (LOCL), and probability of evacuation (EVAC). Scatter plots of observed versus median predicted TMEs (with error bars reflecting the simulation intervals) will graphically display comparisons while linear regression will serve as the statistical test of agreement. Two scatter plots will be analyzed 1) where each point reflects a mission and 2) where each point reflects a condition-specific total number of occurrences. The coefficient of determination (R2) resulting from a linear regression with no intercept bias (intercept fixed at zero) will serve as an overall metric of agreement between IMM and the real world system (RWS). In an effort to identify as many discrepancies as possible for further inspection, the α-level for all statistical tests comparing IMM predictions to observed data will be set to 0.1. This less stringent criterion, along with the multiple testing being conducted, should detect all perceived differences including many false positive signals resulting from random variation. The results of these analyses will reveal areas of the model requiring adjustment to improve overall IMM output, which will thereby provide better decision support for
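
    The agreement test described (a linear regression of observed on predicted totals with the intercept fixed at zero, summarized by R2) can be written compactly as follows; the data points are placeholders rather than IMM or LSAH values.

        import numpy as np

        predicted = np.array([12.0, 30.0, 7.0, 18.0])   # median predicted totals (placeholder values)
        observed = np.array([11.0, 33.0, 6.0, 20.0])    # observed totals (placeholder values)

        slope = np.sum(predicted * observed) / np.sum(predicted ** 2)   # least squares with no intercept
        resid = observed - slope * predicted
        r2 = 1.0 - np.sum(resid ** 2) / np.sum(observed ** 2)           # uncentered R2 for a zero-intercept fit
        print(slope, r2)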

  5. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels, and for temperature gradients with respect to space in the flow direction, that are investigated by direct infrared imaging, showing that even at low current operation such gradients are present in fuel cell operation, and therefore should be considered by a PEMFC model, since large coolant flow rates are limited due to induced high pressure drops in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allow for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)

  6. Genetic programming-based mathematical modeling of influence of weather parameters in BOD5 removal by Lemna minor.

    Science.gov (United States)

    Chandrasekaran, Sivapragasam; Sankararajan, Vanitha; Neelakandhan, Nampoothiri; Ram Kumar, Mahalakshmi

    2017-11-04

    This study, through extensive experiments and mathematical modeling, reveals that, in addition to retention time and wastewater temperature (Tw), atmospheric parameters also play an important role in the effective functioning of an aquatic macrophyte-based treatment system. The duckweed species Lemna minor is considered in this study. It is observed that the combined effect of atmospheric temperature (Tatm), wind speed (Uw), and relative humidity (RH) can be reflected through one parameter, namely the "apparent temperature" (Ta). A total of eight different models are considered based on the combination of input parameters, and the best mathematical model is arrived at, which is validated through a new experimental set-up outside the modeling period. The validation results are highly encouraging. Genetic programming (GP)-based models are found to reveal a deeper understanding of the wetland process.

  7. Studying Validity of Single-Fluid Model in Inertial Confinement Fusion

    International Nuclear Information System (INIS)

    Gu Jian-Fa; Fan Zheng-Feng; Dai Zhen-Sheng; Ye Wen-Hua; Pei Wen-Bing; Zhu Shao-Ping

    2014-01-01

    The validity of single-fluid model in inertial confinement fusion simulations is studied by comparing the results of the multi- and single-fluid models. The multi-fluid model includes the effects of collision and interpenetration between fluid species. By simulating the collision of fluid species, steady-state shock propagation into the thin DT gas and expansion of hohlraum Au wall heated by lasers, the results show that the validity of single-fluid model is strongly dependent on the ratio of the characteristic length of the simulated system to the particle mean free path. When the characteristic length L is one order larger than the mean free path λ, the single-fluid model's results are found to be in good agreement with the multi-fluid model's simulations, and the modeling of single-fluid remains valid. If the value of L/λ is lower than 10, the interpenetration between fluid species is significant, and the single-fluid simulations show some unphysical results; while the multi-fluid model can describe well the interpenetration and mix phenomena, and give more reasonable results. (physics of gases, plasmas, and electric discharges)
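
    The validity criterion discussed reduces to a comparison of the system's characteristic length with the particle mean free path; a trivial sketch of that check, with placeholder numbers, is given below.

        def single_fluid_ok(length_m, mean_free_path_m, threshold=10.0):
            # Single-fluid modelling is taken as adequate when L exceeds roughly 10 mean free paths.
            return length_m / mean_free_path_m >= threshold

        print(single_fluid_ok(1.0e-3, 5.0e-5))   # L/lambda = 20 -> True
        print(single_fluid_ok(1.0e-4, 5.0e-5))   # L/lambda = 2  -> False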

  8. ADVISHE: A new tool to report validation of health-economic decision models

    NARCIS (Netherlands)

    Vemer, P.; Corro Ramos, I.; Van Voorn, G.; Al, M.J.; Feenstra, T.L.

    2014-01-01

    Background: Modelers and reimbursement decision makers could both profit from a more systematic reporting of the efforts to validate health-economic (HE) models. Objectives: Development of a tool to systematically report validation efforts of HE decision models and their outcomes. Methods: A gross

  9. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Langrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code

  10. Validation of a Numerical Program for Analyzing Kinetic Energy Potential in the Bangka Strait, North Sulawesi, Indonesia

    Science.gov (United States)

    Rompas, P. T. D.; Taunaumang, H.; Sangari, F. J.

    2018-02-01

    The paper presents validation of a numerical program that computes the distribution of marine current velocities in the Bangka Strait and the kinetic energy potential, expressed as the distribution of available power per unit area. The numerical program uses a RANS model in which the vertical pressure distribution is assumed to be hydrostatic. The 2D and 3D numerical results are compared with measurements taken at moments of low and high tide currents, and no significant differences are found between the numerical and measured results. The kinetic energy potential, in the form of the distribution of available power per unit area in the Bangka Strait, is 0.97-2.2 kW/m2 during low tide currents and 1.02-2.1 kW/m2 during high tide currents. The results indicate that the installation of marine current turbines for a power plant in the Bangka Strait, North Sulawesi, Indonesia, is feasible.
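
    The quantity reported, available power per unit area, is commonly estimated from the current speed as P/A = 0.5*rho*v^3; the sketch below assumes a seawater density and a current speed rather than taking them from the RANS results.

        def power_density_kw_per_m2(velocity_m_s, rho=1025.0):
            # Kinetic energy flux per unit swept area of a marine current.
            return 0.5 * rho * velocity_m_s ** 3 / 1000.0

        print(power_density_kw_per_m2(1.6))   # about 2.1 kW/m2 at an assumed 1.6 m/s current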

  11. Validating a continental-scale groundwater diffuse pollution model using regional datasets.

    Science.gov (United States)

    Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik

    2017-12-11

    In this study, we assess the validity of an African-scale groundwater pollution model for nitrates. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate. The model was identified using a pan-African meta-analysis of available nitrate groundwater pollution studies. The model was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes, related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination by using a nitrate measurement dataset from three African countries. We discuss the issue of data availability, and quality and scale issues, as challenges in validation. Notwithstanding that the modelling procedure exhibited very good success using a continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of model exploratory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.
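
    A minimal sketch of the Random Forest cross-validation step described (13 spatial predictors, cross-validated R2) is shown below using synthetic data; it is not the African nitrate dataset or the published model.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 13))                                        # 13 synthetic spatial attributes
        y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)    # synthetic nitrate response

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print("cross-validated R2: %.2f +/- %.2f" % (scores.mean(), scores.std()))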

  12. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

    Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4-5.4 μm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3 %. The exponential-wide-band model showed a deviation of 6 %. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the

  13. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  14. Designing and Assessing the Validity and Reliability of the Hospital Readiness Assessment Tools to Conducting Quality Improvement Program

    Directory of Open Access Journals (Sweden)

    Kamal Gholipoor

    2016-09-01

    Full Text Available Background and objectives: Identifying a hospital's readiness, along with its strengths and weaknesses, can support appropriate planning, situation analysis, and management for effective clinical audit programs. The aim of this study was to design and assess the validity of a hospital readiness assessment tool for conducting quality improvement and clinical audit programs. Material and Methods: Based on the results of a systematic literature review, an initial questionnaire with 77 items was designed. Questionnaire content validity was reviewed by experts in the field of hospital management and quality improvement at Tabriz University of Medical Sciences; 20 questionnaires were sent to experts and 15 completed questionnaires were returned. Questionnaire validity was reviewed and confirmed based on the Content Validity Index and Content Validity Ratio. Questionnaire reliability was confirmed based on Cronbach's alpha (α = 0.96) in a pilot study with 30 hospital managers. Results: The final questionnaire contains 54 questions in nine categories: data and information (9 items), teamwork (12 questions), resources (5 questions), patient and education (5 questions), intervention design and implementation (5 questions), clinical audit management (4 questions), human resources (6 questions), evidence and standards (4 items), and evaluation and feedback (4 items). The final questionnaire content validity index was 0.91 and the Cronbach's alpha coefficient was 0.96. Conclusion: Considering the relatively good validity and reliability of the designed tool, the questionnaire can be used to identify and assess the readiness of hospitals for quality improvement and clinical audit program implementation
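
    The reliability statistic used here, Cronbach's alpha, can be computed directly from an item-response matrix as sketched below; the simulated responses are placeholders, not the study data.

        import numpy as np

        def cronbach_alpha(items):
            # items: respondents x items matrix of Likert scores.
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        rng = np.random.default_rng(1)
        base = rng.integers(1, 6, size=(30, 1))                               # shared trait per respondent
        responses = np.clip(base + rng.integers(-1, 2, size=(30, 10)), 1, 5)  # 30 respondents, 10 items
        print("alpha = %.2f" % cronbach_alpha(responses))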

  15. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    . There are however many advantages that could be harvested from such knowledge like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  16. Patient involvement in research programming and implementation: a responsive evaluation of the Dialogue Model for research agenda setting

    NARCIS (Netherlands)

    Abma, T.A.; Pittens, C.A.C.M.; Visse, M.; Elberse, J.E.; Broerse, J.E.W.

    2015-01-01

    Background: The Dialogue Model for research agenda-setting, involving multiple stakeholders including patients, was developed and validated in the Netherlands. However, there is little insight into whether and how patient involvement is sustained during the programming and implementation of research

  17. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    International Nuclear Information System (INIS)

    Apostolakis, J; Burkhardt, H; Ivanchenko, V N; Asai, M; Bagulya, A; Grichine, V; Brown, J M C; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Jacquemier, J; Guatelli, S; Incerti, S; Kadri, O; Maire, M; Urban, L; Pandola, L; Sawkey, D; Toshito, T; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed. (paper)

  18. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
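
    A conceptual sketch of the inverse probability weighting (IPW) step discussed, with synthetic data and an assumed logistic propensity model, is given below; it is not the simulation code used in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 3))                                # covariates (synthetic)
        treated = rng.random(1000) < 1.0 / (1.0 + np.exp(-X[:, 0]))   # treatment more likely at higher risk

        ps_model = LogisticRegression().fit(X, treated)               # propensity score model
        p_treated = ps_model.predict_proba(X)[:, 1]
        weights_untreated = 1.0 / (1.0 - p_treated[~treated])         # weights so the untreated represent everyone
        print("mean weight among untreated: %.2f" % weights_untreated.mean())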

  19. Developing and establishing the validity and reliability of the perceptions toward Aviation Safety Action Program (ASAP) and Line Operations Safety Audit (LOSA) questionnaires

    Science.gov (United States)

    Steckel, Richard J.

    Aviation Safety Action Program (ASAP) and Line Operations Safety Audits (LOSA) are voluntary safety reporting programs developed by the Federal Aviation Administration (FAA) to assist air carriers in discovering and fixing threats, errors and undesired aircraft states during normal flights that could result in a serious or fatal accident. These programs depend on voluntary participation of and reporting by air carrier pilots to be successful. The purpose of the study was to develop and validate a measurement scale to measure U.S. air carrier pilots' perceived benefits and/or barriers to participating in ASAP and LOSA programs. Data from these surveys could be used to make changes to or correct pilot misperceptions of these programs to improve participation and the flow of data. ASAP and LOSA a priori models were developed based on previous research in aviation and healthcare. Sixty thousand ASAP and LOSA paper surveys were sent to 60,000 current U.S. air carrier pilots selected at random from an FAA database of pilot certificates. Two thousand usable ASAP and 1,970 usable LOSA surveys were returned and analyzed using Confirmatory Factor Analysis. Analysis of the data using confirmatory factor analysis and model generation resulted in a five factor ASAP model (Ease of use, Value, Improve, Trust and Risk) and a five factor LOSA model (Value, Improve, Program Trust, Risk and Management Trust). ASAP and LOSA data were not normally distributed, so bootstrapping was used. While both final models exhibited acceptable fit with approximate fit indices, the exact fit hypothesis and the Bollen-Stine p value indicated possible model mis-specification for both ASAP and LOSA models.

  20. Modelling, simulation and validation of the industrial robot

    Directory of Open Access Journals (Sweden)

    Aleksandrov Slobodan Č.

    2014-01-01

    Full Text Available In this paper, a DH model of an industrial robot with an anthropomorphic configuration and five degrees of freedom, the Mitsubishi RV2AJ, is developed. The model is verified on the example robot Mitsubishi RV2AJ. The paper presents in detail the complete mathematical model of the robot and the programming parameters. On the basis of this model, simulation of robot motion from point to point is performed, as well as continuous movement along a pre-defined path. In addition, programming of the industrial robot identical to the simulation programs is carried out, and a comparative analysis of the real and simulated experiments is shown. In the final section, a detailed analysis of robot motion is described.
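
    The core of such a DH model is the per-joint homogeneous transformation chained into forward kinematics; a generic sketch follows, with placeholder DH parameters that are not the RV2AJ's actual values.

        import numpy as np

        def dh_matrix(theta, d, a, alpha):
            # Standard Denavit-Hartenberg homogeneous transformation for one joint.
            ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
            return np.array([[ct, -st * ca,  st * sa, a * ct],
                             [st,  ct * ca, -ct * sa, a * st],
                             [0.0,      sa,       ca,      d],
                             [0.0,     0.0,      0.0,    1.0]])

        def forward_kinematics(joint_angles, dh_table):
            # Chain the joint transformations to get the end-effector pose in the base frame.
            T = np.eye(4)
            for theta, (d, a, alpha) in zip(joint_angles, dh_table):
                T = T @ dh_matrix(theta, d, a, alpha)
            return T

        dh_table = [(0.30, 0.0, np.pi / 2), (0.0, 0.25, 0.0), (0.0, 0.16, 0.0)]   # placeholder link parameters
        print(forward_kinematics([0.1, -0.4, 0.6], dh_table))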

  1. Validation of dispersion model of RTARC-DSS based on ''KIT'' field experiments

    International Nuclear Information System (INIS)

    Duran, J.

    2000-01-01

    The aim of this study is to present the performance of the Gaussian dispersion model RTARC-DSS (Real Time Accident Release Consequences - Decision Support System) in the 'Kit' field experiments. The Model Validation Kit is a collection of three experimental data sets from the Kincaid, Copenhagen and Lillestrom experimental campaigns, supplemented by the Indianapolis campaign, accompanied by software for model evaluation. The validation of the model has been performed on the basis of the maximum arc-wise concentrations, using the Bootstrap resampling procedure and the variation of the model residuals. Validation was performed for short-range distances (about 1-10 km; maximum for the Kincaid data set, 50 km from the source). The model evaluation procedure and the amount of relative over- or under-prediction are discussed and compared with the model. (author)

  2. Validation of the Canadian atmospheric dispersion model for the CANDU reactor complex at Wolsong, Korea

    International Nuclear Information System (INIS)

    Klukas, M.H.; Davis, P.A.

    2000-01-01

    AECL is undertaking the validation of ADDAM, an atmospheric dispersion and dose code based on the Canadian Standards Association model CSA N288.2. The key component of the validation program involves comparison of model predicted and measured vertical and lateral dispersion parameters, effective release height and air concentrations. A wind tunnel study of the dispersion of exhaust gases from the CANDU complex at Wolsong, Korea provides test data for dispersion over uniform and complex terrain. The test data are for distances close enough to the release points to evaluate the model for exclusion area boundaries (EAB) as small as 500 m. Lateral and vertical dispersion is described well for releases over uniform terrain but the model tends to over-predict these parameters for complex terrain. Both plume rise and entrainment are modelled conservatively and the way they are combined in the model produces conservative estimates of the effective release height for low and high wind speeds. Estimates for the medium wind speed case (50-m wind speed, 3.8 m s⁻¹) are conservative when the correction for entrainment is made. For the highest ground-level concentrations, those of greatest interest in a safety analysis, 82% of the predictions were within a factor of 2 of the observed values. The model can be used with confidence to predict air concentrations of exhaust gases at the Wolsong site for neutral conditions, even for flows over the hills to the west, and is unlikely to substantially under-predict concentrations. (author)
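
    Models of the CSA N288.2 family are built on the Gaussian plume expression; the ground-level, centre-line form (with ground reflection) is sketched below, with assumed dispersion parameters and effective release height rather than values from the Wolsong study.

        import numpy as np

        def ground_level_concentration(Q, u, sigma_y, sigma_z, H):
            # C(x, 0, 0) for a continuous point release of strength Q (g/s), wind speed u (m/s),
            # dispersion parameters sigma_y and sigma_z (m), and effective release height H (m);
            # ground reflection is included.
            return Q / (np.pi * sigma_y * sigma_z * u) * np.exp(-H ** 2 / (2.0 * sigma_z ** 2))

        print(ground_level_concentration(Q=1.0, u=3.8, sigma_y=35.0, sigma_z=18.0, H=60.0))   # g/m3, assumed inputs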

  3. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, then the simulation is deemed ''not invalid''. If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine whether they indicate a modeling error or a modeling inadequacy, and whether a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model, and defined the limits of application. The tested simulation model is found to be acceptable but valid only in a certain dynamical range. Several insights for the deficiencies of the model are reported in the analysis but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time- and cost-efficient simulation projects with

  4. The validation of evacuation simulation models through the analysis of behavioural uncertainty

    International Nuclear Information System (INIS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Borri, Dino

    2014-01-01

    Both experimental and simulation data on fire evacuation are influenced by a component of uncertainty caused by the impact of the unexplained variance in human behaviour, namely behavioural uncertainty (BU). Evacuation model validation studies should include the study of this type of uncertainty during the comparison of experiments and simulation results. An evacuation model validation procedure is introduced in this paper to study the impact of BU. This methodology is presented through a case study for the comparison between repeated experimental data and simulation results produced by FDS+Evac, an evacuation model for the simulation of human behaviour in fire, which makes use of distribution laws. - Highlights: • Validation of evacuation models is investigated. • Quantitative evaluation of behavioural uncertainty is performed. • A validation procedure is presented through an evacuation case study

  5. Construct validity of the ovine model in endoscopic sinus surgery training.

    Science.gov (United States)

    Awad, Zaid; Taghi, Ali; Sethukumar, Priya; Tolley, Neil S

    2015-03-01

    To demonstrate construct validity of the ovine model as a tool for training in endoscopic sinus surgery (ESS). Prospective, cross-sectional evaluation study. Over 18 consecutive months, trainees and experts were evaluated in their ability to perform a range of tasks (based on previous face validation and descriptive studies conducted by the same group) relating to ESS on the sheep-head model. Anonymized randomized video recordings of the above were assessed by two independent and blinded assessors. A validated assessment tool utilizing a five-point Likert scale was employed. Construct validity was calculated by comparing scores across training levels and experts using mean and interquartile range of global and task-specific scores. Subgroup analysis of the intermediate group ascertained previous experience. Nonparametric descriptive statistics were used, and analysis was carried out using SPSS version 21 (IBM, Armonk, NY). Reliability of the assessment tool was confirmed. The model discriminated well between different levels of expertise in global and task-specific scores. A positive correlation was noted between year in training and both global and task-specific scores (P variable, and the number of ESS procedures performed under supervision had the highest impact on performance. This study describes an alternative model for ESS training and assessment. It is also the first to demonstrate construct validity of the sheep-head model for ESS training. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Social Validity of the Social Skills Improvement System--Classwide Intervention Program (SSIS-CIP) in the Primary Grades

    Science.gov (United States)

    Wollersheim Shervey, Sarah; Sandilos, Lia E.; DiPerna, James C.; Lei, Pui-Wa

    2017-01-01

    The purpose of this study was to examine the social validity of the Social Skills Improvement System--Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first and second grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings…

  7. Validating a Technology Enhanced Student-Centered Learning Model

    Science.gov (United States)

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  8. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  9. EPRI MOV performance prediction program

    International Nuclear Information System (INIS)

    Hosler, J.F.; Damerell, P.S.; Eidson, M.G.; Estep, N.E.

    1994-01-01

    An overview of the EPRI Motor-Operated Valve (MOV) Performance Prediction Program is presented. The objectives of this Program are to better understand the factors affecting the performance of MOVs and to develop and validate methodologies to predict MOV performance. The Program involves valve analytical modeling, separate-effects testing to refine the models, and flow-loop and in-plant MOV testing to provide a basis for model validation. The ultimate product of the Program is an MOV Performance Prediction Methodology applicable to common gate, globe, and butterfly valves. The methodology predicts thrust and torque requirements at design-basis flow and differential pressure conditions, assesses the potential for gate valve internal damage, and provides test methods to quantify potential variations in actuator output thrust with loading condition. Key findings and their potential impact on MOV design and engineering application are summarized

  10. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  11. Systematic validation of non-equilibrium thermochemical models using Bayesian inference

    KAUST Repository

    Miki, Kenji

    2015-10-01

    © 2015 Elsevier Inc. The validation process proposed by Babuška et al. [1] is applied to thermochemical models describing post-shock flow conditions. In this validation approach, experimental data is involved only in the calibration of the models, and the decision process is based on quantities of interest (QoIs) predicted on scenarios that are not necessarily amenable experimentally. Moreover, uncertainties present in the experimental data, as well as those resulting from an incomplete physical model description, are propagated to the QoIs. We investigate four commonly used thermochemical models: a one-temperature model (which assumes thermal equilibrium among all inner modes), and two-temperature models developed by Macheret et al. [2], Marrone and Treanor [3], and Park [4]. Up to 16 uncertain parameters are estimated using Bayesian updating based on the latest absolute volumetric radiance data collected at the Electric Arc Shock Tube (EAST) installed inside the NASA Ames Research Center. Following the solution of the inverse problems, the forward problems are solved in order to predict the radiative heat flux (the QoI) and examine the validity of these models. Our results show that all four models are invalid, but for different reasons: the one-temperature model simply fails to reproduce the data while the two-temperature models exhibit unacceptably large uncertainties in the QoI predictions.
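
    The calibrate-then-predict workflow described above can be sketched compactly. In the following Python fragment (a minimal sketch: the exponential-decay "model", its two parameters, and the synthetic "radiance" data are all hypothetical stand-ins for the thermochemical codes and EAST measurements), a random-walk Metropolis sampler performs the Bayesian updating, and the posterior is then pushed forward to a QoI defined on a scenario that was never observed:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "experimental" data: radiance-like observable y = a * exp(-b*x) + noise
        x = np.linspace(0.0, 2.0, 20)
        true_a, true_b, sigma = 3.0, 1.2, 0.05
        y = true_a * np.exp(-true_b * x) + rng.normal(0.0, sigma, x.size)

        def log_post(theta):
            a, b = theta
            if a <= 0 or b <= 0:          # flat priors on the positive axis
                return -np.inf
            resid = y - a * np.exp(-b * x)
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Random-walk Metropolis over the two uncertain parameters
        theta, lp, samples = np.array([1.0, 1.0]), None, []
        lp = log_post(theta)
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.05, 2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        samples = np.array(samples[5000:])        # discard burn-in

        # Push the posterior through to a QoI on an *unobserved* scenario,
        # e.g. the integrated signal over a longer domain (heat-flux analogue)
        x_qoi = np.linspace(0.0, 5.0, 200)
        dx = x_qoi[1] - x_qoi[0]
        vals = samples[:, 0, None] * np.exp(-samples[:, 1, None] * x_qoi)
        qoi = vals.sum(axis=1) * dx               # crude quadrature per posterior draw
        print(f"QoI mean = {qoi.mean():.3f}, 95% CI = "
              f"({np.percentile(qoi, 2.5):.3f}, {np.percentile(qoi, 97.5):.3f})")

    An unacceptably wide or biased QoI interval is then grounds for declaring a model invalid for the prediction task, which is the sense in which the two-temperature models above fail.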

  12. Stochastic modeling of oligodendrocyte generation in cell culture: model validation with time-lapse data

    Directory of Open Access Journals (Sweden)

    Noble Mark

    2006-05-01

    Abstract Background The purpose of this paper is two-fold. The first objective is to validate the assumptions behind a stochastic model developed earlier by these authors to describe oligodendrocyte generation in cell culture. The second is to generate time-lapse data that may help biomathematicians to build stochastic models of cell proliferation and differentiation under other experimental scenarios. Results Using time-lapse video recording it is possible to follow the individual evolutions of different cells within each clone. This experimental technique is very laborious and cannot replace model-based quantitative inference from clonal data. However, it is unrivalled in validating the structure of a stochastic model intended to describe cell proliferation and differentiation at the clonal level. In this paper, such data are reported and analyzed for oligodendrocyte precursor cells cultured in vitro. Conclusion The results strongly support the validity of the most basic assumptions underpinning the previously proposed model of oligodendrocyte development in cell culture. However, there are some discrepancies; the most important is that the contribution of progenitor cell death to cell kinetics in this experimental system has been underestimated.

  13. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

    In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, i.e., the ERAD code. Several candidate surrogate materials were selected for the experiment considering their chemical and physical properties. Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons. Therefore, the initial effort for ER process optimization is made by using computer models. A number of models have been developed for this purpose. But as validation of these models is incomplete and often problematic, the simulation results from these models are inherently uncertain.

  14. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Program and project evaluation models can be extremely useful in project planning and management. The aim is to set the right questions as soon as possible in order to see in time and deal with the unwanted program effects, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of the interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  15. Validation Hydrodynamic Models of Three Topological Models of Secondary Facultative Ponds

    OpenAIRE

    Aponte-Reyes Alxander

    2014-01-01

    A methodology was developed to analyze boundary conditions, mesh size and turbulence in a mathematical CFD model intended to explain the hydrodynamic behavior of facultative stabilization ponds, FSP, built at pilot scale: conventional pond, CP, baffled pond, BP, and baffled-mesh pond, BMP. Dispersion studies were performed in the field for model validation, taking samples into and out of the FSP, and the information was used to carry out CFD model simulations of the three topologies. ...

  16. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user's perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... using a simple loudspeaker setup, consisting of only two loudspeakers, one for the target sound source and the other for the interfering sound source. Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validations was...... conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within the sound-zone system. Thus, validating the model using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. Preliminary results show......

  17. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
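
    The IPW correction can be illustrated numerically. The sketch below (Python, fully synthetic data; the single risk factor, effect sizes, and variable names are illustrative assumptions) reweights untreated subjects by the inverse probability of remaining untreated before computing calibration-in-the-large; a weighted c-index could be computed from the same weights:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 2000

        # Hypothetical validation set: one risk factor z, model-predicted untreated risk p_hat
        z = rng.normal(size=n)
        p_true = 1 / (1 + np.exp(-(-1.0 + z)))          # true untreated risk
        p_hat = 1 / (1 + np.exp(-(-1.1 + 0.9 * z)))     # the prognostic model being validated

        # Treatment is more likely in high-risk patients and halves the odds of the event
        treated = rng.uniform(size=n) < 1 / (1 + np.exp(-(z - 0.5)))
        odds = p_true / (1 - p_true) * np.where(treated, 0.5, 1.0)
        y = rng.uniform(size=n) < odds / (1 + odds)

        # IPW: weight untreated subjects by 1 / P(untreated | z), then drop the treated
        ps_model = LogisticRegression().fit(z.reshape(-1, 1), treated)
        p_treat = ps_model.predict_proba(z.reshape(-1, 1))[:, 1]
        keep = ~treated
        w = 1.0 / (1.0 - p_treat[keep])

        # Weighted observed:expected ratio (calibration-in-the-large)
        oe = np.sum(w * y[keep]) / np.sum(w * p_hat[keep])
        print(f"IPW observed:expected = {oe:.3f}")   # ~1.0 if the model is well calibrated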

  18. Paleoclimate validation of a numerical climate model

    International Nuclear Information System (INIS)

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-01-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long term future climate change on performance of the potential high level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented

  19. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 [mg kg -1 ] TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  20. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

    Many authors have studied modeling of the adult larynx, but the mechanisms of the newborn's voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent® with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  1. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and

  2. Validation of the computational model ALDERSON/EGSnrc for chest radiography

    International Nuclear Information System (INIS)

    Muniz, Bianca C.; Santos, André L. dos; Menezes, Claudio J.M.

    2017-01-01

    To perform dose studies for situations of exposure to radiation without exposing individuals, numerical dosimetry uses Computational Exposure Models (ECMs). Composed essentially of an algorithm simulating the radioactive source, a voxel phantom representing the human anatomy, and a Monte Carlo code, ECMs must be validated to determine the reliability of their representation of the physical arrangement. The objective of this work is to validate the ALDERSON/EGSnrc ECM through comparisons between the experimental measurements obtained with an ionization chamber and virtual simulations using the Monte Carlo method to determine the ratio of the input and output radiation dose. Preliminary results of these comparisons showed that the ECM reproduced the results of the experimental measurements performed with the physical phantom with a relative error of less than 10%, validating the use of this model for simulations of chest radiographs and estimates of radiation doses in tissues in the irradiated structures

  3. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  4. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  5. Model Validation and Verification of Data Mining from the ...

    African Journals Online (AJOL)

    Michael Horsfall

    In this paper, we seek to present a hybrid method for Model Validation and Verification of Data Mining from the ... This model generally states the numerical value of knowledge .... procedures found in the field of software engineering should be ...

  6. How to enhance the future use of energy policy simulation models through ex post validation

    International Nuclear Information System (INIS)

    Qudrat-Ullah, Hassan

    2017-01-01

    Although simulation and modeling in general, and system dynamics models in particular, have long served the energy policy domain, ex post validation of these energy policy models is rarely addressed. In fact, ex post validation is a valuable area of research because it offers modelers a chance to enhance the future use of their simulation models by validating them against the field data. This paper contributes by presenting (i) a system dynamics simulation model, which was developed and used to do a three-dimensional, socio-economic and environmental long-term assessment of Pakistan's energy policy in 1999, and (ii) a systematic analysis of the 15-year-old predictive scenarios produced by this system dynamics simulation model through ex post validation. How did the model predictions compare with the actual data? We report that the ongoing crisis of the electricity sector of Pakistan is unfolding as the model-based scenarios had projected. - Highlights: • Argues that increased use of energy policy models depends on validation of their credibility. • An ex post validation process is presented as a solution to build confidence in models. • A unique system dynamics model, MDESRAP, is presented. • The root mean square percentage error and Theil's inequality statistic are applied. • The dynamic model, MDESRAP, is presented as an ex ante and ex post validated model.
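
    Both ex post error statistics named in the highlights are simple to compute. A Python sketch (the observed and simulated series are illustrative placeholders, not MDESRAP output):

        import numpy as np

        def rmspe(actual, predicted):
            """Root mean square percentage error between observed series and model output."""
            actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
            return 100.0 * np.sqrt(np.mean(((actual - predicted) / actual) ** 2))

        def theil_u(actual, predicted):
            """Theil's inequality coefficient; 0 indicates perfect agreement, 1 the worst case."""
            actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
            num = np.sqrt(np.mean((predicted - actual) ** 2))
            den = np.sqrt(np.mean(predicted ** 2)) + np.sqrt(np.mean(actual ** 2))
            return num / den

        # Hypothetical illustration: decade-old scenario output vs. field data (GWh)
        observed  = np.array([62.1, 65.8, 70.3, 74.9, 80.2])
        simulated = np.array([60.5, 66.9, 71.0, 76.4, 82.0])
        print(f"RMSPE = {rmspe(observed, simulated):.2f}%  "
              f"Theil's U = {theil_u(observed, simulated):.3f}")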

  7. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
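
    In the same spirit as the tutorial, a minimal calibrate-then-validate loop for a deterministic Gompertz curve fits early-time data and checks predictions on held-out later times. The Python sketch below uses synthetic data and least squares in place of the paper's Bayesian machinery; all parameter values are illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, v0, k, a):
            """V(t) = K * exp(ln(V0/K) * exp(-a t)), the deterministic Gompertz growth curve."""
            return k * np.exp(np.log(v0 / k) * np.exp(-a * t))

        rng = np.random.default_rng(2)
        t = np.linspace(0, 20, 30)
        truth = gompertz(t, 0.05, 2.0, 0.35)
        data = truth * (1 + rng.normal(0, 0.05, t.size))   # multiplicative measurement error

        # Calibrate on early times, validate on held-out later times
        train, test = t < 10, t >= 10
        popt, pcov = curve_fit(gompertz, t[train], data[train], p0=[0.1, 1.0, 0.2])

        pred = gompertz(t[test], *popt)
        rel_err = np.abs(pred - data[test]) / data[test]
        print(f"fitted (V0, K, a) = {np.round(popt, 3)}, "
              f"max relative error on validation set = {rel_err.max():.2%}")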

  8. Validation of a model for calculating environmental doses caused by gamma emitters in the soil

    International Nuclear Information System (INIS)

    Ortega, X.; Rosell, J.R.; Dies, X.

    1991-01-01

    A model has been developed to calculate the absorbed dose rates caused by gamma emitters of both natural and artificial origin distributed in the soil. The model divides the soil into five compartments corresponding to layers situated at different depths, and assumes that the concentration of radionuclides is constant in each one of them. The calculations are implemented in a program which, given the concentrations of the radionuclides in the different compartments, outputs the dose rate at a height of one metre above the ground caused by each radionuclide and the percentage this represents of the total absorbed dose rate originating from the soil. The validity of the model has been checked in the case of sandy soils by comparing the exposure rates calculated for five sites with the experimental values obtained with an ionisation chamber. (author)
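
    The compartment structure amounts to a concentration-weighted sum over depth layers. A schematic Python sketch (the layer conversion factors and concentrations below are illustrative placeholders, not the published coefficients):

        import numpy as np

        # Dose rate at 1 m as a sum over five soil layers; each depth-dependent
        # conversion factor bundles attenuation and geometry for one nuclide
        conv = np.array([5.0e-1, 2.0e-1, 8.0e-2, 3.0e-2, 1.0e-2])   # (nGy/h) per (Bq/kg)
        conc = np.array([40.0, 35.0, 30.0, 12.0, 5.0])              # Bq/kg per layer

        per_layer = conv * conc
        total = per_layer.sum()
        for i, d in enumerate(per_layer, 1):
            print(f"layer {i}: {d:6.2f} nGy/h ({100 * d / total:4.1f} % of total)")
        print(f"total dose rate at 1 m: {total:.2f} nGy/h")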

  9. The Spiral-Interactive Program Evaluation Model.

    Science.gov (United States)

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  10. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examine evolved models and pick the best-performing programs out for further analysis.
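
    The pre-processing ingredient is a plain moving-average filter followed by construction of lagged inputs for the downstream GP model. A Python sketch with synthetic streamflow (window length, lag structure, and split ratio are illustrative choices):

        import numpy as np

        def moving_average(x, window=3):
            """Simple moving-average filter used to pre-process the streamflow series."""
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="valid")

        # Hypothetical daily streamflow record (m^3/s)
        rng = np.random.default_rng(3)
        q = 10 + np.cumsum(rng.normal(0, 0.5, 400))

        q_smooth = moving_average(q, window=3)

        # Build lagged inputs Q(t-1), Q(t-2) -> target Q(t) for the downstream GP model
        X = np.column_stack([q_smooth[1:-1], q_smooth[:-2]])
        y = q_smooth[2:]
        split = int(0.7 * len(y))                # hold out the tail for validation
        X_train, y_train = X[:split], y[:split]
        X_valid, y_valid = X[split:], y[split:]
        print(X_train.shape, X_valid.shape)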

  11. Predicting the ungauged basin : Model validation and realism assessment

    NARCIS (Netherlands)

    Van Emmerik, T.H.M.; Mulder, G.; Eilander, D.; Piet, M.; Savenije, H.H.G.

    2015-01-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of

  12. Predicting the ungauged basin: model validation and realism assessment

    NARCIS (Netherlands)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-01-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of

  13. Validation of Inhibition Effect in the Cellulose Hydrolysis: a Dynamic Modelling Approach

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Tsai, Chien-Tai; Meyer, Anne S.

    2011-01-01

    Enzymatic hydrolysis is one of the main steps in the processing of bioethanol from lignocellulosic raw materials. However, complete understanding of the underlying phenomena is still under development. Hence, this study has focused on validation of the inhibition effects in the cellulosic biomass...... for parameter estimation (calibration) and validation purposes. The model predictions using calibrated parameters have shown good agreement with the validation data sets, which provides credibility to the model structure and the parameter values....

  14. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    Science.gov (United States)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of porosity distribution using oilfield data is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to the experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation testing for validation of the fitted theoretical model. All the results obtained by the analysis, as well as graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps, and the numerical values of any point in a map can be inspected using the mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
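
    The two core computations JeoStat automates, the experimental semivariogram and a fitted theoretical model, can be sketched in Python as follows (synthetic porosity samples; the spherical model is one of the seven listed above, and the hand-picked nugget, sill and range are illustrative):

        import numpy as np

        def experimental_semivariogram(coords, values, lag, n_lags):
            """Isotropic experimental semivariogram: mean of 0.5*(z_i - z_j)^2 per lag bin."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            gsq = 0.5 * (values[:, None] - values[None, :]) ** 2
            i, j = np.triu_indices(len(values), k=1)
            h, g = d[i, j], gsq[i, j]
            centers, gammas = [], []
            for b in range(n_lags):
                m = (h >= b * lag) & (h < (b + 1) * lag)
                if m.any():
                    centers.append(h[m].mean())
                    gammas.append(g[m].mean())
            return np.array(centers), np.array(gammas)

        def spherical(h, nugget, sill, rng_):
            """Spherical model: nugget plus partial sill rising to the range, then flat."""
            h = np.asarray(h, float)
            s = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
            return np.where(h < rng_, s, sill)

        # Hypothetical porosity samples on a 2-D field
        rng = np.random.default_rng(4)
        coords = rng.uniform(0, 1000, (150, 2))
        poro = 0.18 + 0.03 * np.sin(coords[:, 0] / 300) + rng.normal(0, 0.01, 150)

        h, gamma = experimental_semivariogram(coords, poro, lag=100.0, n_lags=8)
        print(np.round(h), np.round(gamma, 5))
        # Evaluate a hand-fitted spherical model at the same lags for visual comparison
        print(np.round(spherical(h, nugget=1e-4, sill=4e-4, rng_=500.0), 5))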

  15. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  16. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
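
    The proposed parametric graphical comparison is straightforward to set up. In the Python/matplotlib sketch below (synthetic rupture data and an illustrative linear model, not the PD6605 formulation), plotting model against data in both rupture stress and rupture-stress gradient exposes divergence that a single goodness-of-fit statistic can hide:

        import numpy as np
        import matplotlib.pyplot as plt

        # Hypothetical creep-rupture data: log(time-to-rupture) vs stress at one temperature
        stress = np.linspace(100, 300, 12)                     # MPa
        log_t = 9.0 - 0.02 * stress + np.random.default_rng(5).normal(0, 0.05, 12)

        # Candidate model fitted elsewhere: log t_r = c0 + c1 * stress
        c0, c1 = 9.02, -0.0201

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
        ax1.plot(stress, log_t, "o", label="rupture data")
        ax1.plot(stress, c0 + c1 * stress, "-", label="model")
        ax1.set(xlabel="stress (MPa)", ylabel="log10 rupture time")
        ax1.legend()

        # Gradient comparison: numerical gradient of the data vs the model's slope
        ax2.plot(stress[:-1], np.diff(log_t) / np.diff(stress), "o", label="data gradient")
        ax2.axhline(c1, color="k", label="model gradient")
        ax2.set(xlabel="stress (MPa)", ylabel="d log t / d stress")
        ax2.legend()
        plt.tight_layout()
        plt.show()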

  17. Validation of a two-fluid model used for the simulation of dense fluidized beds

    Energy Technology Data Exchange (ETDEWEB)

    Boelle, A.

    1997-02-17

    A two-fluid model applied to the simulation of gas-solid dense fluidized beds is validated on the micro scale and on the macro scale. Phase coupling is carried out in the momentum and energy transport equations of both phases. The modeling is built on the kinetic theory of granular media, in which the gas action has been taken into account in order to get correct expressions for the transport coefficients. A description of hydrodynamic interactions between particles in high-Stokes-number flow is also incorporated in the model. The micro-scale validation uses Lagrangian numerical simulations viewed as numerical experiments. The first validation case refers to a gas-particle simple shear flow. It allows validation of the competition between two dissipation mechanisms: drag and particle collisions. The second validation case is concerned with sedimenting particles in high-Stokes-number flow. It allows validation of our approach to hydrodynamic interactions. This last case led us to develop an original Lagrangian simulation with two-way coupling between the fluid and the particles. The macro-scale validation uses the results of Eulerian simulations of a dense fluidized bed. Bed height, particle circulation and the characteristics of spontaneously created bubbles are studied and compared to experimental measurements, with respect to both physical and numerical parameters. (author) 159 refs.

  18. Basic Modelling principles and Validation of Software for Prediction of Collision Damage

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    2000-01-01

    This report describes basic modelling principles, the theoretical background and validation examples for the collision damage prediction module in the ISESO stand-alone software.

  19. Gap Conductance model Validation in the TASS/SMR-S code using MARS code

    International Nuclear Information System (INIS)

    Ahn, Sang Jun; Yang, Soo Hyung; Chung, Young Jong; Lee, Won Jae

    2010-01-01

    Korea Atomic Energy Research Institute (KAERI) has been developing the TASS/SMR-S (Transient and Setpoint Simulation/Small and Medium Reactor) code, a thermal-hydraulic code for the safety analysis of the advanced integral reactor. Appropriate work to validate the applicability of the thermal-hydraulic models within the code is therefore required. Among these models, the gap conductance model, which describes the thermal conductance of the gap between fuel and cladding, was validated through comparison with the MARS code. The validation of the gap conductance model was performed by evaluating the variation of the gap temperature and gap width as they changed with the power fraction. In this paper, a brief description of the gap conductance model in the TASS/SMR-S code is presented. In addition, calculated results to validate the gap conductance model are demonstrated by comparing them with the results of the MARS code for the test case

  20. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling Bjoern; Lindgren, Maria

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and that the model is fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport is made to cover the associated uncertainties. Thus, the flow and transport including radioactive chain decay are preferably calculated in the same model framework. A rather sophisticated concept is necessary to be able to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-based simplifications. For this reason, the far field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among the advantages are that it is a fast, simple and robust code, which enables handling of many realisations with wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that rock can be divided into two distinct domains with different types of porosity

  1. Validation Assessment of a Glass-to-Metal Seal Finite-Element Model

    Energy Technology Data Exchange (ETDEWEB)

    Jamison, Ryan Dale [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Emery, John M [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stavig, Mark E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Newton, Clay S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brown, Arthur [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    Sealing glasses are ubiquitous in high pressure and temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, where a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress based on the evolution of material models are shown. These model predictions are compared to measured data. Validity of the finite-element predictions is discussed. It will be shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.

  2. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  3. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample of (n=759) permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  4. Evaluation factors for verification and validation of low-level waste disposal site models

    International Nuclear Information System (INIS)

    Moran, M.S.; Mezga, L.J.

    1982-01-01

    The purpose of this paper is to identify general evaluation factors to be used to verify and validate LLW disposal site performance models in order to assess their site-specific applicability and to determine their accuracy and sensitivity. It is intended that the information contained in this paper be employed by model users involved with LLW site performance model verification and validation. It should not be construed as providing protocols, but rather as providing a framework for the preparation of specific protocols or procedures. A brief description of each evaluation factor is provided. The factors have been categorized according to recommended use during either the model verification or the model validation process. The general responsibilities of the developer and user are provided. In many cases it is difficult to separate the responsibilities of the developer and user, but the user is ultimately accountable for both verification and validation processes. 4 refs

  5. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  6. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  7. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
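
    The envelope idea can be mimicked on a simple observable process. In the Python sketch below, an AR(1) series stands in for the partially observed system, a kernel estimate approximates the transition density near one conditioning point, and a pointwise bootstrap band replaces the paper's simultaneous confidence envelope (all simplifying assumptions):

        import numpy as np

        rng = np.random.default_rng(6)

        # Observable series from a hypothetical model: AR(1) proxy for the HMM output
        phi, sig = 0.6, 1.0
        x = np.zeros(5000)
        for t in range(1, x.size):
            x[t] = phi * x[t - 1] + sig * rng.normal()

        # Kernel estimate of the transition density f(y | x_t ~ x0) on a grid
        x0, hx, hy = 0.5, 0.25, 0.25
        succ = x[1:][np.abs(x[:-1] - x0) < hx]     # successors of states near x0
        grid = np.linspace(-3, 4, 200)

        def kde(points):
            """Gaussian kernel density estimate of the successors on the grid."""
            u = (grid[:, None] - points[None, :]) / hy
            return np.mean(np.exp(-0.5 * u ** 2), axis=1) / (hy * np.sqrt(2 * np.pi))

        # Pointwise bootstrap envelope for the nonparametric estimate
        boot = [kde(rng.choice(succ, succ.size, replace=True)) for _ in range(200)]
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

        # Parametric transition density implied by the fitted model: N(phi*x0, sig^2)
        f_par = np.exp(-0.5 * ((grid - phi * x0) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        inside = np.all((f_par >= lo) & (f_par <= hi))
        print("parametric density inside envelope everywhere:", bool(inside))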

  8. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the issue of application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. Regarding this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer. The Chief Security Officer, in doing so, strives to enhance the updating of technologies, in order to meet all-inclusive business continuity planning policies. Therefore, for such a policy to be effective, it has to be entirely embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of metropolitan São Paulo. This sampling was based on the representativeness of these industries, particularly with regard to each one's implementation of information technology in the region. The current study concludes by presenting evidence of the discriminant validity of four key dimensions of the security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items attests not only that the capacity of those industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  9. Systematic Model for Validating Equipment Uses in Selected Marketing and Distribution Education Programs. Final Report, February 1, 1980-June 30, 1981.

    Science.gov (United States)

    Gildan, Kate; Buckner, Leroy

    Research was conducted to provide a model for selecting equipment for marketing and distributive education programs that is required for the development of the skills or competencies needed to perform in marketing and distribution occupations. A search of the literature identified both competency statements for three program areas--Fashion…

  10. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and, therefore, is not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, drawbacks associated with this approach are usually underestimated. We have carried out simulations by using a recently published High Osmolarity Glycerol (HOG) pathway from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
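
    A minimal SRCV-style loop can be written with scikit-learn's StratifiedKFold, using experimental conditions as strata so that every fold sees every condition. In this Python sketch a crude polynomial fit stands in for the ODE parameter estimation, and the data are synthetic:

        import numpy as np
        from sklearn.model_selection import StratifiedKFold

        rng = np.random.default_rng(7)

        # Hypothetical time-course data: each point belongs to an experimental condition
        n = 120
        strata = rng.integers(0, 4, n)          # e.g., 4 osmotic-shock levels
        t = rng.uniform(0, 10, n)
        y = np.sin(t) + 0.1 * strata + rng.normal(0, 0.1, n)

        # Stratified random CV: every fold contains points from every condition,
        # unlike a hold-out split that might exclude a condition entirely
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        errors = []
        for train_idx, test_idx in skf.split(t.reshape(-1, 1), strata):
            # stand-in for ODE parameter estimation: fit a crude polynomial per split
            coef = np.polyfit(t[train_idx], y[train_idx], deg=5)
            pred = np.polyval(coef, t[test_idx])
            errors.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
        print("per-fold RMSE:", np.round(errors, 3))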

  11. Electron Temperature Fluctuation Measurements and Transport Model Validation at Alcator C-Mod

    Energy Technology Data Exchange (ETDEWEB)

    White, Anne [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-22

    testing and validating predictive models for the transport of heat and particles in fusion plasmas due to turbulence. Once validated, the models are used to predict performance in ITER and other burning plasmas, such as the MIT ARC design. Most recently, data from the newly developed, so-called “CECE diagnostic” [Cima 1995, White 2008] and “nT phase angle measurements” [Haese 1999, White 2010] will be combined with data from density fluctuation diagnostics at ASDEX Upgrade to support a long-term program of physics research in turbulence and transport that will allow for more stringent testing and validation of gyrokinetic turbulent-transport codes. This work directly impacts the development of predictive transport models in the U.S. FES program, such as TGLF, developed by General Atomics, which are used to predict performance in ITER and other burning plasma devices as part of advancing the development of fusion energy sciences.

  12. Improved atmospheric dispersion modelling in the new program system UFOMOD for accident consequence assessments

    International Nuclear Information System (INIS)

    Panitz, H.J.

    1988-01-01

    An essential aim of the improvements in the new program system UFOMOD for Accident Consequence Assessments (ACAs) was to substitute the straight-line Gaussian plume model conventionally used in ACA models by more realistic atmospheric dispersion models. To identify improved models which can be applied in ACA codes and to quantify the implications of different concepts of dispersion modelling on the results of an ACA, probabilistic comparative calculations with different atmospheric dispersion models have been carried out. The study showed that there are trajectory models available which can be applied in ACAs and that these trajectory models provide more realistic results of ACAs than straight-line Gaussian models. This led to a completely novel concept of atmospheric dispersion modelling which distinguishes between two different distance ranges of validity: the near range (< 50 km) and the far range (> 50 km). The two ranges are assigned to respective trajectory models

  13. Experimental program for physics-of-failure modeling of electrolytic capacitors towards prognostics and health management

    International Nuclear Information System (INIS)

    Rana, Y.S.; Banerjee, Shantanab; Singh, Tej; Varde, P.V.

    2017-01-01

    Prognostics and Health Management (PHM) is a method used for predicting reliability of a component or system by assessing its current health and future operating conditions. A physics-of-failure (PoF)-based program on PHM for reliability prediction has been initiated at our institute. As part of the program, we aim at developing PoF-based models for degradation of electronic components and their experimental validation. In this direction, a database on existing PoF models for different electronic components has been prepared. We plan to experimentally determine the model constants and propose suitable methodology for PHM. Electrolytic capacitors are one of the most common passive components which find their applications in devices such as power supplies in aircrafts and printed circuit boards (PCBs) for regulation and protection of a nuclear reactor. Experimental studies have established that electrolytic capacitors degrade under electrical and thermal stress and tend to fail before their anticipated useful life at normal operating conditions. Equivalent series resistance (ESR) and capacitance (C) are the two main parameters used for monitoring health of such capacitors. In this paper, we present an experimental program for thermal and electrical overstress studies towards degradation models for electrolytic capacitors. (author)

  14. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  15. Validation of the Continuum of Care Conceptual Model for Athletic Therapy

    Directory of Open Access Journals (Sweden)

    Mark R. Lafave

    2015-01-01

    Utilization of conceptual models in field-based emergency care currently borrows from existing standards of the medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events, including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury spanning to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the AT profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached the a priori 80% consensus on three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline.

  16. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance
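    The abstract breaks off while quoting the probability of at least 26 successes in 33 trials under the hypothesis of no difference between the weighting schemes. Assuming the natural fair-coin null (p = 0.5), that binomial tail probability can be checked directly; the snippet below illustrates the test and is not code from the study.

        from math import comb

        # P(X >= 26) for X ~ Binomial(n=33, p=0.5): the chance of performance
        # weighting beating equal weighting in at least 26 of 33 studies if the
        # two schemes were in fact equally good.
        n = 33
        p_tail = sum(comb(n, k) for k in range(26, n + 1)) / 2**n
        print(f"P(X >= 26 | n=33, p=0.5) = {p_tail:.2e}")   # ~6.6e-04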

  17. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate and analyze variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance for assessing the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  18. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a

  19. Animal models of binge drinking, current challenges to improve face validity.

    Science.gov (United States)

    Jeanblanc, Jérôme; Rolland, Benjamin; Gierski, Fabien; Martinetti, Margaret P; Naassila, Mickael

    2018-05-05

    Binge drinking (BD), i.e., consuming a large amount of alcohol in a short period of time, is an increasing public health issue. Though no clear definition has been adopted worldwide, the speed of drinking seems to be a keystone of this behavior. Developing relevant animal models of BD is a priority for gaining a better characterization of the neurobiological and psychobiological mechanisms underlying this dangerous and harmful behavior. Until recently, preclinical research on BD was conducted mostly using forced administration of alcohol, but more recent studies have used scheduled access to alcohol to model more voluntary excessive intake and to achieve signs of intoxication that mimic the human behavior. The main challenges for future research are discussed regarding the need for good face validity, construct validity and predictive validity of animal models of BD.

  20. Field validation of the contaminant transport model, FEMA

    International Nuclear Information System (INIS)

    Wong, K.-F.V.

    1986-01-01

    The work describes the field-data validation of FEMA, a finite element model of material transport through aquifers. Field data from the Idaho Chemical Processing Plant, Idaho, USA and from the 58th Street landfill in Miami, Florida, USA are used. In both cases the model was first calibrated and then integrated over a span of eight years to check the predictive capability of the model. Both predictive runs gave results that matched well with the available data. (author)

  1. Validation of a regional distribution model in environmental risk assessment of substances

    Energy Technology Data Exchange (ETDEWEB)

    Berding, V.

    2000-06-26

    The regional distribution model SimpleBox, proposed in the TGD (Technical Guidance Document) and implemented in the EUSES software (European Union System for the Evaluation of Substances), was validated. The aim of this investigation was to determine the applicability and weaknesses of the model and to make proposals for improvement. The validation was performed using the scheme set up by SCHWARTZ (2000), whose main aspects are the division into internal and external validation, i.e. into generic and task-specific properties of the model. These two validation parts comprise scrutiny of the theory, sensitivity analyses, comparison of predicted environmental concentrations with measured ones by means of scenario analyses, uncertainty analyses, and comparison with alternative models. Generally, the model employed is a reasonable compromise between complexity and simplification. Simpler models are applicable too, but in many cases their results can deviate considerably from measured values. For the sewage treatment model, it could be shown that its influence on the predicted concentration is very low, and a much simpler model fulfils the purpose in a similar way. It is proposed to improve the model in several ways, e.g. by including the pH/pK correction for dissociating substances or by alternative estimation functions for partition coefficients. The main focus of future improvements, however, should be on better release estimates and better substance characteristics, such as degradation rates and partition coefficients.

  2. External validation of EPIWIN biodegradation models.

    Science.gov (United States)

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation, and an expert survey model for estimating primary and ultimate biodegradation. Experimental biodegradation data for 110 newly notified substances were compared with the estimates of the different models. The models were applied separately and in combination to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and of other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and in view of the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for identifying not-readily biodegradable substances. However, the highest overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.

  3. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature-calculation capability of this model was performed by comparing temperature calculations under specified conditions with experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and with calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance.

  4. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated, binomial random variable, X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, the model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape for both the maximum likelihood estimator (MLE) based on the beta-binomial model and the method of moments estimator (MME), in order to gauge when and by how much the MLE is biased.
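    As background, the MME referred to above can be written in closed form from the first two sample moments; the sketch below uses the standard beta-binomial moment equations, with made-up counts (the data and n = 20 are hypothetical, not from the paper).

        import numpy as np

        def beta_binomial_mme(x, n):
            # Method-of-moments estimates (alpha, beta) for X_i ~ BetaBinomial(n, alpha, beta),
            # from the first two sample moments m1 = mean(X), m2 = mean(X^2).
            x = np.asarray(x, dtype=float)
            m1, m2 = x.mean(), (x ** 2).mean()
            denom = n * (m2 / m1 - m1 - 1.0) + m1
            alpha = (n * m1 - m2) / denom
            beta = (n - m1) * (n - m2 / m1) / denom
            return alpha, beta

        # Hypothetical example: 8 batches of n = 20 trials each.
        counts = [3, 5, 2, 8, 4, 6, 1, 7]
        print(beta_binomial_mme(counts, n=20))   # ~(8.2, 28.4)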

  5. Rainfall Product Evaluation for the TRMM Ground Validation Program

    Science.gov (United States)

    Amitai, E.; Wolff, D. B.; Robinson, M.; Silberstein, D. S.; Marks, D. A.; Kulie, M. S.; Fisher, B.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Evaluation of the Tropical Rainfall Measuring Mission (TRMM) satellite observations is conducted through a comprehensive Ground Validation (GV) Program. Standardized instantaneous and monthly rainfall products are routinely generated using quality-controlled ground based radar data from four primary GV sites. As part of the TRMM GV program, an effort is being made to evaluate these GV products and to determine the uncertainties of the rainfall estimates. The evaluation effort is based on comparison with rain gauge data. The variance between the gauge measurement and the true averaged rain amount within the radar pixel is a limiting factor in the evaluation process. While monthly estimates are relatively simple to evaluate, the evaluation of the instantaneous products is much more of a challenge. Scattergrams of point comparisons between radar and rain gauges are extremely noisy for several reasons (e.g. sample volume discrepancies, timing and navigation mismatches, variability of Z_e-R relationships), and are therefore useless for evaluating the estimates. Several alternative methods, such as the analysis of the distribution of rain volume by rain rate as derived from gauge intensities and from reflectivities above the gauge network, will be presented. Alternative procedures to increase the accuracy of the estimates and to reduce their uncertainties will also be discussed.

  6. Safety Assessment for LILW Near-Surface Disposal Facility Using the IAEA Reference Model and MASCOT Program

    International Nuclear Information System (INIS)

    Kim, Hyun Joo; Park, Joo Wan; Kim, Chang Lak

    2002-01-01

    A reference scenario of a vault safety case, prepared by the IAEA for the near-surface disposal facility of low- and intermediate-level radioactive wastes, is assessed with the MASCOT program. Appropriate conceptual models for the MASCOT implementation are developed. An assessment of the groundwater pathway, with a drinking well as the geosphere-biosphere interface, is performed first; the biosphere pathway is then analysed to estimate the radiological consequences of the disposed radionuclides based on a compartment modeling approach. The validity of the conceptual modeling for the reference scenario is investigated, where possible, by comparison with results generated by other assessments. The result of this study shows that the typical conceptual model for the groundwater pathway, represented by the compartment model, can be satisfactorily used for safety assessment of the entire disposal system in a consistent way. It is also shown that safety assessment of a disposal facility considering complex and various pathways would be possible with the MASCOT program.

  7. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators…
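    For orientation, the simplest member of this family of approximations is plain importance-sampling LOO, which reuses posterior draws from the full-data fit; the sketch below illustrates that idea (without the Pareto-smoothing refinement used in later work), with randomly generated log-likelihoods standing in for a real model.

        import numpy as np

        def loo_importance_sampling(log_lik):
            # log_lik: array (S, N) of log p(y_i | theta_s) for S posterior draws
            # and N data points. Raw importance weights for leaving out point i
            # are proportional to 1 / p(y_i | theta_s).
            log_w = -log_lik
            log_w -= log_w.max(axis=0)              # stabilise before exponentiating
            w = np.exp(log_w)
            w /= w.sum(axis=0)                      # self-normalise per data point
            # Weighted estimate of each leave-one-out predictive density.
            loo_i = np.log((w * np.exp(log_lik)).sum(axis=0))
            return loo_i.sum()                      # estimated expected log predictive density

        rng = np.random.default_rng(0)
        fake_log_lik = rng.normal(-1.0, 0.3, size=(4000, 25))
        print(loo_importance_sampling(fake_log_lik))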

  8. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  9. Development of whole core thermal-hydraulic analysis program ACT. 4. Incorporation of three-dimensional upper plenum model

    International Nuclear Information System (INIS)

    Ohshima, Hiroyuki

    2003-03-01

    The thermal-hydraulic analysis computer program ACT is under development for the evaluation of detailed flow and temperature fields in the core region of fast breeder reactors under various operating conditions. The purpose of this development is to contribute not only to clarifying thermal-hydraulic characteristics that cannot be revealed by experiments owing to measurement difficulties, but also to rational safety design and assessment. This report describes the incorporation of a three-dimensional upper plenum model into ACT and its verification study, as part of the program development. To treat the influence of three-dimensional thermal-hydraulic behavior in the upper plenum on the in-core temperature field, the multi-dimensional general-purpose thermal-hydraulic analysis program AQUA, which was developed and validated at JNC, was applied as the base of the upper plenum analysis module of ACT. AQUA makes it possible to model the upper plenum configuration, including the immersed heat exchangers of the direct reactor auxiliary cooling system (DRACS). In coupling the core analysis module, which consists of the fuel-assembly and inter-wrapper-gap calculation parts, with the upper plenum module, different types of computation mesh systems were joined using the staggered quarter-assembly mesh scheme. A coupling algorithm among the core, upper plenum and heat transport system modules, which preserves mass, momentum and energy conservation, was developed and optimized with parallel computing in mind. For program verification, ACT was applied to the analysis of a sodium experiment (PLANDTL-DHX) performed at JNC, which simulated natural-circulation decay heat removal under DRACS operating conditions. The calculation results confirmed the validity of the improved program. (author)

  10. Recent validation studies for two NRPB environmental transfer models

    International Nuclear Information System (INIS)

    Brown, J.; Simmonds, J.R.

    1991-01-01

    The National Radiological Protection Board (NRPB) developed a dynamic model for the transfer of radionuclides through terrestrial food chains some years ago. This model, now called FARMLAND, predicts both instantaneous concentrations and time integrals of concentration of radionuclides in a variety of foods. The model can be used to assess the consequences of both accidental and routine releases of radioactivity to the environment, and results can be obtained as a function of time. A number of validation studies have been carried out on FARMLAND. In these, the model predictions were compared with a variety of sets of environmental measurement data. Some of these studies are outlined in this paper. A model to predict external radiation exposure from radioactivity deposited on different surfaces in the environment has also been developed at NRPB. This model, called EXPURT (EXPosure from Urban Radionuclide Transfer), can be used to predict radiation doses as a function of time following deposition in a variety of environments, ranging from rural to inner-city areas. This paper outlines validation studies and future extensions to be carried out on EXPURT. (12 refs., 4 figs.)

  11. Validation of use of the low energies library in the GATE program: assessment of the effective mass attenuation coefficient

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria Marques da; Hoff, Gabriela

    2010-01-01

    The Geant4 Application for Emission Tomography (GATE) is a versatile toolkit for nuclear medicine simulations of SPECT and PET studies. GATE takes advantage of the well-validated physics process models, geometry description, particle tracking through materials, detector response and visualization tools offered by Geant4 (version 4.0). One package available for simulating electromagnetic interactions is the low-energy electromagnetic processes (LEP) package. The purpose of this work was to evaluate the LEP package used by GATE 4 for nuclear medicine shielding simulations. Several simulations were performed involving a monodirectional, monoenergetic (140 keV) point-source beam passing through barriers of variable thickness made of water and lead. The results showed good agreement with the theoretical model, indicating that GATE 4 uses the LEP package correctly. (author)
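    The quantity assessed in such simulations, the effective mass attenuation coefficient, follows from the narrow-beam Beer-Lambert law; the sketch below inverts it from simulated incident and transmitted counts (the tallies shown are made-up numbers, not results from the paper).

        import numpy as np

        def effective_mass_attenuation(n_incident, n_transmitted, thickness_cm, density_g_cm3):
            # Narrow-beam transmission: I = I0 * exp(-(mu/rho) * rho * t),
            # so mu/rho = ln(I0/I) / (rho * t), in cm^2/g.
            mu_linear = np.log(n_incident / n_transmitted) / thickness_cm
            return mu_linear / density_g_cm3

        # Hypothetical tallies for a 140 keV beam through 2 cm of water:
        print(effective_mass_attenuation(1_000_000, 735_000, 2.0, 1.0))   # ~0.154 cm^2/g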

  12. NRPB models for calculating the transfer of radionuclides through the environment. Verification and validation

    International Nuclear Information System (INIS)

    Attwood, C.; Barraclough, I.; Brown, J.

    1998-06-01

    There is a wide range of models available at NRPB to predict the transfer of radionuclides through the environment. Such models form an essential part of assessments of the radiological impact of releases of radionuclides into the environment. These models cover: the atmosphere; the aquatic environment; the geosphere; the terrestrial environment including foodchains. It is important that the models used for radiological impact assessments are robust, reliable and suitable for the assessment being undertaken. During model development it is, therefore, important that the model is both verified and validated. Verification of a model involves ensuring that it has been implemented correctly, while validation consists of demonstrating that the model is an adequate representation of the real environment. The extent to which a model can be verified depends on its complexity and whether similar models exist. For relatively simple models verification is straightforward, but for more complex models verification has to form part of the development, coding and testing of the model within quality assurance procedures. Validation of models should ideally consist of comparisons between the results of the models and experimental or environmental measurement data that were not used to develop the model. This is more straightforward for some models than for others depending on the quantity and type of data available. Validation becomes increasingly difficult for models which are intended to predict environmental transfer at long times or at great distances. It is, therefore, necessary to adopt qualitative validation techniques to ensure that the model is an adequate representation of the real environment. This report summarises the models used at NRPB to predict the transfer of radionuclides through the environment as part of a radiological impact assessment. It outlines the work carried out to verify and validate the models. The majority of these models are not currently available

  13. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution are the considered endpoints. Specific validation criteria, based on a ±10% standardized distance in means and variances, were used to compare the real and the simulated data. The validity criterion was met by all models for individual endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that the within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance of ±1% or less. Simulation is a useful technique for the calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
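    One plausible reading of that criterion is sketched below: the simulated data pass if the standardized distance between means, and the relative distance between variances, both fall within the tolerance. Both the exact definition and the sample data are assumptions for illustration, not taken from the study.

        import numpy as np

        def meets_validity_criterion(real, simulated, tol=0.10):
            # Standardized distance in means (scaled by the real-data SD) and
            # relative distance in variances must both be within +/- tol.
            real = np.asarray(real, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            d_mean = abs(simulated.mean() - real.mean()) / real.std(ddof=1)
            d_var = abs(simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
            return d_mean <= tol and d_var <= tol

        rng = np.random.default_rng(1)
        real = rng.normal(200.0, 30.0, 150)   # hypothetical cholesterol levels [mg/dL]
        sim = rng.normal(202.0, 31.0, 150)    # hypothetical simulated trial arm
        print(meets_validity_criterion(real, sim))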

  14. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.P.; Chang, S.K.; Huang, H.C. [Nuclear Training Branch, Northeast Utilities, Waterford, CT (United States)

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity in simulating adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons of the phenomena of reflux condensation, phase separation, and two-phase natural circulation are discussed.

  15. Concrete structures vulnerability under impact: characterization, modeling, and validation - Concrete slabs vulnerability under impact: characterization, modeling, and validation

    International Nuclear Information System (INIS)

    Xuan Dung Vu

    2013-01-01

    Concrete is a material whose behavior is complex, especially under extreme loads. The objective of this thesis is to carry out an experimental characterization of the behavior of concrete under impact-generated stresses (confined compression and dynamic tension) and to develop a robust numerical tool to model this behavior reliably. In the experimental part, we studied concrete samples from the VTT center (Technical Research Centre of Finland). First, quasi-static triaxial compression tests were performed with confinement pressures ranging from 0 MPa (unconfined compression test) to 600 MPa. The stiffness of the concrete increases with confinement pressure because of the reduction of porosity, and the maximum shear strength of the concrete increases accordingly. The presence of water plays an important role when the degree of saturation is high and the concrete is subjected to high confinement pressure. Beyond a certain level of confinement pressure, the maximum shear strength of concrete decreases with increasing water content. The effect of water also influences the volumetric behavior of concrete. When all free pores are closed as a result of compaction, the low compressibility of the water prevents further deformation of the concrete, so that wet concrete deforms less than dry concrete at the same mean stress. The second part of the experimental program concerns dynamic tensile tests at different loading velocities and different moisture conditions of the concrete. The results show that the tensile strength of C50 concrete may increase up to 5 times its static strength at a strain rate of about 100 s⁻¹. In the numerical part, we are interested in improving an existing coupled constitutive model of concrete behavior called PRM (Pontiroli-Rouquand-Mazars) to predict the concrete behavior under impact. This model is based on a coupling between a damage model which is able to describe the degradation mechanisms and cracking of the concrete at

  16. Statistical Analysis Methods for Physics Models Verification and Validation

    CERN Document Server

    De Luca, Silvia

    2017-01-01

    The validation and verification process is a fundamental step for any software like Geant4 and GeantV, which aim to perform data simulation using physics models and Monte Carlo techniques. As experimental physicists, we have to face the problem of comparing the results obtained from simulations with what the experiments actually observed. One way to solve the problem is to perform a consistency test. Within the Geant group, we developed a compact C++ library which will be added to the automated validation process on the Geant Validation Portal.
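    As an illustration of the kind of consistency test involved (not the Geant library itself, and in Python rather than C++ for brevity), the snippet below applies the standard Pearson chi-square comparison of two unweighted histograms, e.g. a simulated and a measured distribution; the bin contents are invented.

        import numpy as np
        from scipy.stats import chi2

        def chi2_two_histograms(h1, h2):
            # Pearson chi-square test that two unweighted binned samples come
            # from the same parent distribution (only bins with nonzero totals).
            h1 = np.asarray(h1, dtype=float)
            h2 = np.asarray(h2, dtype=float)
            n1, n2 = h1.sum(), h2.sum()
            mask = (h1 + h2) > 0
            stat = (((np.sqrt(n2 / n1) * h1[mask] - np.sqrt(n1 / n2) * h2[mask]) ** 2)
                    / (h1[mask] + h2[mask])).sum()
            dof = int(mask.sum()) - 1
            return stat, chi2.sf(stat, dof)   # test statistic and p-value

        sim = [102, 250, 310, 240, 98]    # hypothetical simulated histogram
        meas = [110, 248, 300, 238, 104]  # hypothetical measured histogram
        print(chi2_two_histograms(sim, meas))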

  17. Isotopes as validation tools for global climate models

    International Nuclear Information System (INIS)

    Henderson-Sellers, A.

    2001-01-01

    Global Climate Models (GCMs) are the predominant tool with which we predict the future climate. In order that people can have confidence in such predictions, GCMs require validation. As almost every available item of meteorological data has been exploited in the construction and tuning of GCMs to date, independent validation is very difficult. This paper explores the use of isotopes as a novel and fully independent means of evaluating GCMs. The focus is the Amazon Basin which has a long history of isotope collection and analysis and also of climate modelling: both having been reported for over thirty years. Careful consideration of the results of GCM simulations of Amazonian deforestation and climate change suggests that the recent stable isotope record is more consistent with the predicted effects of greenhouse warming, possibly combined with forest removal, than with GCM predictions of the effects of deforestation alone

  18. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, using spherical geometries described natively in GEANT4 and three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. No significant differences were found between the AFEs for objects described by mesh models and those described using GEANT4 solid volumes. Provided that the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required for the dosimetric calculations, especially for energies below 100 keV.

  19. Validation of a Wave-Body Interaction Model by Experimental Tests

    DEFF Research Database (Denmark)

    Ferri, Francesco; Kramer, Morten; Pecher, Arthur

    2013-01-01

    Within the wave energy field, numerical simulation has recently acquired worldwide acceptance as a useful tool, alongside physical model testing. The main goal of this work is the validation of a numerical model against experimental results. The numerical model is based on a linear wave-body interaction…

  20. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, an additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analysis of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of the multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  1. SWAT application in intensive irrigation systems: Model modification, calibration and validation

    Science.gov (United States)

    Dechmi, Farida; Burguete, Javier; Skhiri, Ahmed

    2012-11-01

    The Soil and Water Assessment Tool (SWAT) is a well-established, distributed, eco-hydrologic model. However, using the case study of an intensively irrigated agricultural watershed, it was shown that none of the model versions is able to appropriately reproduce the total streamflow in such a system when the irrigation source is outside the watershed. The objective of this study was to modify the SWAT2005 version so that it correctly simulates the main hydrological processes. Calibration and validation of crop yield, total streamflow, total suspended sediment (TSS) losses and phosphorus load were performed using field survey information and water quantity and quality data recorded during 2008 and 2009 in the Del Reguero irrigated watershed in Spain. The goodness of the calibration and validation results was assessed using five statistical measures, including the Nash-Sutcliffe efficiency (NSE). Results indicated that the average annual crop yield and actual evapotranspiration estimates were quite satisfactory. On a monthly basis, the values of NSE were 0.90 (calibration) and 0.80 (validation), indicating that the modified model could reproduce the observed streamflow accurately. The TSS losses were also satisfactorily estimated (NSE = 0.72 and 0.52 for the calibration and validation steps). The monthly temporal patterns and all the statistical parameters indicated that the modified SWAT-IRRIG model adequately predicted the total phosphorus (TP) loading. Therefore, the model could be used to assess the impacts of different best management practices on nonpoint phosphorus losses in irrigated systems.
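    For reference, the NSE quoted above compares the residual error of the model with the variance of the observations; a minimal implementation with made-up streamflow values:

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            # NSE = 1 - SSE / total variance of observations.
            # 1 is a perfect fit; values <= 0 mean the model is no better
            # than simply predicting the observed mean.
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            sse = np.sum((observed - simulated) ** 2)
            var = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - sse / var

        # Hypothetical monthly streamflow [m^3/s]:
        obs = [4.2, 3.9, 5.1, 7.8, 6.0, 3.2]
        sim = [4.0, 4.1, 5.5, 7.2, 5.7, 3.5]
        print(nash_sutcliffe(obs, sim))   # ~0.94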

  2. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    Science.gov (United States)

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

    A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents, "Act in case of Depression" (AiD), was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs). Before the effect analyses, AiD process data on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed) were evaluated; such data can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on residents' personal files, interviews with nursing home professionals, and a research database were analyzed according to the following process evaluation components: sampling quality and intervention quality. The pattern of residents' informed consent rates differed between dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and were satisfied with the program content, individual AiD components may differ in feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, our first-order process data evaluation confirmed the internal and external validity of the AiD trial, and this evaluation enabled further statistical fine-tuning. The importance of

  3. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models, and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, describing the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling

  4. Object-oriented simulation model of a parabolic trough solar collector: Static and dynamic validation

    Science.gov (United States)

    Ubieta, Eduardo; Hoyo, Itzal del; Valenzuela, Loreto; Lopez-Martín, Rafael; Peña, Víctor de la; López, Susana

    2017-06-01

    A simulation model of a parabolic-trough solar collector, developed in the Modelica® language, is calibrated and validated. Because of the uncertainty in some of the system parameters, the calibration is performed with measured data in order to bring the behavior of the solar collector model close to that of a real one. Afterwards, the calibrated model is validated: the results obtained from the model are compared with those obtained during real operation of a collector at the Plataforma Solar de Almeria (PSA).

  5. Experimental Testing Procedures and Dynamic Model Validation for Vanadium Redox Flow Battery Storage System

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per Bromand

    2013-01-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing… efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs…

  6. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP

  7. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Consequently, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
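    A toy illustration of the ARR idea (entirely hypothetical sensors and threshold, not NASA code): each residual is an algebraic relation that must be near zero when the sensors it involves are healthy, and the pattern of violated residuals logically isolates the faulty sensor.

        THRESHOLD = 0.5

        def validate(y1, y2, y3, y3_backup):
            # Redundancy model: y3 should equal y1 + y2, and y3_backup duplicates y3.
            residuals = {
                "r1 = y1 + y2 - y3": y1 + y2 - y3,               # involves y1, y2, y3
                "r2 = y3 - y3_backup": y3 - y3_backup,           # involves y3, backup
                "r3 = y1 + y2 - y3_backup": y1 + y2 - y3_backup  # involves y1, y2, backup
            }
            return {name for name, r in residuals.items() if abs(r) > THRESHOLD}

        # A drift on y3 fires r1 and r2 but not r3, which implicates y3 alone.
        print(validate(y1=2.0, y2=3.1, y3=6.4, y3_backup=5.05))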

  8. Validating a perceptual distraction model using a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict a user's perceived distraction caused by audio-on-audio interference. Originally, the distraction model was trained with music targets and interferers using a simple loudspeaker setup, consisting of only two… sound zones within the sound-zone system. The model is thus validated using a different sound-zone system with both speech-on-music and music-on-speech stimuli sets. The results show that the model performance is equally good in both zones, i.e., with both speech-on-music and music-on-speech stimuli…

  9. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models, controlled by a web interface, which allows the user to run a complete set of 2 → 2 processes on different matrix-element generators and in different gauges, and to compare between them all. Where independent implementations exist, comparison with them is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  10. Seismic analysis program group: SSAP

    International Nuclear Information System (INIS)

    Uchida, Masaaki

    2002-05-01

    A group of programs, SSAP, has been developed, each member of which performs seismic calculations using a simple single-mass or multi-mass system model. For the response of structures to a transverse S-wave, a single-mass model program calculating the response spectrum and a multi-mass model program are available. They perform calculations using the output of another program, which produces simulated earthquakes having the so-called Ohsaki-spectrum characteristic. A further program has been added which calculates the response of one-dimensional multi-mass systems to a vertical P-wave input. It places particular emphasis on the analysis of phenomena observed in some shallow earthquakes, in which stones jump off the ground. Through a series of test calculations using these programs, some interesting information has been derived concerning the validity of superimposing single-mass model calculations, and also the conditions for stones to jump. (author)
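    What such a single-mass response-spectrum program computes can be sketched as follows: for each natural period, integrate one damped oscillator driven by the ground acceleration and record its peak response. The time-stepping scheme and the white-noise input below are illustrative assumptions, not the SSAP implementation.

        import numpy as np

        def response_spectrum(accel, dt, periods, zeta=0.05):
            # Peak |displacement| of a damped single-mass oscillator,
            # u'' + 2*zeta*w*u' + w^2*u = -a_g(t), for each natural period,
            # integrated with a simple central-difference scheme (requires dt << T).
            spectrum = []
            for T in periods:
                w = 2.0 * np.pi / T
                u_prev = u = 0.0
                peak = 0.0
                for a_g in accel:
                    vel = (u - u_prev) / dt     # backward-difference velocity
                    u_next = 2.0 * u - u_prev + dt**2 * (-a_g - 2.0*zeta*w*vel - w**2 * u)
                    u_prev, u = u, u_next
                    peak = max(peak, abs(u))
                spectrum.append(peak)
            return np.array(spectrum)

        # Hypothetical input: 10 s of white-noise ground acceleration sampled at 100 Hz.
        rng = np.random.default_rng(42)
        accel = rng.normal(0.0, 1.0, 1000)
        print(response_spectrum(accel, dt=0.01, periods=[0.1, 0.5, 1.0]))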

  11. Efforts toward validation of a hydrogeological model of the Asse area

    International Nuclear Information System (INIS)

    Fein, E.; Klarr, K.; von Stempel, C.

    1995-01-01

    The Asse anticline (8 x 3 km) near Braunschweig (Germany) is part of the Subhercynian Basin and is characterized by a NW-SE orientation. In 1965, the GSF Research Center for Environment and Health acquired the former Asse salt mine on behalf of the FRG in order to carry out research and development work with a view to the safe disposal of radioactive waste. To assess long-term safety and predict groundwater flow and radionuclide transport, an experimental program was carried out to validate hydrogeological models of the overburden of the Asse salt mine and to provide them with data. Five deep boreholes, from 700 to 2250 m, and four shallow geological exploration boreholes were drilled in the Asse area. Moreover, 19 piezometers and 27 exploration boreholes were sunk to perform pumping and tracer tests and yearly borehole logging. In the end, about 50 boreholes and wells, 25 measuring weirs and about 70 creeks, drains and springs were available for collecting hydrological data and water samples. The different experiments and their evaluations, as well as different hydrogeological models, are presented and discussed. (J.S.). 9 refs., 7 figs

  12. Model Checker for Java Programs

    Science.gov (United States)

    Visser, Willem

    2007-01-01

    Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface that allows the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker: it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs of up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (it supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state explosion, as well as dynamic partial-order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible, as it allows the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.

  13. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    Science.gov (United States)

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospital safety based on principles of inherently safe design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by confirmatory factor analysis (CFA) and structural equation modeling (SEM) using Smart-PLS. R-squared and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) carried the greatest weights in determining the inherent safety of a hospital, in that order. Among the dimensions, moderation, simplification, and substitution have more weight on inherent safety, while minimization has the least weight, which could be due to its definition as minimizing the risk.

  14. Verification and Validation of TMAP7

    Energy Technology Data Exchange (ETDEWEB)

    James Ambrosek

    2008-12-01

    The Tritium Migration Analysis Program, Version 7 (TMAP7) code is an update of TMAP4, an earlier version that was verified and validated in support of the International Thermonuclear Experimental Reactor (ITER) program and of the intermediate version TMAP2000. It has undergone several revisions. The current one includes radioactive decay, multiple trap capability, more realistic treatment of heteronuclear molecular formation at surfaces, processes that involve surface-only species, and a number of other improvements. Prior to code utilization, it needed to be verified and validated to ensure that the code is performing as it was intended and that its predictions are consistent with physical reality. To that end, the demonstration and comparison problems cited here show that the code results agree with analytical solutions for select problems where analytical solutions are straightforward or with results from other verified and validated codes, and that actual experimental results can be accurately replicated using reasonable models with this code. These results and their documentation in this report are necessary steps in the qualification of TMAP7 for its intended service.

  15. A proposed strategy for the validation of ground-water flow and solute transport models

    International Nuclear Information System (INIS)

    Davis, P.A.; Goodrich, M.T.

    1991-01-01

    Ground-water flow and transport models can be thought of as a combination of conceptual and mathematical models and the data that characterize a given system. The judgment of the validity or invalidity of a model depends both on the adequacy of the data and the model structure (i.e., the conceptual and mathematical model). This report proposes a validation strategy for testing both components independently. The strategy is based on the philosophy that a model cannot be proven valid, only invalid or not invalid. In addition, the authors believe that a model should not be judged in absence of its intended purpose. Hence, a flow and transport model may be invalid for one purpose but not invalid for another. 9 refs

  16. Sewer solids separation by sedimentation--the problem of modeling, validation and transferability.

    Science.gov (United States)

    Kutzner, R; Brombach, H; Geiger, W F

    2007-01-01

    Sedimentation of sewer solids in tanks, ponds and similar devices is the most relevant process for the treatment of stormwater and combined sewer overflows in urban collecting systems. In the past, much research work was done to develop deterministic models describing this separation process. However, these models have not yet gained common acceptance in Germany: water authorities remain sceptical with regard to model validation and transferability. Within this paper it is examined whether this scepticism is reasonable. A framework proposal is presented for the validation of mathematical models with zero- or one-dimensional spatial resolution for particle separation processes in stormwater and combined sewer overflow treatment. This proposal was applied to publications of repute on sewer solids separation by sedimentation. The result was that none of the investigated models described in the literature passed the validation entirely. There is an urgent need for future research in sewer solids sedimentation and remobilization!
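    The deterministic backbone of most such sedimentation models is Hazen's ideal-settling theory, in which removal depends only on the ratio of a particle's settling velocity to the tank's surface loading; a minimal sketch under those idealized assumptions (the tank dimensions are invented):

        def removal_efficiency(settling_velocity, flow, surface_area):
            # Ideal (Hazen) settling: particles with settling velocity v_s [m/h]
            # in a tank with surface loading q = Q/A [m/h] are removed in the
            # fraction v_s/q, with full removal once v_s >= q.
            q = flow / surface_area
            return min(1.0, settling_velocity / q)

        # Hypothetical CSO tank: 0.9 m3/s (= 3240 m3/h) over 600 m2 gives q = 5.4 m/h.
        eff = removal_efficiency(settling_velocity=2.0, flow=0.9 * 3600, surface_area=600.0)
        print(f"removal fraction: {eff:.2f}")   # 0.37

    Real devices deviate from this ideal through turbulence, short-circuiting and resuspension, which is precisely where model validation and transferability become contentious.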

  17. Dynamic modeling and experimental validation for direct contact membrane distillation (DCMD) process

    KAUST Repository

    Eleiwi, Fadi; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Francis, Lijo; Laleg-Kirati, Taous-Meriem

    2016-02-01

    This work proposes a mathematical dynamic model for the direct contact membrane distillation (DCMD) process. The model is based on a 2D Advection–Diffusion Equation (ADE), which describes the heat and mass transfer mechanisms that take place inside the DCMD module. The model studies the behavior of the process in the time-varying and steady-state phases, contributing to an understanding of the process performance, especially when it is driven by an intermittent energy supply, such as solar energy. The model is experimentally validated in the steady-state phase, where the permeate flux is measured for different feed inlet temperatures and the maximum absolute error recorded is 2.78 °C. Moreover, the experimental validation includes the time-variation phase, where the feed inlet temperature ranges from 30 °C to 75 °C in 0.1 °C increments every 2 min. The validation shows a relative error of less than 5%, indicating a strong correlation between the model predictions and the experiments.
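    To make the model's core concrete, the sketch below advances a 2D advection-diffusion field one explicit finite-difference step at a time; the grid, velocities, diffusivity and boundary values are hypothetical placeholders, and the discretisation is a generic upwind scheme rather than the authors'.

        import numpy as np

        def ade_step(T, dx, dy, dt, u, v, alpha):
            # One explicit step of dT/dt + u dT/dx + v dT/dy = alpha * laplacian(T),
            # with first-order upwind advection (u, v >= 0 assumed); boundaries fixed.
            Tn = T.copy()
            lap = ((T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2
                   + (T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2)
            adv = (u * (T[1:-1, 1:-1] - T[1:-1, :-2]) / dx       # flow along +x (columns)
                   + v * (T[1:-1, 1:-1] - T[:-2, 1:-1]) / dy)    # flow along +y (rows)
            Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + dt * (alpha * lap - adv)
            return Tn

        # Hypothetical feed channel: hot inlet held at the left boundary.
        T = np.full((40, 40), 30.0)
        T[:, 0] = 75.0
        for _ in range(500):
            T = ade_step(T, dx=1e-3, dy=1e-3, dt=0.01, u=0.01, v=0.0, alpha=1.5e-7)
        print(T[20, :5])   # temperatures near the inlet after 5 s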

  19. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, Noel [Univ. of Texas, Austin, TX (United States)

    2015-09-30

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First component models were validated with DNS and literature data in simplified configurations, and this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  20. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

    Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology, according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  1. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been

  2. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    Full Text Available As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of simulation practitioners. An analysis of the current scientific literature shows that the descriptions of operational validation presented in many papers do not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of guiding professionals, researchers and students in simulation, this article elaborates a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated using two study objects representing two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.

  3. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    Science.gov (United States)

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure score. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence intervals, 0.858-0.910) for model 1 and 0.913 (95% confidence intervals, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ2 was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence intervals, 0.840-0.892) for model 1 and 0.850 (95% confidence intervals, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ2 was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratios, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratios, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.

  4. Verification and Validation of FAARR Model and Data Envelopment Analysis Models for United States Army Recruiting

    National Research Council Canada - National Science Library

    Piskator, Gene

    1998-01-01

    ...) model and to develop a Data Envelopment Analysis (DEA) modeling strategy. First, the FAARR model was verified using a simulation of a known production function and validated using sensitivity analysis and ex-post forecasts...

  5. Towards a realistic approach to validation of reactive transport models for performance assessment

    International Nuclear Information System (INIS)

    Siegel, M.D.

    1993-01-01

    Performance assessment calculations are based on geochemical models that assume that interactions among radionuclides, rocks and groundwaters under natural conditions can be estimated or bounded by data obtained from laboratory-scale studies. The data include radionuclide distribution coefficients, measured in saturated batch systems of powdered rocks, and retardation factors measured in short-term column experiments. Traditional approaches to model validation cannot be applied in a straightforward manner to the simple reactive transport models that use these data. An approach to model validation in support of performance assessment is described in this paper. It is based on a recognition of different levels of model validity and is compatible with the requirements of current regulations for high-level waste disposal. Activities that are being carried out in support of this approach include (1) laboratory and numerical experiments to test the validity of important assumptions inherent in current performance assessment methodologies, (2) integrated transport experiments, and (3) development of a robust coupled reaction/transport code for sensitivity analyses using massively parallel computers.

  6. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL from 1975 to 1977. Individual laboratories used their models to perform calculations for daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations.

  7. Social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) in the primary grades.

    Science.gov (United States)

    Wollersheim Shervey, Sarah; Sandilos, Lia E; DiPerna, James C; Lei, Pui-Wa

    2017-09-01

    The purpose of this study was to examine the social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first and second grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings indicated that teachers generally perceived the SSIS-CIP as a socially valid and feasible intervention for primary grades; however, teachers' ratings regarding ease of implementation and relevance and sequence demonstrated differences across grade levels in the second year of implementation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. The FITS model office ergonomics program: a model for best practice.

    Science.gov (United States)

    Chim, Justine M Y

    2014-01-01

    An effective office ergonomics program can predict positive results in reducing musculoskeletal injury rates, enhancing productivity, and improving staff well-being and job satisfaction. Its objective is to provide a systematic solution to manage the potential risk of musculoskeletal disorders among computer users in an office setting. The FITS Model Office Ergonomics Program has been developed, drawing on the legislative requirements for promoting the health and safety of workers using computers for extended periods as well as on previous research findings. The Model is based on practical industrial knowledge in ergonomics, occupational health and safety management, and human resources management in Hong Kong and overseas. This paper proposes a comprehensive office ergonomics program, the FITS Model, which considers (1) Furniture Evaluation and Selection; (2) Individual Workstation Assessment; (3) Training and Education; (4) Stretching Exercises and Rest Breaks as elements of an effective program. An experienced ergonomics practitioner should be included in the program design and implementation. Through the FITS Model Office Ergonomics Program, the risk of musculoskeletal disorders among computer users can be eliminated or minimized, and workplace health and safety and employees' wellness enhanced.

  9. Implementation and validation of the condensation model for containment hydrogen distribution studies

    International Nuclear Information System (INIS)

    Ravva, Srinivasa Rao; Iyer, Kannan N.; Gupta, S.K.; Gaikwad, Avinash J.

    2014-01-01

    Highlights: • A condensation model based on diffusion was implemented in FLUENT. • Validation of the condensation model for H2 distribution studies was performed. • Multi-component diffusion is used in the present work. • Appropriate grid and turbulence model were identified. - Abstract: This paper presents the implementation details of a condensation model in the CFD code FLUENT and its validation, so that it can be used in performing containment hydrogen distribution studies. In such studies, computational fluid dynamics simulations are necessary for obtaining accurate predictions. While steam condensation plays an important role, commercial CFD codes such as FLUENT do not have an in-built condensation model. Therefore, a condensation model was developed and implemented in the FLUENT code through user defined functions (UDFs) for the sink terms in the mass, momentum, energy and species balance equations, together with the associated turbulence quantities, viz., kinetic energy and dissipation rate. The implemented model was validated against the ISP-47 test of the TOSQAN facility using the standard wall functions and enhanced wall treatment approaches. The most suitable grid size and turbulence model for low-density gas (He) distribution studies are brought out in this paper.

  10. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
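
    As a simple illustration of the kind of comparison reported above, the sketch below computes an annualized prediction error between modeled and measured energy as the relative difference of the annual totals. This definition and the variable names are assumptions for illustration rather than the exact metric used in the NREL report.

        # Sketch: annualized prediction error between modeled and measured PV output.
        # modeled, measured: arrays of energy values [kWh] for the same system
        # covering the same one-year period (e.g. monthly or hourly totals).
        import numpy as np

        def annualized_error(modeled, measured):
            return (np.sum(modeled) - np.sum(measured)) / np.sum(measured)

        # Hypothetical values for a small fixed-tilt system (monthly totals)
        modeled = np.array([420.0, 510.0, 630.0, 580.0])
        measured = np.array([433.0, 498.0, 655.0, 572.0])
        print(f"Annualized error: {annualized_error(modeled, measured):+.1%}")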

  11. Validation of Hydrodynamic Numerical Model of a Pitching Wave Energy Converter

    DEFF Research Database (Denmark)

    López, Maria del Pilar Heras; Thomas, Sarah; Kramer, Morten Mejlhede

    2017-01-01

    Validation of numerical models is essential in the development of new technologies. Commercial software and codes available for simulating wave energy converters (WECs) have not yet been proved to work for all available and upcoming technologies. The present paper presents the first stages of the validation process of a hydrodynamic numerical model for a pitching wave energy converter. The development of dry tests, wave flume and wave basin experiments is explained, lessons learned are shared and results are presented.

  12. Validating soil phosphorus routines in the SWAT model

    Science.gov (United States)

    Phosphorus transfer from agricultural soils to surface waters is an important environmental issue. Commonly used models like SWAT have not always been updated to reflect improved understanding of soil P transformations and transfer to runoff. Our objective was to validate the ability of the P routin...

  13. An integrated approach for the validation of energy and environmental system analysis models : used in the validation of the Flexigas Excel BioGas model

    NARCIS (Netherlands)

    Pierie, Frank; van Someren, Christian; Liu, Wen; Bekkering, Jan; Hengeveld, Evert Jan; Holstein, J.; Benders, René M.J.; Laugs, Gideon A.H.; van Gemert, Wim; Moll, Henri C.

    2016-01-01

    A review has been completed for a verification and validation (V&V) of the (Excel) BioGas simulator or EBS model. The EBS model calculates the environmental impact of biogas production pathways using Material and Energy Flow Analysis, time dependent dynamics, geographic information, and Life Cycle

  14. Validation of a fracture mechanics approach to nuclear transportation cask design through a drop test program

    International Nuclear Information System (INIS)

    Sorenson, K.B.

    1986-01-01

    Sandia National Laboratories (SNL), under contract to the Department of Energy, is conducting a research program to develop and validate a fracture mechanics approach to cask design. A series of drop tests of a transportation cask is planned for the summer of 1986 as the method for benchmarking and, thereby, validating the fracture mechanics approach. This paper presents the drop test plan and background leading to the development of the test plan including structural analyses, material characterization, and non-destructive evaluation (NDE) techniques necessary for defining the test plan properly

  15. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data are not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, then a mixed hardening model should be used.

  16. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on the use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data are not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, then a mixed hardening model should be used.

  17. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Science.gov (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The ground motion prediction relationship of Benouar (1994) proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to vulnerability indexes 10% to 40% larger for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
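
    To make the rebuilding cost factors concrete, the sketch below maps the EMS-98 damage grades listed above to the stated cost factors and computes a mean damage ratio for a hypothetical distribution of buildings over damage grades. The example distribution and the function name are illustrative assumptions, not part of the Impact Forecasting model.

        # Illustrative only: EMS-98 damage grade -> rebuilding cost factor,
        # using the factors quoted in the abstract above.
        COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

        def mean_damage_ratio(grade_shares):
            """Weighted mean damage ratio for a building portfolio.

            grade_shares: dict mapping damage grade (1-5) to the share of
            buildings in that grade (shares sum to <= 1; the remainder is undamaged).
            """
            return sum(share * COST_FACTOR[g] for g, share in grade_shares.items())

        # Hypothetical scenario: 30% of buildings in grade 1, 20% in grade 2, etc.
        example = {1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05, 5: 0.02}
        print(f"Mean damage ratio: {mean_damage_ratio(example):.3f}")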

  18. The Development and Validation of a Transformational Leadership Survey for Substance Use Treatment Programs

    Science.gov (United States)

    Edwards, Jennifer R.; Knight, Danica K.; Broome, Kirk M.; Flynn, Patrick M.

    2014-01-01

    Directors in substance use treatment programs are increasingly required to respond to external economic and socio-political pressures. Leadership practices that promote innovation can help offset these challenges. Using focus groups, factor analysis, and validation instruments, the current study developed and established psychometrics for the Survey of Transformational Leadership. In 2008, clinical directors were evaluated on leadership practices by 214 counselors within 57 programs in four U.S. regions. Nine themes emerged: integrity, sensible risk, demonstrates innovation, encourages innovation, inspirational motivation, supports others, develops others, delegates tasks, and expects excellence. Study implications, limitations and suggested future directions are discussed. Funding from NIDA. PMID:20509734

  19. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    Science.gov (United States)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and experiments were used for the validation. The results showed that the correlation coefficients between the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients between the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used reliably to predict the water content at different soil depths and temperatures.

  20. Validation of NEPTUNE-CFD two-phase flow models using experimental data

    International Nuclear Information System (INIS)

    Perez-Manes, Jorge; Sanchez Espinoza, Victor Hugo; Bottcher, Michael; Stieglitz, Robert; Sergio Chiva Vicent

    2014-01-01

    This paper deals with the validation of the two-phase flow models of the CFD code NEPTUNE-CFD using experimental data provided by the OECD BWR BFBT and PSBT benchmarks. Since the two-phase models of CFD codes are being extensively improved, validation is a key step toward the acceptability of such codes. The validation work was performed in the frame of the European NURISP project and focused on the steady-state and transient void fraction tests. The influence of different NEPTUNE-CFD model parameters on the void fraction prediction is investigated and discussed in detail. Due to the coupling of the heat conduction solver SYRTHES with NEPTUNE-CFD, the description of the coupled fluid dynamics and heat transfer between the fuel rod and the fluid is improved significantly. The averaged void fraction predicted by NEPTUNE-CFD for selected PSBT and BFBT tests is in good agreement with the experimental data. Finally, areas for future improvement of the NEPTUNE-CFD code were identified, too. (authors)

  1. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    Science.gov (United States)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain adequate match, signifying the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and implementation of the control forces on the ICEFTD were identified as a driver in the pitch ups and control force issues, and will be an area for future work.

  2. Validating a perceptual distraction model in a personal two-zone sound system

    DEFF Research Database (Denmark)

    Rämö, Jussi; Christensen, Lasse; Bech, Søren

    2017-01-01

    This paper focuses on validating a perceptual distraction model, which aims to predict user’s perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space. Originally, the distraction model was trained with music-on-music stimuli...... that the model performance is equally good in both zones, i.e., with both speech-on-music and music-on-speech stimuli, and comparable to the previous validation round (RMSE approximately 10%). The results further confirm that the distraction model can be used as a valuable tool in evaluating and optimizing...

  3. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic example from industry, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The results of the experiments demonstrated the effectiveness of our approach.

  4. Secure Programming Cookbook for C and C++ Recipes for Cryptography, Authentication, Input Validation & More

    CERN Document Server

    Viega, John

    2009-01-01

    Secure Programming Cookbook for C and C++ is an important new resource for developers serious about writing secure code for Unix® (including Linux®) and Windows® environments. This essential code companion covers a wide range of topics, including safe initialization, access control, input validation, symmetric and public key cryptography, cryptographic hashes and MACs, authentication and key exchange, PKI, random numbers, and anti-tampering.

  5. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yu, Yi-Hsiang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Nielsen, Kim [Ramboll, Copenhagen (Denmark); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bunnik, Tim [MARIN (Netherlands); Touzon, Imanol [Tecnalia (Spain); Nam, Bo Woo [KRISO (Korea, Rep. of); Kim, Jeong Seok [KRISO (Korea, Rep. of); Janson, Carl Erik [Chalmers University (Sweden); Jakobsen, Ken-Robert [EDRMedeso (Norway); Crowley, Sarah [WavEC (Portugal); Vega, Luis [Hawaii Natural Energy Institute (United States); Rajagopalan, Krishnakimar [Hawaii Natural Energy Institute (United States); Mathai, Thomas [Glosten (United States); Greaves, Deborah [Plymouth University (United Kingdom); Ransley, Edward [Plymouth University (United Kingdom); Lamont-Kane, Paul [Queen's University Belfast (United Kingdom); Sheng, Wanan [University College Cork (Ireland); Costello, Ronan [Wave Venture (United Kingdom); Kennedy, Ben [Wave Venture (United Kingdom); Thomas, Sarah [Floating Power Plant (Denmark); Heras, Pilar [Floating Power Plant (Denmark); Bingham, Harry [Technical University of Denmark (Denmark); Kurniawan, Adi [Aalborg University (Denmark); Kramer, Morten Mejlhede [Aalborg University (Denmark); Ogden, David [INNOSEA (France); Girardin, Samuel [INNOSEA (France); Babarit, Aurelien [EC Nantes (France); Wuillaume, Pierre-Yves [EC Nantes (France); Steinke, Dean [Dynamic Systems Analysis (Canada); Roy, Andre [Dynamic Systems Analysis (Canada); Beatty, Scott [Cascadia Coast Research (Canada); Schofield, Paul [ANSYS (United States); Kim, Kyong-Hwan [KRISO (Korea, Rep. of); Jansson, Johan [KTH Royal Inst. of Technology, Stockholm (Sweden); BCAM (Spain); Hoffman, Johan [KTH Royal Inst. of Technology, Stockholm (Sweden)

    2017-10-16

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001 and Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee EXCO in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5 conducted within the International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.

  6. Modeling of surge in free-spool centrifugal compressors : experimental validation

    NARCIS (Netherlands)

    Gravdahl, J.T.; Willems, F.P.T.; Jager, de A.G.; Egeland, O.

    2004-01-01

    The derivation of a compressor characteristic, and the experimental validation of a dynamic model for a variable speed centrifugal compressor using this characteristic, are presented. The dynamic compressor model of Fink et al. is used, and a variable speed compressor characteristic is derived by

  7. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, used to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
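
    A minimal sketch of the bootstrap ingredients described above, assuming a fitted logistic scoring system and a candidate validation sample. The function name, the use of scikit-learn's roc_auc_score, and the crude observed-versus-predicted calibration ratio are illustrative assumptions and not the authors' exact estimated calibration index.

        # Sketch: bootstrap assessment of a candidate validation sample.
        # y_true: 1-D NumPy array of 0/1 outcomes; p_pred: 1-D NumPy array of
        # predicted probabilities from the scoring system, same length.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        def bootstrap_validation(y_true, p_pred, n_boot=2000, seed=0):
            rng = np.random.default_rng(seed)
            n = len(y_true)
            aucs, cal_ratios = [], []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)          # resample with replacement
                y_b, p_b = y_true[idx], p_pred[idx]
                if y_b.min() == y_b.max():           # skip degenerate resamples
                    continue
                aucs.append(roc_auc_score(y_b, p_b))
                # Crude calibration check: observed vs predicted event rate
                cal_ratios.append(y_b.mean() / p_b.mean())
            return (np.percentile(aucs, [2.5, 97.5]),
                    np.percentile(cal_ratios, [2.5, 97.5]))

        # Usage: auc_ci, cal_ci = bootstrap_validation(y_true, p_pred)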

  8. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    Science.gov (United States)

    Anderson, T.

    2016-02-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. Quantifying and further understanding drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely attributing all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound, and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified the slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks, primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.
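
    As a rough illustration of the quantity being estimated, the sketch below computes a slip velocity as the difference between the drifter's GPS-derived velocity and the current velocity measured by an attached current meter. The variable names and the simple finite-difference velocity estimate are assumptions for illustration, not the NEFSC analysis procedure.

        # Sketch: drifter slip as (drifter velocity) - (measured current velocity).
        # lat/lon: drifter GPS fixes (degrees), t: timestamps (s),
        # u_cur/v_cur: east/north current-meter velocities (m/s) at the same times.
        import numpy as np

        def slip_velocity(lat, lon, t, u_cur, v_cur):
            R = 6.371e6                                   # Earth radius, m
            lat_r, lon_r = np.radians(lat), np.radians(lon)
            dt = np.diff(t)
            u_drift = R * np.cos(lat_r[:-1]) * np.diff(lon_r) / dt   # eastward, m/s
            v_drift = R * np.diff(lat_r) / dt                        # northward, m/s
            # Compare each velocity interval with the current measured at its start
            u_slip = u_drift - u_cur[:-1]
            v_slip = v_drift - v_cur[:-1]
            return u_slip, v_slip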

  9. Earth as an extrasolar planet: Earth model validation using EPOXI earth observations.

    Science.gov (United States)

    Robinson, Tyler D; Meadows, Victoria S; Crisp, David; Deming, Drake; A'hearn, Michael F; Charbonneau, David; Livengood, Timothy A; Seager, Sara; Barry, Richard K; Hearty, Thomas; Hewagama, Tilak; Lisse, Carey M; McFadden, Lucy A; Wellnitz, Dennis D

    2011-06-01

    The EPOXI Discovery Mission of Opportunity reused the Deep Impact flyby spacecraft to obtain spatially and temporally resolved visible photometric and moderate resolution near-infrared (NIR) spectroscopic observations of Earth. These remote observations provide a rigorous validation of whole-disk Earth model simulations used to better understand remotely detectable extrasolar planet characteristics. We have used these data to upgrade, correct, and validate the NASA Astrobiology Institute's Virtual Planetary Laboratory three-dimensional line-by-line, multiple-scattering spectral Earth model. This comprehensive model now includes specular reflectance from the ocean and explicitly includes atmospheric effects such as Rayleigh scattering, gas absorption, and temperature structure. We have used this model to generate spatially and temporally resolved synthetic spectra and images of Earth for the dates of EPOXI observation. Model parameters were varied to yield an optimum fit to the data. We found that a minimum spatial resolution of ∼100 pixels on the visible disk, and four categories of water clouds, which were defined by using observed cloud positions and optical thicknesses, were needed to yield acceptable fits. The validated model provides a simultaneous fit to Earth's lightcurve, absolute brightness, and spectral data, with a root-mean-square (RMS) error of typically less than 3% for the multiwavelength lightcurves and residuals of ∼10% for the absolute brightness throughout the visible and NIR spectral range. We have extended our validation into the mid-infrared by comparing the model to high spectral resolution observations of Earth from the Atmospheric Infrared Sounder, obtaining a fit with residuals of ∼7% and brightness temperature errors of less than 1 K in the atmospheric window. For the purpose of understanding the observable characteristics of the distant Earth at arbitrary viewing geometry and observing cadence, our validated forward model can be

  10. Validation of a Hot Water Distribution Model Using Laboratory and Field Data

    Energy Technology Data Exchange (ETDEWEB)

    Backman, C.; Hoeschele, M.

    2013-07-01

    Characterizing the performance of hot water distribution systems is a critical step in developing best practice guidelines for the design and installation of high performance hot water systems. Developing and validating simulation models is critical to this effort, as well as collecting accurate input data to drive the models. In this project, the ARBI team validated the newly developed TRNSYS Type 604 pipe model against both detailed laboratory and field distribution system performance data. Validation efforts indicate that the model performs very well in handling different pipe materials, insulation cases, and varying hot water load conditions. Limitations of the model include the complexity of setting up the input file and long simulation run times. In addition to completing validation activities, this project looked at recent field hot water studies to better understand use patterns and potential behavioral changes as homeowners convert from conventional storage water heaters to gas tankless units. Based on these datasets, we conclude that the current Energy Factor test procedure overestimates typical use and underestimates the number of hot water draws. This has implications for both equipment and distribution system performance. Gas tankless water heaters were found to impact how people use hot water, but the data does not necessarily suggest an increase in usage. Further study in hot water usage and patterns is needed to better define these characteristics in different climates and home vintages.

  11. Experimental validation of a thermodynamic boiler model under steady state and dynamic conditions

    International Nuclear Information System (INIS)

    Carlon, Elisa; Verma, Vijay Kumar; Schwarz, Markus; Golicza, Laszlo; Prada, Alessandro; Baratieri, Marco; Haslinger, Walter; Schmidl, Christoph

    2015-01-01

    Highlights: • Laboratory tests on two commercially available pellet boilers. • Steady-state and dynamic load cycle tests. • Pellet boiler model calibration based on data registered in stationary operation. • Boiler model validation with reference to both stationary and dynamic operation. • Validated model suitable for coupled simulation of building and heating system. - Abstract: Nowadays, dynamic building simulation is an essential tool for the design of heating systems for residential buildings. The simulation of buildings heated by biomass systems first of all needs detailed boiler models capable of simulating the boiler both as a stand-alone appliance and as a system component. This paper presents the calibration and validation of a boiler model by means of laboratory tests. The chosen model, i.e. TRNSYS “Type 869”, has been validated for two commercially available pellet boilers of 6 and 12 kW nominal capacities. Two test methods have been applied: the first is a steady-state test at nominal load and the second is a load cycle test including stationary operation at different loads as well as transient operation. The load cycle test is representative of the boiler operation in the field and characterises the boiler’s stationary and dynamic behaviour. The model was calibrated based on laboratory data registered during stationary operation at different loads and was afterwards validated by simulating both the stationary and the dynamic tests. Selected parameters for the validation were the heat transfer rates to water and the water temperature profiles inside the boiler and at the boiler outlet. Modelling results showed better agreement with experimental data during stationary operation than during dynamic operation. Heat transfer rates to water were predicted with a maximum deviation of 10% during stationary operation, and a maximum deviation of 30% during the dynamic load cycle. However, for both operational regimes the

  12. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to longer slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.

  13. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data. One is based on the correlation coefficient and the other on the statistical lack-of-fit test. Both methods are used here to analyse the fit of the bilogarithmic model used to predict corrosion of very low carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of the average values, as is usual. Modelling is carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they were placed in the rack at the same time). Results for the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and, finally, it is possible to conclude, at least for the studied atmospheres, that the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs
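
    For context, the bilogarithmic (power-law) corrosion model referred to above is commonly written C = A t^n, which becomes linear after taking logarithms. The sketch below fits A and n by least squares on the log-transformed data and reports the correlation coefficient; it is a minimal sketch under that assumed model form, with hypothetical data, and does not reproduce the authors' full lack-of-fit analysis.

        # Sketch: fit the bilogarithmic corrosion model C = A * t**n
        # (log C = log A + n * log t) and report the correlation coefficient.
        import numpy as np

        def fit_bilogarithmic(t, C):
            """t: exposure times, C: corrosion losses (same length, all > 0)."""
            x, y = np.log(t), np.log(C)
            n, logA = np.polyfit(x, y, 1)          # slope = n, intercept = log A
            r = np.corrcoef(x, y)[0, 1]            # correlation coefficient
            return np.exp(logA), n, r

        # Hypothetical data: exposure time in years, corrosion loss in micrometres
        t = np.array([0.5, 1.0, 2.0, 4.0])
        C = np.array([12.0, 20.0, 33.0, 52.0])
        A, n, r = fit_bilogarithmic(t, C)
        print(f"A = {A:.1f}, n = {n:.2f}, r = {r:.3f}")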

  14. Modelling of electrical cabinet fires based on the CARMELA experimental program

    International Nuclear Information System (INIS)

    Melis, S.; Rigollet, L.; Such, J.M.; Casselman, C.

    2004-01-01

    As fires in electrical cabinets pose a hazard to nuclear safety, IRSN has conducted the CARMELA program to investigate this topic. The program was carried out in three stages. The first two stages consisted of analytical experiments in which the combustible was simulated by thin plastic pieces and in which the different parameters that influence the fire could be easily varied. The third stage involved real relay cabinets. This article first describes the experimental facility and the test matrix. The phenomenology of electrical cabinet fires is then presented and the most influential parameters are identified from the analytical experiments: ventilation ranks first, but the materials involved are also shown to influence the propagation of the fire. The model developed to represent the fire, and particularly the rate of heat release, is then presented, and the comparison of its results with the measurements performed in the experiments shows that its validity is acceptable. (orig.)

  15. Lagrangian Stochastic Dispersion Model IMS Model Suite and its Validation against Experimental Data

    International Nuclear Information System (INIS)

    Bartok, J.

    2010-01-01

    The dissertation presents the IMS Lagrangian Dispersion Model, a 'new generation' Slovak dispersion model for long-range transport developed by MicroStep-MIS. It solves the trajectory equation for a vast number of Lagrangian 'particles' together with a stochastic equation that simulates the effects of turbulence. The model includes simulation of radioactive decay (full decay chains of more than 300 nuclides) and of dry and wet deposition. The model was integrated into the IMS Model Suite, a system in which several models and modules can run and cooperate, e.g. the LAM model WRF preparing fine-resolution meteorological data for dispersion. The main theme of the work is validation of the dispersion model against the large-scale international campaigns CAPTEX and ETEX, which are two of the largest tracer experiments. Validation addressed the treatment of missing data and the interpolation of data into a comparable temporal and spatial representation. The best model results were observed for ETEX I, standard results for the CAPTEX releases and the worst results for ETEX II, known in the modelling community for meteorological conditions that can hardly be resolved by models. The IMS Lagrangian Dispersion Model was identified as a capable long-range dispersion model for slowly reacting or non-reacting chemicals and radioactive matter. The influence of input data on simulation quality is discussed within the work. Additional modules were prepared according to practical requirements: a) recalculation of concentrations of radioactive pollutants into effective doses from inhalation, immersion in the plume and deposition; b) dispersion of mineral dust, added and tested in a desert locality, where wind and soil moisture were first analysed and forecast by WRF. The result was qualitatively verified in a case study against satellite observations. (author)
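
    To illustrate the particle-based approach described above, the sketch below advances a set of Lagrangian particles by a mean wind plus a random turbulent increment (a simple random-walk closure). The parameter values and this particular closure are illustrative assumptions and do not reproduce the IMS model's actual stochastic formulation.

        # Sketch: one time step of a simple Lagrangian stochastic dispersion model.
        # Each particle moves with the mean wind plus a random turbulent velocity.
        import numpy as np

        def step_particles(pos, mean_wind, sigma_turb, dt, rng):
            """pos: (N, 3) particle positions [m]; mean_wind: (3,) wind vector [m/s];
            sigma_turb: std. dev. of turbulent velocity fluctuations [m/s]."""
            turb = rng.normal(0.0, sigma_turb, size=pos.shape)   # random fluctuations
            return pos + (mean_wind + turb) * dt

        rng = np.random.default_rng(42)
        pos = np.zeros((10_000, 3))                    # all particles start at origin
        for _ in range(600):                           # 600 steps of 10 s = 100 min
            pos = step_particles(pos, np.array([5.0, 1.0, 0.0]), 0.8, 10.0, rng)
        print("Mean downwind distance [km]:", pos[:, 0].mean() / 1e3)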

  16. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever-increasing computing power available at affordable prices. This paper describes this process from several angles: first, the operational requirements which are most critical from the point of view of model performance, both for normal and off-normal operating conditions; a second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally, we consider the model validation procedures, which are of course an integral part of model development and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modelling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  17. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    Science.gov (United States)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-09-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of graphs and point estimates. The aim of this study was to discuss the applicability of existing validation techniques and to present a new method for quantifying the degree of validity statistically, which is useful for decision makers. A new Bayesian method is proposed to determine how well HE model outcomes compare with empirical data. Validity is based on a pre-established accuracy interval in which the model outcomes should fall. The method uses the outcomes of a probabilistic sensitivity analysis and results in a posterior distribution around the probability that HE model outcomes can be regarded as valid. We use a published diabetes model (Modelling Integrated Care for Diabetes based on Observational data) to validate the outcome "number of patients who are on dialysis or with end-stage renal disease." Results indicate that a high probability of a valid outcome is associated with relatively wide accuracy intervals. In particular, 25% deviation from the observed outcome implied approximately 60% expected validity. Current practice in HE model validation can be improved by using an alternative method based on assessing whether the model outcomes fit the empirical data at a predefined level of accuracy. This method has the advantage of assessing both model bias and parameter uncertainty and of resulting in a quantitative measure of the degree of validity that penalizes models predicting the mean of an outcome correctly but with overly wide credible intervals. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
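
    One possible reading of the approach described above is a Beta-Binomial posterior over the probability that a probabilistic-sensitivity-analysis (PSA) outcome falls inside a pre-established accuracy interval around the empirical value; the sketch below implements that reading. The interval definition, the uniform Beta(1, 1) prior, and the function name are assumptions for illustration, not the authors' exact formulation.

        # Sketch: posterior for the probability that HE model outcomes are "valid",
        # i.e. fall within an accuracy interval around the empirical observation.
        import numpy as np
        from scipy import stats

        def validity_posterior(psa_outcomes, observed, rel_accuracy=0.25):
            """psa_outcomes: array of PSA outcome samples; observed: empirical value;
            rel_accuracy: half-width of the accuracy interval, relative to observed."""
            lo, hi = observed * (1 - rel_accuracy), observed * (1 + rel_accuracy)
            k = np.sum((psa_outcomes >= lo) & (psa_outcomes <= hi))   # "valid" draws
            n = len(psa_outcomes)
            return stats.beta(1 + k, 1 + n - k)        # Beta(1,1) prior -> posterior

        # Hypothetical PSA sample for an outcome such as "patients on dialysis or with ESRD"
        psa = np.random.default_rng(1).normal(100, 20, size=1000)
        post = validity_posterior(psa, observed=95.0)
        print("Posterior mean validity:", round(post.mean(), 3))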

  18. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high- and intermediate-level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge by testing models against experiments and thereby increasing confidence in the models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programs and underground research laboratories, as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  19. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach is presented that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  20. Validation of a FAST Model of the SWAY Prototype Floating Wind Turbine

    Energy Technology Data Exchange (ETDEWEB)

    Koh, J. H. [Nanyang Technological Univ. (Singapore); Ng, E. Y. K. [Nanyang Technological Univ. (Singapore); Robertson, Amy [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Lab. (NREL), Golden, CO (United States); Driscoll, Frederick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-06-01

    As part of a collaboration of the National Renewable Energy Laboratory (NREL) and SWAY AS, NREL installed scientific wind, wave, and motion measurement equipment on the spar-type 1/6.5th-scale prototype SWAY floating offshore wind system. The equipment enhanced SWAY's data collection and allowed SWAY to verify the concept and NREL to validate a FAST model of the SWAY design in an open-water condition. Nanyang Technological University (NTU), in collaboration with NREL, assisted with the validation. This final report gives an overview of the SWAY prototype and NREL and NTU's efforts to validate a model of the system. The report provides a summary of the different software tools used in the study, the modeling strategies, and the development of a FAST model of the SWAY prototype wind turbine, including justification of the modeling assumptions. Because of uncertainty in system parameters and modeling assumptions due to the complexity of the design, several system properties were tuned to better represent the system and improve the accuracy of the simulations. Calibration was performed using data from a static equilibrium test and free-decay tests.

  1. Refining and validating a conceptual model of Clinical Nurse Leader integrated care delivery.

    Science.gov (United States)

    Bender, Miriam; Williams, Marjory; Su, Wei; Hites, Lisle

    2017-02-01

    To empirically validate a conceptual model of Clinical Nurse Leader integrated care delivery. There is limited evidence of frontline care delivery models that consistently achieve quality patient outcomes. Clinical Nurse Leader integrated care delivery is a promising nursing model with a growing record of success. However, theoretical clarity is necessary to generate causal evidence of effectiveness. Sequential mixed methods. A preliminary Clinical Nurse Leader practice model was refined and survey items developed to correspond with model domains, using focus groups and a Delphi process with a multi-professional expert panel. The survey was administered in 2015 to clinicians and administrators involved in Clinical Nurse Leader initiatives. Confirmatory factor analysis and structural equation modelling were used to validate the measurement and model structure. Final sample n = 518. The model incorporates 13 components organized into five conceptual domains: 'Readiness for Clinical Nurse Leader integrated care delivery'; 'Structuring Clinical Nurse Leader integrated care delivery'; 'Clinical Nurse Leader Practice: Continuous Clinical Leadership'; 'Outcomes of Clinical Nurse Leader integrated care delivery'; and 'Value'. Sample data had good fit with specified model and two-level measurement structure. All hypothesized pathways were significant, with strong coefficients suggesting good fit between theorized and observed path relationships. The validated model articulates an explanatory pathway of Clinical Nurse Leader integrated care delivery, including Clinical Nurse Leader practices that result in improved care dynamics and patient outcomes. The validated model provides a basis for testing in practice to generate evidence that can be deployed across the healthcare spectrum. © 2016 John Wiley & Sons Ltd.

  2. Assessment of teacher competence using video portfolios: reliability, construct validity and consequential validity

    NARCIS (Netherlands)

    Admiraal, W.; Hoeksma, M.; van de Kamp, M.-T.; van Duin, G.

    2011-01-01

    The richness and complexity of video portfolios endanger both the reliability and validity of the assessment of teacher competencies. In a post-graduate teacher education program, the assessment of video portfolios was evaluated for its reliability, construct validity, and consequential validity.

  3. Interactive differential equations modeling program

    International Nuclear Information System (INIS)

    Rust, B.W.; Mankin, J.B.

    1976-01-01

    Due to the recent emphasis on mathematical modeling, many ecologists are using mathematics and computers more than ever, and engineers, mathematicians and physical scientists are now included in ecological projects. However, the individual ecologist, with intuitive knowledge of the system, still requires the means to critically examine and adjust system models. An interactive program was developed with the primary goal of allowing an ecologist with minimal experience in either mathematics or computers to develop a system model. It has also been used successfully by systems ecologists, engineers, and mathematicians. This program was written in FORTRAN for the DEC PDP-10, a remote terminal system at Oak Ridge National Laboratory. However, with relatively minor modifications, it can be implemented on any remote terminal system with a FORTRAN IV compiler, or equivalent. This program may be used to simulate any phenomenon which can be described as a system of ordinary differential equations. The program allows the user to interactively change system parameters and/or initial conditions, to interactively select a set of variables to be plotted, and to model discontinuities in the state variables and/or their derivatives. One of the most useful features to the non-computer specialist is the ability to interactively address the system parameters by name and to interactively adjust their values between simulations. These and other features are described in greater detail
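
    To make concrete the kind of interaction the record describes (adjusting named parameters between simulations of an ODE system), here is a minimal Python sketch; the two-species model and all parameter values are hypothetical, and the original FORTRAN program is not reproduced.

      from scipy.integrate import solve_ivp

      params = {"growth": 0.8, "predation": 0.04, "death": 0.5, "conversion": 0.01}

      def system(t, y, p):
          # Simple predator-prey system standing in for a user-defined model.
          prey, pred = y
          return [p["growth"] * prey - p["predation"] * prey * pred,
                  p["conversion"] * prey * pred - p["death"] * pred]

      sol = solve_ivp(system, (0.0, 50.0), [40.0, 9.0], args=(params,), max_step=0.1)
      params["predation"] = 0.06        # adjust a parameter "by name" between runs
      sol2 = solve_ivp(system, (0.0, 50.0), [40.0, 9.0], args=(params,), max_step=0.1)
      print(sol.y[:, -1], sol2.y[:, -1])  # final states before and after the change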

  4. Predictive Simulation of Material Failure Using Peridynamics -- Advanced Constitutive Modeling, Verification and Validation

    Science.gov (United States)

    2016-03-31

    AFRL-AFOSR-VA-TR-2016-0309: Predictive simulation of material failure using peridynamics -- advanced constitutive modeling, verification, and validation. Approved for public release.

  5. Development and Validation of the Motivation for Tutoring Questionnaire in Problem-Based Learning Programs

    OpenAIRE

    Kassab, Salah Eldin; Hassan, Nahla; El-Araby, Shimaa; Salem, Abdel Halim; Alrebish, Saleh Ali; Al-Amro, Ahmed S.; Al-Shobaili, Hani A.; Hamdy, Hossam

    2017-01-01

    Purpose: There are no published instruments that measure tutor motivation for conducting small group tutorials in problem-based learning programs. Therefore, we aimed to develop a motivation for tutoring questionnaire in problem-based learning (MTQ-PBL) and evaluate its construct validity. Methods: The questionnaire included 28 items representing four constructs: tutoring self-efficacy (15 items), tutoring interest (6 items), tutoring value (4 items), and tutoring effort (3 items). Tutor...

  6. Validation of CATHARE for gas-cooled reactors

    International Nuclear Information System (INIS)

    Fabrice Bentivoglio; Ola Widlund; Manuel Saez

    2005-01-01

    Full text of publication follows: Extensively validated and qualified for light-water reactor safety studies, the thermo-hydraulics code CATHARE has been adapted to deal also with gas-cooled reactor applications. In order to validate the code for these novel applications, CEA (Commissariat a l'Energie Atomique) has initiated an ambitious long-term experimental program. The foreseen experimental facilities range from small-scale loops for physical correlations, to component technology and system demonstration loops. In the short-term perspective, CATHARE is being validated against existing experimental data, in particular from the German power plant Oberhausen II and the South African Pebble-Bed Micro Model (PBMM). Oberhausen II, operated by the German utility EVO, is a 50 MW(e) direct-cycle Helium turbine plant. The power source is a gas burner rather than a nuclear reactor core, but the power conversion system resembles those of the GFR (Gas-cooled Fast Reactor) and other high-temperature reactor concepts. Oberhausen II was operated for more than 100 000 hours between 1974 and 1988. Design specifications, drawings and experimental data have been obtained through the European HTR project, offering a unique opportunity to validate CATHARE on a large-scale Brayton cycle. Available measurements of temperatures, pressures and mass flows throughout the circuit have allowed a very comprehensive thermohydraulic description of the plant, in steady-state conditions as well as during transients. The Pebble-Bed Micro Model (PBMM) is a small-scale model conceived to demonstrate the operability and control strategies of the South African PBMR concept. The model uses Nitrogen instead of Helium, and an electrical heater with a maximum rating of 420 kW. As the full-scale PBMR, the PBMM loop features three turbines and two compressors on the primary circuit, located on three separate shafts. The generator, however, is modelled by a third compressor on a separate circuit, with a

  7. TCV software test and validation tools and technique. [Terminal Configured Vehicle program for commercial transport aircraft operation

    Science.gov (United States)

    Straeter, T. A.; Williams, J. R.

    1976-01-01

    The paper describes techniques for testing and validating software for the TCV (Terminal Configured Vehicle) program which is intended to solve problems associated with operating a commercial transport aircraft in the terminal area. The TCV research test bed is a Boeing 737 specially configured with digital computer systems to carry out automatic navigation, guidance, flight controls, and electronic displays research. The techniques developed for time and cost reduction include automatic documentation aids, an automatic software configuration, and an all software generation and validation system.

  8. Comparison of Stepped Care Delivery Against a Single, Empirically Validated Cognitive-Behavioral Therapy Program for Youth With Anxiety: A Randomized Clinical Trial.

    Science.gov (United States)

    Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine

    2017-10-01

    Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. The Best Model of the Swiss Banknote Data -Validation by the 95% CI of coefficients and t-test of discriminant scores

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2016-06-01

    Full Text Available Discriminant analysis is not an inferential statistical method, since there are no equations for the standard error (SE) of the error rate or of the discriminant coefficients based on the normal distribution. In this paper, we propose a "k-fold cross-validation for small samples" that yields the 95% confidence intervals (CIs) of error rates and discriminant coefficients. The method is a computer-intensive approach implemented with statistical and mathematical programming (MP) software such as JMP and LINGO. With the proposed approach, we can choose the best model as the one with the minimum mean error rate in the validation samples (minimum M2 standard). In this research, we examine sixteen linearly separable models of the Swiss banknote data using eight linear discriminant functions (LDFs). The M2 of the best Revised IP-OLDF model is the smallest value among all models. We find that the coefficients of six of the sixteen Revised IP-OLDF models are rejected by the 95% CIs of the discriminant coefficients (discriminant coefficient standard). We also compare the t-values of the discriminant scores: the t-value of the best model is the maximum among the sixteen models (maximum t-value standard). Moreover, we can conclude that all standards support the best model of Revised IP-OLDF.
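
    The record's Revised IP-OLDF requires mathematical programming software, but the resampling idea itself can be sketched with an ordinary linear discriminant in Python: repeat k-fold cross-validation many times, take the mean validation error rate as M2, and read off percentile intervals for the coefficients. This is an illustrative analogue under those substitutions, not the authors' procedure.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import RepeatedStratifiedKFold

      def m2_and_coef_ci(X, y, n_splits=5, n_repeats=100, seed=0):
          # Mean validation error rate (M2) and 95% percentile intervals of the
          # discriminant coefficients over repeated k-fold splits.
          cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=seed)
          errors, coefs = [], []
          for train, test in cv.split(X, y):
              lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
              errors.append(1.0 - lda.score(X[test], y[test]))
              coefs.append(lda.coef_.ravel())
          return float(np.mean(errors)), np.percentile(np.array(coefs), [2.5, 97.5], axis=0)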

  10. Natural analogues and radionuclide transport model validation

    International Nuclear Information System (INIS)

    Lever, D.A.

    1987-08-01

    In this paper, some possible roles for natural analogues are discussed from the point of view of those involved with the development of mathematical models for radionuclide transport and with the use of these models in repository safety assessments. The characteristic features of a safety assessment are outlined in order to address the questions of where natural analogues can be used to improve our understanding of the processes involved and where they can assist in validating the models that are used. Natural analogues have the potential to provide useful information about some critical processes, especially long-term chemical processes and migration rates. There is likely to be considerable uncertainty and ambiguity associated with the interpretation of natural analogues, and thus it is their general features which should be emphasized, and models with appropriate levels of sophistication should be used. Experience gained in modelling the Koongarra uranium deposit in northern Australia is drawn upon. (author)

  11. Merging Empiricism and Humanism: Role of Social Validity in the School-Wide Positive Behavior Support Model

    Science.gov (United States)

    Marchant, Michelle; Heath, Melissa Allen; Miramontes, Nancy Y.

    2013-01-01

    Criteria for evaluating behavior support programs are changing. Consumer-based educational and behavioral programs, such as School-Wide Positive Behavior Support (SWPBS), are particularly influenced by consumer opinion. Unfortunately, the need for and use of social validity measures have not received adequate attention in the empirical literature…

  12. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions of strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
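
    As an illustration of the data reduction described above, the Python sketch below collapses two full-field strain maps to a small set of low-order Fourier descriptors and compares them; the use of Fourier magnitudes alone, the block size, and the comparison statistics are simplifying assumptions (the record also mentions Zernike moments, which are not implemented here).

      import numpy as np

      def fourier_descriptors(field, k=10):
          # Keep a k x k block of low-frequency Fourier magnitudes (~1e2 numbers)
          # in place of the ~1e5 to 1e6 pixels of the full-field map.
          F = np.fft.fftshift(np.fft.fft2(field))
          cy, cx = F.shape[0] // 2, F.shape[1] // 2
          return np.abs(F[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2]).ravel()

      def compare_fields(measured, predicted, k=10):
          dm, dp = fourier_descriptors(measured, k), fourier_descriptors(predicted, k)
          return {"correlation": float(np.corrcoef(dm, dp)[0, 1]),
                  "rms_residual": float(np.sqrt(np.mean((dp - dm) ** 2)))}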

  13. Outdoor Program Models: Placing Cooperative Adventure and Adventure Education Models on the Continuum.

    Science.gov (United States)

    Guthrie, Steven P.

    In two articles on outdoor programming models, Watters distinguished four models on a continuum ranging from the common adventure model, with minimal organizational structure and leadership control, to the guide service model, in which leaders are autocratic and trips are highly structured. Club programs and instructional programs were in between,…

  14. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    Science.gov (United States)

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  15. Lessons learned from recent geomagnetic disturbance model validation activities

    Science.gov (United States)

    Pulkkinen, A. A.; Welling, D. T.

    2017-12-01

    Due to concerns pertaining to geomagnetically induced current impact on ground-based infrastructure, there has been significantly elevated interest in applying models for local geomagnetic disturbance or "delta-B" predictions. Correspondingly there has been elevated need for testing the quality of the delta-B predictions generated by the modern empirical and physics-based models. To address this need, community-wide activities were launched under the GEM Challenge framework and one culmination of the activities was the validation and selection of models that were transitioned into operations at NOAA SWPC. The community-wide delta-B action is continued under the CCMC-facilitated International Forum for Space Weather Capabilities Assessment and its "Ground Magnetic Perturbations: dBdt, delta-B, GICs, FACs" working group. The new delta-B working group builds on the past experiences and expands the collaborations to cover the entire international space weather community. In this paper, we discuss the key lessons learned from the past delta-B validation exercises and lay out the path forward for building on those experience under the new delta-B working group.

  16. High Turbidity Solis Clear Sky Model: Development and Validation

    Directory of Open Access Journals (Sweden)

    Pierre Ineichen

    2018-03-01

    Full Text Available The Solis clear sky model is a spectral scheme based on radiative transfer calculations and the Lambert–Beer relation. Its broadband version is a simplified, fast analytical version; it is limited to broadband aerosol optical depths lower than 0.45, which is a weakness when applied in countries with very high turbidity such as China or India. In order to extend the use of the original simplified version of the model to high turbidity values, we developed a new version of the broadband Solis model based on radiative transfer calculations, valid for turbidity values up to 7, for the three components (global, beam, and diffuse) and for the four aerosol types defined by Shettle and Fenn. A validation against low-turbidity data acquired in Geneva shows slightly better results than the previous version. On data acquired at sites with higher turbidity, the bias stays within ±4% for the beam and global irradiances, and the standard deviation is around 5% for clean and stable conditions and around 12% for questionable data and variable sky conditions.

  17. Construction and validation of detailed kinetic models for the combustion of gasoline surrogates; Construction et validation de modeles cinetiques detailles pour la combustion de melanges modeles des essences

    Energy Technology Data Exchange (ETDEWEB)

    Touchard, S.

    2005-10-15

    The irreversible reduction of oil resources, the control of CO2 emissions and the application of increasingly strict standards on pollutant emissions lead researchers worldwide to work on reducing pollutant formation and improving engine yields, especially by using homogeneous-charge combustion of lean mixtures. The numerical simulation of fuel blend oxidation is an essential tool to study the influence of fuel formulation and engine conditions on auto-ignition and on pollutant emissions. Automatic generation helps to obtain detailed kinetic models, especially at low temperature, where the number of reactions quickly exceeds a thousand. The main purpose of this study is the generation and validation of detailed kinetic models for the oxidation of gasoline blends using the EXGAS software. This work involved improving the computation rules for thermodynamic and kinetic data, which were validated by numerical simulation using the CHEMKIN II software. A large part of this work concerned understanding the low-temperature oxidation chemistry of C5 and larger alkenes. Low- and high-temperature mechanisms were proposed and validated for 1-pentene, 1-hexene, the binary mixtures 1-hexene/iso-octane, 1-hexene/toluene and iso-octane/toluene, and the ternary mixture 1-hexene/toluene/iso-octane. Simulations were also done for propene, 1-butene and iso-octane with former models including the modifications proposed in this PhD work. While the generated models allowed us to simulate the auto-ignition delays of the studied molecules and blends with good agreement, some uncertainties still remain for some reaction paths leading to the formation of cyclic products in the case of alkene oxidation at low temperature. It would also be interesting to carry on this work towards combustion models for gasoline blends at low temperature. (author)

  18. Status of CHAP: composite HTGR analysis program

    International Nuclear Information System (INIS)

    Secker, P.A.; Gilbert, J.S.

    1975-12-01

    Development of an HTGR accident simulation program is in progress for the prediction of the overall HTGR plant transient response to various initiating events. The status of the digital computer program named CHAP (Composite HTGR Analysis Program) as of June 30, 1975, is given. The philosophy, structure, and capabilities of the CHAP code are discussed. Mathematical descriptions are given for those HTGR components that have been modeled. Component model validation and evaluation using auxiliary analysis codes are also discussed

  19. Validation of an O-18 leaf water enrichment model

    Energy Technology Data Exchange (ETDEWEB)

    Jaeggi, M.; Saurer, M.; Siegwolf, R.

    2002-03-01

    The seasonal trend in δ18O_ol in the leaf organic matter of spruce needles of mature trees could be modelled for two years. The seasonality was mainly explained by the δ18O of top-soil water, whereas between-year differences were due to variation in air humidity. Application of a third year's data set improved the correlation between modelled and measured δ18O_ol and thus validated our extended Dongmann model. (author)

  20. Development and validation of logistic prognostic models by predefined SAS-macros

    Directory of Open Access Journals (Sweden)

    Ziegler, Christoph

    2006-02-01

    Full Text Available In medical decision making about therapies or diagnostic procedures, the prognosis of the course or severity of a disease plays a relevant role. Besides the subjective judgment of the clinician, mathematical models can help in providing such prognoses. Such models are mostly multivariate regression models; in the case of a dichotomous outcome, the logistic model is applied as the standard model. In this paper we describe SAS macros for the development of such a model, for the examination of its prognostic performance, and for model validation. The rationale for this development approach to prognostic modelling and the description of the macros can only be given briefly in this paper; much more detail is given elsewhere. These 14 SAS macros are a tool for setting up the whole process of deriving a prognostic model. In particular, the possibility of validating the model with a standardized software tool offers an opportunity that is not generally used in published prognostic models. Therefore, this can help to develop new models with good prognostic performance for use in medical applications.
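
    The macros themselves are SAS code and are not reproduced here; as a rough Python analogue of the validation step, the sketch below computes an optimism-corrected AUC for a logistic model by bootstrapping, which is one common way of internally validating such a model. The function names and the choice of 200 bootstrap replicates are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def bootstrap_validated_auc(X, y, n_boot=200, seed=0):
          # Apparent AUC minus the average bootstrap optimism.
          rng = np.random.default_rng(seed)
          model = LogisticRegression(max_iter=1000).fit(X, y)
          apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
          optimism = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(y), len(y))
              if len(np.unique(y[idx])) < 2:          # skip degenerate resamples
                  continue
              m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
              optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                              - roc_auc_score(y, m.predict_proba(X)[:, 1]))
          return apparent - float(np.mean(optimism))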

  1. Model Checking JAVA Programs Using Java Pathfinder

    Science.gov (United States)

    Havelund, Klaus; Pressburger, Thomas

    2000-01-01

    This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze the program. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.

  2. Modeling and validation of existing VAV system components

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada)

    2004-07-01

    The optimization of supervisory control strategies and local-loop controllers can improve the performance of HVAC (heating, ventilating, air-conditioning) systems. In this study, the component model of the fan, the damper and the cooling coil were developed and validated against monitored data of an existing variable air volume (VAV) system installed at Montreal's Ecole de Technologie Superieure. The measured variables that influence energy use in individual HVAC models included: (1) outdoor and return air temperature and relative humidity, (2) supply air and water temperatures, (3) zone airflow rates, (4) supply duct, outlet fan, mixing plenum static pressures, (5) fan speed, and (6) minimum and principal damper and cooling and heating coil valve positions. The additional variables that were considered, but not measured were: (1) fan and outdoor airflow rate, (2) inlet and outlet cooling coil relative humidity, and (3) liquid flow rate through the heating or cooling coils. The paper demonstrates the challenges of the validation process when monitored data of existing VAV systems are used. 7 refs., 11 figs.

  3. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    Science.gov (United States)

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes: defining key intervention targets; delineating intervention activity-target pairings; identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience; and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries
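
    The record does not prescribe a specific summary statistic for the expert ratings, so the Python sketch below simply aggregates hypothetical ratings into a content-validity-index-style score per activity-target pairing and flags low scorers; the 1-4 scale, the 0.78 threshold, and the item names are all assumptions.

      def content_validity_index(ratings, threshold=0.78):
          # Proportion of experts rating an item 3 or 4 on a 1-4 relevance scale.
          report = {}
          for item, scores in ratings.items():
              cvi = sum(1 for s in scores if s >= 3) / len(scores)
              report[item] = {"cvi": round(cvi, 2), "flag_for_revision": cvi < threshold}
          return report

      ratings = {  # hypothetical ratings for two activity-target pairings
          "breathing_exercise -> reduce_arousal": [4, 4, 3, 2, 4],
          "feelings_quiz -> identify_emotions": [3, 2, 2, 3, 4],
      }
      print(content_validity_index(ratings))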

  4. International validation of safety analyses for nuclear power plants; Mednarodno preverjanje varnostnih analiz za jedrske elektrane

    Energy Technology Data Exchange (ETDEWEB)

    Gregoric, N; Mavko, B [Institut ' Jozef Stefan' Ljubljana (Yugoslavia)

    1988-07-01

    The paper describes the participation of the 'J. Stefan' Institute in international standard problems for the validation of models and programs for safety analysis. The main international experimental facilities are listed that collect the data which are basic for understanding physical phenomena, for code development, and for the validation of models and programs. Since the results of international standard problem analyses are published in a joint final report, it is simple to assess the conformance of the results of a particular group with the experiment. Good results from the three international exercises completed so far have encouraged the group to participate currently in OECD-ISP-22, which models the Italian three-loop PWR. (author)

  5. DNA Commission of the International Society for Forensic Genetics: Recommendations on the validation of software programs performing biostatistical calculations for forensic genetics applications.

    Science.gov (United States)

    Coble, M D; Buckleton, J; Butler, J M; Egeland, T; Fimmers, R; Gill, P; Gusmão, L; Guttman, B; Krawczak, M; Morling, N; Parson, W; Pinto, N; Schneider, P M; Sherry, S T; Willuweit, S; Prinz, M

    2016-11-01

    The use of biostatistical software programs to assist in data interpretation and calculate likelihood ratios is essential to forensic geneticists and part of the daily case work flow for both kinship and DNA identification laboratories. Previous recommendations issued by the DNA Commission of the International Society for Forensic Genetics (ISFG) covered the application of biostatistical evaluations for STR typing results in identification and kinship cases, and this is now being expanded to provide best practices regarding validation and verification of the software required for these calculations. With larger multiplexes, more complex mixtures, and increasing requests for extended family testing, laboratories are relying more than ever on specific software solutions, and sufficient validation, training and extensive documentation are of utmost importance. Here, we present recommendations for the minimum requirements to validate biostatistical software to be used in forensic genetics. We distinguish between developmental validation and the responsibilities of the software developer or provider, and the internal validation studies to be performed by the end user. Recommendations for the software provider address, for example, the documentation of the underlying models used by the software, validation data expectations, version control, implementation and training support, as well as continuity and user notifications. For the internal validations the recommendations include: creating a validation plan, requirements for the range of samples to be tested, Standard Operating Procedure development, and internal laboratory training and education. To ensure that all laboratories have access to a wide range of samples for validation and training purposes the ISFG DNA commission encourages collaborative studies and public repositories of STR typing results. Published by Elsevier Ireland Ltd.

  6. A Type Graph Model for Java Programs

    NARCIS (Netherlands)

    Rensink, Arend; Zambon, Eduardo

    2009-01-01

    In this report we present a type graph that models all executable constructs of the Java programming language. Such a model is useful for any graph-based technique that relies on a representation of Java programs as graphs. The model can be regarded as a common representation to which all Java

  7. Validation of groundwater modelling for DDT and petroleum hydrocarbons at Border Pump Station and Rainy Hollow, northern British Columbia

    International Nuclear Information System (INIS)

    Dodd, M.; Bright, D.; Hartshorne, B.

    2001-01-01

    Border Station and Rainy Hollow are inactive booster pumping stations along the Haines-Fairbanks Pipeline in northern British Columbia. An emergency site cleanup was conducted in 1994 after canisters containing DDT [1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane] were discovered buried in a dump. A detailed site investigation showed that hydrocarbons and DDT were present in soil and groundwater. The major contaminants of concern were DDTs in surface soil, DDTs in subsurface soils and groundwater, and light hydrocarbons in subsurface soils and groundwater. Remedial action took place in the summer of 1997. The canisters, along with soils and other contaminated materials, were excavated and shipped off-site for disposal. A conceptual groundwater model was developed to predict future contaminant releases to the nearby Klehini River. A monitoring program was initiated to validate the groundwater model. From 1997 to 2000, the groundwater was sampled and analyzed annually for DDT, metals and hydrocarbons. Results indicated a striking overall consistency in the concentrations of DDT and hydrocarbons in both groundwater and surface water samples, confirming the validity of the 1996 model predictions. 12 refs., 1 tab., 4 figs

  8. Predicting Environmental Suitability for a Rare and Threatened Species (Lao Newt, Laotriton laoensis) Using Validated Species Distribution Models

    Science.gov (United States)

    Chunco, Amanda J.; Phimmachak, Somphouthone; Sivongxay, Niane; Stuart, Bryan L.

    2013-01-01

    The Lao newt (Laotriton laoensis) is a recently described species currently known only from northern Laos. Little is known about the species, but it is threatened as a result of overharvesting. We integrated field survey results with climate and altitude data to predict the geographic distribution of this species using the niche modeling program Maxent, and we validated these predictions by using interviews with local residents to confirm model predictions of presence and absence. The results of the validated Maxent models were then used to characterize the environmental conditions of areas predicted suitable for L. laoensis. Finally, we overlaid the resulting model with a map of current national protected areas in Laos to determine whether or not any land predicted to be suitable for this species is coincident with a national protected area. We found that both area under the curve (AUC) values and interview data provided strong support for the predictive power of these models, and we suggest that interview data could be used more widely in species distribution niche modeling. Our results further indicated that this species is most likely geographically restricted to high altitude regions (i.e., over 1,000 m elevation) in northern Laos and that only a minute fraction of suitable habitat is currently protected. This work thus emphasizes that increased protection efforts, including listing this species as endangered and the establishment of protected areas in the region predicted to be suitable for L. laoensis, are urgently needed. PMID:23555808
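
    A minimal sketch of the kind of check described (comparing model suitability scores against interview-confirmed presence or absence) is given below in Python; the suitability values, the interview outcomes, and the 0.5 presence threshold are hypothetical, not the study's data.

      import numpy as np
      from sklearn.metrics import confusion_matrix, roc_auc_score

      suitability = np.array([0.82, 0.74, 0.55, 0.31, 0.12, 0.08])  # model output per surveyed village
      interviews  = np.array([1,    1,    1,    0,    0,    0])     # 1 = residents report the newt

      print("AUC against interviews:", round(roc_auc_score(interviews, suitability), 2))
      print(confusion_matrix(interviews, (suitability >= 0.5).astype(int)))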

  9. Verification and validation of predictive computer programs describing the near and far-field chemistry of radioactive waste disposal systems

    International Nuclear Information System (INIS)

    Read, D.; Broyd, T.W.

    1988-01-01

    This paper provides an introduction to CHEMVAL, an international project concerned with establishing the applicability of chemical speciation and coupled transport models to the simulation of realistic waste disposal situations. The project aims to validate computer-based models quantitatively by comparison with laboratory and field experiments. Verification of the various computer programs employed by research organisations within the European Community is ensured through close inter-laboratory collaboration. The compilation and review of thermodynamic data forms an essential aspect of this work and has led to the production of an internally consistent standard CHEMVAL database. The sensitivity of results to variation in fundamental constants is being monitored at each stage of the project and, where feasible, complementary laboratory studies are used to improve the data set. Currently, thirteen organisations from five countries are participating in CHEMVAL which forms part of the Commission of European Communities' MIRAGE 2 programme of research. (orig.)

  10. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  11. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, 4He, deuterons, tritons and 3He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it can give correct results for optical-model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and has given reasonable results.

  12. Study to validate the outcome goal, competencies and educational objectives for use in intensive care orientation programs.

    Science.gov (United States)

    Boyle, M; Butcher, R; Kenney, C

    1998-03-01

    Intensive care orientation programs have become an accepted component of intensive care education. To date, however, there have been no Australian-based standards defining the appropriate level of competence to be attained upon completion of orientation. The aim of this study was to validate a set of aims, competencies and educational objectives that could form the basis of intensive care orientation and which would ensure an outcome standard of safe and effective practice. An initial document containing a statement of the desired outcome goal, six competency statements and 182 educational objectives was developed through a review of the orientation programs developed by the investigators. The Delphi technique was used to gain consensus among 13 nurses recognised for their expertise in intensive care education. The expert group rated the acceptability of each of the study items and provided suggestions for objectives to be included. An approval rating of 80 per cent was required to retain each of the study items, with the document refined through three Delphi rounds. The final document contains a validated statement of outcome goal, competencies and educational objectives for intensive care orientation programs.

  13. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  14. A Type Graph Model for Java Programs

    NARCIS (Netherlands)

    Rensink, Arend; Zambon, Eduardo; Lee, D.; Lopes, A.; Poetzsch-Heffter, A.

    2009-01-01

    In this work we present a type graph that models all executable constructs of the Java programming language. Such a model is useful for any graph-based technique that relies on a representation of Java programs as graphs. The model can be regarded as a common representation to which all Java syntax

  15. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
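
    The class-specific regressions in ECOSAR are proprietary to that program; as a generic illustration of the approach named above (regressing toxicity on log Kow within a chemical class), here is a small Python sketch with made-up training values, not ECOSAR's coefficients.

      import numpy as np

      log_kow  = np.array([1.2, 2.0, 2.8, 3.5, 4.1])     # hypothetical class members
      log_lc50 = np.array([0.9, 0.3, -0.4, -1.0, -1.6])  # hypothetical log10 LC50 values

      slope, intercept = np.polyfit(log_kow, log_lc50, 1)

      def predict_log_lc50(kow):
          # Class QSAR: toxicity as a linear function of log Kow.
          return slope * kow + intercept

      print(round(predict_log_lc50(3.0), 2))  # prediction for a new member of the class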

  16. Validation of PV-RPM Code in the System Advisor Model.

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Geoffrey Taylor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
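
    The failure and repair distributions used by PV-RPM are fitted to the Arizona plant data and are not given in the record; the Python sketch below only illustrates the general renewal-simulation pattern such a model relies on, with assumed Weibull failure and lognormal repair distributions and made-up parameters.

      import numpy as np

      def simulate_component(years=25.0, wb_shape=1.5, wb_scale=12.0,
                             rep_mu=np.log(14.0 / 365.0), rep_sigma=0.5, seed=0):
          # Count failures and accumulated downtime (in years) for one component.
          rng = np.random.default_rng(seed)
          t, failures, downtime = 0.0, 0, 0.0
          while True:
              t += wb_scale * rng.weibull(wb_shape)      # time to next failure
              if t > years:
                  break
              repair = rng.lognormal(rep_mu, rep_sigma)  # repair duration
              failures, downtime, t = failures + 1, downtime + repair, t + repair
          return failures, downtime

      print(simulate_component())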

  17. Deployable and Conformal Planar Micro-Devices: Design and Model Validation

    Directory of Open Access Journals (Sweden)

    Jinda Zhuang

    2014-08-01

    Full Text Available We report a design concept for a deployable planar microdevice and the modeling and experimental validation of its mechanical behavior. The device consists of foldable membranes that are suspended between flexible stems and actuated by push-pull wires. Such a deployable device can be introduced into a region of interest in its compact “collapsed” state and then deployed to conformally cover a large two-dimensional surface area for minimally invasive biomedical operations and other engineering applications. We develop and experimentally validate theoretical models based on the energy minimization approach to examine the conformality and figures of merit of the device. The experimental results obtained using model contact surfaces agree well with the prediction and quantitatively highlight the importance of the membrane bending modulus in controlling surface conformality. The present study establishes an early foundation for the mechanical design of this and related deployable planar microdevice concepts.

  18. Two fuzzy possibilistic bi-objective zero-one programming models for outsourcing the equipment maintenance problem

    Science.gov (United States)

    Vahdani, Behnam; Jolai, Fariborz; Tavakkoli-Moghaddam, Reza; Meysam Mousavi, S.

    2012-07-01

    Maintenance outsourcing can be regarded as a strategic weapon to increase productivity and customer satisfaction in many companies, allowing this critical activity to be performed in a more efficient and effective way. This article presents two novel fuzzy possibilistic bi-objective zero-one programming (FPBOZOP) models for outsourcing of equipment maintenance. In these models, cost parameters (including outsourcing cost and risk cost), the operation times for performing the equipment maintenance, the reliability level, and other influential parameters are considered throughout the outsourcing process. Moreover, unlike previous studies, the presented models can measure the capability of the company in performing different activities, in order to assess the possibility of in-house maintenance, and can lead to the best decision on the basis of the models' results. Both models are developed under uncertainty, which gives top managers the possibility of assigning more than one piece of equipment or project to the supplier so that profit is maximized and cost is minimized by considering both objectives concurrently. Then, a new fuzzy mathematical programming based possibilistic approach is introduced as a solution methodology from the recent literature to solve the proposed bi-objective zero-one programming (BOZOP) models and to reach a preferred compromise solution. Furthermore, a real case study is used to demonstrate and validate the effectiveness of the presented models. The computational results reveal that the models can be implemented in a variety of problems in the domain of equipment maintenance outsourcing and project outsourcing, from either a theory or an application perspective.

  19. Modeling and validating HL7 FHIR profiles using semantic web Shape Expressions (ShEx).

    Science.gov (United States)

    Solbrig, Harold R; Prud'hommeaux, Eric; Grieve, Grahame; McKenzie, Lloyd; Mandel, Joshua C; Sharma, Deepak K; Jiang, Guoqian

    2017-03-01

    HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. Create a FHIR to ShEx model transformation and assess its ability to describe and validate FHIR RDF data. We created the methods and tools that generate the ShEx schemas modeling the FHIR to RDF specification being developed by HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR to RDF transformations. The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and FHIR example data to RDF transformations were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR to RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016. ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation. ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation. Copyright © 2017 Elsevier Inc. All rights reserved.
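
    ShEx schemas themselves are written in the ShEx language; the Python fragment below does not run ShEx but only illustrates, with rdflib and a hypothetical simplified vocabulary, the kind of structural constraint such a schema would express against a FHIR-style RDF instance.

      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/fhir/")   # hypothetical, not the FHIR namespace
      ttl = """
      @prefix ex: <http://example.org/fhir/> .
      ex:pat1 a ex:Patient ; ex:gender "female" .
      """
      g = Graph()
      g.parse(data=ttl, format="turtle")

      def conforms(graph, node):
          # Toy constraint: the node carries exactly one literal ex:gender value.
          genders = list(graph.objects(node, EX.gender))
          return len(genders) == 1 and isinstance(genders[0], Literal)

      print(conforms(g, EX.pat1))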

  20. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  1. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided

  2. Heat Transfer Modeling and Validation for Optically Thick Alumina Fibrous Insulation

    Science.gov (United States)

    Daryabeigi, Kamran

    2009-01-01

    Combined radiation/conduction heat transfer through unbonded alumina fibrous insulation was modeled using the diffusion approximation for modeling the radiation component of heat transfer in the optically thick insulation. The validity of the heat transfer model was investigated by comparison to previously reported experimental effective thermal conductivity data over the insulation density range of 24 to 96 kg/m3, with a pressure range of 0.001 to 750 torr (0.1 to 101.3 x 10^3 Pa), and test sample hot side temperature range of 530 to 1360 K. The model was further validated by comparison to thermal conductivity measurements using the transient step heating technique on an insulation sample at a density of 144 kg/m3 over a pressure range of 0.001 to 760 torr, and temperature range of 290 to 1090 K.
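
    The diffusion (Rosseland) approximation mentioned above adds a radiative term proportional to T^3 to the conductive term; the Python sketch below shows that combination, with an assumed specific extinction coefficient and density (not the measured alumina values) and the refractive-index factor taken as 1.

      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def effective_conductivity(T, k_cond, specific_extinction=30.0, density=48.0):
          # k_eff = k_cond + 16*sigma*T^3 / (3*e*rho), with e in m^2/kg and rho in kg/m^3.
          k_rad = 16.0 * SIGMA * T ** 3 / (3.0 * specific_extinction * density)
          return k_cond + k_rad

      print(effective_conductivity(T=1000.0, k_cond=0.05))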

  3. PRA (Probabilistic Risk Assessments) Participation versus Validation

    Science.gov (United States)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems and failure data. This methodology basically validates the risk parameters of the project or system design. For high risk or high dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process where the PRA model can be used to determine if the mitigation technique is effective in reducing risk. This can result in more efficient and cost effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information that can positively influence final system and equipment design, the PRA tool can also participate in design development, supporting a safe and cost-effective product.

  4. Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.

    Science.gov (United States)

    Woodward, S J

    2001-09-01

    The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.

  5. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  6. Field research program for unsaturated flow and transport experimentation

    International Nuclear Information System (INIS)

    Tidwell, V.C.; Rautman, C.A.; Glass, R.J.

    1992-01-01

    As part of the Yucca Mountain Site Characterization Project, a field research program has been developed to refine and validate models for flow and transport through unsaturated fractured rock. Validation of these models within the range of their application for performance assessment requires a more sophisticated understanding of the processes that govern flow and transport within fractured porous media than currently exists. In particular, our research is prioritized according to understanding and modeling processes that, if not accurately incorporated into performance assessment models, would adversely impact the project's ability to evaluate repository performance. For this reason, we have oriented our field program toward enhancing our understanding of scaling processes as they relate to effective media property modeling, as well as to the conceptual modeling of complex flow and transport phenomena

  7. When is the Anelastic Approximation a Valid Model for Compressible Convection?

    Science.gov (United States)

    Alboussiere, T.; Curbelo, J.; Labrosse, S.; Ricard, Y. R.; Dubuffet, F.

    2017-12-01

    Compressible convection is ubiquitous in large natural systems such as planetary atmospheres and stellar and planetary interiors. Its modelling is notoriously more difficult than the case where the Boussinesq approximation applies. One reason for that difficulty has been put forward by Ogura and Phillips (1961): the compressible equations generate sound waves with very short time scales which need to be resolved. This is why they introduced an anelastic model, based on an expansion of the solution around an isentropic hydrostatic profile. How accurate is that anelastic model? What are the conditions for its validity? To answer these questions, we have developed a numerical model for the full set of compressible equations and compared its solutions with those of the corresponding anelastic model. We considered a simple rectangular 2D Rayleigh-Bénard configuration and decided to restrict the analysis to infinite Prandtl numbers. This choice is valid for convection in the mantles of rocky planets and, more importantly, leads to a zero Mach number, which removes the question of the interference of acoustic waves with convection. In that simplified context, we used the entropy balances (that of the full set of equations and that of the anelastic model) to investigate the differences between exact and anelastic solutions. We found that the validity of the anelastic model is dictated by two conditions: first, the superadiabatic temperature difference must be small compared with the adiabatic temperature difference (as expected), ε = ΔT_SA / ΔT_a ≪ 1, and second, the product of ε with the Nusselt number must also be small.
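    Written out as display math (the symbols follow the wording of the abstract; the subscripts are reconstructed rather than copied from the paper), the two validity conditions reported above are:

```latex
\varepsilon \;=\; \frac{\Delta T_{\mathrm{SA}}}{\Delta T_{a}} \;\ll\; 1
\qquad \text{and} \qquad
\varepsilon \,\mathrm{Nu} \;\ll\; 1
```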

  8. Model-based Systems Engineering: Creation and Implementation of Model Validation Rules for MOS 2.0

    Science.gov (United States)

    Schmidt, Conrad K.

    2013-01-01

    Model-based Systems Engineering (MBSE) is an emerging modeling application that is used to enhance the system development process. MBSE allows for the centralization of project and system information that would otherwise be stored in extraneous locations, yielding better communication, expedited document generation and increased knowledge capture. Based on MBSE concepts and the employment of the Systems Modeling Language (SysML), extremely large and complex systems can be modeled from conceptual design through all system lifecycles. The Operations Revitalization Initiative (OpsRev) seeks to leverage MBSE to modernize the aging Advanced Multi-Mission Operations Systems (AMMOS) into the Mission Operations System 2.0 (MOS 2.0). The MOS 2.0 will be delivered in a series of conceptual and design models and documents built using the modeling tool MagicDraw. To ensure model completeness and cohesiveness, it is imperative that the MOS 2.0 models adhere to the specifications, patterns and profiles of the Mission Service Architecture Framework, thus leading to the use of validation rules. This paper outlines the process by which validation rules are identified, designed, implemented and tested. Ultimately, these rules provide the ability to maintain model correctness and synchronization in a simple, quick and effective manner, thus allowing the continuation of project and system progress.

  9. Steam generator tube integrity program

    International Nuclear Information System (INIS)

    Dierks, D.R.; Shack, W.J.; Muscara, J.

    1996-01-01

    A new research program on steam generator tubing degradation is being sponsored by the U.S. Nuclear Regulatory Commission (NRC) at Argonne National Laboratory. This program is intended to support a performance-based steam generator tube integrity rule. Critical areas addressed by the program include evaluation of the processes used for the in-service inspection of steam generator tubes and recommendations for improving the reliability and accuracy of inspections; validation and improvement of correlations for evaluating integrity and leakage of degraded steam generator tubes; and validation and improvement of correlations and models for predicting degradation in steam generator tubes as aging occurs. The studies will focus on mill-annealed Alloy 600 tubing; however, tests will also be performed on replacement materials such as thermally-treated Alloy 600 or 690. An overview of the technical work planned for the program is given

  10. Multiphysics modelling and experimental validation of high concentration photovoltaic modules

    International Nuclear Information System (INIS)

    Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.

    2017-01-01

    Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.

  11. Validation of a model of intensive training in digestive laparoscopic surgery.

    Science.gov (United States)

    Enciso, Silvia; Díaz-Güemes, Idoia; Usón, Jesús; Sánchez-Margallo, Francisco Miguel

    2016-02-01

    Our objective was to assess a laparoscopic training model for general surgery residents. Twelve general surgery residents carried out a training program consisting of a theoretical session (one hour) and hands-on sessions on a simulator (7 h) and on an animal model (13 h). For the first and last repetitions of the simulator tasks and the Nissen fundoplication technique, time and scores from the global rating scale objective structured assessment of technical skills (OSATS) were registered. Before and after the course, participants performed 4 tasks on the virtual reality simulator LAPMentor™: 1) hand-eye coordination, 2) hand-hand coordination, 3) transference of objects and 4) cholecystectomy task, registering time and movement metrics. Moreover, the residents completed a questionnaire related to the training components on a 5-point rating scale. The last repetition of the tasks and the Nissen fundoplication technique were performed faster and with a higher OSATS score. After the course, the participants performed all LAPMentor™ tasks faster, increasing the speed of movements in all tasks. The number of movements decreased in tasks 2, 3 and 4, as did path length in tasks 2 and 4. Training components were positively rated by the residents, with the suturing task rated highest (4.90 ± 0.32). This training model in digestive laparoscopic surgery has been shown to be valid for the improvement of basic and advanced skills of general surgery residents. Intracorporeal suturing and the animal model were the best rated training elements.

  12. Validating Savings Claims of Cold Climate Zero Energy Ready Homes

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, J. [Consortium for Advanced Residential Buildings, Norwalk, CT (United States); Puttagunta, S. [Consortium for Advanced Residential Buildings, Norwalk, CT (United States)

    2015-06-05

    This study was intended to validate the actual performance of three ZERHs in the Northeast against energy models created in REM/Rate v14.5 (one of the certified software programs used to generate a HERS Index) and the National Renewable Energy Laboratory’s Building Energy Optimization (BEopt™) v2.3 E+ (a more sophisticated hourly energy simulation software). This report details the validation methods used to analyze energy consumption at each home.

  13. DMFC performance and methanol cross-over: Experimental analysis and model validation

    Energy Technology Data Exchange (ETDEWEB)

    Casalegno, A.; Marchesi, R. [Dipartimento di Energia, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2008-10-15

    A combined experimental and modelling approach is proposed to analyze methanol cross-over and its effect on DMFC performance. The experimental analysis is performed in order to allow an accurate investigation of methanol cross-over influence on DMFC performance, hence measurements were characterized in terms of uncertainty and reproducibility. The findings suggest that methanol cross-over is mainly determined by diffusion transport and affects cell performance partly via methanol electro-oxidation at the cathode. The modelling analysis is carried out to further investigate methanol cross-over phenomenon. A simple model evaluates the effectiveness of two proposed interpretations regarding methanol cross-over and its effects. The model is validated using the experimental data gathered. Both the experimental analysis and the proposed and validated model allow a substantial step forward in the understanding of the main phenomena associated with methanol cross-over. The findings confirm the possibility to reduce methanol cross-over by optimizing anode feeding. (author)

  14. Modelling and validation of electromechanical shock absorbers

    Science.gov (United States)

    Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico

    2013-08-01

    Electromechanical vehicle suspension systems represent a promising substitute to conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges meet in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.

  15. Modelling of PEM Fuel Cell Performance: Steady-State and Dynamic Experimental Validation

    Directory of Open Access Journals (Sweden)

    Idoia San Martín

    2014-02-01

    Full Text Available This paper reports on the modelling of a commercial 1.2 kW proton exchange membrane fuel cell (PEMFC), based on interrelated electrical and thermal models. The electrical model proposed is based on the integration of the thermodynamic and electrochemical phenomena taking place in the FC whilst the thermal model is established from the FC thermal energy balance. The combination of both models makes it possible to predict the FC voltage, based on the current demanded and the ambient temperature. Furthermore, an experimental characterization is conducted and the parameters for the models associated with the FC electrical and thermal performance are obtained. The models are implemented in Matlab Simulink and validated in a number of operating environments, for steady-state and dynamic modes alike. In turn, the FC models are validated in an actual microgrid operating environment, through the series connection of four PEMFCs. The simulations of the models precisely and accurately reproduce the FC electrical and thermal performance.

  16. Validation of Yoon's Critical Thinking Disposition Instrument.

    Science.gov (United States)

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm measurement compatibility across groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities.

  17. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  18. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ) using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
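    As a hedged illustration of the statistics named above (Pearson correlation, RMSE of the prediction error, and criterion-referenced agreement into Healthy Fitness Zones), the sketch below shows one way such quantities are typically computed; the data arrays and per-subject HFZ cutoffs are placeholders, not the study's values.

```python
# Illustrative sketch (not the study's code): compare predicted vs. measured
# VO2 peak and compute correlation, RMSE, and criterion-referenced agreement.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def validation_stats(vo2_measured, vo2_predicted, hfz_cutoff):
    """Correlation, RMSE, kappa and % agreement for one prediction model."""
    vo2_measured = np.asarray(vo2_measured, dtype=float)
    vo2_predicted = np.asarray(vo2_predicted, dtype=float)
    r = np.corrcoef(vo2_measured, vo2_predicted)[0, 1]
    rmse = np.sqrt(np.mean((vo2_predicted - vo2_measured) ** 2))
    # Healthy Fitness Zone classification (1 = in zone) from per-subject cutoffs
    in_zone_meas = (vo2_measured >= hfz_cutoff).astype(int)
    in_zone_pred = (vo2_predicted >= hfz_cutoff).astype(int)
    kappa = cohen_kappa_score(in_zone_meas, in_zone_pred)
    agreement = np.mean(in_zone_meas == in_zone_pred)
    return r, rmse, kappa, agreement
```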

  19. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
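    A minimal sketch of the kind of "model validation test" described above, written as an ordinary unit test that checks a simulated quantity against an experimental reference within a tolerance; the function name and reference value are hypothetical stand-ins, not part of the OpenWorm code base.

```python
# Hypothetical model validation test: fail the build if the simulation drifts
# away from the published experimental value by more than the tolerance.
import unittest

REFERENCE_RESTING_MV = -60.0   # assumed experimental reference value (mV)
TOLERANCE_MV = 5.0             # acceptable deviation for the model (mV)

def simulate_membrane_potential():
    """Stand-in for a model run; a real test would invoke the simulator here."""
    return -62.3

class TestRestingPotential(unittest.TestCase):
    def test_resting_potential_matches_experiment(self):
        simulated = simulate_membrane_potential()
        self.assertAlmostEqual(simulated, REFERENCE_RESTING_MV,
                               delta=TOLERANCE_MV)

if __name__ == "__main__":
    unittest.main()
```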

  20. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  1. Validating High-Stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    2002-01-01

    Makes the point that the interpretations and use of high-stakes test scores rely on policy assumptions about what should be taught and the content standards and performance standards that should be applied. The assumptions built into an assessment need to be subjected to scrutiny and criticism if a strong case is to be made for the validity of the…

  2. Wave Tank Testing and Model Validation of an Autonomous Wave Energy Converter

    Directory of Open Access Journals (Sweden)

    Bret Bosma

    2015-08-01

    Full Text Available A key component in bringing ocean wave energy converters from concept to commercialization is the building and testing of scaled prototypes to provide model validation. A one quarter scale prototype of an autonomous two body heaving point absorber was modeled, built, and tested for this work. Wave tank testing results are compared with two hydrodynamic and system models—implemented in both ANSYS AQWA and MATLAB/Simulink—and show model validation over certain regions of operation. This work will serve as a guide for future developers of wave energy converter devices, providing insight in taking their design from concept to prototype stage.

  3. From theory to PrACTice: a cognitive remediation program based on a neuropsychological model of schizophrenia

    Directory of Open Access Journals (Sweden)

    Delphine eFabre

    2015-12-01

    Full Text Available Cognitive dysfunction is one of the hallmark deficits of schizophrenia. A wide range of studies illustrate how it is strongly interconnected to clinical presentation and daily life functioning (see Green, 1996 and Green et al., 2000). Hence, cognition is an important treatment target in schizophrenia. To address the challenge of cognitive enhancement in schizophrenia, a large number of cognitive remediation programs have been developed and evaluated over the past several decades. First, an overview of these programs is presented, highlighting their specificity to the cognitive deficits of schizophrenia using an integrated method. In this case, cognitive training focuses on enhancing several elementary cognitive functions considered a prerequisite to social skills or vocational training modules. These programs are based on the neurodevelopmental hypothesis of schizophrenia. However, moderate improvement has been observed for patients who benefit from these therapies, as described in the review by Wykes et al. (2011). Next, neuropsychological models of schizophrenia are presented. They highlight the critical role of internally generated intentions in appropriate willful actions. The cognitive control mechanism deals with this ability. Interestingly, available cognitive remediation programs have not been influenced by these models. Hence, we propose an alternative: setting up a specific cognitive remediation program for schizophrenia patients by targeting the cognitive control mechanism. We describe the PrACTice program, which is in the process of being validated.

  4. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Full Text Available Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  5. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  7. Principal-subordinate hierarchical multi-objective programming model of initial water rights allocation

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2009-06-01

    Full Text Available The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.

  8. A New Statistical Method to Determine the Degree of Validity of Health Economic Model Outcomes against Empirical Data.

    NARCIS (Netherlands)

    Corro Ramos, Isaac; van Voorn, George A K; Vemer, Pepijn; Feenstra, Talitha L; Al, Maiwenn J

    2017-01-01

    The validation of health economic (HE) model outcomes against empirical data is of key importance. Although statistical testing seems applicable, guidelines for the validation of HE models lack guidance on statistical validation, and actual validation efforts often present subjective judgment of

  9. Experimental validation of a mathematical model for seabed liquefaction in waves

    DEFF Research Database (Denmark)

    Sumer, B. Mutlu; Kirca, Özgür; Fredsøe, Jørgen

    2011-01-01

    This paper summarizes the results of an experimental study directed towards the validation of a mathematical model for the buildup of pore water pressure and resulting liquefaction of marine soils under progressive waves. Experiments were conducted under controlled conditions with silt (d50 = 0.070 mm) in a wave flume with a soil pit. Waves with wave heights in the range 7.7-18 cm with the water depth 55 cm and the wave period 1.6 s enabled us to study both the liquefaction and no-liquefaction regime pore water pressure buildup. The experimental data was used to validate the model. A numerical

  10. Validity test and its consistency in the construction of patient loyalty model

    Science.gov (United States)

    Yanuar, Ferra

    2016-04-01

    The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data in the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, with each factor measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents with complete information were included in the analysis. The study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant measures of their corresponding latent variable. Service quality was measured most strongly by the tangibles indicator, patient satisfaction by satisfaction with the service, and patient loyalty by good service quality. In the structural equations, the study found that patient loyalty was affected positively and directly by patient satisfaction, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. The study also showed that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
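    The consistency check mentioned above rests on bootstrapping the validity estimates. A minimal sketch of that idea is shown below, assuming the data reduce to an indicator score and a composite (construct) score per respondent; this is not the author's implementation.

```python
# Minimal sketch: bootstrap the correlation between an indicator and its
# construct's composite score to check that the validity estimate is stable.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_validity(indicator, composite, n_boot=2000):
    indicator = np.asarray(indicator, dtype=float)
    composite = np.asarray(composite, dtype=float)
    n = len(indicator)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        estimates[b] = np.corrcoef(indicator[idx], composite[idx])[0, 1]
    lower, upper = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lower, upper)  # point estimate and 95% interval
```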

  11. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret; Yu, Yi-Hsiang

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  12. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
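    For reference, the replica analytic continuation examined in the paper rests on the standard replica identity, which converts the average of a logarithm into the analytic continuation of integer moments of the partition function:

```latex
\mathbb{E}[\ln Z] \;=\; \lim_{n \to 0} \frac{\mathbb{E}[Z^{n}] - 1}{n}
\;=\; \lim_{n \to 0} \frac{\partial}{\partial n} \ln \mathbb{E}[Z^{n}]
```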

  13. Multi-criteria model for sustainable development using goal programming applied to the United Arab Emirates

    International Nuclear Information System (INIS)

    Jayaraman, Raja; Colapinto, Cinzia; Torre, Davide La; Malik, Tufail

    2015-01-01

    Sustainable development requires implementing suitable policies integrating several competing objectives on economic, environmental, energy and social criteria. Multi-Criteria Decision Analysis (MCDA) using goal programming is a popular and widely used technique to study decision problems in the face of multiple conflicting objectives. MCDA assists policy makers by providing clarity in choosing between alternatives for strategic planning and investments. In this paper, we propose a weighted goal programming model that integrates efficient allocation of resources to simultaneously achieve sustainability related goals on GDP growth, electricity consumption and GHG emissions. We validate the model with application to key economic sectors of the United Arab Emirates to achieve sustainable development goals by the year 2030. The model solution provides a quantitative justification and a basis for comparison in planning future energy requirements and an indispensable requirement to include renewable sources to satisfy long-term energy requirements. - Highlights: • Multi-criteria model for achieving sustainability goals by year 2030. • Integrates criteria on electricity, GDP, GHG emissions for optimal labor allocation. • Future electricity demand requires contribution from renewable sources. • Enables planning for long term investments towards energy sustainability.
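    A generic weighted goal programming formulation of the kind described above is sketched next; the goal levels g_i (GDP growth, electricity consumption, GHG emissions) and weights w_i are symbolic placeholders rather than the paper's calibrated values.

```latex
% Generic weighted goal programming form (illustrative):
\min_{\mathbf{x},\, d^{+},\, d^{-}} \;\; \sum_{i} w_i \left( d_i^{+} + d_i^{-} \right)
\quad \text{subject to} \quad
f_i(\mathbf{x}) + d_i^{-} - d_i^{+} = g_i, \qquad
d_i^{+},\, d_i^{-} \ge 0, \qquad \mathbf{x} \in X
```

Here d_i^+ and d_i^- are the over- and under-achievement deviations from goal i, and the decision vector x represents the resource (e.g. labor) allocation across sectors.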

  14. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
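    In generic form (notation illustrative, not taken from the report), a generalized likelihood ratio test of this kind monitors the measured outputs y_1, ..., y_k against the nominal linear model H_0 and an alternative H_1 with shifted dynamics, and triggers generation of a new linear model when the statistic exceeds a threshold:

```latex
\Lambda_k \;=\; 2 \ln
\frac{\displaystyle \sup_{\theta \in \Theta_1} p\!\left(y_1, \ldots, y_k \mid \theta\right)}
     {\displaystyle \sup_{\theta \in \Theta_0} p\!\left(y_1, \ldots, y_k \mid \theta\right)},
\qquad
\Lambda_k > \lambda_{\mathrm{th}} \;\Rightarrow\; \text{generate a new linear model}
```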

  15. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Lin, E.I.

    1997-01-01

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which result in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and bring up the end dome and flange temperature predictions to within 2 °C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically-driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before

  16. Programming model for distributed intelligent systems

    Science.gov (United States)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  17. Calibration and validation of coarse-grained models of atomic systems: application to semiconductor manufacturing

    Science.gov (United States)

    Farrell, Kathryn; Oden, J. Tinsley

    2014-07-01

    Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among the major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture, the choice of interaction potentials and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized in this work by an appropriate statistical mechanics framework, namely canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and
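    In generic terms, the Bayesian calibration structure described above (maximum-entropy prior, Gaussian approximation of the parameter-to-observation error) yields a parameter posterior of the following shape; the symbols are illustrative and not taken from the paper:

```latex
\pi(\theta \mid \mathbf{d}) \;\propto\;
\exp\!\left[ -\tfrac{1}{2}\,
\big(\mathbf{d} - \mathbf{m}(\theta)\big)^{\!\top} \Sigma^{-1}
\big(\mathbf{d} - \mathbf{m}(\theta)\big) \right] \pi_{\mathrm{ME}}(\theta)
```

where m(θ) denotes the coarse-grained model's prediction of the all-atom observables d, Σ the error covariance, and π_ME the maximum-entropy prior.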

  18. EDF EPR project: operating principles validation and human factor engineering program

    International Nuclear Information System (INIS)

    Lefebvre, B.; Berard, E.; Arpino, J.-M.

    2005-01-01

    This article describes the specificities of the operating principles chosen by EDF for the EPR project as a result of an extensive Human Factor Engineering program successfully implemented in an industrial project context. The design process and its achievements benefit from EDF's experience feedback, not only as an NPP operator - including the fully computerized control room of the N4 series - but also as an NPP designer. The elements presented hereafter correspond to the basic design phase of the EPR HMI, which was completed and successfully validated by the end of 2003. The article recalls the context of the project, which consists of designing a modern and efficient HMI that takes into account the operating needs while relying on proven and reliable technologies. The Human Factor Engineering program implemented merges both aspects by: 1) being fully integrated within the project activities and scheduling; 2) efficiently taking into account the users' needs as well as the feasibility constraints by relying on a multidisciplinary design team including HF specialists, I and C specialists, process specialists and experienced operator representatives. The resulting design process makes wide use of experience feedback and experienced operator knowledge to largely complete the existing standards, providing a fully usable and successful design method in an industrial context. The article underlines the design process highlights that largely contribute to the successful implementation of a Human Factor Engineering program for EPR. (authors)

  19. Approaches to Validation of Models for Low Gravity Fluid Behavior

    Science.gov (United States)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature on low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, but the zero-gravity time available has been only seconds.

  20. An interactive program for pharmacokinetic modeling.

    Science.gov (United States)

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square (χ²) criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
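    The fitting workflow described above (rough initial estimates followed by Levenberg-Marquardt nonlinear least squares on a compartmental model) can be sketched in a few lines with SciPy; the bi-exponential model, the concentration data and the initial guess below are illustrative placeholders, not output from PharmK.

```python
# Illustrative sketch of the workflow described in the abstract (not PharmK itself):
# fit a bi-exponential pharmacokinetic model with the Levenberg-Marquardt method.
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, A, alpha, B, beta):
    """Two-compartment disposition: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# assumed example data: sampling times (h) and plasma concentrations (mg/L)
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12])
c = np.array([9.2, 7.8, 6.0, 4.1, 2.4, 1.6, 1.1, 0.55])

p0 = [8.0, 1.0, 3.0, 0.1]                    # crude initial guess (stripping step)
popt, pcov = curve_fit(biexponential, t, c, p0=p0, method="lm")
perr = np.sqrt(np.diag(pcov))                # 1-sigma parameter uncertainties
print(dict(zip(["A", "alpha", "B", "beta"], popt)), perr)
```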

  1. External Validation of a Prediction Model for Successful External Cephalic Version

    NARCIS (Netherlands)

    de Hundt, Marcella; Vlemmix, Floortje; Kok, Marjolein; van der Steeg, Jan W.; Bais, Joke M.; Mol, Ben W.; van der Post, Joris A.

    2012-01-01

    We sought external validation of a prediction model for the probability of a successful external cephalic version (ECV). We evaluated the performance of the prediction model with calibration and discrimination. For clinical practice, we developed a score chart to calculate the probability of a

  2. Modeling and Experimental Validation of an Islanded No-Inertia Microgrid Site

    DEFF Research Database (Denmark)

    Bonfiglio, Andrea; Delfino, Federico; Labella, Alessandro

    2018-01-01

    The paper proposes a simple but effective model for no-inertia microgrids suitable to represent the instantaneous values of its meaningful electric variables, becoming a useful platform to test innovative control logics and energy management systems. The proposed model is validated against a more...

  3. Validation of Pressure Drop Models for PHWR-type Fuel Elements

    International Nuclear Information System (INIS)

    Brasnarof Daniel; Daverio, H.

    2003-01-01

    In the present work, a one-dimensional analytical pressure drop model and the COBRA code are validated with experimental data from CANDU and Atucha fuel bundles in low- and high-pressure experimental test loops. Both models show very good agreement with the experimental data, with less than 5% discrepancy. The analytical model results were also compared with the COBRA code results, showing only small differences between them over a wide range of pressure, temperature and mass flow.

  4. Academic program models for undergraduate biomedical engineering.

    Science.gov (United States)

    Krishnan, Shankar M

    2014-01-01

    There is a proliferation of medical devices across the globe for the diagnosis and therapy of diseases. Biomedical engineering (BME) plays a significant role in healthcare and advancing medical technologies, thus creating a substantial demand for biomedical engineers at undergraduate and graduate levels. There has been a surge in undergraduate programs due to increasing demands from the biomedical industries to cover many of their segments from bench to bedside. With the requirement of multidisciplinary training within an allottable duration, it is indeed a challenge to design a comprehensive standardized undergraduate BME program to suit the needs of educators across the globe. This paper's objective is to describe three major models of undergraduate BME programs and their curricular requirements, with relevant recommendations to be applicable in institutions of higher education located in varied resource settings. Model 1 is based on programs to be offered in large research-intensive universities with multiple focus areas. The focus areas depend on the institution's research expertise and training mission. Model 2 has basic segments similar to those of Model 1, but the focus areas are limited due to resource constraints. In this model, a co-op/internship in hospitals or medical companies is included, which prepares the graduates for the workplace. In Model 3, students are trained to earn an Associate Degree in the initial two years and are trained for two more years to become BMEs or BME technologists. This model is well suited for resource-poor countries. All three models must be designed to meet applicable accreditation requirements. The challenges in designing undergraduate BME programs include manpower, facility and funding resource requirements and time constraints. Each academic institution has to carefully analyze its short term and long term requirements. In conclusion, three models for BME programs are described based on large universities, colleges, and

  5. Development and validation of a mortality risk model for pediatric sepsis

    Science.gov (United States)

    Chen, Mengshi; Lu, Xiulan; Hu, Li; Liu, Pingping; Zhao, Wenjiao; Yan, Haipeng; Tang, Liang; Zhu, Yimin; Xiao, Zhenghui; Chen, Lizhang; Tan, Hongzhuan

    2017-01-01

    Abstract Pediatric sepsis is a burdensome public health problem. Assessing the mortality risk of pediatric sepsis patients, offering effective treatment guidance, and improving prognosis to reduce mortality rates are crucial. We extracted data derived from electronic medical records of pediatric sepsis patients that were collected during the first 24 hours after admission to the pediatric intensive care unit (PICU) of the Hunan Children's Hospital from January 2012 to June 2014. A total of 788 children were randomly divided into a training (592, 75%) and validation group (196, 25%). The risk factors for mortality among these patients were identified by conducting multivariate logistic regression in the training group. Based on the established logistic regression equation, the logit probabilities for all patients (in both groups) were calculated to verify the model's internal and external validities. According to the training group, 6 variables (brain natriuretic peptide, albumin, total bilirubin, D-dimer, lactate levels, and mechanical ventilation in 24 hours) were included in the final logistic regression model. The areas under the curves of the model were 0.854 (0.826, 0.881) and 0.844 (0.816, 0.873) in the training and validation groups, respectively. The Mortality Risk Model for Pediatric Sepsis we established in this study showed acceptable accuracy to predict the mortality risk in pediatric sepsis patients. PMID:28514310
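    A minimal sketch of the modelling workflow summarized above (75/25 split, multivariate logistic regression, discrimination checked by the area under the ROC curve); the predictor matrix and outcomes below are random placeholders, not the Hunan Children's Hospital data.

```python
# Illustrative sketch of a training/validation split with logistic regression
# and AUC-based discrimination checks (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: six predictors (BNP, albumin, total bilirubin, D-dimer, lactate,
# mechanical ventilation within 24 h); y: mortality outcome (placeholders)
rng = np.random.default_rng(1)
X = rng.normal(size=(788, 6))
y = rng.integers(0, 2, size=788)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25,
                                                  random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"AUC train={auc_train:.3f}, validation={auc_val:.3f}")
```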

  6. Methods and computer programs for PWR's fuel management: Programs Sothis and Ciclon

    International Nuclear Information System (INIS)

    Aragones, J.M.; Corella, M.R.; Martinez-Val, J.M.

    1976-01-01

    Methods and computer programs developed at JEN for fuel management in PWRs are discussed, including the scope of the model, procedures for systematic selection of the alternatives to be evaluated, the basis of the model for neutronic calculation, methods for fuel cost calculation, procedures for equilibrium and transition cycle calculations with the Sothis and Ciclon codes, and validation of the methods by comparison of results against reference results. (author)

  7. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  8. Development and validation of a viscoelastic and nonlinear liver model for needle insertion

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Yo [Waseda University, Consolidated Research Institute for Advanced Science and Medical Care, Shinjuku, Tokyo (Japan); Onishi, Akinori; Hoshi, Takeharu; Kawamura, Kazuya [Waseda University, Graduate School of Science and Engineering, Shinjuku (Japan); Hashizume, Makoto [Kyushu University Hospital, Center for the Integration of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Fujie, Masakatsu G. [Waseda University, Graduate School of Science and Engineering, Faculty of Science and Engineering, Shinjuku (Japan)

    2009-01-15

    The objective of our work is to develop and validate a viscoelastic and nonlinear physical liver model for organ model-based needle insertion, in which the deformation of an organ is estimated and predicted, and the needle path is determined with organ deformation taken into consideration. First, an overview is given of the development of the physical liver model. The material properties of the liver considering viscoelasticity and nonlinearity are modeled based on the measured data collected from a pig's liver. The method to develop the liver model using FEM is also shown. Second, the experimental method to validate the model is explained. Both in vitro and in vivo experiments that made use of a pig's liver were conducted for comparison with the simulation using the model. Results of the in vitro experiment showed that the model reproduces nonlinear and viscoelastic response of displacement at an internally located point with high accuracy. For a force up to 0.45 N, the maximum error is below 1 mm. Results of the in vivo experiment showed that the model reproduces the nonlinear increase of load upon the needle during insertion. Based on these results, the liver model developed and validated in this work reproduces the physical response of a liver in both in vitro and in vivo situations. (orig.)

  9. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV OF UTAK

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
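    As a highly reduced illustration of the upwind differencing named above, the sketch below applies a first-order upwind step to 1-D scalar advection rather than the full 2-D shallow water system; it is meant only to show the stencil and the CFL constraint, not to reproduce the authors' model.

```python
# First-order upwind differencing for q_t + u q_x = 0 (1-D illustration).
import numpy as np

def upwind_step(q, u, dx, dt):
    """Advance the solution one time step with first-order upwind differences."""
    qn = q.copy()
    if u >= 0:
        qn[1:] = q[1:] - u * dt / dx * (q[1:] - q[:-1])
    else:
        qn[:-1] = q[:-1] - u * dt / dx * (q[1:] - q[:-1])
    return qn

x = np.linspace(0.0, 10.0, 201)
q = np.exp(-((x - 2.0) ** 2))       # initial "flood pulse"
u, dx = 1.0, x[1] - x[0]
dt = 0.4 * dx / abs(u)              # respect the CFL stability condition
for _ in range(200):
    q = upwind_step(q, u, dx, dt)
```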

  10. Monte Carlo Modelling of Mammograms : Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  11. Validation of Nonlinear Bipolar Transistor Model by Small-Signal Measurements

    DEFF Research Database (Denmark)

    Vidkjær, Jens; Porra, V.; Zhu, J.

    1992-01-01

    A new method for the validity analysis of nonlinear transistor models is presented, based on DC- and small-signal S-parameter measurements and realistic consideration of the measurement and de-embedding errors and singularities of the small-signal equivalent circuit. As an example, some analysis...... results for an extended Gummel-Poon model are presented in the case of a UHF bipolar power transistor....

  12. Understanding Dynamic Model Validation of a Wind Turbine Generator and a Wind Power Plant: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard; Zhang, Ying Chen; Gevorgian, Vahan; Kosterev, Dmitry

    2016-09-01

    Regional reliability organizations require power plants to validate the dynamic models that represent them, to ensure that power system studies are performed with the best available representation of the installed components. In the process of validating a wind power plant (WPP), one must be cognizant of the parameter settings of the wind turbine generators (WTGs) and the operational settings of the WPP. Validation of the dynamic model of a WPP must be performed periodically, because the control parameters of the WTGs and of the other supporting components within a WPP may be modified to comply with new grid codes, or the WTG controller may be upgraded with new capabilities developed by the turbine manufacturers or requested by the plant owners or operators. The diversity within a WPP affects the way it is represented in a model. Diversity within a WPP may be found in the way the WTGs are controlled, the wind resource, the layout of the WPP (electrical diversity), and the types of WTGs used. Each group of WTGs constitutes a significant portion of the output power of the WPP, and their unique and salient behaviors should be represented individually. The objective of this paper is to illustrate the process of dynamic model validation for WTGs and WPPs, the available recorded data that must be screened before being used for dynamic validation, and the assumptions made in the dynamic models of the WTG and WPP that must be understood. Without understanding the correct process, the validation may lead to incorrect representations of the WTG and WPP being modeled.

  13. ISLAMIC INTERVENTION PROGRAM MODEL REDUCTION OF BULLYING BEHAVIOR AT SENIOR HIGH SCHOOLS PEKANBARU

    Directory of Open Access Journals (Sweden)

    Zaitun Zaitun

    2016-03-01

    Full Text Available This study aims to determine the causes of frequent violence in education, to identify the behaviour typologies leading to bullying that often occur at school, and to produce an appropriate Islamic intervention program model for bullying. The study uses a research and development approach focused only on the hypothetical phase. The author collected data from several senior high schools in Pekanbaru, Riau. Data collection techniques included questionnaires, observation, interviews and documentation; triangulation was also conducted to obtain valid data. The study concluded that the intervention model can be implemented by streamlining peer coaching, by intensive and periodic religious mentoring conducted by the school with the involvement of counselling and religious teachers, and by maximizing co-operation between parents, teachers and schools to make group counselling, mediation and the use of ICT in the learning process effective.

  14. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be

  15. Evaluating health inequity interventions: applying a contextual (external) validity framework to programs funded by the Canadian Health Services Research Foundation.

    Science.gov (United States)

    Phillips, Kaye; Müller-Clemm, Werner; Ysselstein, Margaretha; Sachs, Jonathan

    2013-02-01

    Including context in the measurement and evaluation of health inequity interventions is critical to understanding how events that occur in an intervention's environment might contribute to or impede its success. This study adapted and piloted a contextual validity assessment framework on a selection of health inequity-related programs funded by the Canadian Health Services Research Foundation (CHSRF) between 1998 and 2006. The two overarching objectives of this study were (1) to determine the relative amount and quality of attention given to conceptualizing, measuring and validating context within CHSRF-funded research final reports related to health inequity; and (2) to contribute evaluative evidence towards the incorporation of context into the assessment and measurement of health inequity interventions. The study found that, of the 42/146 CHSRF programs and projects judged to be related to health inequity, 20 adequately reported on the conceptualization, measurement and validation of context. Amongst these health inequity-related project reports, the greatest emphasis was placed on describing the socio-political and economic context rather than on actually measuring and validating contextual evidence. Applying a contextual validity assessment framework was useful for distinguishing between the descriptive (conceptual) and empirical (measurement and validation) inclusion of documented contextual evidence. Although contextual validity measurement frameworks need further development, this study contributes insight into identifying funded research related to health inequities and preliminary criteria for assessing interventions targeted at specific populations and jurisdictions. This study also feeds a larger critical dialogue (albeit beyond the scope of this study) regarding the relevance and utility of using evaluative techniques for understanding how specific external conditions support or impede the successful implementation of health inequity interventions.

  16. A validated dynamic model of the first marine molten carbonate fuel cell

    International Nuclear Information System (INIS)

    Ovrum, E.; Dimopoulos, G.

    2012-01-01

    In this work we present a modular, dynamic and multi-dimensional model of a molten carbonate fuel cell (MCFC) onboard the offshore supply vessel “Viking Lady” serving as an auxiliary power unit. The model is able to capture detailed thermodynamic, heat transfer and electrochemical reaction phenomena within the fuel cell layers, and it has been calibrated and validated with measured performance data from a prototype installation onboard the vessel. The calibration process included parameter identification, sensitivity analysis to identify the critical model parameters, and iterative calibration of these to minimize the overall prediction error. The calibrated model has a low prediction error of 4% for the operating range of the cell, exhibiting at the same time a physically sound qualitative behavior in terms of thermodynamic, heat transfer and electrochemical phenomena, in both steady-state and transient operation. The developed model is suitable for a wide range of studies covering the aspects of thermal efficiency, performance, operability, safety and endurance/degradation, which are necessary to introduce fuel cells in ships. The aim of this MCFC model is to aid the introduction, design, concept approval and verification of environmentally friendly marine applications such as fuel cells, in a cost-effective, fast and safe manner. - Highlights: ► We model the first marine molten carbonate fuel cell auxiliary power unit. ► The model is distributed spatially and models both steady state and transients. ► The model is validated against experimental data. ► The paper illustrates how the model can be used in safety and reliability studies.

  17. Transient Model Validation of Fixed-Speed Induction Generator Using Wind Farm Measurements

    DEFF Research Database (Denmark)

    Rogdakis, Georgios; Garcia-Valle, Rodrigo; Arana Aristi, Iván

    2012-01-01

    In this paper, an electromagnetic transient model for fixed-speed wind turbines equipped with induction generators is developed and implemented in PSCAD/EMTDC. The model is comprised by: an induction generator, aerodynamic rotor, and a two-mass representation of the shaft system. Model validation...

  18. A GLOBAL TWO-TEMPERATURE CORONA AND INNER HELIOSPHERE MODEL: A COMPREHENSIVE VALIDATION STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M.; Manchester, W. B.; Van der Holst, B.; Gruesbeck, J. R.; Frazin, R. A.; Landi, E.; Toth, G.; Gombosi, T. I. [Atmospheric Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Vasquez, A. M. [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA) and FCEN (UBA), CC 67, Suc 28, Ciudad de Buenos Aires (Argentina); Lamy, P. L.; Llebaria, A.; Fedorov, A., E-mail: jinmeng@umich.edu [Laboratoire d' Astrophysique de Marseille, Universite de Provence, Marseille (France)

    2012-01-20

    The recent solar minimum with very low activity provides us a unique opportunity for validating solar wind models. During CR2077 (2008 November 20 through December 17), the number of sunspots was near the absolute minimum of solar cycle 23. For this solar rotation, we perform a multi-spacecraft validation study for the recently developed three-dimensional, two-temperature, Alfven-wave-driven global solar wind model (a component within the Space Weather Modeling Framework). By using in situ observations from the Solar Terrestrial Relations Observatory (STEREO) A and B, Advanced Composition Explorer (ACE), and Venus Express, we compare the observed proton state (density, temperature, and velocity) and magnetic field of the heliosphere with that predicted by the model. Near the Sun, we validate the numerical model with the electron density obtained from the solar rotational tomography of Solar and Heliospheric Observatory/Large Angle and Spectrometric Coronagraph C2 data in the range of 2.4 to 6 solar radii. Electron temperature and density are determined from differential emission measure tomography (DEMT) of STEREO A and B Extreme Ultraviolet Imager data in the range of 1.035 to 1.225 solar radii. The electron density and temperature derived from the Hinode/Extreme Ultraviolet Imaging Spectrometer data are also used to compare with the DEMT as well as the model output. Moreover, for the first time, we compare ionic charge states of carbon, oxygen, silicon, and iron observed in situ with the ACE/Solar Wind Ion Composition Spectrometer with those predicted by our model. The validation results suggest that most of the model outputs for CR2077 can fit the observations very well. Based on this encouraging result, we therefore expect great improvement for the future modeling of coronal mass ejections (CMEs) and CME-driven shocks.

  19. The jmzQuantML programming interface and validator for the mzQuantML data standard.

    Science.gov (United States)

    Qi, Da; Krishna, Ritesh; Jones, Andrew R

    2014-03-01

    The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Explicit validation of a surface shortwave radiation balance model over snow-covered complex terrain

    Science.gov (United States)

    Helbig, N.; Löwe, H.; Mayer, B.; Lehning, M.

    2010-09-01

    A model that computes the surface radiation balance for all sky conditions in complex terrain is presented. The spatial distribution of direct and diffuse sky radiation is determined from observations of incident global radiation, air temperature, and relative humidity at a single measurement location. Incident radiation under cloudless sky is spatially derived from a parameterization of the atmospheric transmittance. Direct and diffuse sky radiation for all sky conditions are obtained by decomposing the measured global radiation value. Spatial incident radiation values under all atmospheric conditions are computed by adjusting the spatial radiation values obtained from the parametric model with the radiation components obtained from the decomposition model at the measurement site. Topographic influences such as shading are accounted for. The radiosity approach is used to compute anisotropic terrain reflected radiation. Validations of the shortwave radiation balance model are presented in detail for a day with cloudless sky. For a day with overcast sky a first validation is presented. Validation of a section of the horizon line as well as of individual radiation components is performed with high-quality measurements. A new measurement setup was designed to determine terrain reflected radiation. There is good agreement between the measurements and the modeled terrain reflected radiation values as well as with incident radiation values. A comparison of the model with a fully three-dimensional radiative transfer Monte Carlo model is presented. That validation reveals a good agreement between modeled radiation values.
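    The decomposition step described above, splitting a measured global value into direct and diffuse parts, can be illustrated with a generic clearness-index correlation. The sketch below uses Erbs-type coefficients, which are an assumption made here for illustration and not necessarily the decomposition model used by the authors.

    ```python
    def decompose_global_radiation(ghi, extraterrestrial, cos_zenith):
        """Split measured global horizontal irradiance [W/m^2] into diffuse and
        direct-normal components via a clearness-index correlation (Erbs-type)."""
        kt = ghi / max(extraterrestrial * cos_zenith, 1e-6)   # clearness index
        if kt <= 0.22:
            fd = 1.0 - 0.09 * kt
        elif kt <= 0.80:
            fd = (0.9511 - 0.1604 * kt + 4.388 * kt**2
                  - 16.638 * kt**3 + 12.336 * kt**4)
        else:
            fd = 0.165
        diffuse = fd * ghi                                     # diffuse fraction * GHI
        direct_normal = (ghi - diffuse) / max(cos_zenith, 1e-6)
        return diffuse, direct_normal

    # Illustrative midday values (assumed, not measurements from the paper)
    diffuse, dni = decompose_global_radiation(ghi=650.0, extraterrestrial=1367.0,
                                              cos_zenith=0.7)
    ```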

  1. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited to calculating the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding arithmetical problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases, calculated and measured time-integrated concentrations close to the ground are compared. Finally, an evaluation of the models with respect to their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de]
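    Of the model classes compared, the Gaussian-type is the simplest to illustrate. The sketch below evaluates a standard ground-reflected Gaussian plume concentration; the source strength, wind speed, and dispersion parameters are illustrative assumptions, not values from the tracer experiments.

    ```python
    import math

    def gaussian_plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
        """Ground-reflected Gaussian plume concentration [kg/m^3] at a receptor.
        Q : source strength [kg/s], u : wind speed [m/s], y, z : crosswind and
        vertical receptor coordinates [m], H : effective release height [m],
        sigma_y, sigma_z : dispersion parameters at the receptor's downwind distance [m]."""
        lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground reflection
        return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Illustrative values only (not taken from the tracer experiments)
    c = gaussian_plume_concentration(Q=1.0, u=5.0, y=0.0, z=0.0,
                                     H=50.0, sigma_y=80.0, sigma_z=40.0)
    ```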

  2. Groundwater Model Validation for the Project Shoal Area, Corrective Action Unit 447

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, Ahmed [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences; Chapman, Jenny [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences; Lyles, Brad [Desert Research Inst. (DRI), Las Vegas, NV (United States). Division of Hydrologic Sciences

    2008-05-19

    Stoller has examined newly collected water level data in multiple wells at the Shoal site. On the basis of these data and information presented in the report, we are currently unable to confirm that the model is successfully validated. Most of our concerns regarding the model stem from two findings: (1) measured water level data do not provide clear evidence of a prevailing lateral flow direction; and (2) the groundwater flow system has been and continues to be in a transient state, which contrasts with assumed steady-state conditions in the model. The results of DRI's model validation efforts and observations made regarding water level behavior are discussed in the following sections. A summary of our conclusions and recommendations for a path forward are also provided in this letter report.

  3. MT3DMS: Model use, calibration, and validation

    Science.gov (United States)

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.
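    The processes MT3DMS solves (advection, dispersion, and simple reactions) can be sketched in one dimension with an explicit finite-difference update. The code below is a minimal illustration only; it is not MT3DMS, and the parameter values are assumptions.

    ```python
    import numpy as np

    def advect_disperse_1d(c0, v, D, dx, dt, n_steps, R=1.0, lam=0.0):
        """Explicit finite-difference update of 1-D advection-dispersion with
        linear retardation R and first-order decay lam:
            R dc/dt = D d2c/dx2 - v dc/dx - lam R c
        Upwind advection (v > 0 assumed); dt must satisfy Courant/diffusion limits."""
        c = c0.astype(float).copy()
        for _ in range(n_steps):
            adv = -v * (c - np.roll(c, 1)) / dx
            disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
            c = c + dt / R * (adv + disp) - dt * lam * c
            c[0], c[-1] = c0[0], c0[-1]          # fixed-concentration boundaries
        return c

    # Illustrative pulse of solute migrating along a 1-D column
    c0 = np.zeros(200); c0[:5] = 1.0
    c = advect_disperse_1d(c0, v=1.0, D=0.5, dx=1.0, dt=0.2, n_steps=300)
    ```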

  4. Clinical prediction models for bronchopulmonary dysplasia: a systematic review and external validation study

    NARCIS (Netherlands)

    Onland, Wes; Debray, Thomas P.; Laughon, Matthew M.; Miedema, Martijn; Cools, Filip; Askie, Lisa M.; Asselin, Jeanette M.; Calvert, Sandra A.; Courtney, Sherry E.; Dani, Carlo; Durand, David J.; Marlow, Neil; Peacock, Janet L.; Pillow, J. Jane; Soll, Roger F.; Thome, Ulrich H.; Truffert, Patrick; Schreiber, Michael D.; van Reempts, Patrick; Vendettuoli, Valentina; Vento, Giovanni; van Kaam, Anton H.; Moons, Karel G.; Offringa, Martin

    2013-01-01

    Bronchopulmonary dysplasia (BPD) is a common complication of preterm birth. Very different models using clinical parameters at an early postnatal age to predict BPD have been developed with little extensive quantitative validation. The objective of this study is to review and validate clinical

  5. Validation of a risk prediction model for Barrett's esophagus in an Australian population.

    Science.gov (United States)

    Ireland, Colin J; Gordon, Andrea L; Thompson, Sarah K; Watson, David I; Whiteman, David C; Reed, Richard L; Esterman, Adrian

    2018-01-01

    Esophageal adenocarcinoma is a disease that has a high mortality rate, the only known precursor being Barrett's esophagus (BE). While screening for BE is not cost-effective at the population level, targeted screening might be beneficial. We have developed a risk prediction model to identify people with BE, and here we present the external validation of this model. A cohort study was undertaken to validate a risk prediction model for BE. Individuals with endoscopy- and histopathology-proven BE completed a questionnaire containing variables previously identified as risk factors for this condition. Their responses were combined with data from a population sample for analysis. Risk scores were derived for each participant. Overall performance of the risk prediction model in terms of calibration and discrimination was assessed. Scores from 95 individuals with BE and 636 individuals from the general population were analyzed. The Brier score was 0.118, suggesting reasonable overall performance. The area under the receiver operating characteristic curve was 0.83 (95% CI 0.78-0.87). The Hosmer-Lemeshow statistic was p = 0.14. Minimizing false positives and false negatives, the model achieved a sensitivity of 74% and a specificity of 73%. This study has validated a risk prediction model for BE that has a higher sensitivity than previous models.
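    The reported performance measures (Brier score, area under the ROC curve, and a cut-point balancing sensitivity and specificity) can be computed as in the sketch below; the outcome labels and predicted probabilities are invented placeholders, not the study data.

    ```python
    import numpy as np
    from sklearn.metrics import brier_score_loss, roc_auc_score, roc_curve

    # Invented outcome labels (1 = BE case, 0 = population control) and
    # predicted probabilities from a risk model; not the study data.
    y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
    p_hat  = np.array([0.8, 0.2, 0.6, 0.3, 0.1, 0.7, 0.4, 0.9, 0.2, 0.5])

    print("Brier score:", brier_score_loss(y_true, p_hat))
    print("AUC:", roc_auc_score(y_true, p_hat))

    # Cut-point that jointly minimizes false positives and false negatives
    fpr, tpr, thresholds = roc_curve(y_true, p_hat)
    best = np.argmin(fpr + (1.0 - tpr))
    print("sensitivity:", tpr[best], "specificity:", 1.0 - fpr[best])
    ```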

  6. Validation of Weld Residual Stress Modeling in the NRC International Round Robin Study

    International Nuclear Information System (INIS)

    Mullins, Jonathan; Gunnars, Jens

    2013-01-01

    Weld residual stresses (WRS) have a large influence on the behavior of cracks growing under normal operation loads and on the leakage flow from a through-wall crack. Accurate prediction of weld residual stresses is important for making proper decisions when cracks in weld joints are detected. In recent years, there has been strong development in both analytical procedures for numerically determining WRS and experimental measurements of WRS. The USNRC (United States Nuclear Regulatory Commission) has formed a program for validation of WRS predictions through comparison of numerically calculated residual stress fields in dissimilar welds with those measured by different methods. The present report describes the results of the project with special focus on the contribution from Inspecta Technology. Objectives: The principal objective of the project is to compare different WRS predictions for a dissimilar pipe weld with careful measurements on a mock-up weld. The results of the project will make it possible to make recommendations on computational procedures for WRS in dissimilar metal welds. Results: It is concluded that numerical analysis of weld residual stresses using the finite element method is very useful for the estimation of weld residual stresses in complex geometries and dissimilar metal welds. The validation study increases the understanding of uncertainties associated with different modeling approaches and helps to identify the most sensitive parameters.

  7. Model-Checking Real-Time Control Programs

    DEFF Research Database (Denmark)

    Iversen, T. K.; Kristoffersen, K. J.; Larsen, Kim Guldstrand

    2000-01-01

    In this paper, we present a method for automatic verification of real-time control programs running on LEGO(R) RCX(TM) bricks using the verification tool UPPAAL. The control programs, consisting of a number of tasks running concurrently, are automatically translated into the mixed automata model...... of UPPAAL. The fixed scheduling algorithm used by the LEGO(R) RCX(TM) processor is modeled in UPPAAL, and supplying similar (sufficient) timed automata models for the environment allows analysis of the overall real-time system using the tools of UPPAAL. To illustrate our technique for sorting LEGO(R) bricks...

  8. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    Science.gov (United States)

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
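    One common way to realize the POD idea is to model the probability of a positive result as a smooth function of concentration, for instance with a logistic curve. The sketch below fits such a curve to invented detection-rate data; both the logistic form and the numbers are assumptions for illustration, not values from the article.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pod_curve(conc, b0, b1):
        """Logistic POD model: POD(c) = 1 / (1 + exp(-(b0 + b1*c)))."""
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * conc)))

    # Invented spiking-study data: concentration levels and observed detection rates
    conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
    detected = np.array([0.05, 0.20, 0.55, 0.80, 0.95, 1.00])

    params, _ = curve_fit(pod_curve, conc, detected, p0=[-2.0, 1.0])
    print("Fitted parameters:", params)
    print("Estimated POD at concentration 1.0:", pod_curve(1.0, *params))
    ```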

  9. Towards automatic verification of ladder logic programs

    OpenAIRE

    Zoubek , Bohumir; Roussel , Jean-Marc; Kwiatkowska , Martha

    2003-01-01

    Control system programs are usually validated by testing prior to their deployment. Unfortunately, testing is not exhaustive and therefore it is possible that a program which passed all the required tests still contains errors. In this paper we apply techniques of automatic verification to a control program written in ladder logic. A model is constructed mechanically from the ladder logic program and subjected to automatic verification against requirements that include...

  10. Validation of models that predict Cesarean section after induction of labor

    NARCIS (Netherlands)

    Verhoeven, C. J. M.; Oudenaarden, A.; Hermus, M. A. A.; Porath, M. M.; Oei, S. G.; Mol, B. W. J.

    2009-01-01

    Objective Models for the prediction of Cesarean delivery after induction of labor can be used to improve clinical decision-making. The objective of this study was to validate two existing models, published by Peregrine et al. and Rane et al., for the prediction of Cesarean section after induction of

  11. VALIDATION OF CRACK INTERACTION LIMIT MODEL FOR PARALLEL EDGE CRACKS USING TWO-DIMENSIONAL FINITE ELEMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    R. Daud

    2013-06-01

    Full Text Available Shielding interaction effects of two parallel edge cracks in finite thickness plates subjected to remote tension load are analyzed using a developed finite element analysis program. In the present study, the crack interaction limit is evaluated based on the fitness-for-service (FFS) code, and focus is given to the weak crack interaction region where the crack interval exceeds the length of the cracks (b > a). Crack interaction factors are evaluated based on Mode I stress intensity factors (SIFs) computed using a displacement extrapolation technique. Parametric studies involved a wide range of crack-to-width ratios (0.05 ≤ a/W ≤ 0.5) and crack interval ratios (b/a > 1). For validation, crack interaction factors are compared with single edge crack SIFs as a state of zero interaction. Within the considered range of parameters, the proposed numerical evaluation used to predict the crack interaction factor reduces the error of the existing analytical solution from 1.92% to 0.97% at higher a/W. In reference to FFS codes, the small discrepancy in the prediction of the crack interaction factor validates the reliability of the numerical model for predicting crack interaction limits under shielding interaction effects. In conclusion, the numerical model gave a successful prediction in estimating the crack interaction limit, which can be used as a reference for the shielding orientation of other cracks.
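    For the zero-interaction reference mentioned above, a standard handbook expression for the Mode I SIF of a single edge crack under remote tension can be used; the geometry-factor polynomial in the sketch below is such a handbook form (an assumption here, not the paper's finite element result), and the interaction factor is simply the ratio to it.

    ```python
    import math

    def sif_single_edge_crack(sigma, a, W):
        """Mode I SIF for an isolated single edge crack under remote tension,
        K_I = Y(a/W) * sigma * sqrt(pi * a), using a standard handbook polynomial
        for the geometry factor (commonly quoted as valid for a/W up to ~0.6)."""
        r = a / W
        Y = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
        return Y * sigma * math.sqrt(math.pi * a)

    def crack_interaction_factor(K_interacting, sigma, a, W):
        """Interaction factor = K_I of the interacting crack (e.g. from FE analysis)
        divided by the zero-interaction single edge crack SIF."""
        return K_interacting / sif_single_edge_crack(sigma, a, W)

    # Illustrative numbers: 100 MPa remote tension, a = 5 mm crack, W = 50 mm plate
    K_single = sif_single_edge_crack(100e6, 0.005, 0.050)
    ```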

  12. Validating safeguards effectiveness given inherently limited test data

    International Nuclear Information System (INIS)

    Sicherman, A.

    1987-01-01

    A key issue in designing and evaluating nuclear safeguards systems is how to validate safeguards effectiveness against a spectrum of potential threats. Safeguards effectiveness is measured by a performance indicator such as the probability of defeating an adversary attempting a malevolent act. Effectiveness validation means a testing program that provides sufficient evidence that the performance indicator is at an acceptable level. Traditional statistical approaches are suitable when numerous independent system trials are possible. However, within the safeguards environment, many situations arise for which traditional statistical approaches may be neither feasible nor appropriate. Such situations can occur, for example, when there are obvious constraints on the number of possible tests due to operational impacts and testing costs. Furthermore, these tests are usually simulations (e.g., staged force-on-force exercises) rather than actual tests, and the system is often modified after each test. Under such circumstances, it is difficult to make and justify inferences about system performance by using traditional statistical techniques. In this paper, the authors discuss several alternative quantitative techniques for validating system effectiveness. The techniques include: (1) minimizing the number of required tests using sequential testing; (2) combining data from models, inspections and exercises using Bayesian statistics to improve inferences about system performance; and (3) using reliability growth and scenario modeling to help specify which safeguards elements and scenarios to test.

  13. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe the infiltration rate with a time-varying curve number.
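    For context, the SCS-CN relation referred to above links the curve number CN to the potential maximum retention S and to direct runoff Q. The sketch below implements that standard relation (in millimetres); it is not the Singh-Yu general infiltration model itself, and the example inputs are assumptions.

    ```python
    def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
        """SCS curve number relation (SI, millimetres):
        potential maximum retention S = 25400/CN - 254,
        initial abstraction Ia = ia_ratio * S,
        direct runoff Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
        S = 25400.0 / CN - 254.0
        Ia = ia_ratio * S
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    # Example: 80 mm storm on a soil-cover complex with CN = 75
    print(scs_cn_runoff(80.0, 75.0))
    ```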

  14. Validation of ASTEC v2.0 corium jet fragmentation model using FARO experiments

    International Nuclear Information System (INIS)

    Hermsmeyer, S.; Pla, P.; Sangiorgi, M.

    2015-01-01

    Highlights: • Model validation base extended to six FARO experiments. • Focus on the calculation of the fragmented particle diameter. • Capability and limits of the ASTEC fragmentation model. • Sensitivity analysis of model outputs. - Abstract: ASTEC is an integral code for the prediction of Severe Accidents in Nuclear Power Plants. As such, it needs to cover all physical processes that could occur during accident progression, while keeping its models simple enough for the ensemble to stay manageable and produce results within an acceptable time. The present paper is concerned with the validation of the corium jet fragmentation model of ASTEC v2.0 rev3 by means of a selection of six experiments carried out within the FARO facility. The different conditions applied within these six experiments help to analyse the model behaviour in different situations and to expose model limits. In addition to comparing model outputs with experimental measurements, sensitivity analyses are applied to investigate the model. Results of the paper are (i) validation runs, accompanied by an identification of situations where the implemented fragmentation model does not match the experiments well, and discussion of results; (ii) particular attention to the models calculating the diameter of fragmented particles, identification of a fault in one of the implemented models, and discussion of simplifications and ad hoc modifications to improve the model fit; and (iii) an investigation of the sensitivity of predictions to inputs and parameters. In this way, the paper offers a thorough investigation of the merits and limitations of the fragmentation model used in ASTEC

  15. Empirical validation of building simulation programs - Swiss contribution to IEA Task 34, Annex 43; Empirische Validierung von Gebaeudesimulationsprogrammen. Schweizer Beitrag zu IEA Task 34 / Annex 43. Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Loutzenhiser, P.; Manz, H. (eds.)

    2006-11-15

    This comprehensive, illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on work carried out on the validation of building simulation programs. The purpose of this project was to create a data set for use when evaluating the accuracy of models for glazing units and windows with and without shading devices. A series of eight experiments of successively increasing complexity was performed in an outdoor test cell located on the Swiss Federal Laboratories for Materials Testing and Research (EMPA) campus in Duebendorf, Switzerland. Particular emphasis was placed on accurately determining the test cell characteristics. The report presents information on the experimental set-ups, their validation and the methodology used. Further chapters describe the particular experiments made, including transient characterisation and the evaluation of irradiation models on tilted facades, as well as experiments made on glazing units with various types of shading and blinds. The thermal properties of windows are also examined. The results of experiments made with four different models, HELIOS, EnergyPlus, DOE-2.1E and IDA-ICE, are discussed.

  16. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.

    Science.gov (United States)

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.
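    The trade-off described above, where a nonlinear model may predict better yet be penalized for its complexity, can be illustrated with a simple information-criterion comparison. The sketch below fits a linear and a quadratic one-step predictor to a synthetic series and compares their AIC values; the data and model forms are illustrative assumptions, not the measures proposed in the paper.

    ```python
    import numpy as np

    def aic_least_squares(residuals, n_params):
        """AIC for a least-squares fit: n*ln(RSS/n) + 2k."""
        n = len(residuals)
        rss = float(np.sum(residuals**2))
        return n * np.log(rss / n) + 2 * n_params

    # Synthetic noisy logistic-map series (illustrative only)
    rng = np.random.default_rng(0)
    x = np.empty(500)
    x[0] = 0.3
    for t in range(499):
        x[t + 1] = np.clip(3.8 * x[t] * (1.0 - x[t])
                           + 0.01 * rng.standard_normal(), 0.0, 1.0)  # keep bounded

    X, y = x[:-1], x[1:]                      # predict x[t+1] from x[t]
    lin = np.polyfit(X, y, 1)                 # linear predictor, 2 parameters
    quad = np.polyfit(X, y, 2)                # simple nonlinear predictor, 3 parameters
    print("AIC linear   :", aic_least_squares(y - np.polyval(lin, X), 2))
    print("AIC nonlinear:", aic_least_squares(y - np.polyval(quad, X), 3))
    ```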

  17. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
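    SciLuigi builds on the Luigi workflow system. The sketch below uses plain Luigi (not SciLuigi's flow-based-programming API, whose details differ) to show how tasks declare dependencies that the scheduler resolves; the task names and file targets are hypothetical.

    ```python
    import luigi

    class TrainModel(luigi.Task):
        """Hypothetical modelling step: writes a 'model' file."""
        dataset = luigi.Parameter(default="interactions.csv")

        def output(self):
            return luigi.LocalTarget("model.txt")

        def run(self):
            with self.output().open("w") as f:
                f.write(f"model trained on {self.dataset}\n")

    class CrossValidate(luigi.Task):
        """Depends on TrainModel; Luigi schedules the dependency automatically."""
        def requires(self):
            return TrainModel()

        def output(self):
            return luigi.LocalTarget("cv_results.txt")

        def run(self):
            with self.input().open() as fin, self.output().open("w") as fout:
                fout.write("cross-validation of: " + fin.read())

    if __name__ == "__main__":
        luigi.build([CrossValidate()], local_scheduler=True)
    ```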

  18. External validation of structure-biodegradation relationship (SBR) models for predicting the biodegradability of xenobiotics.

    Science.gov (United States)

    Devillers, J; Pandard, P; Richard, B

    2013-01-01

    Biodegradation is an important mechanism for eliminating xenobiotics by biotransforming them into simple organic and inorganic products. Faced with the ever-growing number of chemicals available on the market, structure-biodegradation relationship (SBR) and quantitative structure-biodegradation relationship (QSBR) models are increasingly used as surrogates for biodegradation tests. Such models have great potential for a quick and cheap estimation of the biodegradation potential of chemicals. The Estimation Programs Interface (EPI) Suite™ includes different models for predicting the potential aerobic biodegradability of organic substances. They are based on different endpoints, methodologies and/or statistical approaches. Among them, Biowin 5 and 6 appeared the most robust, being derived from the largest biodegradation database with results obtained only from the Ministry of International Trade and Industry (MITI) test. The aim of this study was to assess the predictive performance of these two models on a set of 356 chemicals extracted from notification dossiers that included compatible biodegradation data. Another set of molecules with no more than four carbon atoms and substituted by various heteroatoms and/or functional groups was also included in the validation exercise. Comparisons were made with the predictions obtained with START (Structural Alerts for Reactivity in Toxtree). Biowin 5 and Biowin 6 gave satisfactory prediction results except for readily degradable chemicals. A consensus model built with Biowin 1 reduced this tendency.

  19. Validation of a probabilistic model for hurricane insurance loss projections in Florida

    International Nuclear Information System (INIS)

    Pinelli, J.-P.; Gurley, K.R.; Subramanian, C.S.; Hamid, S.S.; Pita, G.L.

    2008-01-01

    The Florida Public Hurricane Loss Model is one of the first public models accessible for scrutiny to the scientific community, incorporating state of the art techniques in hurricane and vulnerability modeling. The model was developed for Florida, and is applicable to other hurricane-prone regions where construction practice is similar. The 2004 hurricane season produced substantial losses in Florida, and provided the means to validate and calibrate this model against actual claim data. This paper presents the predicted losses for several insurance portfolios corresponding to hurricanes Andrew, Charley, and Frances. The predictions are validated against the actual claim data. Physical damage predictions for external building components are also compared to observed damage. The analyses show that the predictive capabilities of the model were substantially improved after the calibration against the 2004 data. The methodology also shows that the predictive capabilities of the model could be enhanced if insurance companies report more detailed information about the structures they insure and the types of damage they suffer. This model can be a powerful tool for the study of risk reduction strategies

  20. Experimental validation of TASS/SMR-S critical flow model for the integral reactor SMART

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Si Won; Ra, In Sik; Kim, Kun Yeup [ACT Co., Daejeon (Korea, Republic of); Chung, Young Jong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    An advanced integral PWR, SMART (System-Integrated Modular Advanced ReacTor), is being developed at KAERI. It has a compact size and a relatively small power rating (330 MWt) compared to a conventional reactor. Because new concepts are applied in SMART, experimental and analytical validation is necessary for its safety evaluation. The analytical safety validation is being accomplished with a safety analysis code for an integral reactor, TASS/SMR-S, developed by KAERI. TASS/SMR-S uses lumped-parameter, one-dimensional node-and-path modeling for the thermal-hydraulic calculation and point kinetics for the reactor power calculation. It has general-purpose models such as a core heat transfer model, a wall heat structure model, a critical flow model and component models, and it also has many SMART-specific models such as a once-through helically coiled steam generator model and a condensate heat transfer model. To ensure that the TASS/SMR-S code has the calculation capability required for the safety evaluation of SMART, its specific models should be validated against separate effect test results. In this study, the TASS/SMR-S critical flow model is evaluated by comparison with the SMD (Super Moby Dick) experiment.