WorldWideScience

Sample records for test case model

  1. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  2. GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL

    OpenAIRE

    Hesham A. Hassan,; Zahraa. E. Yousif

    2010-01-01

    Model-based testing refers to testing and test case generation based on a model that describes the behavior of the system. Extensive use of models throughout all the phases of software development starting from the requirement engineering phase has led to increased importance of Model Based Testing. The OMG initiative MDA has revolutionized the way models would be used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...

  3. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual-porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure of the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual-porosity media models describe the experimental data significantly better than the single-porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the breakthrough curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
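
    The Marquardt-Levenberg fitting procedure mentioned above can be sketched in a few lines. The exponential "breakthrough tail" model, the synthetic data, and all parameter values below are invented for illustration; they are not the report's actual transport models.

```python
# Minimal Levenberg-Marquardt fit minimising a chi^2 merit function.
# Model and data are hypothetical: C(t) = a * exp(-b t).
import numpy as np

def levenberg_marquardt(f, jac, p0, t, y, lam=1e-3, iters=50):
    """Minimise chi2 = sum((y - f(t, p))^2) with LM damping."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                      # residuals
        J = jac(t, p)                        # Jacobian of the model
        A, g = J.T @ J, J.T @ r
        # Damped normal equations: (A + lam * diag(A)) dp = g
        dp = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        if (y - f(t, p + dp)) @ (y - f(t, p + dp)) < r @ r:
            p, lam = p + dp, lam * 0.5       # accept step, relax damping
        else:
            lam *= 2.0                       # reject step, damp harder
    return p

model = lambda t, p: p[0] * np.exp(-p[1] * t)
jacobian = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                         -p[0] * t * np.exp(-p[1] * t)])

t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t)                   # synthetic, noise-free data
p_fit = levenberg_marquardt(model, jacobian, [1.0, 1.0], t, y)
```

    In the report the same scheme additionally yields parameter error estimates and correlation coefficients from the final normal-equation matrix.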

  4. Tests of spinning turbine fragment impact on casing models

    International Nuclear Information System (INIS)

    Wilbeck, J.S.

    1984-01-01

    Ten 1/11-scale model turbine missile impact tests were conducted at a Naval spin chamber test facility to assess turbine missile effects in nuclear plant design. The objective of the tests was to determine the effects of missile spin, blade crush, and target edge conditions on the impact of turbine disk fragments on the steel casing. The results were intended for use in making realistic estimates for the initial conditions of fragments that might escape the casing in the event of a disk burst in a nuclear plant. The burst of a modified gas turbine rotor in a high-speed spin chamber provided three missiles with the proper rotational and translational velocities of actual steam turbine fragments. Tests of bladed, spinning missiles were compared with previous tests of unbladed, nonspinning missiles. The total residual energy of the spinning missiles, as observed from high-speed photographs of disk burst, was the same as that of the nonspinning missiles launched in a piercing orientation. Tests with bladed missiles showed that for equal burst speeds, the residual energy of bladed missiles is less than that of unbladed missiles. Impacts of missiles near the edge of targets resulted in residual missile velocities greater than for central impact. (orig.)

  5. DECOVALEX I - Test Case 1: Coupled stress-flow model

    International Nuclear Information System (INIS)

    Rosengren, L.; Christianson, M.

    1995-12-01

    This report presents the results of the coupled stress-flow model, test case 1 of Decovalex. The model simulates the fourth loading cycle of a coupled stress-flow test and subsequent shearing up to and beyond peak shear resistance. The first loading sequence (A) consists of seven normal loading steps: 0, 5, 15, 25, 15, 5, 0 MPa. The second loading sequence (B) consists of the following eight steps: unstressed state, normal boundary loading of 25 MPa (no shearing), and then shearing of 0.5, 0.8, 2, 4, 2, 0 mm. Two different options regarding the rock joint behaviour were modeled in accordance with the problem definition. In option 1 a linear elastic joint model with Coulomb slip criterion was used. In option 2 a non-linear empirical (i.e. Barton-Bandis) joint model was used. The hydraulic condition during both load sequence A and B was a constant head of 5 m at the inlet point and 0 m at the outlet point. All model runs presented in this report were performed using the two-dimensional distinct element computer code UDEC, version 1.8. 30 refs, 36 figs
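
    A minimal sketch of the Coulomb slip criterion used in the option 1 joint model above: a joint slips when the shear stress reaches c + σ_n·tan(φ). The cohesion, friction angle, and stress values below are invented for illustration, not taken from the Decovalex problem definition.

```python
# Coulomb slip check for a rock joint (illustrative parameter values).
import math

def coulomb_slips(tau, sigma_n, cohesion, phi_deg):
    """True if |tau| reaches the Coulomb shear strength of the joint."""
    strength = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return abs(tau) >= strength

# Hypothetical joint under the 25 MPa normal load of sequence B,
# with c = 1 MPa and a friction angle of 30 degrees:
strength_25 = 1.0 + 25.0 * math.tan(math.radians(30.0))  # ~15.4 MPa
```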

  6. Automatic Generation of Test Cases from UML Models

    Directory of Open Access Journals (Sweden)

    Constanza Pérez

    2018-04-01

    Full Text Available [Context] The growing demand for high-quality software has led the industry to incorporate processes that enable compliance with these standards, but at an increased development cost. A strategy to reduce this cost is to incorporate quality evaluations from the early stages of software development. A technique that facilitates this evaluation is model-based testing, which makes it possible to generate test cases in early phases using the conceptual models of the system as input. [Objective] In this paper, we introduce TCGen, a tool that enables the automatic generation of abstract test cases starting from UML conceptual models. [Method] The design and implementation of TCGen, a technique that applies different testing criteria to class diagrams and state transition diagrams to generate test cases, is presented as a model-based testing approach. To do that, TCGen uses UML models, which are widely used in industry, and a set of algorithms that recognize the concepts in the models in order to generate abstract test cases. [Results] An exploratory experimental evaluation has been performed to compare the TCGen tool with traditional testing. [Conclusions] Even though the exploratory evaluation shows promising results, it is necessary to perform more empirical evaluations in order to generalize the results.
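
    One testing criterion that such tools commonly apply to state transition diagrams is all-transitions coverage. TCGen's internals are not described here, so the toy state machine and the traversal below are purely illustrative.

```python
# Generate one abstract test path per transition of a state machine
# (all-transitions coverage). The machine is a made-up example.
from collections import deque

def transition_coverage_paths(transitions, start):
    """Return a path from `start` exercising each transition once."""
    paths = []
    for (src, event, dst) in transitions:
        # BFS from the start state to the transition's source state.
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == src:
                paths.append(path + [event])   # append the target event
                break
            for (s, e, d) in transitions:
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [e]))
    return paths

# Toy ATM-like machine: idle -> auth -> active -> idle
machine = [("idle", "insert_card", "auth"),
           ("auth", "valid_pin", "active"),
           ("active", "eject_card", "idle")]
tests = transition_coverage_paths(machine, "idle")
```

    Each returned event sequence is an abstract test case that a later step would concretize with test data.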

  7. Testing Affine Term Structure Models in Case of Transaction Costs

    NARCIS (Netherlands)

    Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.

    1999-01-01

    In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is

  8. Business models & business cases for point-of-care testing

    NARCIS (Netherlands)

    Staring, A.J.; Meertens, L. O.; Sikkel, N.

    2016-01-01

    Point-Of-Care Testing (POCT) enables clinical tests at or near the patient, with test results that are available instantly or in a very short time frame, to assist caregivers with immediate diagnosis and/or clinical intervention. The goal of POCT is to provide accurate, reliable, fast, and

  9. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    NARCIS (Netherlands)

    Calamé, Jens R.; Ioustinova, Natalia; Romijn, J.M.T.; Smith, G.; van de Pol, Jan Cornelis

    2007-01-01

    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to

  10. Modeling Asteroid Dynamics using AMUSE: First Test Cases

    NARCIS (Netherlands)

    Frantseva, Kateryna; Mueller, Michael; van der Tak, Floris; Helmich, Frank P.

    2015-01-01

    We are creating a dynamic model of the current asteroid population. The goal is to reproduce measured impact rates in the current Solar System, from which we'll derive delivery rates of water and organic material by tracing low-albedo C-class asteroids (using the measured albedo distribution from

  11. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-31

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.
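
    The equilibrium-partitioning effect described above can be illustrated with a drastically simplified retardation-factor calculation: sorption to mobile colloids reduces the classical retardation R = 1 + (ρ_b/θ)·K_d. All parameter values are invented, and the report's full model (kinetic desorption, colloid filtration) is far richer than this sketch.

```python
# Effective retardation with equilibrium sorption to mobile colloids
# (simplified illustration; parameter values are hypothetical).
def retardation(rho_b, theta, kd, kc=0.0, c_colloid=0.0):
    """kc * c_colloid is the dimensionless ratio of colloid-bound to
    truly dissolved radionuclide mass in the mobile phase."""
    return 1.0 + (rho_b / theta) * kd / (1.0 + kc * c_colloid)

r_no_colloids = retardation(rho_b=1.6, theta=0.3, kd=5.0)
r_with_colloids = retardation(rho_b=1.6, theta=0.3, kd=5.0,
                              kc=1e4, c_colloid=1e-3)
```

    With these made-up numbers the colloids cut the retardation factor by roughly a factor of eight, which is the acceleration mechanism the test cases are designed to exercise.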

  12. Comparative Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: one comparative and one empirical. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases. The comparative test cases include: ventilation, shading and geometry.

  13. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    A new model of event- and message-driven Petri networks (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems resulting from the class inheritance mechanism encountered in object-oriented software testing, such as the oracle problem, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases stemming from the newly proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria; therefore, the proposed algorithm achieves a good coverage rate.

  14. Testing the Market Model – A Case Study of Fondul Proprietatea (FP)

    OpenAIRE

    Sorin Claudiu Radu

    2014-01-01

    The financial theory related to bond portfolio analysis was coined by Harry Markowitz, an authentic pioneer of modern bond theory, and his well-thought-out interpretation of the bond selection model may be found in his research papers "Portfolio Selection" (Markowitz M. Harry, 1952) and "Portfolio Selection: Efficient Diversification of Investments" (Markowitz M. Harry, 1960). This paper proposes to test the market model on the Romanian stock market, in the case of the Property Fund.

  15. Comparative Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes a definition of the comparative test cases DSF200_3 and DSF200_4, which were previously described in the comparative test case specification for the test cases DSF100_3 and DSF200_3 [Ref.1].

  16. Modeling of a Parabolic Trough Solar Field for Acceptance Testing: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M. J.; Mehos, M. S.; Kearney, D. W.; McMahan, A. C.

    2011-01-01

    As deployment of parabolic trough concentrating solar power (CSP) systems ramps up, the need for reliable and robust performance acceptance test guidelines for the solar field is also amplified. Project owners and/or EPC contractors often require extensive solar field performance testing as part of the plant commissioning process in order to ensure that actual solar field performance satisfies both technical specifications and performance guarantees between the involved parties. Performance test code work is currently underway at the National Renewable Energy Laboratory (NREL) in collaboration with the SolarPACES Task-I activity, and within the ASME PTC-52 committee. One important aspect of acceptance testing is the selection of a robust technology performance model. NREL has developed a detailed parabolic trough performance model within the SAM software tool. This model is capable of predicting solar field, sub-system, and component performance. It has further been modified for this work to support calculation at sub-hourly time steps. This paper presents the methodology and results of a case study comparing actual performance data for a parabolic trough solar field to the predicted results using the modified SAM trough model. Due to data limitations, the methodology is applied to a single collector loop, though it applies to larger subfields and entire solar fields. Special consideration is given to the model formulation, improvements to the model formulation based on comparison with the collected data, and uncertainty associated with the measured data. Additionally, this paper identifies modeling considerations that are of particular importance in the solar field acceptance testing process and uses the model to provide preliminary recommendations regarding acceptable steady-state testing conditions at the single-loop level.

  17. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: one comparative and one empirical. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  18. A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models

    Science.gov (United States)

    Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.

    2016-12-01

    Due to the wide variety of choices that need to be made during the development of dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf sea scales at midlatitude. The bathymetry, initial conditions and external forcings are defined analytically so that any model developer or user can reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and CROCO (Coastal and Regional Ocean Community model) models. These two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-Lagrangian vertical coordinate, a coupled space-time approach for advective terms, and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-Eulerian vertical coordinate, a method of lines is used for advective terms, and tracer advection satisfies the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies: 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case can be an interesting experiment for continuing research in numerical approaches as well as an efficient tool to allow intercomparison between structured-grid and unstructured-grid approaches. Reference: Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F.: The numerics of hydrostatic structured-grid coastal ocean models: state of

  19. Developing a quality by design approach to model tablet dissolution testing: an industrial case study.

    Science.gov (United States)

    Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine

    2017-11-02

    This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.

  20. Developing and Testing a 3d Cadastral Data Model a Case Study in Australia

    Science.gov (United States)

    Aien, A.; Kalantari, M.; Rajabifard, A.; Williamson, I. P.; Shojaei, D.

    2012-07-01

    and physical extent of 3D properties and associated interests. The data model extends the traditional cadastral requirements to cover other applications such as urban planning and land valuation and taxation. A demonstration of a test system on the proposed data model is also presented. The test is based on a case study in Victoria, Australia to evaluate the effectiveness of the data model.

  1. IPRT polarized radiative transfer model intercomparison project - Three-dimensional test cases (phase B)

    Science.gov (United States)

    Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred

    2018-04-01

    Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. In particular, molecular (Rayleigh) scattering polarizes electromagnetic radiation, but scattering by aerosols, cloud droplets (Mie scattering) and ice crystals also polarizes it. Each atmospheric constituent produces a characteristic polarization signal, so spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers. This makes it possible to solve the vector radiative transfer equation (VRTE) efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field-of-view of the instrument and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). In order to estimate the errors due to the IPA, three-dimensional (3D) vector radiative transfer models are required. So far, only a few such models exist. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http
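
    The strong polarizing effect of Rayleigh scattering noted above follows from the single-scattering degree of linear polarization, P(θ) = (1 - cos²θ)/(1 + cos²θ), which peaks at a 90° scattering angle. This is purely illustrative: the paper's 3D vector radiative transfer involves multiple scattering and full Stokes vectors.

```python
# Degree of linear polarization for single Rayleigh scattering.
import math

def rayleigh_polarization(theta_deg):
    """P(theta) = (1 - cos^2 theta) / (1 + cos^2 theta)."""
    mu2 = math.cos(math.radians(theta_deg)) ** 2
    return (1.0 - mu2) / (1.0 + mu2)
```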

  2. Empirical tests of the Chicago model and the Easterlin hypothesis: a case study of Japan.

    Science.gov (United States)

    Ohbuchi, H

    1982-05-01

    The objective of this discussion is to test the applicability of the economic theory of fertility, with special reference to postwar Japan, and to find a clue for forecasting the future trend of fertility. The theories examined are the "Chicago model" and the "Easterlin hypothesis." The major conclusion common among the leading economic theories of fertility, which have their origin with Gary S. Becker (1960, 1965) and Richard A. Easterlin (1966), is the positive income effect, i.e., that the relationship between income and fertility is positive despite the evidence that higher-income families have fewer children and that fertility has declined with economic development. Bridging the gap between theory and fact is the primary purpose of the economic theory of fertility, and each theory offers a different interpretation of it. The point of the Chicago model, particularly of the household decision-making model of the "new home economics," is the mechanism by which a positive effect of the husband's income growth on fertility is offset by a negative price effect caused by the opportunity cost of the wife's time. While the opportunity cost of the wife's time is independent of the female wage rate for an unemployed wife, it is directly associated with the wage rate for a gainfully employed wife. Thus, the fertility response to female wages occurs only among families with an employed wife. The primary concern of empirical efforts to test the Chicago model has been the determination of income and price elasticities. An attempt is made to test the relevance of the Chicago model and the Easterlin hypothesis in explaining the fertility movement in postwar Japan. In the case of the Chicago model, the statistical results appeared fairly successful but did not match the theory. The effect on fertility of a rise in women's real wage (and, therefore, in the opportunity cost of the mother's time) and of a rise in the labor force participation rate of married women of childbearing age in recent years could not

  3. Integrated socio-environmental modelling: A test case in coastal Bangladesh

    Science.gov (United States)

    Lazar, Attila

    2013-04-01

    through the coastal Bangladesh test case.

  4. Modeling erosion of unsaturated compacted bentonite by groundwater flow; pinhole erosion test case

    International Nuclear Information System (INIS)

    Laurila, T.; Sane, P.; Olin, M.; Koskinen, K.

    2012-01-01

    Document available in extended abstract form only. Erosion of compacted clay material by water flow is a critical factor affecting the performance of radioactive waste confinement. Our emphasis in this work is the buffer of the KBS-3V concept, proposed to be compacted MX-80 bentonite. Unsaturated erosion occurs during the saturation phase of the EBS, and the main quantity of interest is the total buffer mass carried away by a groundwater flow that induces erosion by forming piping channels near the buffer/rock interface. The purpose of this work is to provide modeling tools to support erosion experiments. The role of modeling is, first, to interpret experimental observations in terms of processes and to estimate the robustness of experimental results. Secondly, we seek to scale up results from the laboratory scale, particularly to time scales longer than those experimentally accessible. We have performed modeling and data analysis pertaining to tests of unsaturated clay erosion. Pinhole experiments were used to study this erosion case. The main differences from well-understood pinhole erosion tests are that the material is strongly swelling and that the water flow is not determined by the pressure head but by the total flux. Groundwater flow in the buffer is determined by the flux because pressure losses occur overwhelmingly in the surrounding rock, not in the piping channel. We formulate a simple model that links an effective-solid-diffusivity-based swelling model to erosion by flow at the solid/liquid interface. The swelling model is similar in concept to that developed at KTH, but simpler. Erosion in the model is caused by laminar flow in the pinhole, and happens in a narrow region at the solid/liquid interface where velocity and solid volume fraction overlap. The erosion model can be mapped to erosion by wall shear, and can thus be considered an extension of that classic erosion model. The main quantity defining the behavior of clay erosion in the model is the ratio of

  5. Airside HVAC BESTEST. Adaptation of ASHRAE RP 865 Airside HVAC Equipment Modeling Test Cases for ASHRAE Standard 140. Volume 1, Cases AE101-AE445

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J. [J. Neymark & Associates, Golden, CO (United States); Kennedy, M. [Mike D. Kennedy, Inc., Townsend, WA (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gall, J. [AAON, Inc., Tulsa, OK (United States); Knebel, D. [AAON, Inc., Tulsa, OK (United States); Henninger, R. [GARD Analytics, Inc., Arlington Heights, IL (United States); Witte, M. [GARD Analytics, Inc., Arlington Heights, IL (United States); Hong, T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McDowell, T. [Thermal Energy System Specialists, Madison, WI (United States); Yan, D. [Tsinghua Univ., Beijing (China); Zhou, X. [Tsinghua Univ., Beijing (China)

    2016-03-01

    This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.

  6. A practical model-based statistical approach for generating functional test cases: application in the automotive industry

    OpenAIRE

    Awédikian , Roy; Yannou , Bernard

    2012-01-01

    With the growing complexity of industrial software applications, industry is looking for efficient and practical methods to validate software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...

  7. PREDICTABILITY OF FINANCIAL CRISES: TESTING K.R.L. MODEL IN THE CASE OF TURKEY

    Directory of Open Access Journals (Sweden)

    Zeynep KARACOR

    2012-06-01

    Full Text Available The aim of this study is to test the predictability of the 2007 Global Economic Crisis, which hit Turkey, with the help of Turkish macroeconomic data. The K.R.L. model is used to test predictability. By analyzing various leading early-warning indicators, the success of the model in forecasting crises is surveyed. The findings do not support the K.R.L. model. Possible reasons for this are stated in the article.

  8. A case study testing the cavity mode model of the magnetosphere

    Directory of Open Access Journals (Sweden)

    D. V. Sarafopoulos

    2005-07-01

    Full Text Available Based on a case study, we test the cavity mode model of the magnetosphere, looking for eigenfrequencies via multi-satellite and multi-instrument measurements. Geotail and ACE provide information on the interplanetary medium that dictates the input parameters of the system; the four Cluster satellites monitor the magnetopause surface waves; the POLAR (L=9.4) and LANL 97A (L=6.6) satellites reveal two in-situ monochromatic field line resonances (FLRs) with T=6 and 2.5 min, respectively; and the IMAGE ground magnetometers demonstrate latitude-dependent delays in signature arrival times, as inferred by Sarafopoulos (2004b). Similar dispersive structures showing systematic delays are also extensively scrutinized by Sarafopoulos (2005) and interpreted as tightly associated with the so-called pseudo-FLRs, which show almost the same observational characteristics as an authentic FLR. In particular, for this episode, successive solar wind pressure pulses produce recurring ionospheric twin-vortex Hall currents which are identified on the ground as pseudo-FLRs. The BJN ground magnetometer records the pseudo-FLR (like the other IMAGE station responses) associated with an intense power spectral density ranging from 8 to 12 min and, in addition, two discrete resonant lines with T=3.5 and 7 min. In this case study, even though the magnetosphere is evidently affected by a broad-band compressional wave originating upstream of the bow shock, we nevertheless do not identify any cavity mode oscillation within the magnetosphere. We also fail to identify any of the cavity mode frequencies proposed by Samson (1992).

    Keywords. Magnetospheric physics (Magnetosphere-ionosphere interactions; Solar wind-magnetosphere interactions; MHD waves and instabilities)

  9. Explanatory item response modelling of an abstract reasoning assessment: A case for modern test design

    OpenAIRE

    Helland, Fredrik

    2016-01-01

    Assessment is an integral part of society and education, and for this reason it is important to know what you measure. This thesis is about explanatory item response modelling of an abstract reasoning assessment, with the objective to create a modern test design framework for automatic generation of valid and precalibrated items of abstract reasoning. Modern test design aims to strengthen the connections between the different components of a test, with a stress on strong theory, systematic it...

  10. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  11. Hubble Diagram Test of Expanding and Static Cosmological Models: The Case for a Slowly Expanding Flat Universe

    Directory of Open Access Journals (Sweden)

    Laszlo A. Marosi

    2013-01-01

    Full Text Available We present a new redshift (RS) versus photon travel time test including 171 supernovae RS data points. We extended the Hubble diagram to a range of z = 0.0141–8.1 in the hope that, at high RSs, the fitting of the calculated RS/photon-travel-time diagrams to the observed RS data would, as predicted by different cosmological models, set constraints on alternative cosmological models. The Lambda cold dark matter (ΛCDM) model, the static universe model, and the case for a slowly expanding flat universe (SEU) are considered. We show that, on the basis of the Hubble diagram test, the static and the slowly expanding models are favored.

  12. Thermal Vacuum Test Correlation of a Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytical Model

    Science.gov (United States)

    Mckim, Stephen A.

    2016-01-01

    This thesis describes the development and correlation of a thermal model that forms the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented, are presented. The thermal model was correlated to within plus or minus 3 degrees Celsius of the thermal vacuum test data and was determined sufficient to make future propellant predictions on MMS. The model was also found to be relatively sensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed to improve temperature predictions in the upper hemisphere of the propellant tank, where predictions were found to be 2 to 2.5 degrees Celsius lower than the test data. A road map for applying the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  13. Strategy for a Rock Mechanics Site Descriptive Model. A test case based on data from the Aespoe HRL

    International Nuclear Information System (INIS)

    Hudson, John A

    2002-06-01

    In anticipation of the SKB Site Investigations for radioactive waste disposal, an approach has been developed for the Rock Mechanics Site Descriptive Model. This approach was tested by predicting the rock mechanics properties of a 600 m x 180 m x 120 m rock volume at the Aespoe Hard Rock Laboratory (HRL) using limited borehole data of the type typically obtained during a site investigation. These predicted properties were then compared with 'best estimate' properties obtained from a study of the test rock volume using additional information, mainly tunnel data. The exercise was known as the Test Case, and is the subject of this Report. Three modelling techniques were used to predict the rock properties: the 'empirical approach' - the rock properties were estimated using rock mass classification schemes and empirical correlation formulae; the 'theoretical approach' - the rock properties were estimated using numerical modelling techniques; and the 'stress approach' - the rock stress state was estimated using primary data and numerical modelling. These approaches are described separately and respectively. Following an explanation of the context for the Test Case within the strategy for developing the Rock Mechanics Site Descriptive Model, conditions at the Aespoe HRL are described in Chapter 2. The Test Case organization and the suite of nine Protocols used to ensure that the work was appropriately guided and co-ordinated are described in Chapter 3. The methods for predicting the rock properties and the rock stress, and comparisons with the 'best estimate' properties of the actual conditions, are presented in Chapters 4 and 5. Finally, the conclusions from this Test Case exercise are given in Chapter 6. General recommendations for the management of this type of Test Case are also included

  14. Strategy for a Rock Mechanics Site Descriptive Model. A test case based on data from the Aespoe HRL

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, John A (ed.) [Rock Engineering Consultants, Welwyn Garden City (United Kingdom)

    2002-06-01

    In anticipation of the SKB Site Investigations for radioactive waste disposal, an approach has been developed for the Rock Mechanics Site Descriptive Model. This approach was tested by predicting the rock mechanics properties of a 600 m x 180 m x 120 m rock volume at the Aespoe Hard Rock Laboratory (HRL) using limited borehole data of the type typically obtained during a site investigation. These predicted properties were then compared with 'best estimate' properties obtained from a study of the test rock volume using additional information, mainly tunnel data. The exercise was known as the Test Case, and is the subject of this Report. Three modelling techniques were used to predict the rock properties: the 'empirical approach' - the rock properties were estimated using rock mass classification schemes and empirical correlation formulae; the 'theoretical approach' - the rock properties were estimated using numerical modelling techniques; and the 'stress approach' - the rock stress state was estimated using primary data and numerical modelling. These approaches are described separately and respectively. Following an explanation of the context for the Test Case within the strategy for developing the Rock Mechanics Site Descriptive Model, conditions at the Aespoe HRL are described in Chapter 2. The Test Case organization and the suite of nine Protocols used to ensure that the work was appropriately guided and co-ordinated are described in Chapter 3. The methods for predicting the rock properties and the rock stress, and comparisons with the 'best estimate' properties of the actual conditions, are presented in Chapters 4 and 5. Finally, the conclusions from this Test Case exercise are given in Chapter 6. General recommendations for the management of this type of Test Case are also included.

  15. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    Science.gov (United States)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test-data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Celsius of the thermal vacuum test data and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion were found to be 2 to 2.5 degrees Celsius lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  16. Developing and testing temperature models for regulated systems: a case study on the Upper Delaware River

    Science.gov (United States)

    Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.

    2014-01-01

    Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58–1.311, NSE = 0.99–0.97, d = 0.98–0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = −0.10 to −1.30). Validation analyses showed all models performed
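    The four accuracy statistics named in the abstract (RMSE, NSE, PBIAS, and the index of agreement d) follow standard definitions. The sketch below is illustrative code, not the authors' analysis; it uses the abstract's sign convention, in which negative PBIAS indicates overestimation.

```python
import numpy as np

def accuracy_stats(obs, sim):
    """Standard model-accuracy statistics for observed vs. simulated series.

    RMSE  - root mean square error
    NSE   - Nash-Sutcliffe efficiency (1 is perfect)
    PBIAS - percent bias; negative values indicate overestimation,
            matching the sign convention in the abstract
    d     - Willmott's index of agreement (1 is perfect)
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)
    d = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return rmse, nse, pbias, d
```

    A perfect prediction gives RMSE = 0, NSE = 1, PBIAS = 0 and d = 1; a model that consistently over-predicts yields a negative PBIAS.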

  17. Developing and testing temperature models for regulated systems: A case study on the Upper Delaware River

    Science.gov (United States)

    Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.

    2014-11-01

    Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58-1.311, NSE = 0.99-0.97, d = 0.98-0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = -0.10 to -1.30). Validation analyses showed all models performed well; the

  18. Bulgarian fuel models developed for implementation in FARSITE simulations for test cases in Zlatograd area

    Science.gov (United States)

    Nina Dobrinkova; LaWen Hollingsworth; Faith Ann Heinsch; Greg Dillon; Georgi Dobrinkov

    2014-01-01

    As a key component of the cross-border project between Bulgaria and Greece known as OUTLAND, a team from the Bulgarian Academy of Sciences and Rocky Mountain Research Station started a collaborative project to identify and describe various fuel types for a test area in Bulgaria in order to model fire behavior for recent wildfires. Although there have been various...

  19. Case Report: HIV test misdiagnosis

    African Journals Online (AJOL)

    Case Study: HIV test misdiagnosis 124. Case Report: HIV ... A positive rapid HIV test does not require ... 3 College of Medicine - Johns Hopkins Research Project, Blantyre,. Malawi ... test results: a pilot study of three community testing sites.

  20. Eukaryotic Cell Cycle as a Test Case for Modeling Cellular Regulation in a Collaborative Problem-Solving Environment

    Science.gov (United States)

    2007-03-01

    AFRL-IF-RS-TR-2007-69, Final Technical Report, March 2007. The project developed computer models of cell cycle regulation in a variety of organisms, including yeast cells, amphibian embryos, bacterial cells and human cells. These ... and meiosis), but they do not nullify the central role played by irreversible, alternating START and FINISH transitions in the cell cycle.

  1. Case Study: Testing with Case Studies

    Science.gov (United States)

    Herreid, Clyde Freeman

    2015-01-01

    This column provides original articles on innovations in case study teaching, assessment of the method, as well as case studies with teaching notes. This month's issue discusses using case studies to test for knowledge or lessons learned.

  2. Validation of transport models for use in repository performance assessments: a view illustrated for INTRAVAL test case 1b

    International Nuclear Information System (INIS)

    Jackson, C.P.; Lever, D.A.; Sumner, P.J.

    1991-03-01

    We present our views on validation. We consider that validation is slightly different for general models and specific models. We stress the importance of presenting for review the case for (or against) a model. We outline a formal framework for validation, which helps to ensure that all the issues are addressed. Our framework includes calibration, testing predictions, comparison with alternative models, which we consider particularly important, analysis of discrepancies, presentation, consideration of implications and suggested improved experiments. We illustrate the approach by application to an INTRAVAL test case based on laboratory experiments. Three models were considered: a simple model that included the effects of advection, dispersion and equilibrium sorption, a model that also included the effects of rock-matrix diffusion, and a model with kinetic sorption. We show that the model with rock-matrix diffusion is the only one to provide a good description of the data. We stress the implications of extrapolating to larger length and time scales for repository performance assessments. (author)
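    The simplest of the three models described above (advection, dispersion and linear equilibrium sorption) admits a classical closed-form breakthrough curve, the Ogata-Banks solution with a retardation factor. The sketch below is offered only as an illustration of that model class, not of the INTRAVAL calculations; the function name and the parameter values in the comments are made up for the example.

```python
import math

def breakthrough(x, t, v, D, R=1.0):
    """Ogata-Banks solution C/C0 for 1-D advection-dispersion with
    linear equilibrium sorption (retardation factor R).  This is the
    'simple model' of the abstract; matrix diffusion is not included.

    x: distance [m], t: time [s], v: pore velocity [m/s],
    D: dispersion coefficient [m^2/s], R: retardation factor (>= 1).
    """
    if t <= 0.0:
        return 0.0
    a = (R * x - v * t) / (2.0 * math.sqrt(D * R * t))
    b = (R * x + v * t) / (2.0 * math.sqrt(D * R * t))
    # The exp term can overflow for strongly advection-dominated cases;
    # it is paired with a rapidly decaying erfc, so guard it explicitly.
    second = math.exp(v * x / D) * math.erfc(b) if v * x / D < 700.0 else 0.0
    return 0.5 * (math.erfc(a) + second)
```

    A sorbing tracer (R > 1) breaks through later than a non-sorbing one at the same point, which is the qualitative behaviour the equilibrium-sorption model must reproduce before matrix diffusion is even considered.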

  3. Application of the genetic algorithm to blume-emery-griffiths model: Test Cases

    International Nuclear Information System (INIS)

    Erdinc, A.

    2004-01-01

    The equilibrium properties of the Blume-Emery-Griffiths (BEG) model Hamiltonian with arbitrary bilinear (J), biquadratic (K) and crystal-field (D) interactions are studied using the genetic algorithm technique. Results are compared with the lowest approximation of the cluster variation method (CVM), which is identical to the mean-field approximation. We found the genetic algorithm to be very efficient for fast search of the average fraction of the spins, especially in the early stages when the system is far from the equilibrium state. A combination of the genetic algorithm followed by one of the well-tested simulation techniques seems to be an optimal approach. The curvature of the inverse magnetic susceptibility is also presented for the stable state of the BEG model
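    The combination described in the abstract can be illustrated with a toy example: a genetic algorithm searching for low-energy configurations of the BEG Hamiltonian on a small 1-D ring. This is a made-up sketch (arbitrary J, K, D values, population size and operators), not the authors' implementation.

```python
import random

def beg_energy(spins, J=1.0, K=0.5, D=0.1):
    """Energy of a 1-D ring of spins s_i in {-1, 0, +1} under the
    Blume-Emery-Griffiths Hamiltonian
        H = -J sum s_i s_{i+1} - K sum s_i^2 s_{i+1}^2 + D sum s_i^2
    (J: bilinear, K: biquadratic, D: crystal-field coupling)."""
    n = len(spins)
    e = 0.0
    for i in range(n):
        s, t = spins[i], spins[(i + 1) % n]
        e += -J * s * t - K * s * s * t * t
    return e + D * sum(s * s for s in spins)

def ga_minimize(n=16, pop_size=40, gens=300, pmut=0.1, seed=1):
    """Toy genetic-algorithm search for a low-energy configuration:
    tournament selection, one-point crossover, point mutation, elitism."""
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 0, 1)) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=beg_energy)
    for _ in range(gens):
        new = [best[:]]                      # elitism: keep the current best
        while len(new) < pop_size:
            p1 = min(rng.sample(pop, 3), key=beg_energy)
            p2 = min(rng.sample(pop, 3), key=beg_energy)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]      # one-point crossover
            for i in range(n):
                if rng.random() < pmut:      # point mutation
                    child[i] = rng.choice((-1, 0, 1))
            new.append(child)
        pop = new
        best = min(pop, key=beg_energy)
    return best, beg_energy(best)
```

    With ferromagnetic J and K and a small positive D, the ground state of this toy ring is the fully aligned configuration; elitism guarantees the best energy never worsens between generations, which mirrors the abstract's point that the GA is most effective while the system is far from equilibrium.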

  4. Modelling ground rupture due to groundwater withdrawal: applications to test cases in China and Mexico

    Science.gov (United States)

    Franceschini, A.; Teatini, P.; Janna, C.; Ferronato, M.; Gambolati, G.; Ye, S.; Carreón-Freyre, D.

    2015-11-01

    The stress variation induced by aquifer overdraft in sedimentary basins with shallow bedrock may cause rupture in the form of pre-existing fault activation or earth fissure generation. The process is causing major detrimental effects in many areas of China and Mexico. Ruptures yield discontinuities in both the displacement and stress fields that classic continuous finite element (FE) models cannot address. Interface finite elements (IE), typically used in contact mechanics, may be of great help and are implemented herein to simulate fault geomechanical behaviour. Two main approaches, i.e. penalty and Lagrangian, are developed to enforce the contact condition on the element interface. The incorporation of IEs into a three-dimensional (3-D) FE geomechanical simulator shows that the Lagrangian approach is numerically more robust and stable than the penalty approach, thus providing more reliable solutions. Furthermore, the use of a Newton-Raphson scheme to deal with the non-linear elasto-plastic fault behaviour allows for quadratic convergence. The FE-IE model is applied to investigate likely ground ruptures in realistic 3-D geologic settings. The case studies are representative of the city of Wuxi in Jiangsu Province (China) and the city of Queretaro (Mexico), where significant land subsidence has been accompanied by the generation of several earth fissures, jeopardizing the stability and integrity of overland structures and infrastructure.

  5. Downscale cascades in tracer transport test cases: an intercomparison of the dynamical cores in the Community Atmosphere Model CAM5

    Directory of Open Access Journals (Sweden)

    J. Kent

    2012-12-01

    Full Text Available The accurate modeling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse-resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high-resolution reference solution, which is simulated on a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells and is designed so that any tracer in the western hemisphere should not pass into the eastern hemisphere. This tests whether the diffusion in transport schemes, often applied explicitly through hyper-diffusion terms or implicitly through monotonic limiters, introduces unphysical mixing.

    An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV and spectral element (CAM-SE dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL. Each scheme tested produces unphysical mass in the eastern hemisphere of the separate cells test.
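    The kind of implicit diffusion these tests probe is easy to demonstrate with a deliberately diffusive scheme. The sketch below (not one of the CAM5 transport schemes) advects a tracer around a periodic 1-D grid with first-order upwinding: total mass is conserved exactly, but the tracer variance that cascades below the grid scale is damped.

```python
import numpy as np

def upwind_advect(q, c, steps):
    """First-order upwind advection on a periodic 1-D grid with Courant
    number c (0 < c <= 1).  The scheme is monotone, so it introduces
    implicit numerical diffusion that damps tracer variance."""
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))
    return q

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
q0 = np.sin(2.0 * np.pi * x) ** 2                # smooth initial tracer
q1 = upwind_advect(q0.copy(), c=0.5, steps=200)  # one full revolution
# mass (sum) is conserved, but variance and the extrema are diffused
```

    Comparing the variance of q1 against q0 quantifies the damping, which is the same diagnostic idea the paper applies to the CAM5 dynamical cores against a high-resolution reference.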

  6. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
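    A drastically simplified, fixed-effect version of the hybrid idea can still show the key point: every study informs sensitivity and specificity, but only cohort studies inform disease prevalence, and prevalence is what unlocks predictive values. The paper itself uses a random-effects model with composite-likelihood inference; the sketch below, with made-up field names and counts, is only an illustration of the data flow.

```python
def pooled_accuracy(studies):
    """Fixed-effect sketch of the hybrid model: pool sensitivity and
    specificity over all studies, but estimate prevalence from cohort
    studies only, then derive overall PPV and NPV.  Each study is a dict
    with counts tp, fn, fp, tn and a 'design' field."""
    tp = sum(s["tp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    fp = sum(s["fp"] for s in studies)
    tn = sum(s["tn"] for s in studies)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    cohorts = [s for s in studies if s["design"] == "cohort"]
    diseased = sum(s["tp"] + s["fn"] for s in cohorts)
    total = sum(s["tp"] + s["fn"] + s["fp"] + s["tn"] for s in cohorts)
    prev = diseased / total
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return {"sens": sens, "spec": spec, "prev": prev, "ppv": ppv, "npv": npv}
```

    A case-control study contributes to the sens/spec sums but is excluded from the prevalence estimate, which is exactly why sens/spec-only methods cannot report PPV or NPV.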

  7. Policy implications of achievement testing using multilevel models: The case of Brazilian elementary schools

    Directory of Open Access Journals (Sweden)

    Igor Gomes Menezes

    2016-11-01

    Full Text Available Large-scale educational assessment has been established as a source of descriptive, evaluative and interpretative information that has influenced educational policies worldwide throughout the last third of the 20th century. In the 1990s the Brazilian Ministry of Education developed the National Basic Education Assessment System (SAEB), which regularly measures management, resource and contextual school features and academic achievement in public and private institutions. In 2005, after significant piloting and review of the SAEB, a new sampling strategy was adopted and Prova Brasil became the new instrument used by the Ministry to assess skills in Portuguese (reading comprehension) and Mathematics (problem solving), as well as to collect contextual information concerning the school, principal, teacher, and students. This study aims to identify which variables are predictors of academic achievement of fifth-grade students on Prova Brasil. Across a large sample of students, multilevel models tested a large number of variables relevant to student achievement. This approach uncovered critical variables not commonly seen as significant in light of other achievement determinants, including student habits, teacher ethnicity, and school technological resources. As such, it demonstrates the value of multilevel modelling (MLM) for appropriately nuanced educational policies that reflect critical influences on student achievement. Implications for wider application to psychology studies with relevant impacts for policy are also discussed.
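    A standard preliminary to fitting multilevel models of students nested in schools is to ask how much achievement variance lies between schools. The sketch below computes the one-way ANOVA estimate of the intraclass correlation (ICC); it is illustrative code, not the study's analysis, and the function name is made up.

```python
import numpy as np

def icc_oneway(scores_by_school):
    """One-way ANOVA estimate of the intraclass correlation: the share
    of score variance lying between groups (schools), the usual first
    diagnostic before fitting a full multilevel model."""
    groups = [np.asarray(g, float) for g in scores_by_school]
    k = len(groups)
    n = sum(len(g) for g in groups)
    # average group size corrected for unbalanced designs
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msb, msw = ssb / (k - 1), ssw / (n - k)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

    An ICC near zero says school membership explains little of the variance (a single-level regression might do); a large ICC is the signal that multilevel modelling, as used in this study, is warranted.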

  8. Automated Test Case Generation from Highly Reliable System Requirements Models, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Software testing is a complex and expensive phase of the software development cycle. Effective software testing is especially important in mission-critical software,...

  9. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    Science.gov (United States)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods and concepts (such as effective resolution), to assess the behaviour of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design and the time-stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow properties of numerical schemes to be verified, and others (trajectory of a barotropic vortex, interaction of a current with topography) bring to light numerical issues that remain undetected in realistic configurations. When complexity in the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to get a measure of the model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting

  10. The Silicon Trypanosome : A Test Case of Iterative Model Extension in Systems Biology

    NARCIS (Netherlands)

    Achcar, Fiona; Fadda, Abeer; Haanstra, Jurgen R.; Kerkhoven, Eduard J.; Kim, Dong-Hyun; Leroux, Alejandro E.; Papamarkou, Theodore; Rojas, Federico; Bakker, Barbara M.; Barrett, Michael P.; Clayton, Christine; Girolami, Mark; Krauth-Siegel, R. Luise; Matthews, Keith R.; Breitling, Rainer; Poole, RK

    2014-01-01

    The African trypanosome, Trypanosoma brucei, is a unicellular parasite causing African trypanosomiasis (sleeping sickness in humans and nagana in animals). Due to some of its unique properties, it has emerged as a popular model organism in systems biology. A predictive quantitative model of

  11. LES of explosions in venting chamber: A test case for premixed turbulent combustion models

    OpenAIRE

    Vermorel , Olivier; Quillatre , Pierre; Poinsot , Thierry

    2017-01-01

    This paper presents a new experimental and Large Eddy Simulation (LES) database to study upscaling effects in vented gas explosions. The propagation of premixed flames in three setups of increasing size is investigated experimentally and numerically. The baseline model is the well-known laboratory-scale combustion chamber from Sydney (Kent et al., 2005; Masri et al., 2012); two exact replicas at scales 6 and 24.4 were set up by GexCon (Bergen, Norway). The volume ratio...

  12. Theory Testing Using Case Studies

    DEFF Research Database (Denmark)

    Møller, Ann-Kristina Løkke; Dissing Sørensen, Pernille

    2014-01-01

    The appropriateness of case studies as a tool for theory testing is still a controversial issue, and discussions about the weaknesses of such research designs have previously taken precedence over those about their strengths. The purpose of the paper is to examine and revive the approach of theory testing using case studies, including the associated research goal, analysis, and generalisability. We argue that research designs for theory testing using case studies differ from theory-building case study research designs because different research projects serve different purposes and follow different research paths.

  13. Theory testing using case studies

    DEFF Research Database (Denmark)

    Dissing Sørensen, Pernille; Løkke Nielsen, Ann-Kristina

    2006-01-01

    Case studies may have different research goals. One such goal is the testing of small-scale and middle-range theories. Theory testing refers to the critical examination, observation, and evaluation of the 'why' and 'how' of a specified phenomenon in a particular setting. In this paper, we focus on the strengths of theory-testing case studies. We specify research paths associated with theory testing in case studies and present a coherent argument for the logic of theoretical development and refinement using case studies. We emphasize different uses of rival explanations and their implications for research design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive...

  14. Species delineation using Bayesian model-based assignment tests: a case study using Chinese toad-headed agamas (genus Phrynocephalus)

    Directory of Open Access Journals (Sweden)

    Fu Jinzhong

    2010-06-01

    Full Text Available Abstract
    Background: Species are fundamental units in biology, yet much debate exists surrounding how we should delineate species in nature. Species discovery now requires the use of separate, corroborating datasets to quantify independently evolving lineages and test species criteria. However, the complexity of the speciation process has ushered in a need to infuse studies with new tools capable of aiding in species delineation. We suggest that model-based assignment tests are one such tool. This method circumvents constraints with traditional population genetic analyses and provides a novel means of describing cryptic and complex diversity in natural systems. Using toad-headed agamas of the Phrynocephalus vlangalii complex as a case study, we apply model-based assignment tests to microsatellite DNA data to test whether P. putjatia, a controversial species that closely resembles P. vlangalii morphologically, represents a valid species. Mitochondrial DNA and geographic data are also included to corroborate the assignment test results.
    Results: Assignment tests revealed two distinct nuclear DNA clusters, with 95% (230/243) of the individuals being assigned to one of the clusters with > 90% probability. The nuclear genomes of the two clusters remained distinct in sympatry, particularly at three syntopic sites, suggesting the existence of reproductive isolation between the identified clusters. In addition, a mitochondrial ND2 gene tree revealed two deeply diverged clades, which were largely congruent with the two nuclear DNA clusters, with a few exceptions. Historical mitochondrial introgression events between the two groups might explain the disagreement between the mitochondrial and nuclear DNA data. The nuclear DNA clusters and mitochondrial clades corresponded nicely to the hypothesized distributions of P. vlangalii and P. putjatia.
    Conclusions: These results demonstrate that assignment tests based on microsatellite DNA data can be powerful tools
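    The core of a model-based assignment test can be sketched in a few lines: given per-cluster allele frequencies, compute the posterior probability of each candidate cluster for a multilocus genotype. This toy Hardy-Weinberg likelihood calculation is far simpler than the Bayesian clustering actually used in such studies, and every allele label and frequency below is invented for illustration.

```python
import math

def assign(genotype, cluster_freqs):
    """Toy likelihood-based assignment: given per-cluster allele
    frequencies at each microsatellite locus, return the posterior
    probability that a diploid genotype came from each cluster
    (uniform prior, Hardy-Weinberg genotype frequencies within clusters).

    genotype:      list of (allele1, allele2) tuples, one per locus
    cluster_freqs: one list of per-locus allele-frequency dicts per cluster
    """
    logls = []
    for freqs in cluster_freqs:
        ll = 0.0
        for (a1, a2), f in zip(genotype, freqs):
            # small floor for alleles unseen in a cluster, to avoid log(0)
            p, q = f.get(a1, 1e-6), f.get(a2, 1e-6)
            ll += math.log(2.0 * p * q) if a1 != a2 else math.log(p * p)
        logls.append(ll)
    m = max(logls)
    w = [math.exp(v - m) for v in logls]     # stable softmax over log-likelihoods
    s = sum(w)
    return [x / s for x in w]
```

    An individual whose alleles are common in one cluster and rare in the other receives a posterior near 1 for the first cluster, which is the logic behind the > 90% assignment probabilities reported in the abstract.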

  15. RSG Deployment Case Testing Results

    Energy Technology Data Exchange (ETDEWEB)

    Owsley, Stanley L.; Dodson, Michael G.; Hatchell, Brian K.; Seim, Thomas A.; Alexander, David L.; Hawthorne, Woodrow T.

    2005-09-01

    The RSG deployment case design is centered on producing a transport case that houses the RSG system safely and in a controlled manner during transport. The design was driven by two conflicting constraints: first, that the case be as light as possible, and second, that it meet a stringent list of military-specified (Mil-Spec) requirements. The design team worked to extract every bit of weight from the design while striving to meet the rigorous Mil-Spec constraints. In the end, compromises were made primarily on the specification side to control the overall weight of the transport case. This report outlines the case testing results.

  16. International Energy Agency Building Energy Simulation Test and Diagnostic Method for Heating, Ventilating, and Air-Conditioning Equipment Models (HVAC BESTEST); Volume 1: Cases E100-E200

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J.; Judkoff, R.

    2002-01-01

    This report describes the Building Energy Simulation Test for Heating, Ventilating, and Air-Conditioning Equipment Models (HVAC BESTEST) project conducted by the International Energy Agency (IEA) Tool Evaluation and Improvement Experts Group. The group was composed of experts from the Solar Heating and Cooling (SHC) Programme, Task 22, Subtask A. The current test cases, E100-E200, represent the beginning of work on mechanical equipment test cases; additional cases that would expand the current test suite have been proposed for future development.

  17. How to Choose the Suitable Template for Homology Modelling of GPCRs: 5-HT7 Receptor as a Test Case.

    Science.gov (United States)

    Shahaf, Nir; Pappalardo, Matteo; Basile, Livia; Guccione, Salvatore; Rayan, Anwar

    2016-09-01

    G protein-coupled receptors (GPCRs) are a super-family of membrane proteins that attract great pharmaceutical interest due to their involvement in almost every physiological activity, including extracellular stimuli, neurotransmission, and hormone regulation. Currently, structural information on many GPCRs is mainly obtained by the techniques of computer modelling in general and by homology modelling in particular. Based on a quantitative analysis of eighteen antagonist-bound, resolved structures of rhodopsin family "A" receptors - also used as templates to build 153 homology models - it was concluded that a higher sequence identity between two receptors does not guarantee a lower RMSD between their structures, especially when their pair-wise sequence identity (within trans-membrane domain and/or in binding pocket) lies between 25 % and 40 %. This study suggests that we should consider all template receptors having a sequence identity ≤50 % with the query receptor. In fact, most of the GPCRs, compared to the currently available resolved structures of GPCRs, fall within this range and lack a correlation between structure and sequence. When testing suitability for structure-based drug design, it was found that choosing as a template the most similar resolved protein, based on sequence resemblance only, led to unsound results in many cases. Molecular docking analyses were carried out, and enrichment factors as well as attrition rates were utilized as criteria for assessing suitability for structure-based drug design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
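    The template-screening idea (consider every resolved structure up to a given sequence identity with the query, rather than only the single most similar one) can be sketched as follows. The sequences, template names, and handling of the 50% cutoff are illustrative assumptions; a real pipeline would compute identity from a proper pairwise alignment of the trans-membrane domain or binding pocket.

```python
def percent_identity(seq1, seq2):
    """Percent identity over aligned positions, gaps ('-') excluded.
    The sequences are assumed to be pre-aligned and of equal length."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

def candidate_templates(query, templates, max_identity=50.0):
    """Keep all resolved templates with identity <= max_identity to the
    query, per the suggestion of considering templates up to 50% identity."""
    return [name for name, seq in templates.items()
            if percent_identity(query, seq) <= max_identity]

# toy example with invented 4-residue "alignments"
pid = percent_identity("ACDE", "ACDF")
picks = candidate_templates("ACDF", {"t1": "ACDE", "t2": "AAAA"})
```

    The design point is simply that template choice becomes a filtered set to be ranked by docking enrichment, not a single nearest neighbour by sequence.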

  18. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security...

  19. Automating Test Activities: Test Cases Creation, Test Execution, and Test Reporting with Multiple Test Automation Tools

    OpenAIRE

    Loke Mun Sei

    2015-01-01

    Software testing has become a mandatory process in assuring software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle, and how the test processes and test activities, mainly test case creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e. Jira, ...

  20. International Energy Agency Building Energy Simulation Test and Diagnostic Method for Heating, Ventilating, and Air-Conditioning Equipment Models (HVAC BESTEST): Volume 2: Cases E300-E545.

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J.; Judkoff, R.

    2004-12-01

    This report documents an additional set of mechanical system test cases that are planned for inclusion in ANSI/ASHRAE Standard 140. The cases test a program's modeling capabilities on the working-fluid side of the coil, but in an hourly dynamic context over an expanded range of performance conditions. These cases help to scale the significance of disagreements that are less obvious in the steady-state cases. This report is Volume 2 of HVAC BESTEST; Volume 1 was limited to steady-state test cases that could be solved with analytical solutions, whereas Volume 2 includes hourly dynamic effects and other cases that cannot be solved analytically. NREL conducted this work in collaboration with the Tool Evaluation and Improvement Experts Group under the International Energy Agency (IEA) Solar Heating and Cooling Programme Task 22.

  1. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  2. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  3. Theory Testing Using Case Studies

    DEFF Research Database (Denmark)

    Sørensen, Pernille Dissing; Løkke, Ann-Kristina

    2006-01-01

    design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive...

  4. Case studies in ultrasonic testing

    International Nuclear Information System (INIS)

    Prasad, V.; Satheesh, C.; Varde, P.V.

    2015-01-01

    Ultrasonic testing is a widely used Non-Destructive Testing (NDT) method and forms an essential part of the in-service inspection programme of nuclear reactors. The main application of ultrasonic testing is the volumetric scanning of weld joints, followed by thickness gauging of pipelines and pressure vessels. The research reactor Dhruva has completed its first in-service inspection programme, in which about 325 weld joints were volumetrically scanned, in addition to thickness gauging of 300 metres of pipelines of various sizes and about 24 pressure vessels. Ultrasonic testing is also used for level measurements, distance measurements, and cleaning and decontamination of tools. Two case studies are presented in this paper in which ultrasonic testing was used successfully to identify butterfly valve opening status and the extent of choking in pipelines in Dhruva reactor systems

  5. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
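    The likelihood test described in this abstract rests on treating the observed earthquake count in each space-magnitude-time bin as an independent Poisson variable whose mean is the forecast rate. A minimal sketch of the resulting joint log-likelihood, with invented bin rates and an invented observed catalog (the actual RELM tests add simulation-based significance and pairwise comparisons):

```python
import math

def poisson_loglik(forecast_rates, observed_counts):
    """Joint log-likelihood of an observed catalog under a forecast of
    expected rates per bin, assuming independent Poisson counts:
    log L = sum_i [ n_i*log(r_i) - r_i - log(n_i!) ]."""
    return sum(n * math.log(r) - r - math.lgamma(n + 1)
               for r, n in zip(forecast_rates, observed_counts))

# two hypothetical forecasts scored against one observed catalog
observed = [0, 2, 1, 0]          # earthquakes counted per bin
model_a  = [0.1, 1.5, 0.8, 0.2]  # forecast rates per bin
model_b  = [1.0, 0.1, 0.1, 1.0]
ll_a = poisson_loglik(model_a, observed)
ll_b = poisson_loglik(model_b, observed)
```

    Comparing ll_a and ll_b is the essence of the relative-consistency test: the forecast that concentrates rate where events actually occurred earns the higher likelihood.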

  6. Test case for a near-surface repository

    International Nuclear Information System (INIS)

    Elert, M.; Jones, C.; Nilsson, L.B.; Skagius, K.; Wiborgh, M.

    1998-01-01

    A test case is presented for assessment of a near-surface disposal facility for radioactive waste. The case includes waste characterization and repository design, requirements and constraints in an assessment context, scenario development, model description and test calculations

  7. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
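    The offline coupling described here (land surface output drives the groundwater model with no feedback) can be caricatured with a single-cell head balance. The specific yield, river-bed resistance, and forcing values below are invented for illustration; the actual study uses a distributed transient MODFLOW model, not one cell.

```python
def offline_coupled_heads(recharge_series, river_levels, sy=0.2, c=50.0, dt=1.0):
    """One-cell sketch of offline coupling: a land surface model has already
    produced recharge R [m/day] and river levels h_riv [m]; groundwater head
    then evolves as dh/dt = (R - (h - h_riv)/c) / Sy, with no feedback to
    the land surface (the simplification the abstract flags as a limitation).
    sy: specific yield [-], c: river-bed resistance [days], dt: step [days]."""
    h, heads = river_levels[0], []
    for R, h_riv in zip(recharge_series, river_levels):
        h += dt * (R - (h - h_riv) / c) / sy  # recharge raises head, river drains it
        heads.append(h)
    return heads

# no recharge, river at the initial head: the head stays put
flat = offline_coupled_heads([0.0, 0.0, 0.0], [10.0, 10.0, 10.0])
# steady recharge over a fixed river stage: the head rises toward a new balance
wet = offline_coupled_heads([0.002, 0.002], [10.0, 10.0])
```

    In the real model chain the recharge and river-level series come from the land surface model output files, which is exactly what "offline" means here.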

  8. A design process for using normative models in shared decision making: a case study in the context of prenatal testing.

    Science.gov (United States)

    Rapaport, Sivan; Leshno, Moshe; Fink, Lior

    2014-12-01

    Shared decision making (SDM) encourages the patient to play a more active role in the process of medical consultation and its primary objective is to find the best treatment for a specific patient. Recent findings, however, show that patient preferences cannot be easily or accurately judged on the basis of communicative exchange during routine office visits, even for patients who seek to expand their role in medical decision making (MDM). The objective of this study is to improve the quality of patient-physician communication by developing a novel design process for SDM and then demonstrating, through a case study, the applicability of this process in enabling the use of a normative model for a specific medical situation. Our design process goes through the following stages: definition of medical situation and decision problem, development/identification of normative model, adaptation of normative model, empirical analysis and development of decision support systems (DSS) tools that facilitate the SDM process in the specific medical situation. This study demonstrates the applicability of the process through the implementation of the general normative theory of MDM under uncertainty for the medical-financial dilemma of choosing a physician to perform amniocentesis. The use of normative models in SDM raises several issues, such as the goal of the normative model, the relation between the goals of prediction and recommendation, and the general question of whether it is valid to use a normative model for people who do not behave according to the model's assumptions. © 2012 John Wiley & Sons Ltd.

  9. INTRAVAL Phase 2 WIPP 1 test case report: Modeling of brine flow through halite at the Waste Isolation Pilot Plant site

    International Nuclear Information System (INIS)

    Beauheim, R.L.

    1997-05-01

    This report describes the WIPP 1 test case studied as part of INTRAVAL, an international project to study validation of geosphere transport models. The WIPP 1 test case involved simulation of measured brine-inflow rates to boreholes drilled into the halite strata surrounding the Waste Isolation Pilot Plant repository. The goal of the test case was to evaluate the use of Darcy's law to describe brine flow through halite. The general approach taken was to try to obtain values of permeability and specific capacitance that would be: (1) consistent with other available data and (2) able to provide reasonable simulations of all of the brine-inflow experiments performed in the Salado Formation. All of the teams concluded that the average permeability of the halite strata penetrated by the holes was between approximately 10⁻²² and 10⁻²¹ m². Specific capacitances greater than 10⁻¹⁰ Pa⁻¹ are inconsistent with the known constitutive properties of halite and are attributed to deformation, possibly ongoing, of the halite around the WIPP excavations. All project teams found that Darcy-flow models could replicate the experimental data in a consistent and reasonable manner. Discrepancies between the data and simulations are attributed to inadequate representation in the models of processes modifying the pore-pressure field in addition to the experiments themselves, such as ongoing deformation of the rock around the excavations. Therefore, the conclusion from the test case is that Darcy-flow models can reliably be used to predict brine flow to WIPP excavations, provided that the flow modeling is coupled with measurement and realistic modeling of the pore-pressure field around the excavations. This realistic modeling of the pore-pressure field would probably require coupling to a geomechanical model of the stress evolution around the repository.
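    The relation being tested is Darcy's law, Q = k·A·ΔP/(μ·L). A small sketch using a permeability within the 10⁻²² to 10⁻²¹ m² range the teams inferred; the pressure drop, geometry, and brine viscosity are illustrative values chosen for the example, not WIPP measurements.

```python
def darcy_inflow(k, mu, dp, length, area):
    """Volumetric flow rate Q [m^3/s] from Darcy's law, Q = k*A*dP/(mu*L).
    k: permeability [m^2], mu: dynamic viscosity [Pa*s],
    dp: pressure drop [Pa], length: flow path [m], area: cross-section [m^2]."""
    return k * area * dp / (mu * length)

# illustrative numbers: halite at k = 1e-21 m^2, brine mu ~ 1.8e-3 Pa*s,
# 10 MPa pressure drop over a 10 m path, through 1 m^2 of borehole wall
q = darcy_inflow(k=1e-21, mu=1.8e-3, dp=1e7, length=10.0, area=1.0)
```

    The resulting inflow is on the order of 10⁻¹³ m³/s, which conveys why brine-inflow experiments in halite run for months and why the pore-pressure field around the excavation matters so much: ΔP is the only large number in the expression.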

  10. Considerations for test design to accommodate energy-budget models in ecotoxicology: a case study for acetone in the pond snail Lymnaea stagnalis.

    Science.gov (United States)

    Barsi, Alpar; Jager, Tjalling; Collinet, Marc; Lagadic, Laurent; Ducrot, Virginie

    2014-07-01

    Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information is provided regarding test design for TKTD modeling. A case study is presented for the pond snail Lymnaea stagnalis exposed to the narcotic compound acetone, in which the data on all endpoints were analyzed together using a relatively simple TKTD model called DEBkiss. Furthermore, the influence of the data used for calibration on accuracy and precision of model parameters is discussed. The DEBkiss model described toxic effects on survival, growth, and reproduction over time well, within a single integrated analysis. Regarding the parameter estimates (e.g., no-effect concentration), precision rather than accuracy was affected depending on which data set was used for model calibration. In addition, the present study shows that the intrinsic sensitivity of snails to acetone stays the same across different life stages, including the embryonic stage. In fact, the data on egg development allowed for selection of a unique metabolic mode of action for the toxicant. Practical and theoretical considerations for test design to accommodate TKTD modeling are discussed in the hope that this information will aid other researchers to make the best possible use of their test animals. © 2014 SETAC.
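    TKTD models of this kind typically start from a one-compartment toxicokinetic stage. A minimal sketch of that stage is given below; it is not the full DEBkiss model (whose L. stagnalis parameters are in the paper itself), and the rate constant and exposure values are invented for illustration.

```python
import math

def scaled_internal_conc(c_water, ke, t):
    """One-compartment toxicokinetics: dCi/dt = ke*(Cw - Ci), Ci(0) = 0.
    For constant exposure Cw the analytical solution is
    Ci(t) = Cw * (1 - exp(-ke*t)); Ci approaches Cw as t grows.
    c_water: exposure concentration, ke: elimination rate [1/day], t: days."""
    return c_water * (1.0 - math.exp(-ke * t))

# illustrative constant exposure of 10 (arbitrary units) with ke = 0.5 /day
c_start = scaled_internal_conc(10.0, 0.5, 0.0)
c_late = scaled_internal_conc(10.0, 0.5, 100.0)
```

    The toxicodynamic stage then maps Ci(t) onto effects (survival hazard, growth, reproduction); the time-resolved endpoint data the abstract argues for are exactly what pins down ke and the effect parameters together.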

  11. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the materials' resistance to non-ductile fracture. The testing included base materials and welded joints. The rated specimen thickness was 150 mm, with defects of depths between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests, the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  12. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys).

    Science.gov (United States)

    Năpăruş, Magdalena; Kuntner, Matjaž

    2012-01-01

    Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. Our model is a customizable GIS tool intended to predict current and future potential distributions of globally distributed terrestrial lineages. Its predictive
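    The final mapping step (a weighted score from the two best-fit parameters, binned into high, moderate, and low suitability) can be sketched as follows. The parameter names, weights, and thresholds here are invented; the actual study selects the two parameters per species by linear backward regression and maps them with ModelBuilder in ArcGIS.

```python
def classify_suitability(cells, weights, thresholds=(0.66, 0.33)):
    """Bin grid cells into habitat-suitability classes. Each cell carries
    normalized (0-1) values for the two best-fit ecological parameters
    chosen for a species; a weighted score is compared against the
    high/moderate cutoffs. All numbers are illustrative assumptions."""
    hi, mid = thresholds
    labels = []
    for cell in cells:
        score = sum(weights[p] * cell[p] for p in weights)
        labels.append("high" if score >= hi else "moderate" if score >= mid else "low")
    return labels

# two toy cells scored on invented normalized elevation and precipitation
cells = [{"elevation": 0.9, "precipitation": 0.8},
         {"elevation": 0.2, "precipitation": 0.1}]
labels = classify_suitability(cells, {"elevation": 0.5, "precipitation": 0.5})
```

    Applied over a raster of cells within a species' directional distribution, this is the customizable core the abstract describes: swap in other parameters and weights to re-map suitability for another lineage.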

  13. A GIS model predicting potential distributions of a lineage: a test case on hermit spiders (Nephilidae: Nephilengys.

    Directory of Open Access Journals (Sweden)

    Magdalena Năpăruş

    BACKGROUND: Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. METHODOLOGY/PRINCIPAL FINDINGS: We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. CONCLUSIONS: Our model is a customizable GIS tool intended to predict current and future potential

  14. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    Science.gov (United States)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  15. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  16. Testing the standard model

    International Nuclear Information System (INIS)

    Gordon, H.; Marciano, W.; Williams, H.H.

    1982-01-01

    We summarize here the results of the standard model group which has studied the ways in which different facilities may be used to test in detail what we now call the standard model, that is SU_c(3) × SU(2) × U(1). The topics considered are: W±, Z⁰ mass, width; sin²θ_W and neutral current couplings; W⁺W⁻, Wγ; Higgs; QCD; toponium and naked quarks; glueballs; mixing angles; and heavy ions

  17. Ligand binding modes from low resolution GPCR models and mutagenesis: chicken bitter taste receptor as a test-case.

    Science.gov (United States)

    Di Pizio, Antonella; Kruetzfeldt, Louisa-Marie; Cheled-Shoval, Shira; Meyerhof, Wolfgang; Behrens, Maik; Niv, Masha Y

    2017-08-15

    Bitter taste is one of the basic taste modalities, warning against consuming potential poisons. Bitter compounds activate members of the bitter taste receptor (Tas2r) subfamily of G protein-coupled receptors (GPCRs). The number of functional Tas2rs is species-dependent. Chickens represent an intriguing minimalistic model, because they detect the bitter taste of structurally different molecules with merely three bitter taste receptor subtypes. We investigated the binding modes of several known agonists of a representative chicken bitter taste receptor, ggTas2r1. Because of low sequence similarity between ggTas2r1 and crystallized GPCRs (~10% identity, ~30% similarity at most), the combination of computational approaches with site-directed mutagenesis was used to characterize the agonist-bound conformation of ggTas2r1 binding site between TMs 3, 5, 6 and 7. We found that the ligand interactions with N93 in TM3 and/or N247 in TM5, combined with hydrophobic contacts, are typically involved in agonist recognition. Next, the ggTas2r1 structural model was successfully used to identify three quinine analogues (epiquinidine, ethylhydrocupreine, quinidine) as new ggTas2r1 agonists. The integrated approach validated here may be applicable to additional cases where the sequence identity of the GPCR of interest and the existing experimental structures is low.

  18. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was, firstly, to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  19. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

    The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  20. Evidence of system: A network model case-study of seventh grade science assessment practices from classrooms to the state test

    Science.gov (United States)

    Piety, Philip John

    With science education in the United States entering a period of greater accountability, this study investigated how student learning in science was assessed by educators within one state, asking what systemic assessment approaches existed and how the information from them was used. Conducted during the 2006-2007 school year, this research developed and piloted a network-model case study design that included teachers, principals, administrators, and the state test development process, as well as several state-level professional associations. The data analyzed included observations, interviews, surveys, and both public and private documents. Some data were secondary. This design produced an empirical depiction of practice with a web of related cases. The network model expands on the hierarchical (nested) models often assumed in the growing literature on how information is used in educational contexts by showing multiple ways in which individuals are related through organizational structures. Seven case study teachers, each employing assessment methods largely unique and invisible to others in their schools, illustrate one set of assessment practices. The only alternative to classroom assessments that could be documented was the annual state accountability test. These two assessment species were neither tightly coupled nor distinct. Some teachers were partners in developing state test instruments, and in some cases the annual test could be seen as a school management resource. Boundary practices, activities where these two systems connected, were opportunities to identify challenges to policy implementation in science education. The challenges include standards, cognition, vocabulary, and classroom equipment. The boundary practices, along with the web of connections, provide the outlines of potential (and often unrealized) synergistic relationships. This model shows diverse indigenous practices and adaptations by actors responding to pressures of change and

  1. Theoretical Models, Assessment Frameworks and Test Construction.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  2. Radiation Belt Test Model

    Science.gov (United States)

    Freeman, John W.

    2000-10-01

    Rice University has developed a dynamic model of the Earth's radiation belts based on real-time data driven boundary conditions and full adiabaticity. The Radiation Belt Test Model (RBTM) successfully replicates the major features of storm-time behavior of energetic electrons: sudden commencement induced main phase dropout and recovery phase enhancement. It is the only known model to accomplish the latter. The RBTM shows the extent to which new energetic electrons introduced to the magnetosphere near the geostationary orbit drift inward due to relaxation of the magnetic field. It also shows the effects of substorm related rapid motion of magnetotail field lines for which the 3rd adiabatic invariant is violated. The radial extent of this violation is seen to be sharply delineated to a region outside of 5Re, although this distance is determined by the Hilmer-Voigt magnetic field model used by the RBTM. The RBTM appears to provide an excellent platform on which to build parameterized refinements to compensate for unknown acceleration processes inside 5Re where adiabaticity is seen to hold. Moreover, built within the framework of the MSFM, it offers the prospect of an operational forecast model for MeV electrons.

  3. Casing pull tests for directionally drilled environmental wells

    International Nuclear Information System (INIS)

    Staller, G.E.; Wemple, R.P.; Layne, R.R.

    1994-11-01

A series of tests to evaluate several types of environmental well casings have been conducted by Sandia National Laboratories (SNL) and its industrial partner, The Charles Machine Works, Inc. (CMW). A test bed was constructed at the CMW test range to model a typical shallow, horizontal, directionally drilled wellbore. Four different types of casings were pulled through this test bed. The loads required to pull the casings through the test bed and the condition of the casing material were documented during the pulling operations. An additional test was conducted to make a comparison of test bed vs actual wellbore casing pull loads. A directionally drilled well was emplaced by CMW to closely match the test bed. An instrumented casing was installed in the well and the pull loads recorded. The completed tests are reviewed and the results reported.

  4. Casing pull tests for directionally drilled environmental wells

    Energy Technology Data Exchange (ETDEWEB)

    Staller, G.E.; Wemple, R.P. [Sandia National Labs., Albuquerque, NM (United States); Layne, R.R. [Charles Machine Works, Inc., Perry, OK (United States)

    1994-11-01

A series of tests to evaluate several types of environmental well casings have been conducted by Sandia National Laboratories (SNL) and its industrial partner, The Charles Machine Works, Inc. (CMW). A test bed was constructed at the CMW test range to model a typical shallow, horizontal, directionally drilled wellbore. Four different types of casings were pulled through this test bed. The loads required to pull the casings through the test bed and the condition of the casing material were documented during the pulling operations. An additional test was conducted to make a comparison of test bed vs actual wellbore casing pull loads. A directionally drilled well was emplaced by CMW to closely match the test bed. An instrumented casing was installed in the well and the pull loads recorded. The completed tests are reviewed and the results reported.

  5. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high and intermediate level nuclear waste are close to realization. A number of studies demonstrates the potential barrier function of the geosphere, but also that there are many unresolved issues. A key to these problems are the possibilities to gain knowledge by model testing with experiments and to increase confidence in models used for prediction. The sessions cover conclusions from the INTRAVAL-project, experiences from integrated experimental programs and underground research laboratories as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media is addressed. (J.S.)

  6. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    Science.gov (United States)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
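As a concrete illustration of the similarity-index approach described in this abstract, the sketch below scores candidate links with the common-neighbours index, one of the standard similarity indices, on a small hypothetical graph (not the paper's WS/BA ensembles):

```python
from itertools import combinations

def common_neighbors_scores(adj):
    """Score every non-adjacent node pair by the number of shared neighbours.
    adj: dict mapping node -> set of neighbours (undirected graph).
    Higher scores suggest more likely missing links."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:                      # only candidate (missing) links
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Toy graph: the unobserved pair (a, d) shares two neighbours, so the
# common-neighbours index ranks it as the most likely missing link.
adj = {
    "a": {"b", "c", "e"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
    "e": {"a"},
}
scores = common_neighbors_scores(adj)
best = max(scores, key=scores.get)
```

Precision and AUC, the accuracy metrics used in the paper, are then computed by checking how highly the truly removed links rank in such a score list.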

  7. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global

  8. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in

  9. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed

  10. A Combined Hydrological and Hydraulic Model for Flood Prediction in Vietnam Applied to the Huong River Basin as a Test Case Study

    Directory of Open Access Journals (Sweden)

    Dang Thanh Mai

    2017-11-01

A combined hydrological and hydraulic model is presented for flood prediction in Vietnam. This model is applied to the Huong river basin as a test case study. Observed flood flows and water surface levels of the 2002–2005 flood seasons are used for model calibration, and those of the 2006–2007 flood seasons are used for validation of the model. The physically based distributed hydrologic model WetSpa is used for predicting the generation and propagation of flood flows in the mountainous upper sub-basins, and proves to predict flood flows accurately. The Hydrologic Engineering Center River Analysis System (HEC-RAS) hydraulic model is applied to simulate flood flows and inundation levels in the downstream floodplain, and also proves to predict water levels accurately. The predicted water profiles are used for mapping of inundations in the floodplain. The model may be useful in developing flood forecasting and early warning systems to mitigate losses due to flooding in Vietnam.

  11. Collaborative testing of turbulence models

    Science.gov (United States)

    Bradshaw, P.

    1992-12-01

    This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from lab. experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters')--over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has appeared, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  12. Considerations for test design to accommodate energy-budget models in ecotoxicology: a case study for acetone in the pond snail lymnaea stagnalis.

    NARCIS (Netherlands)

    Barsi, A.; Jager, T.; Collinet, M.; Lagadic, L.; Ducrot, V.

    2014-01-01

    Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information

  13. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

... observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM model, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern...

  14. Greenhouse gas network design using backward Lagrangian particle dispersion modelling - Part 1: Methodology and Australian test case

    Science.gov (United States)

    Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.

    2014-09-01

    This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes, comprising contributions from the biosphere and fossil fuel combustion, and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly timescale. Prior uncertainties are derived on a weekly timescale for biosphere fluxes and fossil fuel emissions from high-resolution model runs using the Community Atmosphere Biosphere Land Exchange (CABLE) model and the Fossil Fuel Data Assimilation System (FFDAS) respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground-based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimisation scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50%, we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.

  15. Greenhouse gas network design using backward Lagrangian particle dispersion modelling − Part 1: Methodology and Australian test case

    Directory of Open Access Journals (Sweden)

    T. Ziehn

    2014-09-01

This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes, comprising contributions from the biosphere and fossil fuel combustion, and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly timescale. Prior uncertainties are derived on a weekly timescale for biosphere fluxes and fossil fuel emissions from high-resolution model runs using the Community Atmosphere Biosphere Land Exchange (CABLE) model and the Fossil Fuel Data Assimilation System (FFDAS) respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground-based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimisation scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50%, we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
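The uncertainty reductions quoted in these two records come from a Bayesian inversion in which the posterior flux error covariance is A = (B⁻¹ + HᵀR⁻¹H)⁻¹, with B the prior flux covariance, R the observation error covariance, and H the sensitivity (Jacobian) of observations to fluxes. A minimal numerical sketch with made-up matrices (illustrative numbers only, not the paper's LPDM-derived sensitivities):

```python
import numpy as np

# Toy network-design calculation: how much does a set of observation
# sites reduce prior flux uncertainty? All matrices are hypothetical.
B = np.diag([1.0, 4.0, 2.25])          # prior flux error covariance (3 regions)
H = np.array([[0.8, 0.1, 0.0],         # sensitivity of each observation (rows)
              [0.0, 0.5, 0.3]])        # to each flux region (columns)
R = np.diag([0.1, 0.1])                # observation error covariance (2 sites)

# Posterior covariance: A = (B^-1 + H^T R^-1 H)^-1
A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

# Fractional uncertainty reduction per region: 1 - posterior/prior std. dev.
reduction = 1.0 - np.sqrt(np.diag(A)) / np.sqrt(np.diag(B))
```

Incremental network optimisation, as in the paper, repeats this calculation while adding one candidate row to H at a time and keeping the site that maximises the reduction.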

  16. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  17. Testing Cases under Title VII.

    Science.gov (United States)

    Rothschild, Michael; Werden, Gregory J.

    This paper discusses Congressional and judicial attempts to deal with the problem of employment practices which lead to discriminatory outcomes but which may not be discriminatory in intent. The use of paper and pencil tests as standards for hiring and promotion is focused on as an example of this type of employment practice. An historical account…

  18. Usability Testing: A Case Study.

    Science.gov (United States)

    Chisman, Janet; Walbridge, Sharon; Diller, Karen

    1999-01-01

    Discusses the development and results of usability testing of Washington State University's Web-based OPAC (online public access catalog); examines how easily users could navigate the catalog and whether they understood what they were seeing; and identifies problems and what action if any was taken. (LRW)

  19. Tools for Test Case Generation

    NARCIS (Netherlands)

    Belinfante, Axel; Frantzen, Lars; Schallhart, Christian; Broy, Manfred; Jonsson, Bengt; Katoen, Joost P.; Leucker, Martin; Pretschner, Alexander

    2005-01-01

    The preceding parts of this book have mainly dealt with test theory, aimed at improving the practical techniques which are applied by testers to enhance the quality of soft- and hardware systems. Only if these academic results can be efficiently and successfully transferred back to practice, they

  20. Test case for a near-surface repository

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M.; Jones, C. [Kemakta Konsult AB, Stockholm (Sweden); Nilsson, L.B. [Swedish Nuclear Fuel and Waste Co, Stockholm (Sweden); Skagius, K.; Wiborgh, M. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-09-01

A test case is presented for assessment of a near-surface disposal facility for radioactive waste. The case includes waste characterization and repository design, requirements and constraints in an assessment context, scenario development, model description and test calculations. 6 refs, 12 tabs, 16 figs

  1. Computing the Absorption and Emission Spectra of 5-Methylcytidine in Different Solvents: A Test-Case for Different Solvation Models.

    Science.gov (United States)

    Martínez-Fernández, L; Pepino, A J; Segarra-Martí, J; Banyasz, A; Garavelli, M; Improta, R

    2016-09-13

The optical spectra of 5-methylcytidine in three different solvents (tetrahydrofuran, acetonitrile, and water) are measured, showing that both the absorption and the emission maximum in water are significantly blue-shifted (0.08 eV). The absorption spectra are simulated based on CAM-B3LYP/TD-DFT calculations but including solvent effects with three different approaches: (i) a hybrid implicit/explicit full quantum mechanical approach, (ii) a mixed QM/MM static approach, and (iii) a QM/MM method exploiting the structures issuing from classical molecular dynamics simulations. Ab initio molecular dynamics simulations based on CAM-B3LYP functionals have also been performed. The adopted approaches all reproduce the main features of the experimental spectra, giving insight into the chemical-physical effects responsible for the solvent shifts in the spectra of 5-methylcytidine and providing the basis for discussing advantages and limitations of the adopted solvation models.

  2. Test Report for MSFC Test No. 83-2: Pressure scaled water impact test of a 12.5 inch diameter model of the Space Shuttle solid rocket booster filament wound case and external TVC PCD

    Science.gov (United States)

    1983-01-01

Water impact tests using a 12.5 inch diameter model representing an 8.56 percent scale of the Space Shuttle Solid Rocket Booster configuration were conducted. The two primary objectives of this SRB scale model water impact test program were: 1. Obtain cavity collapse applied pressure distributions for the 8.56 percent rigid body scale model FWC pressure magnitudes as a function of full-scale initial impact conditions at vertical velocities from 65 to 85 ft/sec, horizontal velocities from 0 to 45 ft/sec, and angles from -10 to +10 degrees. 2. Obtain rigid body applied pressures on the TVC pod and aft skirt internal stiffener rings at initial impact and cavity collapse loading events. In addition, nozzle loads were measured. Full scale vertical velocities of 65 to 85 ft/sec, horizontal velocities of 0 to 45 ft/sec, and impact angles from -10 to +10 degrees were simulated.

  3. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Directory of Open Access Journals (Sweden)

    S. Ars

    2017-12-01

This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping

  4. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Science.gov (United States)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances
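Both records above benchmark the inversion against the classical tracer release technique, in which a co-located tracer released at a known rate yields the unknown emission from the ratio of measured concentrations. A toy sketch using a textbook Gaussian plume forward model and hypothetical numbers (not the paper's experimental configuration):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration at crosswind offset y
    and height z, for a point source of strength Q at effective height H,
    wind speed u, and dispersion widths sigma_y, sigma_z evaluated at the
    receptor's downwind distance. Illustrative textbook form."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Tracer-release estimate: a co-located tracer with known release rate
# Q_tracer gives the unknown methane rate from the concentration ratio.
Q_tracer = 0.5                                               # g/s, acetylene
c_ch4 = gaussian_plume(2.0, 3.0, 5.0, 2.0, 1.0, 8.0, 4.0)    # "measured" CH4
c_trc = gaussian_plume(Q_tracer, 3.0, 5.0, 2.0, 1.0, 8.0, 4.0)
Q_ch4_est = Q_tracer * c_ch4 / c_trc
```

With perfect co-location the ratio recovers the true rate exactly; the paper's statistical inversion is designed precisely to relax this co-location requirement by modelling the transport explicitly.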

  5. Measuring Test Case Similarity to Support Test Suite Understanding

    NARCIS (Netherlands)

    Greiler, M.S.; Van Deursen, A.; Zaidman, A.E.

    2012-01-01

    Preprint of paper published in: TOOLS 2012 - Proceedings of the 50th International Conference, Prague, Czech Republic, May 29-31, 2012; doi:10.1007/978-3-642-30561-0_8 In order to support test suite understanding, we investigate whether we can automatically derive relations between test cases. In

  6. Test case preparation using a prototype

    OpenAIRE

    Treharne, Helen; Draper, J.; Schneider, Steve A.

    1998-01-01

    This paper reports on the preparation of test cases using a prototype within the context of a formal development. It describes an approach to building a prototype using an example. It discusses how a prototype contributes to the testing activity as part of a lifecycle based on the use of formal methods. The results of applying the approach to an embedded avionics case study are also presented.

  7. Making System Dynamics Cool IV : Teaching & Testing with Cases & Quizzes

    NARCIS (Netherlands)

    Pruyt, E.

    2012-01-01

This follow-up paper presents cases and multiple choice questions for teaching and testing System Dynamics modeling. These cases and multiple choice questions were developed and used between January 2012 and April 2012 in a large System Dynamics course (250+ 2nd year BSc and 40+ MSc students per year)

  8. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

The main theme of this project was to design a test automation framework for automating web related test cases. Automating test cases designed for testing a web interface provides a means of improving a software development process by shortening the testing phase in the software development life cycle. In this project an existing AutoTester framework and iMacros test automation tools were used. CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...

  9. Test case prioritization using Cuscuta search

    Directory of Open Access Journals (Sweden)

    Mukesh Mann

    2014-12-01

Most companies are under heavy time and resource constraints when it comes to testing a software system. Test prioritization techniques allow the most useful tests to be executed first, exposing faults earlier in the testing process. This makes software testing more efficient and cost effective by covering maximum faults in minimum time. But test case prioritization is not an easy and straightforward process, and it requires huge effort and time. A number of approaches are available, each with proclaimed advantages and limitations, but the suitability of any one of them is subject dependent. In this paper, an artificial Cuscuta search algorithm (CSA) inspired by real Cuscuta parasitism is used to solve the time-constrained prioritization problem. We have applied CSA for prioritizing test cases in an order of maximum fault coverage with minimum test suite execution and compared its effectiveness with different prioritization orderings. Taking into account the experimental results, we conclude that (i) the average percentage of faults detected (APFD) is 82.5% using our proposed CSA ordering, which is equal to the APFD of the optimal and ant colony based orderings, whereas no ordering, random ordering and reverse ordering have APFD values of 76.25%, 75% and 68.75%, respectively.
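The APFD metric quoted in this abstract has a standard closed form: for n tests, m faults, and TFi the position of the first test revealing fault i, APFD = 1 − (ΣTFi)/(n·m) + 1/(2n). A minimal sketch with hypothetical fault data (not the paper's test suites):

```python
def apfd(order, faults_detected_by):
    """Average Percentage of Faults Detected for a test execution order.

    order: list of test ids in execution order
    faults_detected_by: dict mapping test id -> set of fault ids it exposes
    """
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(order), len(all_faults)
    # 1-based position of the first test that exposes each fault
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected_by[test]:
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# Hypothetical 5-test, 4-fault suite: putting fault-exposing tests first
# yields a higher APFD than a poor ordering.
faults = {"t1": {1, 2}, "t2": {3}, "t3": set(), "t4": {4}, "t5": {2, 3}}
good = apfd(["t1", "t5", "t4", "t2", "t3"], faults)   # 0.75
poor = apfd(["t3", "t2", "t4", "t5", "t1"], faults)   # 0.40
```

Prioritization heuristics such as the CSA described above search the space of orderings for one that maximises this score.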

  10. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

To benefit maximally from model-based systems engineering (MBSE), trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases ... suggest that our method provides a sound foundation for rapid development of high quality system models.

  11. Modelling the pile load test

    Directory of Open Access Journals (Sweden)

    Prekop Ľubomír

    2017-01-01

This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed with steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were obtained from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from the experiment.

  12. Modelling the pile load test

    OpenAIRE

    Prekop Ľubomír

    2017-01-01

This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed with steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were obtained from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from exper...

  13. Semiclassical modelling of finite-pulse effects on non-adiabatic photodynamics via initial condition filtering: The predissociation of NaI as a test case

    Energy Technology Data Exchange (ETDEWEB)

    Martínez-Mesa, Aliezer [Departmento de Física Teórica, Universidad de la Habana, San Lázaro y L, La Habana 10400 (Cuba); Institut für Chemie, Universität Potsdam, Karl-Liebknecht-Strasse 24-25, D-14476 Potsdam-Golm (Germany); Saalfrank, Peter [Institut für Chemie, Universität Potsdam, Karl-Liebknecht-Strasse 24-25, D-14476 Potsdam-Golm (Germany)

    2015-05-21

Femtosecond-laser pulse driven non-adiabatic spectroscopy and dynamics in molecular and condensed phase systems continue to be a challenge for theoretical modelling. One of the main obstacles is the “curse of dimensionality” encountered in non-adiabatic, exact wavepacket propagation. A possible route towards treating complex molecular systems is via semiclassical surface-hopping schemes, in particular if they account not only for non-adiabatic post-excitation dynamics but also for the initial optical excitation. One such approach, based on initial condition filtering, will be put forward in what follows. As a simple test case which can be compared with exact wavepacket dynamics, we investigate the influence of the different parameters determining the shape of a laser pulse (e.g., its finite width and a possible chirp) on the predissociation dynamics of a NaI molecule, upon photoexcitation of the A(0+) state. The finite-pulse effects are mapped into the initial conditions for semiclassical surface-hopping simulations. The simulated surface-hopping diabatic populations are in qualitative agreement with the quantum mechanical results, especially concerning the subpicosecond photoinduced dynamics, the main deviations being the relative delay of the non-adiabatic transitions in the semiclassical picture. Likewise, these differences in the time-dependent electronic populations calculated via the semiclassical and the quantum methods are found to have a mild influence on the overall probability density distribution. As a result, the branching ratios between the bound and the dissociative reaction channels and the time-evolution of the molecular wavepacket predicted by the semiclassical method agree with those computed using quantum wavepacket propagation. Implications for more challenging molecular systems are given.

  14. Time-Optimal Real-Time Test Case Generation using UPPAAL

    DEFF Research Database (Denmark)

    Hessel, Anders; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    Testing is the primary software validation technique used by industry today, but remains ad hoc, error prone, and very expensive. A promising improvement is to automatically generate test cases from formal models of the system under test. We demonstrate how to automatically generate real-time conformance test cases from timed automata specifications. Specifically, we demonstrate how to efficiently generate real-time test cases with optimal execution time, i.e. test cases that are the fastest possible to execute. Our technique allows time-optimal test cases to be generated using manually formulated test purposes or generated automatically from various coverage criteria of the model.
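
    As an illustration of the time-optimality idea (UPPAAL itself works on priced timed automata, not plain graphs), the search can be sketched over a discrete transition graph whose edge weights are action durations, using Dijkstra's algorithm to find the fastest trace reaching a test purpose. All states, actions and durations below are invented for the example:

```python
import heapq

def fastest_trace(transitions, start, goal):
    """Dijkstra over a discrete transition graph whose edge weights are
    action durations: returns (time, trace) for the time-optimal test
    case reaching `goal`, or None if `goal` is unreachable."""
    # transitions: {state: [(action, duration, next_state), ...]}
    pq = [(0, start, [])]
    best = {}
    while pq:
        t, s, trace = heapq.heappop(pq)
        if s == goal:
            return t, trace
        if s in best and best[s] <= t:
            continue
        best[s] = t
        for action, d, nxt in transitions.get(s, []):
            heapq.heappush(pq, (t + d, nxt, trace + [action]))
    return None

# Toy model: reach the "done" state as fast as possible.
model = {
    "idle":    [("insert_coin", 1, "paid")],
    "paid":    [("press_espresso", 2, "brewing"), ("press_latte", 5, "brewing")],
    "brewing": [("take_cup", 1, "done")],
}
print(fastest_trace(model, "idle", "done"))
# → (4, ['insert_coin', 'press_espresso', 'take_cup'])
```

    The slower alternative (press_latte, total time 7) is never reported: the priority queue always expands the cheapest partial trace first, which is exactly the time-optimality criterion of the paper, transplanted to an untimed graph.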

  15. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines Math- Works and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  16. Testing the Technology Acceptance Model: HIV case managers' intention to use a continuity of care record with context-specific links.

    Science.gov (United States)

    Schnall, Rebecca; Bakken, Suzanne

    2011-09-01

    To assess the applicability of the Technology Acceptance Model (TAM) constructs in explaining HIV case managers' behavioural intention to use a continuity of care record (CCR) with context-specific links designed to meet their information needs. Data were collected from 94 case managers who provide care to persons living with HIV (PLWH) using an online survey comprising three components: (1) demographic information: age, gender, ethnicity, race, Internet usage and computer experience; (2) mock-up of CCR with context-specific links; and (3) items related to TAM constructs. Data analysis included: principal components factor analysis (PCA), assessment of internal consistency reliability and univariate and multivariate analysis. PCA extracted three factors (Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use), explained variance = 84.9%, Cronbach's α = 0.69-0.91. In a linear regression model, Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use explained 43.6% of the variance in behavioural intention.
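
    The internal-consistency reliability reported above can be illustrated with a minimal Cronbach's alpha computation; the 5-point responses below are invented, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of columns: one list of respondent scores per scale item."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_vars / var(totals))

# Hypothetical responses from four case managers to a 3-item scale
ease_of_use = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 3, 4]]
print(round(cronbach_alpha(ease_of_use), 2))  # → 0.81
```

    A value in the 0.69-0.91 band, as reported for the three extracted factors, is conventionally read as acceptable-to-excellent reliability.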

  17. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  18. Couplex1 test case nuclear - Waste disposal far field simulation

    International Nuclear Information System (INIS)

    2001-01-01

    This first COUPLEX test case consists of computing a simplified Far Field model used in nuclear waste management simulation. From the mathematical point of view the problem is of convection-diffusion type, but the parameters vary strongly from one layer to another. Another particularity is the very concentrated nature of the source, both in space and in time. (author)

  19. Gifted and Talented Education: A National Test Case in Peoria.

    Science.gov (United States)

    Fetterman, David M.

    1986-01-01

    This article presents a study of a program in Peoria, Illinois, for the gifted and talented that serves as a national test case for gifted education and minority enrollment. It was concluded that referral, identification, and selection were appropriate for the program model but that inequalities resulted from socioeconomic variables. (Author/LMO)

  20. Turbine-missile casing exit tests

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Sliter, G.E.

    1978-01-01

    Nuclear power plant designers are required to provide safety-related components with adequate protection against hypothetical turbine-missile impacts. In plants with a ''peninsula'' arrangement, protection is provided by installing the turbine axis radially from the reactor building, so that potential missile trajectories are not in line with the plant. In plants with a ''non-peninsula'' arrangement (turbine axis perpendicular to a radius), designers rely on the low probability of a missile strike and on the protection provided by reinforced concrete walls in order to demonstrate an adequate level of protection (USNRC Regulatory Guide 1.115). One of the critical first steps in demonstrating adequacy is the determination of the energy and spin of the turbine segments as they exit the turbine casing. The spin increases the probability that a subsequent impact with a protective barrier will be off-normal and therefore less severe than the normal impact assumed in plant designs. Two full-scale turbine-missile casing exit tests, which were conducted by Sandia Laboratories at their rocket-sled facility in Albuquerque, New Mexico, are described. Because of wide variations in turbine design details, postulated failure conditions, and missile exit scenarios, the conditions for the two tests were carefully selected to be as prototypical as possible, while still maintaining the well-controlled and well-characterized test conditions needed for generating benchmark data.

  1. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built out of two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used together with the already tested LCT coils producing a background field; and a solenoid arrangement, where in addition to the two TF model coils another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of the LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and the time schedules broaden the base for a decision about the realisation of such a facility. (orig.) [de

  2. Prioritizing Test Cases for Memory Leaks in Android Applications

    Institute of Scientific and Technical Information of China (English)

    Ju Qian; Di Zhou

    2016-01-01

    Mobile applications usually can only access a limited amount of memory. Improper use of the memory can cause memory leaks, which may lead to performance slowdowns or even cause applications to be unexpectedly killed. Although a large body of research has been devoted to memory leak diagnosing techniques after leaks have been discovered, it is still challenging to detect memory leak phenomena in the first place. Testing is the most widely used technique for failure discovery. However, traditional testing techniques are not directed towards the discovery of memory leaks. They may spend a lot of time testing executions that are unlikely to leak and can therefore be inefficient. To address the problem, we propose a novel approach to prioritize test cases according to their likelihood to cause memory leaks in a given test suite. It first builds a prediction model to determine whether each test can potentially lead to memory leaks, based on machine learning over selected code features. Then, for each input test case, we partly run it to obtain its code features and predict its likelihood to cause leaks. The most suspicious test cases are suggested to be run first in order to reveal memory leak faults as soon as possible. Experimental evaluation on several Android applications shows that our approach is effective.
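
    The prioritization step reduces to scoring each test case with the predictor and sorting in descending order of suspicion. A minimal sketch, in which the hand-picked feature names and weights are invented stand-ins for the paper's learned model:

```python
def leak_score(features):
    """Hypothetical leak predictor: weight code features plausibly correlated
    with Android memory leaks (bitmap loads, listener registrations, retained
    context references). A real model would be trained, not hand-weighted."""
    weights = {"bitmap_loads": 3.0, "listener_regs": 2.0, "context_refs": 1.5, "loc": 0.01}
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def prioritize(test_suite):
    """Order test cases by predicted likelihood of exposing a memory leak."""
    return sorted(test_suite, key=lambda t: leak_score(t["features"]), reverse=True)

suite = [
    {"name": "test_settings", "features": {"bitmap_loads": 0, "listener_regs": 0, "loc": 120}},
    {"name": "test_gallery",  "features": {"bitmap_loads": 8, "listener_regs": 2, "loc": 300}},
    {"name": "test_rotation", "features": {"bitmap_loads": 1, "listener_regs": 5, "context_refs": 4}},
]
print([t["name"] for t in prioritize(suite)])
# → ['test_gallery', 'test_rotation', 'test_settings']
```

    The bitmap-heavy gallery test runs first, so a leak there would be revealed with minimal testing effort.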

  3. Test cases for interface tracking methods: methodology and current status

    International Nuclear Information System (INIS)

    Lebaigue, O.; Jamet, D.; Lemonnier, E.

    2004-01-01

    Full text of publication follows: In the past decade, a large number of new methods have been developed to deal with interfaces in the numerical simulation of two-phase flows. We have collected a set of 36 test cases, which can be seen as a tool to help engineers and researchers select the most appropriate method(s) for their specific fields of application. This set can be used: - To perform an initial evaluation of the capabilities of available methods with regard to the specificity of the final application and the most important features to be recovered from the simulation. - To measure the maximum mesh size to be used for a given physical problem in order to obtain an accurate enough solution. - To assess and quantify the performances of a selected method equipped with its set of physical models. The computation of a well-documented test case allows estimating the error due to the numerical technique by comparison with reference solutions. This process is compulsory to gain confidence and credibility on the prediction capabilities of a numerical method and its physical models. - To broaden the capabilities of a given numerical technique. The test cases may be used to identify the need for improvement of the overall numerical scheme or to determine the physical part of the model which is responsible for the observed limitations. Each test case falls within one of the following categories: - Analytical solutions of well-known sets of equations corresponding to simple geometrical situations. - Reference numerical solutions of moderately complex problems, produced by accurate methods (e.g., boundary-fitted coordinate methods) on refined meshes. - Separate-effects analytical experiments. The presentation will suggest how to use the test cases for assessing the physical models and the numerical methods.
The expected fallout of using test cases is indeed on the one hand to identify the merits of existing methods and on the other hand to orient further research towards
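
    The error-estimation step described above, comparing a candidate method against an analytical reference solution, can be sketched as follows; the advected-interface test case, its parameters, and the constant-offset "scheme" are illustrative assumptions, not one of the 36 test cases:

```python
import math

def l2_error(numerical, exact, dx):
    """Discrete L2 norm of the difference between a computed field and an
    analytical reference solution on a uniform grid of spacing dx."""
    return math.sqrt(sum((n - e) ** 2 for n, e in zip(numerical, exact)) * dx)

# Hypothetical test case: interface advected at constant speed, x(t) = x0 + u*t
u, x0, dx = 1.0, 0.2, 0.1
times = [i * dx for i in range(5)]
exact = [x0 + u * t for t in times]
numerical = [x + 0.01 for x in exact]  # a fictitious scheme with a small constant bias
print(round(l2_error(numerical, exact, dx), 4))  # → 0.0071
```

    Repeating this measurement on successively refined meshes is how the "maximum mesh size for an accurate enough solution" mentioned in the abstract would be determined in practice.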

  4. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 2. Special test cases

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-08-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. Volume 1, titled ''Guideline Approach,'' consists of Chapters 1 through 5 and a glossary. Chapters 2 through 5 provide the more detailed discussions about the code selection approach. This volume, Volume 2, consists of four appendices reporting on the technical evaluation test cases designed to help verify the accuracy of ground-water transport codes. 20 refs

  5. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing ''Methods for Testing Transport Models.'' The work described includes (1) choice of best methods for producing ''code emulators'' for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases

  6. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
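
    A model validation test of the kind described might look like the following sketch; the simulator stub and the experimental range are illustrative placeholders, not OpenWorm's actual code or values:

```python
def simulate_membrane_potential():
    """Stand-in for a model run; a real project would invoke its simulator here."""
    return -64.8  # mV

def test_resting_potential_within_experimental_range():
    """Model validation test: the simulated resting potential must fall inside
    an experimentally reported range (illustrative bounds)."""
    observed_low, observed_high = -75.0, -60.0  # mV
    v = simulate_membrane_potential()
    assert observed_low <= v <= observed_high

test_resting_potential_within_experimental_range()
print("model validation test passed")
```

    Unlike an ordinary unit test, which checks that the code does what the programmer intended, this test checks that the model agrees with the biology, so it can fail either because of a software bug or because the scientific model itself is wrong.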

  7. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing the dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriately demanding suite of tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  8. Pile Model Tests Using Strain Gauge Technology

    Science.gov (United States)

    Krasiński, Adam; Kusio, Tomasz

    2015-09-01

    Ordinary pile bearing capacity tests are usually carried out to determine the relationship between load and displacement of the pile head. The measurement system required in such tests consists of a force transducer and three or four displacement gauges. The whole system is installed at the pile head above the ground level. This approach, however, does not give us complete information about the pile-soil interaction. We can only determine the total bearing capacity of the pile, without the knowledge of its distribution into the shaft and base resistances. Much more information can be obtained by carrying out a test of an instrumented pile equipped with a system for measuring the distribution of axial force along its core. In the case of pile model tests the use of such measurement is difficult due to the small scale of the model. To find a suitable solution for axial force measurement, which could be applied to small scale model piles, we had to take into account the following requirements: - a linear and stable relationship between measured and physical values, - a force measurement accuracy of about 0.1 kN, - a range of measured forces up to 30 kN, - resistance of measuring gauges against aggressive counteraction of concrete mortar and against moisture, - insensitivity to pile bending, - economic factors. These requirements can be fulfilled by strain gauge sensors if an appropriate methodology is used for test preparation (Hoffmann [1]). In this paper, we focus on some aspects of the application of strain gauge sensors for model pile tests. The efficiency of the method is demonstrated on examples of static load tests carried out on SDP model piles acting as single piles and in a group.
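
    The first requirement, a linear and stable relationship between measured and physical values, is what a gauge calibration establishes. A least-squares sketch with invented calibration data (the 30 kN range matches the requirement stated above, everything else is assumed):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a*x + b (pure Python, no numpy)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical calibration: gauge output (mV/V) vs applied axial force (kN)
reading = [0.0, 0.5, 1.0, 1.5, 2.0]
force   = [0.0, 7.5, 15.0, 22.5, 30.0]
a, b = linear_fit(reading, force)
print(round(a, 3), round(b, 3))  # → 15.0 0.0
```

    Once the slope and intercept are fixed, any gauge reading during a static load test converts directly to axial force at that level of the pile core, which is what allows the shaft and base resistances to be separated.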

  9. Testing the compounding structure of the CP-INARCH model

    OpenAIRE

    Weiß, Christian H.; Gonçalves, Esmeralda; Lopes, Nazaré Mendes

    2017-01-01

    A statistical test to distinguish between a Poisson INARCH model and a Compound Poisson INARCH model is proposed, based on the form of the probability generating function of the compounding distribution of the conditional law of the model. For first-order autoregression, the normality of the test statistics’ asymptotic distribution is established, either in the case where the model parameters are specified, or when such parameters are consistently estimated. As the test statistics’ law involv...

  10. Test model of WWER core

    International Nuclear Information System (INIS)

    Tikhomirov, A. V.; Gorokhov, A. K.

    2007-01-01

    The objective of this paper is the creation of a precision test model for WWER RP neutron-physics calculations. The model is considered as a tool for verification of deterministic computer codes that enables the conservatism of design calculations to be reduced and WWER RP competitiveness to be enhanced. Precision calculations were performed using the code MCNP5 /1/ (Monte Carlo method). The engineering computer package Sapfir_95&RC_VVER /2/, certified for design calculations of WWER RP neutron-physics characteristics, is used in the comparative analysis of the results. The object of simulation is the first fuel loading of the Volgodonsk NPP RP. Peculiarities of the transition from 2D to 3D geometry in calculations using MCNP5 are shown on the full-scale model. All core components, as well as the radial and face reflectors and the control and protection system control rods, are represented in detailed description according to the design. The first stage of application of the model is the assessment of the accuracy of calculation of the core power. At the second stage the control and protection system control rod worth was assessed. Full-scale RP representation in calculations using the code MCNP5 is time-consuming, which calls for parallelization of the computational problem on a multiprocessor computer (Authors)

  11. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of Dolos breakwaters without superstructures (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with superstructure, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92-... The focus was on the Dolos breakwater with a high superstructure, where there was almost no overtopping; this case is believed to be the most dangerous one. A test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment was to investigate the influence...

  12. a Comparison among Different Optimization Levels in 3d Multi-Sensor Models. a Test Case in Emergency Context: 2016 Italian Earthquake

    Science.gov (United States)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2017-02-01

    In sudden emergency contexts that affect urban centres and built heritage, the latest Geomatics solutions must meet the demands of damage documentation, risk assessment, management and data sharing as efficiently as possible, in relation to the danger conditions, to the accessibility constraints of the areas and to tight deadlines. In recent times, Unmanned Aerial Vehicle (UAV) systems equipped with cameras have become increasingly involved in aerial survey and reconnaissance missions, and they are proving very cost-effective for 3D documentation and preliminary damage assessment. UAV platforms with low-cost sensors should, in the future, become suitable for every documentation scenario, but above all for frameworks of damage and uncertainty. Rapid acquisition and low-cost sensors are attractive features, although they may come at the cost of more time-consuming processing. The paper analyzes and tries to classify the information content of 3D aerial and terrestrial models and the importance of the metric and non-metric information that can be extracted from them for further uses, such as structural analysis. The test area comes from an experience of Team Direct from Politecnico di Torino in central Italy, where a strong earthquake occurred in August 2016. This study is carried out on a stand-alone damaged building in Pescara del Tronto (AP), with a multi-sensor 3D survey. The aim is to evaluate the contribution of quick terrestrial and aerial documentation by a SLAM-based LiDAR and a camera-equipped multirotor UAV, for a first reconnaissance inspection and modelling, in terms of level of detail and of metric and non-metric information.

  13. Model test of boson mappings

    International Nuclear Information System (INIS)

    Navratil, P.; Dobes, J.

    1992-01-01

    Methods of boson mapping are tested in calculations for a simple model system of four protons and four neutrons in single-j distinguishable orbits. Two-body terms in the boson images of the fermion operators are considered. Effects of the seniority v=4 states are thus included. The treatment of unphysical states and the influence of boson space truncation are particularly studied. Both the Dyson boson mapping and the seniority boson mapping as dictated by the similarity transformed Dyson mapping do not seem to be simply amenable to truncation. This situation improves when the one-body form of the seniority image of the quadrupole operator is employed. Truncation of the boson space is addressed by using the effective operator theory with a notable improvement of results

  14. Large block migration experiments: INTRAVAL phase 1, Test Case 9

    Energy Technology Data Exchange (ETDEWEB)

    Gureghian, A.B.; Noronha, C.J. (Battelle, Willowbrook, IL (USA). Office of Waste Technology Development); Vandergraaf, T.T. (Atomic Energy of Canada Ltd., Ottawa, ON (Canada))

    1990-08-01

    The development of INTRAVAL Test Case 9, as presented in this report, was made possible by a past subsidiary agreement to the bilateral cooperative agreement between the US Department of Energy (DOE) and Atomic Energy of Canada Limited (AECL) encompassing various aspects of nuclear waste disposal research. The experimental aspect of this test case, which included a series of laboratory experiments designed to quantify the migration of tracers in a single, natural fracture, was undertaken by AECL. The numerical simulation of the results of these experiments was performed by the Battelle Office of Waste Technology Development (OWTD) by calibrating an in-house analytical code, FRACFLO, which is capable of predicting radionuclide transport in an idealized fractured rock. Three tracer migration experiments were performed, using nonsorbing uranine dye for two of them and sorbing Cs-137 for the third. In addition, separate batch experiments were performed to determine the fracture surface and rock matrix sorption coefficients for Cs-137. The two uranine tracer migration experiments were used to calculate the average fracture aperture and to calibrate the model for the fracture dispersivity and matrix diffusion coefficient. The predictive capability of the model was then tested by simulating the third, Cs-137, tracer test without changing the parameter values determined from the other experiments. Breakthrough curves of both the experimental and numerical results obtained at the outlet face of the fracture are presented for each experiment. The reported spatial concentration profiles for the rock matrix are based solely on numerical predictions. 22 refs., 12 figs., 8 tabs.

  15. HYBRID DATA APPROACH FOR SELECTING EFFECTIVE TEST CASES DURING THE REGRESSION TESTING

    OpenAIRE

    Mohan, M.; Shrimali, Tarun

    2017-01-01

    In the software industry, software testing has become increasingly important across the entire software development life cycle. Software testing is one of the fundamental components of software quality assurance. The Software Testing Life Cycle (STLC) is a process involved in testing the complete software, which includes Regression Testing, Unit Testing, Smoke Testing, Integration Testing, Interface Testing, System Testing, etc. In the Regression testing phase of the STLC, test case selection is one of the most importan...

  16. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    Science.gov (United States)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
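
    The edge-coverage criterion mentioned above can be sketched over a flat state machine: for every transition, find a shortest event path from the initial state to its source, then append the transition itself. The tool's real input is a UML Statemachine and its output a device scripting language; the states and events here are invented:

```python
from collections import deque

def edge_coverage_suite(machine, initial):
    """Generate test sequences (event paths from `initial`) that together
    traverse every transition of a flat state machine.
    `machine` is a list of (source, event, target) transitions."""
    suite, covered = [], set()
    for (src, event, dst) in sorted(machine):
        if (src, event, dst) in covered:
            continue
        # BFS for the shortest transition path from `initial` to `src`
        queue, seen = deque([(initial, [])]), {initial}
        prefix = None
        while queue:
            state, path = queue.popleft()
            if state == src:
                prefix = path
                break
            for (s, e, d) in machine:
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [(s, e, d)]))
        if prefix is None:
            continue  # transition unreachable from the initial state
        test = prefix + [(src, event, dst)]
        covered.update(test)
        suite.append([e for (_, e, _) in test])
    return suite

ui = [("home", "open_menu", "menu"), ("menu", "select_dose", "dose"),
      ("dose", "confirm", "home"), ("menu", "back", "home")]
print(edge_coverage_suite(ui, "home"))
# → [['open_menu', 'select_dose', 'confirm'], ['open_menu', 'back']]
```

    Two scripts cover all four transitions, which mirrors the paper's observation that model-based generation can shrink the number of test scripts while raising coverage.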

  17. Predicate Argument Structure Analysis for Use Case Description Modeling

    Science.gov (United States)

    Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on the requirements documents such as use cases and consider how to create models from the use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them for a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources such as case frame dictionaries and design a less language-dependent model construction architecture. By using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of the existing use cases and generated test case steps automatically with the proposed prototype system from real-world use cases in the development of a system using a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through the manual improvement of the descriptions based on the feedback from the quality analysis system.
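
    The labeling-and-generation pipeline can be sketched as follows; the regular expression is a crude stand-in for real predicate-argument structure analysis, and the "Actor verbs object" sentence shape, the label names and the test-step template are all assumptions for illustration:

```python
import re

def label_step(step):
    """Assign actor / action / object labels to a simple 'Actor verbs object'
    use-case sentence via a shallow pattern (a toy substitute for full
    predicate-argument structure analysis)."""
    m = re.match(r"(?:The )?(\w+) (\w+)s (?:the |a )?(.+?)\.?$", step)
    if not m:
        return None
    actor, verb, obj = m.groups()
    return {"actor": actor, "action": verb, "object": obj}

def to_test_step(labels):
    """Turn the labeled structure into an executable-style test step."""
    return f"verify that {labels['actor'].lower()} can {labels['action']} {labels['object']}"

labels = label_step("The user submits the order form.")
print(labels["actor"], labels["action"])  # → user submit
print(to_test_step(labels))
```

    A production system would, as the abstract notes, add semantic constraints on top of the parse so that labels are assigned reliably without extending lexical resources such as case frame dictionaries.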

  18. Integration testing through reusing representative unit test cases for high-confidence medical software.

    Science.gov (United States)

    Shin, Youngsul; Choi, Yunja; Lee, Woo Jin

    2013-06-01

    As medical software becomes larger, more complex, and more connected with other devices, finding faults in integrated software modules gets more difficult and time-consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering the internal behavior of each software module. Though it can be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work anticipates that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software, and proposes a new integration testing method that reuses test cases from unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained from the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge of the internal behavior of each unit, and extracts test cases for integration testing from the test cases for unit testing for given test criteria. The extracted representative test cases are connected with the functions under test using the state domain, and a single test sequence covering the test cases is produced. By means of reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can have test coverage as high as the unit testing coverage, and its length is close to the length of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Automated Test Case Generation for an Autopilot Requirement Prototype

    Science.gov (United States)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
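The input-plus-oracle pattern described above can be illustrated with a toy sketch (not ADEPT's model; the mode logic and constants are invented). Real symbolic executors solve each path condition with a constraint solver; here each branch condition is "solved" by brute force over a small input domain, and the expected output on each path becomes the oracle.

```python
# Hedged sketch: symbolic-execution-style test generation for a toy
# autopilot-like mode switch. One (input, oracle) pair is produced per
# feasible program path.

def mode(altitude, captured):
    # Toy mode logic (illustrative only).
    if captured:
        return "HOLD"
    if altitude < 1000:
        return "CLIMB"
    return "CAPTURE"

# Path conditions mirror the branch structure; the expected outputs
# serve as test oracles.
paths = [
    (lambda a, c: c,                   "HOLD"),
    (lambda a, c: not c and a < 1000,  "CLIMB"),
    (lambda a, c: not c and a >= 1000, "CAPTURE"),
]

def generate_tests():
    """Return one (input, oracle) pair per feasible path."""
    tests = []
    for cond, expected in paths:
        found = None
        for a in (0, 500, 1000, 5000):       # brute-force "solver"
            for c in (False, True):
                if cond(a, c):
                    found = ((a, c), expected)
                    break
            if found:
                break
        if found:
            tests.append(found)
    return tests

tests = generate_tests()
```

Running the implementation on each generated input and comparing against its oracle is then the actual conformance check.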

  20. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.431 Section 154.431 Shipping COAST GUARD... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank...

  1. 46 CFR 154.449 - Model test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Model test. 154.449 Section 154.449 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR SELF... § 154.449 Model test. The following analyzed data of a model test of structural elements for independent...

  2. Relational Constraint Driven Test Case Synthesis for Web Applications

    Directory of Open Access Journals (Sweden)

    Xiang Fu

    2010-09-01

This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.

  3. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  4. A Dynamic Life-cycle Model for the Provisioning of Software Testing Services: Experiences from A Case Study in the Chinese ICT Sourcing Market

    OpenAIRE

    Lu, Yikun; Käkölä, Timo

    2011-01-01

    ICT-enabled international sourcing of software-intensive systems and services (eSourcing) is a powerful strategy for managing businesses more effectively. China is becoming a superpower for eSourcing service provisioning, but most Chinese providers are small or medium-sized and leverage the mediated eSourcing model, delivering services to foreign ICT clients that interface with end-clients onshore. This model restricts the providers to low-value projects. This paper probes eSourci...

  5. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads

  6. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  7. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  8. Predicting the oral pharmacokinetic profiles of multiple-unit (pellet) dosage forms using a modeling and simulation approach coupled with biorelevant dissolution testing: case example diclofenac sodium.

    Science.gov (United States)

    Kambayashi, Atsushi; Blume, Henning; Dressman, Jennifer B

    2014-07-01

The objective of this research was to characterize the dissolution profile of a poorly soluble drug, diclofenac, from a commercially available multiple-unit enteric coated dosage form, Diclo-Puren® capsules, and to develop a predictive model for its oral pharmacokinetic profile. The paddle method was used to obtain the dissolution profiles of this dosage form in biorelevant media, with the exposure to simulated gastric conditions being varied in order to simulate the gastric emptying behavior of pellets. A modified Noyes-Whitney theory was subsequently fitted to the dissolution data. A physiologically-based pharmacokinetic (PBPK) model for multiple-unit dosage forms was designed using STELLA® software and coupled with the biorelevant dissolution profiles in order to simulate the plasma concentration profiles of diclofenac from Diclo-Puren® capsules in both the fasted and fed state in humans. Gastric emptying kinetics relevant to multiple-unit pellets were incorporated into the PBPK model by setting up a virtual patient population to account for physiological variations in emptying kinetics. Using in vitro biorelevant dissolution coupled with in silico PBPK modeling and simulation it was possible to predict the plasma profile of this multiple-unit formulation of diclofenac after oral administration in both the fasted and fed state. This approach might be useful to predict variability in the plasma profiles for other drugs housed in multiple-unit dosage forms. Copyright © 2014 Elsevier B.V. All rights reserved.
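The Noyes-Whitney idea underlying the fitted dissolution model can be sketched numerically (a minimal forward-Euler sketch of the basic first-order law dC/dt = k(Cs - C); the rate constant and saturation concentration here are illustrative stand-ins, not the fitted diclofenac parameters):

```python
# Hedged sketch: forward-Euler integration of the basic Noyes-Whitney
# dissolution law dC/dt = k * (Cs - C). The modified form fitted in the
# study adds further terms; k and Cs below are invented values.

def dissolve(k, c_s, t_end, dt=0.01):
    """Return a concentration-time profile under dC/dt = k*(Cs - C)."""
    t, c, profile = 0.0, 0.0, [(0.0, 0.0)]
    while t < t_end:
        c += dt * k * (c_s - c)   # Euler step toward saturation Cs
        t += dt
        profile.append((round(t, 6), c))
    return profile

profile = dissolve(k=0.5, c_s=1.0, t_end=10.0)
final_c = profile[-1][1]
```

The profile approaches Cs exponentially, matching the closed-form solution C(t) = Cs·(1 - e^(-kt)); such a profile is what would be coupled to the PBPK absorption compartment.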

  9. Testing the generalized partial credit model

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a

  10. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.
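The general recipe described above (fit the conventional model without the random effect, then aggregate score contributions per group) can be illustrated with a toy example. This is not the paper's closed-form statistic: it fits an intercept-only uncensored Weibull AFT model by a stdlib grid-search MLE and forms an unnormalized sum of squared group score sums, purely as a sketch of the construction.

```python
# Hedged toy illustration (not the paper's statistic): null fit of an
# intercept-only Weibull AFT model, then per-group location scores.
import math
import random

random.seed(0)
# Five homogeneous groups of 30 uncensored Weibull(scale=1, shape=1.5) times.
groups = [[random.weibullvariate(1.0, 1.5) for _ in range(30)] for _ in range(5)]
times = [t for g in groups for t in g]

def negloglik(mu, sigma):
    # log T = mu + sigma * W, W standard (minimum) extreme-value.
    ll = 0.0
    for t in times:
        w = (math.log(t) - mu) / sigma
        ll += w - math.exp(w) - math.log(sigma)
    return -ll

# Crude grid-search MLE over (mu, log sigma) -- keeps the sketch stdlib-only.
best = (float("inf"), 0.0, 1.0)
for i in range(-50, 51):
    for j in range(-60, 21):
        mu, sigma = i * 0.02, math.exp(j * 0.02)
        val = negloglik(mu, sigma)
        if val < best[0]:
            best = (val, mu, sigma)
_, mu_hat, sigma_hat = best

def group_score(g):
    # Score of a group-level location shift, evaluated at the null fit;
    # under homogeneity these sums hover around zero.
    return sum(math.exp((math.log(t) - mu_hat) / sigma_hat) - 1.0
               for t in g) / sigma_hat

stat = sum(group_score(g) ** 2 for g in groups)
```

A real test would normalize `stat` by its estimated variance and compare against a reference distribution; the point here is only that nothing beyond the null-model fit is needed.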

  11. Conformance test development with the Java modeling language

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.

    2017-01-01

In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ......) profile specification. The Java Modeling Language (JML) is used to model conformance constraints for the profile. JML annotations define contracts for classes and interfaces. The annotations are translated by a tool into runtime assertion checks. Hereby the design and elaboration of the concrete test cases...

  12. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to develop immediately after the structure has been placed. It is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on sand bed was 1.6 times the cylinder diameter, 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees, 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present tests virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  13. Computerized Adaptive Testing. A Case Study.

    Science.gov (United States)

    1980-12-01

Vocational Interest Blank in 1927 [Dubois 1970]. 3. German Contributions. Cattell also studied for a period of three years in Leipzig under Wilhelm Wundt in the...world's first psychological laboratory, founded by Wundt in 1879 [Heidbreder 1933]. William James' laboratory, established at Harvard in 1875, did...have become important parts of psychological test theory. Under Wundt, Spearman's principal endeavor was experimental psychology, but he also found time

  14. The Model Identification Test: A Limited Verbal Science Test

    Science.gov (United States)

    McIntyre, P. J.

    1972-01-01

    Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)

  15. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  16. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  17. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  18. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
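The item-selection step described in the two records above can be sketched as follows (an illustrative Python sketch of the shadow-test idea; the paper formulates the assembly as a 0-1 linear program, whereas a greedy per-area selection is used here, and the item pool, content areas, and information values are invented):

```python
# Hedged sketch: at each step assemble a full-length "shadow test" that
# maximizes information at the current ability estimate subject to
# content constraints, then administer its most informative unused item.

items = [  # (item_id, content_area, information_at_theta) -- invented pool
    ("i1", "algebra", 0.9), ("i2", "algebra", 0.7), ("i3", "algebra", 0.4),
    ("i4", "geometry", 0.8), ("i5", "geometry", 0.3), ("i6", "geometry", 0.6),
]
constraints = {"algebra": 2, "geometry": 2}   # required counts per area

def assemble_shadow_test(pool, constraints):
    """Greedy stand-in for the 0-1 LP: per content area, take the most
    informative items until the area's required count is met."""
    shadow = []
    for area, need in constraints.items():
        ranked = sorted((it for it in pool if it[1] == area),
                        key=lambda it: -it[2])
        shadow.extend(ranked[:need])
    return shadow

def next_item(pool, administered, constraints):
    # In a real CAT the information values would be recomputed at the
    # updated ability estimate before each assembly.
    shadow = assemble_shadow_test(pool, constraints)
    free = [it for it in shadow if it[0] not in administered]
    return max(free, key=lambda it: it[2])

first = next_item(items, set(), constraints)
second = next_item(items, {first[0]}, constraints)
```

Because every administered item came from a feasible full test, the completed adaptive test automatically satisfies all content constraints.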

  19. Correlates between Models of Virulence for Mycobacterium tuberculosis among Isolates of the Central Asian Lineage: a Case for Lysozyme Resistance Testing?

    Science.gov (United States)

    Casali, Nicola; Clark, Simon O.; Hooper, Richard; Williams, Ann; Velji, Preya; Gonzalo, Ximena

    2015-01-01

    Virulence factors (VFs) contribute to the emergence of new human Mycobacterium tuberculosis strains, are lineage dependent, and are relevant to the development of M. tuberculosis drugs/vaccines. VFs were sought within M. tuberculosis lineage 3, which has the Central Asian (CAS) spoligotype. Three isolates were selected from clusters previously identified as dominant in London, United Kingdom. Strain-associated virulence was studied in guinea pig, monocyte-derived macrophage, and lysozyme resistance assays. Whole-genome sequencing, single nucleotide polymorphism (SNP) analysis, and a literature review contributed to the identification of SNPs of interest. The animal model revealed borderline differences in strain-associated pathogenicity. Ex vivo, isolate C72 exhibited statistically significant differences in intracellular growth relative to C6 and C14. SNP candidates inducing lower fitness levels included 123 unique nonsynonymous SNPs, including three located in genes (lysX, caeA, and ponA2) previously identified as VFs in the laboratory-adapted reference strain H37Rv and shown to confer lysozyme resistance. C72 growth was most affected by lysozyme in vitro. A BLAST search revealed that all three SNPs of interest (C35F, P76Q, and P780R) also occurred in Tiruvallur, India, and in Uganda. Unlike C72, however, no single isolate identified through BLAST carried all three SNPs simultaneously. CAS isolates representative of three medium-sized human clusters demonstrated differential outcomes in models commonly used to estimate strain-associated virulence, supporting the idea that virulence varies within, not just across, M. tuberculosis lineages. Three VF SNPs of interest were identified in two additional locations worldwide, which suggested independent selection and supported a role for these SNPs in virulence. The relevance of lysozyme resistance to strain virulence remains to be established. PMID:25776753

  20. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML) for defining the relationships between models.
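The trace-back navigation described above can be sketched with a simple relation table (an illustrative Python stand-in for RDML relationship definitions; the model names and element identifiers are invented):

```python
# Hedged sketch: traceability links between requirement, design, and test
# models, so a failing test can be traced back to its sources.

relations = [  # (from_model, from_id, to_model, to_id) -- invented links
    ("requirement", "REQ-1",   "design", "D-login"),
    ("design",      "D-login", "test",   "T-login-ok"),
    ("design",      "D-login", "test",   "T-login-bad-pw"),
]

def trace_back(model, element, relations):
    """Follow relations backwards from one element to all its sources."""
    sources = [(fm, fi) for fm, fi, tm, ti in relations
               if tm == model and ti == element]
    result = list(sources)
    for fm, fi in sources:
        result.extend(trace_back(fm, fi, relations))
    return result

# A failing test traces back through the design model to the requirement.
trail = trace_back("test", "T-login-bad-pw", relations)
```

An RDML-style definition would additionally type the relationships; the recursive walk is the same either way.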

  1. Testing the coherence between occupational exposure limits for inhalation and their biological limit values with a generalized PBPK-model: the case of 2-propanol and acetone.

    Science.gov (United States)

    Huizer, Daan; Huijbregts, Mark A J; van Rooij, Joost G M; Ragas, Ad M J

    2014-08-01

    The coherence between occupational exposure limits (OELs) and their corresponding biological limit values (BLVs) was evaluated for 2-propanol and acetone. A generic human PBPK model was used to predict internal concentrations after inhalation exposure at the level of the OEL. The fraction of workers with predicted internal concentrations lower than the BLV, i.e. the 'false negatives', was taken as a measure for incoherence. The impact of variability and uncertainty in input parameters was separated by means of nested Monte Carlo simulation. Depending on the exposure scenario considered, the median fraction of the population for which the limit values were incoherent ranged from 2% to 45%. Parameter importance analysis showed that body weight was the main factor contributing to interindividual variability in blood and urine concentrations and that the metabolic parameters Vmax and Km were the most important sources of uncertainty. This study demonstrates that the OELs and BLVs for 2-propanol and acetone are not fully coherent, i.e. enforcement of BLVs may result in OELs being violated. In order to assess the acceptability of this "incoherence", a maximum population fraction at risk of exceeding the OEL should be specified as well as a minimum level of certainty in predicting this fraction. Copyright © 2014 Elsevier Inc. All rights reserved.
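The nested Monte Carlo design described above separates the two sources of spread: an outer loop samples uncertain model parameters, and an inner loop samples interindividual variability, so each outer draw yields one "fraction of workers below the BLV". A minimal sketch (the exposure model, distributions, and limit values below are invented stand-ins, not the study's PBPK model):

```python
# Hedged sketch of nested (two-dimensional) Monte Carlo: outer loop =
# parameter uncertainty, inner loop = interindividual variability.
import random

random.seed(1)

def internal_conc(clearance, body_weight):
    # Toy stand-in for a PBPK prediction of internal concentration
    # after inhalation exposure at the OEL (invented relationship).
    return 100.0 / (clearance * max(body_weight, 40.0) ** 0.25)

def fraction_below_blv(clearance, blv, n_inner=500):
    # Inner loop: variability across workers (body weight).
    below = sum(
        internal_conc(clearance, random.gauss(75.0, 12.0)) < blv
        for _ in range(n_inner)
    )
    return below / n_inner

# Outer loop: uncertainty in a metabolic/clearance parameter.
fractions = sorted(
    fraction_below_blv(random.lognormvariate(0.0, 0.3), blv=35.0)
    for _ in range(200)
)
median_fraction = fractions[len(fractions) // 2]
```

The spread of `fractions` across outer draws reflects parameter uncertainty, while each individual fraction already integrates over interindividual variability; reporting a median and range of the incoherent fraction, as the study does, follows directly from this structure.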

  2. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
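One of the coverage criteria named above, single parameter coverage, lends itself to a compact sketch (illustrative Python; the parameter names and limits are invented, not extracted from any real SAR): for each parameter marked up in the document, generate one test case per boundary while holding the other parameters at nominal values.

```python
# Hedged sketch: coverage-driven generation of validation test cases
# from parameters that would be extracted from a marked-up SAR.

parameters = {  # parameter -> (low_limit, high_limit), invented values
    "coolant_flow":  (300.0, 1200.0),
    "pressurizer_p": (12.0, 16.5),
}

def single_parameter_cases(parameters, nominal=None):
    """One case per parameter boundary (low/high), others held nominal."""
    nominal = nominal or {p: (lo + hi) / 2
                          for p, (lo, hi) in parameters.items()}
    cases = []
    for p, (lo, hi) in parameters.items():
        for label, value in (("low", lo), ("high", hi)):
            case = dict(nominal)
            case[p] = value          # push one parameter to its limit
            cases.append((f"{p}-{label}", case))
    return cases

cases = single_parameter_cases(parameters)
```

The other criteria (use case, abnormal condition, scenario coverage) would enumerate richer structures from the ontology mark-up, but the pattern of deriving cases mechanically from extracted document elements is the same.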

  3. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    2001-01-01

Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels and a gas analysing system. For manufacturing the ITER model pump an order was given to the company L'Air Liquide in the form of a NET contract. (author)

  4. Test facility TIMO for testing the ITER model cryopump

    International Nuclear Information System (INIS)

    Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.

    1999-01-01

Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system of a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels and a gas analysing system. For manufacturing the ITER model pump an order was given to the company L'Air Liquide in the form of a NET contract. (author)

  5. Origin of honeycombs: Testing the hydraulic and case hardening hypotheses

    Science.gov (United States)

    Bruthans, Jiří; Filippi, Michal; Slavík, Martin; Svobodová, Eliška

    2018-02-01

    Cavernous weathering (cavernous rock decay) is a global phenomenon, which occurs in porous rocks around the world. Although honeycombs and tafoni are considered to be the most common products of this complex process, their origin and evolution are as yet not fully understood. The two commonly assumed formation hypotheses - hydraulic and case hardening - were tested to elucidate the origin of honeycombs on sandstone outcrops in a humid climate. Mechanical and hydraulic properties of the lips (walls between adjacent pits) and backwalls (bottoms of pits) of the honeycombs were determined via a set of established and novel approaches. While the case hardening hypothesis was not supported by the determinations of either tensile strength, drilling resistance or porosity, the hydraulic hypothesis was clearly supported by field measurements and laboratory tests. Fluorescein dye visualization of capillary zone, vapor zone, and evaporation front upon their contact, demonstrated that the evaporation front reaches the honeycomb backwalls under low water flow rate, while the honeycomb lips remain dry. During occasional excessive water flow events, however, the evaporation front may shift to the lips, while the backwalls become moist as a part of the capillary zone. As the zone of evaporation corresponds to the zone of potential salt weathering, it is the spatial distribution of the capillary and vapor zones which dictates whether honeycombs are created or the rock surface is smoothed. A hierarchical model of factors related to the hydraulic field was introduced to obtain better insights into the process of cavernous weathering.

  6. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν-masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  7. Experimentally testing the standard cosmological model

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1990-11-01

The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν-masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  8. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  9. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and its results, which are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  10. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  11. Field testing of bioenergetic models

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1985-01-01

    Doubly labeled water provides a direct measure of the rate of carbon dioxide production by free-living animals. With appropriate conversion factors, based on chemical composition of the diet and assimilation efficiency, field metabolic rate (FMR), in units of energy expenditure, and field feeding rate can be estimated. Validation studies indicate that doubly labeled water measurements of energy metabolism are accurate to within 7% in reptiles, birds, and mammals. This paper discusses the use of doubly labeled water to generate empirical models for FMR and food requirements for a variety of animals
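    The arithmetic behind such estimates can be sketched as follows. This is a hedged illustration only: the CO2-to-energy conversion factor and the assimilation efficiency below are assumed placeholder values (in practice they depend on diet composition), not constants taken from this paper.

    ```python
    # Hedged sketch: turning a doubly labeled water CO2 measurement into a
    # field metabolic rate (FMR) and a feeding rate. The conversion factor
    # 25 kJ per liter CO2 and the 0.8 assimilation efficiency are
    # illustrative placeholders, not universal constants.

    def field_metabolic_rate(co2_l_per_day, kj_per_l_co2=25.0):
        """Field metabolic rate in kJ/day from daily CO2 production."""
        return co2_l_per_day * kj_per_l_co2

    def feeding_rate(fmr_kj_per_day, diet_kj_per_g, assimilation=0.8):
        """Grams of food per day needed to cover FMR, given the diet's
        energy density and an assumed assimilation efficiency."""
        return fmr_kj_per_day / (diet_kj_per_g * assimilation)

    fmr = field_metabolic_rate(4.0)     # 4 L CO2/day -> 100.0 kJ/day
    print(fmr, feeding_rate(fmr, 5.0))  # 100.0 kJ/day, 25.0 g/day
    ```
    
    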

  12. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  13. Comparison of Critical Flow Models' Evaluations for SBLOCA Tests

    International Nuclear Information System (INIS)

    Kim, Yeon Sik; Park, Hyun Sik; Cho, Seok

    2016-01-01

    A comparison of the Trapp-Ransom and Henry-Fauske critical flow models for all SBLOCA (small break loss of coolant accident) scenarios of the ATLAS (Advanced thermal-hydraulic test loop for accident simulation) facility was performed using the MARS-KS code. For the comparison of the two critical flow models, the accumulated break mass was selected as the main parameter for comparing the analyses with the tests. Four cases showed the same respective discharge coefficients for the two critical flow models, e.g., the 6' CL (cold leg) break and the 25%, 50%, and 100% DVI (direct vessel injection) breaks. In the case of the 4' CL break, no reasonable results were obtained with any possible Cd value. In addition, typical system behaviors, e.g., PZR (pressurizer) pressure and collapsed core water level, were also compared between the two critical flow models. From the comparison for the CL breaks, the Trapp-Ransom model predicted the smallest and larger breaks, e.g., the 2', 6', and 8.5' CL breaks, quite well with respect to the other model. Likewise, for the DVI breaks, the Trapp-Ransom model predicted the smallest and larger breaks, e.g., the 5%, 50%, and 100% DVI breaks, quite well with respect to the other model. In the case of the 50% and 100% breaks, both critical flow models predicted the test data quite well.
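    The role of the discharge coefficient Cd in matching accumulated break mass can be sketched as follows. This is a minimal illustration with invented numbers, not ATLAS data or MARS-KS output: the accumulated break mass is the time integral of a model's critical mass flux, scaled by Cd and the break flow area.

    ```python
    # Hedged sketch: accumulated break mass from a critical-flow model.
    # All numerical values below are illustrative placeholders.

    def accumulated_break_mass(times, fluxes, area, cd):
        """Integrate m_dot = cd * G * A over time (trapezoidal rule).

        times  -- time points [s]
        fluxes -- critical mass flux G predicted by a model [kg/(m^2 s)]
        area   -- break flow area [m^2]
        cd     -- discharge coefficient tuning the model to the test data
        """
        total = 0.0
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            total += 0.5 * (fluxes[i] + fluxes[i - 1]) * dt
        return cd * area * total

    # Constant flux for 10 s through a 1 cm^2 break with cd = 0.8:
    mass = accumulated_break_mass([0.0, 5.0, 10.0],
                                  [2.0e4, 2.0e4, 2.0e4], 1.0e-4, 0.8)
    print(mass)  # 16.0 kg
    ```

    Comparing such integrated masses against the measured break flow is what fixes the respective Cd for each critical flow model.
    
    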

  14. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  15. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...... the nuisance parameters lie on the boundary of the parameter space, nor on lack of identification. Our general results on GARCH-X type models are applied to Gaussian based GARCH-X models, GARCH-X models with Student's t-distributed innovations as well as the integer-valued GARCH-X (PAR-X) models....

  16. Testing the generalized partial credit model

    OpenAIRE

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is describe...

  17. Herbalife hepatotoxicity: Evaluation of cases with positive reexposure tests.

    Science.gov (United States)

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Schwarzenboeck, Alexander; Eickhoff, Axel

    2013-07-27

    To analyze the validity of applied test criteria and causality assessment methods in assumed Herbalife hepatotoxicity with positive reexposure tests, we searched the Medline database for suspected cases of Herbalife hepatotoxicity and retrieved 53 cases, including eight cases with a positive unintentional reexposure and a high causality level for Herbalife. First, analysis of these eight cases focused on the data quality of the positive reexposure cases, requiring a baseline value of alanine aminotransferase (ALT). Causality gradings for Herbalife in these eight cases were probable (n = 1), unlikely (n = 4), and excluded (n = 3). Confounding variables included low data quality, alternative diagnoses, poor exclusion of important other causes, and comedication with drugs and herbs in 6/8 cases. More specifically, problems were evident in some cases regarding temporal association, daily doses, exact start and end dates of product use, actual data of laboratory parameters such as ALT, and exact dechallenge characteristics. Shortcomings included scattered exclusion of hepatitis A-C, cytomegalovirus and Epstein-Barr virus infection, with parameters only globally presented or lacking. Hepatitis E virus infection was considered in one single patient and found positive; infections by herpes simplex virus and varicella zoster virus were excluded in none. Only one case fulfilled positive reexposure test criteria in initially assumed Herbalife hepatotoxicity, with lower CIOMS-based causality gradings for the other cases than hitherto proposed.

  18. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  19. Sample test cases using the environmental computer code NECTAR

    International Nuclear Information System (INIS)

    Ponting, A.C.

    1984-06-01

    This note demonstrates a few of the many different ways in which the environmental computer code NECTAR may be used. Four sample test cases are presented and described to show how NECTAR input data are structured. Edited output is also presented to illustrate the format of the results. Two test cases demonstrate how NECTAR may be used to study radio-isotopes not explicitly included in the code. (U.K.)

  20. Making System Dynamics Cool III : New Hot Teaching & Testing Cases

    NARCIS (Netherlands)

    Pruyt, E.

    2011-01-01

    This follow-up paper presents seven actual cases for testing and teaching System Dynamics developed and used between January 2010 and January 2011 for one of the largest System Dynamics courses (250+ students per year) at Delft University of Technology in the Netherlands. The cases presented in this

  1. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de

  2. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  3. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  4. Modelling, simulation and visualisation for electromagnetic non-destructive testing

    International Nuclear Information System (INIS)

    Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah

    2010-01-01

    This paper reviews the state of the art and recent developments in modelling, simulation and visualisation for the eddy current Non-Destructive Testing (NDT) technique. Simulation and visualisation have aided in the design and development of electromagnetic sensors and imaging techniques and systems for Electromagnetic Non-Destructive Testing (ENDT), and in feature extraction and inverse problems for Quantitative Non-Destructive Testing (QNDT). After reviewing the state of the art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)

  5. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  6. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for long term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa and redwood filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, posttest deformation measurements, and the general structural response of the system

  7. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  8. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  9. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  10. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  11. A Model for Random Student Drug Testing

    Science.gov (United States)

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  12. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
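    The logic of controlling both error probabilities can be illustrated with the familiar normal-approximation sample size formula. This is a generic z-test sketch, not Draxler's Rasch-specific derivation: given a standardized effect size, Type I probability alpha, and desired power, solve for the smallest adequate n.

    ```python
    import math
    from statistics import NormalDist

    def sample_size(delta, alpha=0.05, power=0.8):
        """Smallest n so a two-sided z-test for a standardized effect
        `delta` has Type I probability alpha and Type II probability
        1 - power (generic normal approximation)."""
        nd = NormalDist()
        z_a = nd.inv_cdf(1 - alpha / 2)  # critical value for the test
        z_b = nd.inv_cdf(power)          # quantile controlling Type II error
        return math.ceil(((z_a + z_b) / delta) ** 2)

    print(sample_size(0.3))  # 88
    ```

    Rasch-model tests replace the z-statistic with model-specific test statistics, but the trade-off between alpha, power, effect size, and n has the same shape.
    
    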

  13. Is the standard model really tested?

    International Nuclear Information System (INIS)

    Takasugi, E.

    1989-01-01

    It is discussed how the standard model is really tested. Among various tests, I concentrate on the CP violation phenomena in the K and B meson systems. In particular, the recent hope of overcoming the theoretical uncertainty in the evaluation of the CP violation in the K meson system is discussed. (author)

  14. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate suitable test cases to verify the security of a protocol implementation.
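    A toy flavor of this idea can be sketched as follows (an invented protocol model, much simpler than the extended IOLTS formalism of the paper): represent legal behavior and intruder-injected inputs as labeled transitions, then enumerate bounded label sequences as abstract test cases.

    ```python
    # Hedged sketch: deriving abstract test cases from a transition model.
    # The protocol states and labels below are invented for illustration.

    TRANSITIONS = {
        ("idle", "?connect"): "wait_auth",      # '?' = input to the implementation
        ("wait_auth", "?password"): "auth_ok",
        ("wait_auth", "?inject"): "wait_auth",  # intruder-injected message
        ("auth_ok", "!welcome"): "session",     # '!' = output from the implementation
    }

    def test_cases(start, depth):
        """Enumerate all label sequences of length <= depth from `start`."""
        cases = []
        stack = [(start, [])]
        while stack:
            state, trace = stack.pop()
            if trace:
                cases.append(trace)
            if len(trace) == depth:
                continue
            for (src, label), dst in TRANSITIONS.items():
                if src == state:
                    stack.append((dst, trace + [label]))
        return cases

    for case in sorted(test_cases("idle", 3)):
        print(case)
    ```

    Each enumerated sequence becomes a concrete test once the abstract labels are mapped to real protocol messages; runs that include intruder labels probe the implementation's behavior under attack.
    
    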

  15. A new fit-for-purpose model testing framework: Decision Crash Tests

    Science.gov (United States)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing, the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) rename as the Klemeš Crash Tests (KCTs), has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are: i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions; ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or do not upgrade the existing flood control structure) under two different sets of model building

  16. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies that plastic FEM analyses make possible, and more refined models have to be based on experimental testing......

  17. The International intraval project. Phase 1 test cases

    International Nuclear Information System (INIS)

    1992-01-01

    This report contains a description of the test cases adopted in Phase 1 of the international cooperation project INTRAVAL. Seventeen test cases, based on bench-scale laboratory experiments, field tests and natural analogue studies, have been included in the study. The test cases are described in terms of experimental design and types of available data. In addition, some quantitative examples of available data are given, as well as references to more extensive documentation of the experiments on which the test cases are based. Fifteen test case examples are given: (1) mass transfer through clay by diffusion and advection; (2) uranium migration in crystalline bore cores, small scale pressure infiltration experiments; (3) radionuclide migration in single natural fractures in granite; (4) tracer tests in a deep basalt flow top; (5) flow and tracer experiment in crystalline rock based on the Stripa 3-D experiment; (6) tracer experiment in a fracture zone at the Finnsjon research area; (7) synthetic data base, based on single fracture migration experiments in the Grimsel rock laboratory; (8) natural analogue studies at Pocos de Caldas, Minais Gerais, Brazil: redox-front and radionuclide movement in an open pit uranium mine; (9) natural analogue studies at the Koongarra site in the Alligator Rivers area of the Northern Territory, Australia; (10) large block migration experiments in a block of crystalline rock; (11) unsaturated flow and transport experiments performed at Las Cruces, New Mexico; (12) flow and transport experiment in unsaturated fractured rock performed at the Apache Leap Tuff site, Arizona; (13) experiments in partially saturated tuffaceous rocks performed in the G-tunnel underground facility at the Nevada Test site, USA; (14) experimental study of brine transport in porous media; (15) groundwater flow in the vicinity of the Gorleben Salt Dome, Federal Republic of Germany

  18. Highly Automated Agile Testing Process: An Industrial Case Study

    Directory of Open Access Journals (Sweden)

    Jarosław Berłowski

    2016-09-01

    Full Text Available This paper presents a description of an agile testing process in a medium size software project that is developed using Scrum. The research method used was the case study; surveys, quantifiable project data sources and qualitative opinions of project members were used for data collection. Challenges related to the testing process, regarding a complex project environment and unscheduled releases, were identified. Based on the obtained results, we concluded that the described approach addresses these issues well. Therefore, recommendations were made with regard to the employed principles of agility, specifically: continuous integration, responding to change, test automation and test-driven development. Furthermore, an efficient testing environment that combines a number of test frameworks (e.g. JUnit, Selenium, Jersey Test) with custom-developed simulators is presented.

  19. Modeling of the Case Grammatical Meaning

    Directory of Open Access Journals (Sweden)

    Алексей Львович Новиков

    2014-12-01

    Full Text Available The article raises the problem of constructing a semantic model to describe the meaning of the grammatical category of case in languages of different types. The main objective of this publication is to provide an overview of different points of view on the semantic structure of the category of case and to compare different models of case semantics. The initial impulse for work on the problems of case semantics came from the grammatical and typological ideas of A.A. Potebnya and R. Jakobson. The basis of these models, which differ from each other in the number and nature of the features they distinguish, is the idea that grammatical meaning can be represented as a structured set of semantic features. The analysis shows that the construction of formal models of grammatical categories is impossible without reference to the content of the dominant semantic features in the structure of grammatical meaning. Despite all the difficulties of modeling grammatical semantics, constructing a semantic model of case is an interesting and promising task for general morphology and typological linguistics.

  20. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references
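    As a numerical sketch of such kinematic tests (standard Friedmann kinematics with illustrative parameters, not the authors' calculations), the comoving distance in a flat model whose smooth component scales as (1+z)^n can be integrated directly; n = 0 corresponds to a cosmological term, n = 2 to a string network, and n = 4 to relativistic particles.

    ```python
    import math

    def comoving_distance(z, omega_m, n, h0=70.0, c=299792.458, steps=10000):
        """D_C = c * integral_0^z dz'/H(z') in Mpc for a flat model with
        matter density omega_m and a smooth component of density
        1 - omega_m scaling as (1+z)^n. h0 in km/s/Mpc (assumed value)."""
        omega_x = 1.0 - omega_m
        total = 0.0
        dz = z / steps
        for i in range(steps):  # midpoint-rule integration of 1/E(z)
            zp = (i + 0.5) * dz
            e = math.sqrt(omega_m * (1 + zp) ** 3 + omega_x * (1 + zp) ** n)
            total += dz / e
        return c / h0 * total

    # Einstein-de Sitter limit (omega_m = 1) has the closed form
    # D_C = (2c/H0) * (1 - 1/sqrt(1+z)), a handy cross-check:
    analytic = 2 * 299792.458 / 70.0 * (1 - 1 / math.sqrt(2))
    print(round(comoving_distance(1.0, 1.0, 0), 1), round(analytic, 1))
    ```

    The smooth-component models predict systematically larger distances at fixed redshift than the Einstein-de Sitter case, which is what the magnitude-redshift and angular size-redshift diagrams exploit.
    
    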

  1. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations

  2. Kinematic tests of exotic flat cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.

  3. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  4. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major criteria for detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage, improving software quality at reduced cost.
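The category-partition idea sketched in the abstract can be illustrated in a few lines: categories and their choices are enumerated, constraints prune infeasible combinations, and each surviving combination becomes a test frame. The categories below (for a hypothetical sort routine) are illustrative only and are not taken from the paper.

```python
from itertools import product

# Hypothetical categories and choices for testing a sort routine.
categories = {
    "length":     ["empty", "single", "many"],
    "order":      ["sorted", "reverse", "random"],
    "duplicates": ["none", "some"],
}

def feasible(frame):
    """Constraint: order and duplicates are only meaningful for 'many' elements."""
    if frame["length"] != "many":
        return frame["order"] == "sorted" and frame["duplicates"] == "none"
    return True

names = list(categories)
frames = [
    dict(zip(names, combo))
    for combo in product(*categories.values())
    if feasible(dict(zip(names, combo)))
]

# 2 degenerate frames (empty, single) + 3 * 2 = 6 frames for 'many' -> 8 total
print(len(frames))  # -> 8
```

Each frame is then turned into a concrete test case by picking representative values for its choices.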

  5. Mining Product Data Models: A Case Study

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2014-01-01

    Full Text Available This paper presents two case studies used to prove the validity of some data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the analyzed process, while the second case study uses event logs generated by the YAWL system. We offer a general solution for data-flow model extraction from different data sources. In order to apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension). To respect this format, a set of conversion tools is needed. We describe the conversion tools used and how we obtained the data-flow models. Moreover, the data-flow model is compared to the control-flow model.

  6. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet...... and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing...... and implementing abstractions will improve the applicability of model checking in practice....

  7. Old star clusters: Bench tests of low mass stellar models

    Directory of Open Access Journals (Sweden)

    Salaris M.

    2013-03-01

    Full Text Available Old star clusters in the Milky Way and external galaxies have been (and still are) traditionally used to constrain the age of the universe and the timescales of galaxy formation. A parallel avenue of old star cluster research considers these objects as bench tests of low-mass stellar models. This short review highlights some recent tests of stellar evolution models that make use of photometric and spectroscopic observations of resolved old star clusters. In some cases these tests have pointed to additional physical processes, efficient in low-mass stars, that are not routinely included in model computations. Moreover, recent results from the Kepler mission on the old open cluster NGC 6791 are adding new tight constraints to the models.

  8. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer by layer, both for depth and for velocity. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in file ''unified( ) iasp.grid''. This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return the desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle-crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper-mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown in a separate figure. The upper-mantle P velocity and gradient distributions are shown in Figures 7 and 8. The left and center images in the top row of each figure are renderings of the depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. 
The left and center images in the bottom row of the figures are

  9. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
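As the abstract notes, the two-parameter logistic (2PL) IRT model is a special case of the GDM. For a single dichotomous item the 2PL response probability can be written down directly; the sketch below is a minimal illustration of that special case, not the GDM estimation machinery itself.

```python
import math

def two_pl(theta, a, b):
    """2PL item response model: P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b))).

    theta: examinee ability, a: item discrimination, b: item difficulty.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item difficulty answers
# correctly with probability 0.5, regardless of discrimination.
print(two_pl(0.0, 1.2, 0.0))  # -> 0.5
```

In the GDM, the single continuous ability theta is replaced by a discrete multidimensional skill profile, and the item slope becomes a vector of skill loadings.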

  10. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent test (UM-Polbeng) were conducted using Paper-Based Tests (PBT). The Paper-Based Test model has some weaknesses: it wastes too much paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a Computer-Based Test (CBT) model using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to protecting the test questions with a password before they were shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used for the Computer-Based Test application was a client-server model on a Local Area Network (LAN). The result of the design was the Computer-Based Test application for the admission selection of Politeknik Negeri Bengkalis.
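The Fisher-Yates shuffle mentioned in the abstract produces each permutation of the question bank with equal probability. A minimal sketch (the paper's own implementation is not shown, so this is a generic version):

```python
import random

def fisher_yates(items, rng=random):
    """Return a uniformly shuffled copy of items (Fisher-Yates/Knuth shuffle)."""
    a = list(items)
    # Walk backwards, swapping each position with a random earlier (or same) one.
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)  # 0 <= j <= i
        a[i], a[j] = a[j], a[i]
    return a

questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]
print(fisher_yates(questions))
```

The backwards walk guarantees that each of the n! orderings is equally likely, which a naive "swap each element with any random position" loop does not.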

  11. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

    Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed to resemble certain fatigue-related features of service life processes. In the fourth paper, fatigue problems in transport vibrations are treated

  12. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of the state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data converter design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  13. An Iterative Procedure for Efficient Testing of B2B: A Case in Messaging Service Tests

    Energy Technology Data Exchange (ETDEWEB)

    Kulvatunyou, Boonserm [ORNL

    2007-03-01

    Testing is a necessary step in systems integration. Testing in the context of inter-enterprise, business-to-business (B2B) integration is more difficult and expensive than for intra-enterprise integration. Traditionally, the difficulty is alleviated by conducting the testing in two stages: conformance testing and then interoperability testing. In conformance testing, systems are tested independently against a reference system. In interoperability testing, they are tested simultaneously against one another. In the traditional approach, these two stages are performed sequentially with little feedback between them. In addition, test results and test traces are left to human analysis, or even discarded if the solution passes the test. This paper proposes an approach where test results and traces from both the conformance and interoperability tests are analyzed for potential interoperability issues; conformance test cases are then derived from the analysis. The result is that more interoperability issues can be resolved in the lower-cost conformance testing mode; consequently, the time and cost required for achieving interoperable solutions are reduced.

  14. Making System Dynamics Cool? Using Hot Testing & Teaching Cases

    NARCIS (Netherlands)

    Pruyt, E.

    2009-01-01

    This paper deals with the use of ‘hot’ real-world cases for both testing and teaching purposes such as in the Introductory System Dynamics course at Delft University of Technology in the Netherlands. The paper starts with a brief overview of the System Dynamics curriculum. Then the problem-oriented

  15. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.

  16. FARO base case post-test analysis by COMETA code

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)

    1995-09-01

    The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO Facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the pre-mixing phase of the fuel-coolant interaction.

  17. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult....... This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  18. Negative Exercise Stress Test: Does it Mean Anything? Case study

    Directory of Open Access Journals (Sweden)

    Hassan A. Mohamed

    2007-01-01

    Full Text Available Despite its low sensitivity and specificity (67% and 72%, respectively), exercise testing has remained one of the most widely used noninvasive tests to determine the prognosis in patients with suspected or established coronary disease. As a screening test for coronary artery disease, the exercise stress test is useful in that it is relatively simple and inexpensive. It has been considered particularly helpful in patients with chest pain syndromes who have a moderate probability of coronary artery disease, and in whom the resting electrocardiogram (ECG) is normal. The following case presentation and discussion question the predictive value of a negative stress test in patients with a moderate probability of coronary artery disease.

  19. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities through calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which lead to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented to ensure that the initially obtained validation results remain valid for succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but with two different code versions. For every test case, the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed characteristic for the test case under consideration. In the case of post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented in this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality of the thermal-hydraulics modelling in ASTEC CPA. Aerosol behaviour is validated against the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test case of the THAI IOD-11 experiment. 
Within this test case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing

  20. Software Test Description (STD) for the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides)

    National Research Council Canada - National Science Library

    Posey, Pamela

    2002-01-01

    The purpose of this Software Test Description (STD) is to establish formal test cases to be used by personnel tasked with the installation and verification of the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides...

  1. Testing of a steel containment vessel model

    International Nuclear Information System (INIS)

    Luk, V.K.; Hessheimer, M.F.; Matsumoto, T.; Komine, K.; Costello, J.F.

    1997-01-01

    A mixed-scale containment vessel model, scaled 1:10 in containment geometry and 1:4 in shell thickness, was fabricated to represent an improved boiling water reactor (BWR) Mark II containment vessel. A contact structure, installed over the model and separated from it at a nominally uniform distance, provided a simplified representation of a reactor shield building in the actual plant. This paper describes the pretest preparations and the conduct of the high-pressure test of the model performed on December 11-12, 1996. 4 refs., 2 figs

  2. Precision tests of the Standard Model

    International Nuclear Information System (INIS)

    Ol'shevskij, A.G.

    1996-01-01

    The present status of the precision measurements of electroweak observables is discussed, with special emphasis on the results obtained recently. Altogether these measurements provide the basis for a stringent test of the Standard Model and the determination of the SM parameters. 22 refs., 23 figs., 11 tabs

  3. Binomial test models and item difficulty

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1979-01-01

    In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally

  4. Shallow foundation model tests in Europe

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav; Simonini, P.; Arslan, U.; Georgiodis, M.; Laue, J.; Pinto, I.

    1999-01-01

    Roč. 2, č. 4 (1999), s. 447-475 ISSN 1436-6517. [Int. Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundations * model tests * sandy subsoil * bearing capacity * settlement Subject RIV: JM - Building Engineering

  5. Testing a Dilaton Gravity Model Using Nucleosynthesis

    International Nuclear Information System (INIS)

    Boran, S.; Kahya, E. O.

    2014-01-01

    Big bang nucleosynthesis (BBN), along with the cosmic microwave background (CMB) radiation, offers one of the strictest pieces of evidence for the Λ-CDM cosmology at present. In this work, our main aim is to present the outcomes of our calculations of the primordial abundances of light elements, in the context of a higher-dimensional steady-state universe model in dilaton gravity. Our results show that the abundances of light elements (primordial D, ³He, ⁴He, T, and ⁷Li) differ significantly in some cases, and a comparison is given between a particular dilaton gravity model and Λ-CDM in the light of the astrophysical observations

  6. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  7. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg⁻¹) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).
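In the limiting case where specific assimilation scales with body mass, a growth model of the kind described above reduces to exponential growth, dm/dt = r·m. The Euler-stepped sketch below shows only that limiting trajectory (illustrative; the authors' full model tracks reserves and structure separately):

```python
def grow(m0, r, dt, steps):
    """Euler integration of dm/dt = r * m: the near-exponential growth limit.

    m0: initial mass, r: specific growth rate per unit time,
    dt: step size, steps: number of Euler steps.
    """
    m = m0
    for _ in range(steps):
        m += r * m * dt
    return m

# A 1 mg larva growing at r = 0.1 per day for 10 days (dt = 1 day):
# each step multiplies mass by (1 + r * dt), so the result is 1.1 ** 10.
print(grow(1.0, 0.1, 1.0, 10))
```

With dt = 1 the Euler scheme is exactly geometric growth; smaller dt converges to the continuous exponential m0 * exp(r * t).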

  8. Using Glucose Tolerance Tests to Model Insulin Secretion and Clearance

    Directory of Open Access Journals (Sweden)

    Anthony Shannon

    2005-04-01

    Full Text Available The purpose of the studies described in this paper is to develop theoretically, and to validate experimentally, mathematical compartment models which can be used to predict plasma insulin levels in patients with diabetes mellitus (DM). In the case of Type 2 Diabetes Mellitus (T2DM), the C-peptide levels in the plasma were measured as part of routine glucose tolerance tests in order to estimate the prehepatic insulin secretion rates. In the case of Type 1 Diabetes Mellitus (T1DM), a radioactively labelled insulin was used to measure the absorption rate of insulin after a subcutaneous injection. Both models gave close fits between theoretical estimates and experimental data, and, unlike other models, it is not necessary to seed them with initial estimates.
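A one-compartment version of such a model treats plasma insulin I as fed by a secretion (or absorption) rate S(t) and cleared at a first-order rate k, i.e. dI/dt = S(t) − k·I. The Euler-stepped sketch below is a generic illustration under those assumptions, not the authors' fitted model:

```python
def insulin_trajectory(secretion, k, i0, dt):
    """One-compartment kinetics dI/dt = S(t) - k * I, integrated by Euler steps.

    secretion: sequence of per-step input rates S(t);
    k: first-order clearance rate; i0: initial plasma level; dt: step size.
    """
    levels = [i0]
    i = i0
    for s in secretion:
        i += (s - k * i) * dt
        levels.append(i)
    return levels

# With no secretion, plasma insulin simply decays toward zero.
print(insulin_trajectory([0.0] * 5, k=0.5, i0=10.0, dt=0.1))
```

Fitting such a model amounts to choosing k (and a parameterization of S) so the predicted trajectory matches measured plasma levels.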

  9. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
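A classical moment-based check of this kind compares the sample skewness and kurtosis of estimated random effects (or residuals) with the normal values 0 and 3, as in the Jarque-Bera statistic. The tests proposed in the paper are nonparametric and strictly more general; the sketch below only illustrates the moment-based special case:

```python
def jarque_bera(xs):
    """Jarque-Bera statistic: large values signal non-normal skewness/kurtosis.

    Under normality the statistic is asymptotically chi-squared with 2 df.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

In a mixed-model setting one would apply such a statistic to the predicted random effects or to the estimated error terms.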

  10. Testing proton spin models with polarized beams

    International Nuclear Information System (INIS)

    Ramsey, G.P.

    1991-01-01

    We review models for spin-weighted parton distributions in a proton. Sum rules involving the nonsinglet components of the structure function xg 1 p help narrow the range of parameters in these models. The contribution of the γ 5 anomaly term depends on the size of the integrated polarized gluon distribution, and experimental predictions depend on its size. We have proposed three models for the polarized gluon distribution, whose range is considerable. These model distributions give an overall range of parameters that can be tested with polarized-beam experiments. These are discussed with regard to specific predictions for polarized-beam experiments at energies typical of UNK

  11. Active earth pressure model tests versus finite element analysis

    Science.gov (United States)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from a simple wall translation in the direction away from the soil (active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than by the nature of the granular sample.
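For the simple Coulomb-Mohr interpretation mentioned above, the classical Rankine result gives the active earth pressure coefficient for a smooth vertical wall with horizontal backfill as Ka = tan²(45° − φ/2), where φ is the soil friction angle. A minimal sketch of that textbook formula (not the Plaxis 2D simulation itself):

```python
import math

def rankine_ka(phi_deg):
    """Rankine active earth pressure coefficient: Ka = tan^2(45 - phi/2)."""
    return math.tan(math.radians(45.0 - phi_deg / 2.0)) ** 2

# A typical sand with phi = 30 degrees gives Ka = 1/3: the horizontal
# stress behind a yielding wall drops to a third of the vertical stress.
print(rankine_ka(30.0))
```

Larger friction angles give smaller Ka, i.e. less lateral thrust on the wall in the active state.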

  12. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  13. OTEC riser cable model and prototype testing

    Science.gov (United States)

    Kurt, J. P.; Schultz, J. A.; Roblee, L. H. S.

    1981-12-01

    Two different OTEC riser cables have been developed to span the distance between a floating OTEC power plant and the ocean floor. The major design concerns for a riser cable in the dynamic OTEC environment are fatigue, corrosion, and the electrical/mechanical aging of the cable components. The basic properties of the cable materials were studied through tests on model cables and on samples of cable materials. Full-scale prototype cables were manufactured and tested to measure their electrical and mechanical properties and performance. The full-scale testing culminated in the electrical/mechanical fatigue test, which exposed full-scale cables to simultaneous tension, bending and electrical loads, all in a natural seawater environment.

  14. Prospective Tests on Biological Models of Acupuncture

    Directory of Open Access Journals (Sweden)

    Charles Shang

    2009-01-01

    Full Text Available The biological effects of acupuncture include the regulation of a variety of neurohumoral factors and growth control factors. In science, models or hypotheses with confirmed predictions are considered more convincing than models based solely on retrospective explanations. A literature review showed that two biological models of acupuncture have been prospectively tested with independently confirmed predictions. The neurophysiology model of the long-term effects of acupuncture emphasizes the trophic and anti-inflammatory effects of acupuncture; its prediction of the peripheral effect of endorphin in acupuncture has been confirmed. The growth control model encompasses the neurophysiology model and suggests that a macroscopic growth control system originates from a network of organizers in embryogenesis. The activity of the growth control system is important in the formation, maintenance and regulation of all the physiological systems. Several phenomena of acupuncture, such as the distribution of auricular acupuncture points, the long-term effects of acupuncture and the effect of multimodal non-specific stimulation at acupuncture points, are consistent with the growth control model. The following predictions of the growth control model have been independently confirmed by research results in both acupuncture and conventional biomedical sciences: (i) Acupuncture has extensive growth control effects. (ii) Singular points and separatrices exist in morphogenesis. (iii) Organizers have high electric conductance, high current density and a high density of gap junctions. (iv) A high density of gap junctions is distributed along separatrices or boundaries at the body surface after early embryogenesis. (v) Many acupuncture points are located at transition points or boundaries between different body domains or muscles, coinciding with the connective tissue planes. (vi) Some morphogens and organizers continue to function after embryogenesis. 
Current acupuncture research suggests a

  15. Physical modelling and testing in environmental geotechnics

    International Nuclear Information System (INIS)

    Garnier, J.; Thorel, L.; Haza, E.

    2000-01-01

    The preservation of natural environment has become a major concern, which affects nowadays a wide range of professionals from local communities administrators to natural resources managers (water, wildlife, flora, etc) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains, with which the geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report some progress of developments in measurement techniques and studies of transport of pollutants in saturated and unsaturated soils in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations on decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper is analysed in the INIS database for its specific interest to the nuclear industry. The others, concerning energy, are analysed in the ETDE database

  16. Hypervapotron flow testing with rapid prototype models

    International Nuclear Information System (INIS)

    Driemeyer, D.; Hellwig, T.; Kubik, D.; Langenderfer, E.; Mantz, H.; McSmith, M.; Jones, B.; Butler, J.

    1995-01-01

    A flow test model of the inlet section of a three-channel hypervapotron plate that has been proposed as a heat sink in the ITER divertor was prepared using a rapid prototyping stereolithography process that is widely used for component development in US industry. An existing water flow loop at the University of Illinois is being used for isothermal flow tests to collect pressure drop data for comparison with proposed vapotron friction factor correlations. Differential pressure measurements are taken across the test section inlet manifold, the vapotron channel (about a seven inch length), the outlet manifold and the inlet-to-outlet. The differential pressures are currently measured with manometers. Tests were conducted at flow velocities from 1 to 10 m/s to cover the full range of ITER interest. A tap was also added for a small hypodermic needle to inject dye into the flow channel at several positions to examine the nature of the developing flow field at the entrance to the vapotron section. Follow-on flow tests are planned using a model with adjustable flow channel dimensions to permit more extensive pressure drop data to be collected. This information will be used to update vapotron design correlations for ITER
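
The kind of isothermal pressure-drop estimate the tests are compared against can be sketched with the generic Darcy-Weisbach relation. The sketch below uses the Blasius smooth-tube friction factor and illustrative channel dimensions; these are assumptions for illustration, not a vapotron-specific correlation or the actual ITER geometry.

```python
# Pressure-drop sketch for a straight water channel using Darcy-Weisbach
# with the Blasius smooth-tube friction factor. Channel length (~seven
# inches) and hydraulic diameter are illustrative assumptions only.
import math

def pressure_drop(v, L=0.178, Dh=0.01, rho=998.0, mu=1.0e-3):
    """Pressure drop (Pa) for water at bulk velocity v (m/s).

    L   : channel length (m), assumed
    Dh  : hydraulic diameter (m), assumed
    rho : water density (kg/m^3)
    mu  : dynamic viscosity (Pa.s)
    """
    Re = rho * v * Dh / mu
    f = 0.316 * Re ** -0.25          # Blasius, turbulent smooth channel
    return f * (L / Dh) * 0.5 * rho * v ** 2

# Sweep the 1 to 10 m/s range covered by the ITER-relevant tests.
for v in (1.0, 5.0, 10.0):
    print(f"v = {v:4.1f} m/s  ->  dp = {pressure_drop(v) / 1e3:7.2f} kPa")
```

With a Blasius-type friction factor the pressure drop scales roughly as v^1.75, which is the kind of trend the manometer data would be checked against.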

  17. Physical modelling and testing in environmental geotechnics

    Energy Technology Data Exchange (ETDEWEB)

    Garnier, J.; Thorel, L.; Haza, E. [Laboratoire Central des Ponts et Chaussees a Nantes, 44 - Nantes (France)

    2000-07-01

    The preservation of natural environment has become a major concern, which affects nowadays a wide range of professionals from local communities administrators to natural resources managers (water, wildlife, flora, etc) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains, with which the geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report some progress of developments in measurement techniques and studies of transport of pollutants in saturated and unsaturated soils in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations on decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper has been analyzed in the INIS database for its specific interest to the nuclear industry.

  18. Paternity tests in Mexico: Results obtained in 3005 cases.

    Science.gov (United States)

    García-Aceves, M E; Romero Rentería, O; Díaz-Navarro, X X; Rangel-Villalobos, H

    2018-04-01

    National and international reports regarding paternity testing activity scarcely include information from Mexico and other Latin American countries. Therefore, we report different results from the analysis of 3005 paternity cases analyzed during a period of five years in a Mexican paternity testing laboratory. Motherless tests were the most frequent (77.27%), followed by trio cases (20.70%); the remaining 2.04% included different cases of kinship reconstruction. The paternity exclusion rate was 29.58%, higher than but within the range reported by the American Association of Blood Banks (average 24.12%). We detected 65 mutations, most of them one-step (93.8%); the remaining 6.2% were two-step mutations. Thus, we were able to estimate the paternal mutation rate for 17 different STR loci: 0.0018 (95% CI 0.0005-0.0047). Five triallelic patterns and 12 suspected null alleles were detected during this period; however, re-amplification of these samples with a different Human Identification (HID) kit confirmed the homozygous genotypes, which suggests that most of these exclusions actually are one-step mutations. HID kits with ≥20 STRs detected more exclusions, diminishing the rate of inconclusive results with isolated exclusions. The Powerplex 21 kit (20 STRs) and the Powerplex Fusion kit (22 STRs) offered similar PI (p = 0.379) and average number of exclusions (PE) (p = 0.339) when a daughter was involved in motherless tests. In brief, besides reporting forensic parameters from paternity tests in Mexico, these results describe improvements in solving motherless paternity tests using HID kits with ≥20 STRs instead of one including 15 STRs. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
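
A mutation rate like the one quoted above is a binomial estimate: observed mutations over scored meioses. A minimal sketch, assuming a hypothetical round meiosis count (the abstract does not state it) and using a closed-form Wilson score interval rather than whatever exact method the laboratory applied:

```python
# Sketch: point estimate and approximate 95% interval for a per-meiosis
# STR mutation rate. The meiosis count below is a hypothetical figure,
# chosen only so that the point estimate lands near the reported 0.0018;
# the paper's own CI (0.0005-0.0047) was presumably computed differently
# (e.g., per locus, with an exact method).
import math

def wilson_interval(k, n, z=1.959964):
    """Wilson score interval for k successes in n trials."""
    phat = k / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return phat, center - half, center + half

phat, lo, hi = wilson_interval(65, 36000)   # 65 mutations, assumed n
print(f"rate = {phat:.4f}, approx 95% CI ({lo:.4f}, {hi:.4f})")
```

The Wilson interval is used here because it needs only the standard library; for rare events an exact (Clopper-Pearson) interval is typically wider.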

  19. Computer tomography of flows external to test models

    Science.gov (United States)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object such as a test model in a wind tunnel obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and analysis of real experimental data in the form of aerodynamic interferograms are discussed.
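
Iterative tomographic methods of this general family update an image estimate ray by ray until the estimate reproduces the measured projections. A minimal Kaczmarz (ART) sketch on a tiny 2x2 "density field" is shown below; it illustrates the iterative family only and is not the authors' Iterative Convolution Method.

```python
# Minimal algebraic reconstruction (Kaczmarz/ART) sketch: recover a 2x2
# "density field" from its row, column and one diagonal sums. Each ray
# contributes one linear equation; the estimate is projected onto each
# equation's solution set in turn.
def kaczmarz(rays, sums, n, sweeps=500):
    x = [0.0] * n
    for _ in range(sweeps):
        for a, b in zip(rays, sums):          # one relaxation per ray
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            corr = (b - dot) / norm
            x = [xi + corr * ai for ai, xi in zip(a, x)]
    return x

phantom = [1.0, 2.0, 3.0, 4.0]               # pixels, row-major 2x2
rays = [
    [1, 1, 0, 0],   # top row sum
    [0, 0, 1, 1],   # bottom row sum
    [1, 0, 1, 0],   # left column sum
    [0, 1, 0, 1],   # right column sum
    [1, 0, 0, 1],   # main diagonal sum
]
sums = [sum(p * a for p, a in zip(phantom, ray)) for ray in rays]
rec = kaczmarz(rays, sums, n=4)
print([round(v, 3) for v in rec])
```

The obscured-model problem in the paper corresponds to some of these ray equations being unavailable, which is what makes the reconstruction ill-conditioned and motivates the specialized methods discussed.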

  20. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S part of DONG Energy A/S, Denmark. The objective of the tests was: to investigate the combined influence of the pile...... diameter to water depth ratio and the wave height to water depth ratio on wave run-up of piles. The measurements should be used to design access platforms on piles....

  1. Observational tests of FRW world models

    International Nuclear Information System (INIS)

    Lahav, Ofer

    2002-01-01

    Observational tests for the cosmological principle are reviewed. Assuming the FRW metric we then summarize estimates of cosmological parameters from various datasets, in particular the cosmic microwave background and the 2dF galaxy redshift survey. These and other analyses suggest a best-fit Λ-cold dark matter model with Ω_m = 1 - Ω_Λ ∼ 0.3 and H_0 ∼ 70 km s^-1 Mpc^-1. It is remarkable that different measurements converge to this 'concordance model', although it remains to be seen if the two main components of this model, the dark matter and the dark energy, are real entities or just 'epicycles'. We point out some open questions related to this fashionable model

  2. Mathematical modelling a case studies approach

    CERN Document Server

    Illner, Reinhard; McCollum, Samantha; Roode, Thea van

    2004-01-01

    Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...

  3. Full scale turbine-missile casing exit tests

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Schamaun, J.T.; Sliter, G.E.

    1979-01-01

    Two full-scale tests have simulated the impact of a fragment from a failed turbine disk upon the steel casing of a low-pressure steam turbine with the objective of providing data for making more realistic assessments of turbine missile effects for nuclear power plant designers. Data were obtained on both the energy-absorbing mechanisms of the impact process and the post-impact trajectory of the fragment. (orig.)

  4. Preliminary Test for Constitutive Models of CAP

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drops) for the assessment of containment-specific phenomena, and features assessment capabilities in multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and is making significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; test results and future plans for improvement are discussed in this paper

  5. Preliminary Test for Constitutive Models of CAP

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. As a part of this project, the CAP (Containment Analysis Package) code has been under development for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drops) for the assessment of containment-specific phenomena, and features assessment capabilities in multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and is making significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, were reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; test results and future plans for improvement are discussed in this paper

  6. DWPF PCCS version 2.0 test case

    International Nuclear Information System (INIS)

    Brown, K.G.; Pickett, M.A.

    1992-01-01

    To verify the operation of the Product Composition Control System (PCCS), a test case specific to DWPF operation was developed. The values and parameters necessary to demonstrate proper DWPF product composition control have been determined and are presented in this paper. If this control information (i.e., for transfers and analyses) is entered into the PCCS as illustrated in this paper, and the results obtained correspond to the independently-generated results, it can safely be said that the PCCS is operating correctly and can thus be used to control the DWPF. The independent results for this test case will be generated and enumerated in a future report. This test case was constructed along the lines of normal DWPF operation. Many essential parameters are internal to the PCCS (e.g., property constraint and variance information) and can only be manipulated by personnel knowledgeable of the Symbolics® hardware and software. The validity of these parameters will rely on induction from observed PCCS results. Key process control values are entered into the PCCS as they would be during normal operation. Examples of the screens used to input specific process control information are provided. These inputs should be entered into the PCCS database, and the results generated should be checked against the independent, computed results to confirm the validity of the PCCS

  7. A Human Proximity Operations System test case validation approach

    Science.gov (United States)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity that is introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human, across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates (based on this) to the suitability of using test cases for AI validation in other areas of prospective application.

  8. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  9. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords : goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  10. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists in a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
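
The fast/slow detector pairing described above can be sketched in software terms: a fast detector trips on any sample above a hard limit, while a slower moving-average detector decides whether the overload is a decaying transient (test may continue) or sustained aeroelastic loading (stop). The thresholds, window length, and return labels below are illustrative assumptions, not values from the analog circuit in the paper.

```python
# Dual-detector sketch for sting overload protection. "stop" means a
# sustained overload (e.g., aeroelastic divergence), "transient" means
# an overload occurred but decayed, "continue" means no overload at all.
# All thresholds are hypothetical placeholders.
from collections import deque

def classify(samples, hard_limit=100.0, avg_limit=60.0, window=8):
    recent = deque(maxlen=window)
    tripped = False
    for s in samples:
        recent.append(abs(s))
        if abs(s) > hard_limit:
            tripped = True                    # fast detector fires
        if tripped and len(recent) == window:
            if sum(recent) / window > avg_limit:
                return "stop"                 # slow detector: sustained
    return "continue" if not tripped else "transient"
```

A decaying startup transient trips the fast detector but never pushes the slow moving average over its limit, so the run is allowed to continue; a sustained load trips both.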

  11. Designing healthy communities: Testing the walkability model

    OpenAIRE

    Zuniga-Teran, Adriana; Orr, Barron; Gimblett, Randy; Chalfoun, Nader; Marsh, Stuart; Guertin, David; Going, Scott

    2017-01-01

    Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environme...

  12. Inverse hydrochemical models of aqueous extracts tests

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, L.; Samper, J.; Montenegro, L.

    2008-10-10

    Aqueous extract test is a laboratory technique commonly used to measure the amount of soluble salts of a soil sample after adding a known mass of distilled water. Measured aqueous extract data have to be re-interpreted in order to infer the porewater chemical composition of the sample because porewater chemistry changes significantly due to dilution and chemical reactions which take place during extraction. Here we present an inverse hydrochemical model to estimate porewater chemical composition from measured water content, aqueous extract, and mineralogical data. The model accounts for acid-base, redox, aqueous complexation, mineral dissolution/precipitation, gas dissolution/exsolution, cation exchange and surface complexation reactions, all of which are assumed to take place at local equilibrium. It has been solved with INVERSE-CORE2D and tested with bentonite samples taken from the FEBEX (Full-scale Engineered Barrier EXperiment) in situ test. The inverse model reproduces most of the measured aqueous data except bicarbonate and provides an effective, flexible and comprehensive method to estimate the porewater chemical composition of clays. Main uncertainties are related to kinetic calcite dissolution and variations in CO2(g) pressure.
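
For a conservative (non-reactive) species such as chloride, the dilution correction mentioned above reduces to a simple mass balance; it is only for reactive species (the carbonate system, exchangeable cations) that the full inverse geochemical model is needed. A minimal sketch of that dilution back-calculation, assuming 1 L of water weighs 1 kg:

```python
# Dilution back-calculation for a conservative solute: all of the salt
# measured in the extract is assumed to come from the in-situ porewater,
# so mass balance gives the porewater concentration. Reactive species
# violate this assumption, which is why the paper's inverse model exists.
def porewater_conc(c_extract, m_water_added, m_sample, w):
    """Porewater concentration (mol/L) of a conservative solute.

    c_extract     : measured extract concentration (mol/L)
    m_water_added : mass of distilled water added (kg)
    m_sample      : dry mass of the soil sample (kg)
    w             : gravimetric water content (kg water / kg dry soil)
    """
    m_porewater = w * m_sample                  # kg of in-situ water
    # solute mass conserved: c_pore * V_pore = c_extract * (V_pore + V_added)
    return c_extract * (m_porewater + m_water_added) / m_porewater
```

Because the added water typically dwarfs the porewater mass, small errors in the measured water content translate into large errors in the inferred porewater concentration, one reason inverse modelling with uncertainty estimates is preferred.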

  13. Concept Test of a Smoking Cessation Smart Case.

    Science.gov (United States)

    Comello, Maria Leonora G; Porter, Jeannette H

    2018-04-05

    Wearable/portable devices that unobtrusively detect smoking and contextual data offer the potential to provide Just-In-Time Adaptive Intervention (JITAI) support for mobile cessation programs. Little has been reported on the development of these technologies. To address this gap, we offer a case report of users' experiences with a prototype "smart" cigarette case that automatically tracks time and location of smoking. Small-scale user-experience studies are typical of iterative product design and are especially helpful when proposing novel ideas. The purpose of the study was to assess concept acceptability and potential for further development. We tested the prototype case with a small sample of potential users (n = 7). Participants used the hardware/software for 2 weeks and reconvened for a 90-min focus group to discuss experiences and provide feedback. Participants liked the smart case in principle but found the prototype too bulky for easy portability. The potential for the case to convey positive messages about self also emerged as a finding. Participants indicated willingness to pay for improved technology (USD $15-$60 on a one-time basis). The smart case is a viable concept, but design detail is critical to user acceptance. Future research should examine designs that maximize convenience and that explore the device's ability to cue intentions and other cognitions that would support cessation. This study is the first to our knowledge to report formative research on the smart case concept. This initial exploration provides insights that may be helpful to other developers of JITAI-support technology.

  14. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard......Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed
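
For the 2-AFC protocol the Thurstonian decision rule gives pc = Φ(d'/√2), so d' = √2 · Φ⁻¹(pc), and a delta-method standard error follows. The sketch below shows only this 2-AFC link as a generic illustration; the triangle, duo-trio and 3-AFC psychometric functions differ, and this is not the paper's GLM formulation.

```python
# Thurstonian sketch for the 2-AFC discrimination test: convert the
# observed proportion correct into d' and a delta-method standard error.
# Only the 2-AFC link pc = Phi(d'/sqrt(2)) is implemented here.
import math
from statistics import NormalDist

def dprime_2afc(correct, total):
    nd = NormalDist()
    pc = correct / total
    z = nd.inv_cdf(pc)                 # Phi^-1(pc)
    d = math.sqrt(2.0) * z
    # delta method: se(d') = sqrt(2) * se(pc) / phi(z)
    se = math.sqrt(2.0 * pc * (1.0 - pc) / total) / nd.pdf(z)
    return d, se

d, se = dprime_2afc(75, 100)
print(f"d' = {d:.3f} +/- {se:.3f}")
```

Fitting the same data as a generalized linear model with the appropriate inverse link, as the paper proposes, yields this d' and its standard error directly as regression output.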

  15. Blast Testing and Modelling of Composite Structures

    DEFF Research Database (Denmark)

    Giversen, Søren

    The motivation for this work is based on a desire for finding light weight alternatives to high strength steel as the material to use for armouring in military vehicles. With the use of high strength steel, an increase in the level of armouring has a significant impact on the vehicle weight......, affecting for example the manoeuvrability and top speed negatively, which ultimately affects the safety of the personnel in the vehicle. Strong and light materials, such as fibre reinforced composites, could therefore act as substitutes for the high strength steel, and minimize the impact on the vehicle...... work this set-up should be improved such that the modelled pressure can be validated. For tests performed with a 250g charge load comparisons with model data showed poor agreement. This was found to be due to improper design of the modelled laminate panels, where the layer interface delamination

  16. BIOMOVS test scenario model comparison using BIOPATH

    International Nuclear Information System (INIS)

    Grogan, H.A.; Van Dorp, F.

    1986-07-01

    This report presents the results of the irrigation test scenario, presented in the BIOMOVS intercomparison study, calculated by the computer code BIOPATH. This scenario defines a constant release of Tc-99 and Np-237 into groundwater that is used for irrigation. The system of compartments used to model the biosphere is based upon an area in northern Switzerland and is essentially the same as that used in Projekt Gewaehr to assess the radiological impact of a high level waste repository. Two separate irrigation methods are considered, namely ditch and overhead irrigation. Their influence on the resultant activities calculated in the groundwater, soil and different food products, as a function of time, is evaluated. The sensitivity of the model to parameter variations is analysed, which allows a deeper understanding of the model chain. These results are assessed subjectively in a first effort to realistically quantify the uncertainty associated with each calculated activity. (author)
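
Biosphere codes of this type solve linear compartment models: each compartment's activity changes according to first-order transfer coefficients between compartments. A generic sketch of a three-compartment chain (groundwater, soil, crop) under a constant release, integrated with forward Euler, is shown below; the transfer coefficients are invented placeholders, not BIOPATH or Projekt Gewaehr parameter values.

```python
# Generic linear compartment-chain sketch (groundwater -> soil -> crop)
# under a constant source, of the kind BIOPATH solves. All rate
# constants (1/yr) are illustrative placeholders.
def simulate(source_rate, k_gw_soil, k_soil_crop, k_soil_loss,
             years, dt=0.01):
    gw = soil = crop = 0.0
    for _ in range(int(years / dt)):
        d_gw = source_rate - k_gw_soil * gw
        d_soil = k_gw_soil * gw - (k_soil_crop + k_soil_loss) * soil
        d_crop = k_soil_crop * soil
        gw += d_gw * dt
        soil += d_soil * dt
        crop += d_crop * dt
    return gw, soil, crop

# Constant release, loosely analogous to the Tc-99/Np-237 scenario.
gw, soil, crop = simulate(1.0, 0.5, 0.05, 0.1, years=50)
print(f"gw = {gw:.3f}, soil = {soil:.3f}, crop = {crop:.3f}")
```

Sensitivity analysis of the kind described in the report amounts to perturbing the rate constants and observing how the compartment activities respond; groundwater and soil approach steady states while the crop compartment (with no loss term here) accumulates.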

  17. Thermal modelling of Advanced LIGO test masses

    International Nuclear Information System (INIS)

    Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B

    2017-01-01

    High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high reflectivity coatings the mirrors suffer from a variety of thermal effects that impact on the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater. (paper)

  18. Tests and comparisons of gravity models.

    Science.gov (United States)

    Marsh, J. G.; Douglas, B. C.

    1971-01-01

    Optical observations of the GEOS satellites were used to obtain orbital solutions with different sets of geopotential coefficients. The solutions were compared before and after modification to high order terms (necessary because of resonance) and were then analyzed by comparing subsequent observations with predicted trajectories. The most important source of error in orbit determination and prediction for the GEOS satellites is the effect of resonance found in most published sets of geopotential coefficients. Modifications to the sets yield greatly improved orbits in most cases. The results of these comparisons suggest that with the best optical tracking systems and gravity models, satellite position error due to gravity model uncertainty can reach 50-100 m during a heavily observed 5-6 day orbital arc. If resonant coefficients are estimated, the uncertainty is reduced considerably.

  19. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology for primary stars with brown dwarf binary companions). Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  20. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance end new features. Central....... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that include several novel steps such as techniques for analyzing the gap between requirements and tool capabilities. The method...... was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  1. Carbon Back Sputter Modeling for Hall Thruster Testing

    Science.gov (United States)

    Gilland, James H.; Williams, George J.; Burt, Jonathan M.; Yim, John T.

    2016-01-01

    In support of wear testing for the Hall Effect Rocket with Magnetic Shielding (HERMeS) program, the back sputter from a Hall effect thruster plume has been modeled for the NASA Glenn Research Center's Vacuum Facility 5. The predicted wear at a near-worst case condition of 600 V, 12.5 kW was found to be on the order of 3-4 microns/khour in a fully carbon-lined chamber. A more detailed numerical Monte Carlo code was also modified to estimate back sputter for a detailed facility and pumping configuration. This code demonstrated similar back sputter rate distributions, but is not yet accurately modeling the magnitudes. The modeling has been benchmarked to recent HERMeS wear testing, using multiple microbalance measurements. These recent measurements have yielded values on the order of 1.5-2 microns/khour.

  2. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  3. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. 
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  4. Adversarial life testing: A Bayesian negotiation model

    International Nuclear Information System (INIS)

    Rufo, M.J.; Martín, J.; Pérez, C.J.

    2014-01-01

    Life testing is a procedure intended for facilitating the process of making decisions in the context of industrial reliability. On the other hand, negotiation is a process of making joint decisions that has one of its main foundations in decision theory. A Bayesian sequential model of negotiation in the context of adversarial life testing is proposed. This model considers a general setting for which a manufacturer offers a product batch to a consumer. It is assumed that the reliability of the product is measured in terms of its lifetime. Furthermore, both the manufacturer and the consumer have to use their own information with respect to the quality of the product. Under these assumptions, two situations can be analyzed. For both of them, the main aim is to accept or reject the product batch based on the product reliability. This topic is related to a reliability demonstration problem. The procedure is applied to a class of distributions that belong to the exponential family. Thus, a unified framework addressing the main topics in the considered Bayesian model is presented. An illustrative example shows that the proposed technique can be easily applied in practice

  5. Experimental impact testing and analysis of composite fan cases

    Science.gov (United States)

    Vander Klok, Andrew Joe

    For aircraft engine certification, one of the requirements is to demonstrate the ability of the engine to withstand a fan blade-out (FBO) event. A FBO event may be caused by fatigue failure of the fan blade itself or by impact damage from foreign objects such as bird strike. An uncontained blade can damage flight-critical engine components or even the fuselage. The design of a containment structure is related to numerous parameters such as the blade tip speed; blade material, size and shape; hub/tip diameter; and fan case material, configuration, rigidity, etc. To investigate all parameters by spin experiments with a full-size rotor assembly can be prohibitively expensive. Gas gun experiments can generate useful data for the design of engine containment cases at much lower cost. To replicate damage modes similar to those on a fan case in FBO testing, the gas gun experiment has to be carefully designed. To investigate the experimental procedure and data acquisition techniques for FBO tests, a low-cost, small spin rig was first constructed. FBO tests were carried out with the small rig. The observed blade-to-fan-case interactions were similar to those reported using larger spin rigs. The small rig has potential in a variety of applications, from investigating FBO events and verifying concept designs of rotors to developing spin testing techniques. This rig was used in the development of the notched-blade releasing mechanism, a wire trigger method for synchronized data acquisition, high-speed video imaging, etc. A relationship between the notch depth and the release speed was developed and verified. Next, an original custom-designed spin testing facility was constructed. Driven by a 40 HP, 40,000 rpm air turbine, the spin rig is housed in a vacuum chamber of φ72 in x 40 in (1829 mm x 1016 mm). The heavily armored chamber is furnished with 9 viewports. This facility enables unprecedented investigations of FBO events. In parallel, a 15.4 ft (4.7 m) long φ4.1 inch (105 mm

  6. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions

  7. Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Sagonas

    2012-10-01

    With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.
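The WSDL-to-generator idea above can be miniaturized: map a type specification to a random-value generator, then check a user-supplied input-output property over many generated cases. Below is a minimal Python sketch under assumed names and a toy spec format, with a trivial local function standing in for a real web-service operation (PropEr itself is an Erlang tool and works differently in detail):

```python
import random

# Hypothetical miniature analogue of WSDL-driven test case generation:
# a type spec (format assumed here, not the paper's) is mapped to a
# random-value generator, and a property (input-output relation) is
# checked against many generated cases, as a property-based tool would.

def make_generator(spec):
    if spec == "int":
        return lambda rng: rng.randint(-1000, 1000)
    if spec == "string":
        return lambda rng: "".join(rng.choice("abc") for _ in range(rng.randint(0, 8)))
    if isinstance(spec, tuple):  # product type: one generator per field
        gens = [make_generator(s) for s in spec]
        return lambda rng: tuple(g(rng) for g in gens)
    raise ValueError(f"unknown spec: {spec}")

def check_property(spec, operation, prop, n_cases=200, seed=0):
    """Generate n_cases structurally valid inputs and check the property."""
    rng = random.Random(seed)
    gen = make_generator(spec)
    for _ in range(n_cases):
        args = gen(rng)
        if not prop(args, operation(*args)):
            return False, args  # counterexample found
    return True, None

# Toy "web service operation" and the property it should satisfy.
ok, counterexample = check_property(
    ("int", "int"),
    lambda a, b: a + b,
    lambda args, out: out == args[0] + args[1],
)
```

A real tool would derive `spec` from the WSDL and issue SOAP calls instead of invoking a lambda, but the generator/property split is the same.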

  8. Tests on thirteen navy type model propellers

    Science.gov (United States)

    Durand, W F

    1927-01-01

    The tests on these model propellers were undertaken for the purpose of determining the performance coefficients and characteristics for certain selected series of propellers of form and type as commonly used in recent navy designs. The first series includes seven propellers of pitch ratio varying by 0.10 to 1.10, the area, form of blade, thickness, etc., representing an arbitrary standard propeller which had shown good results. The second series covers changes in thickness of blade section, other things equal, and the third series, changes in blade area, other things equal. These models are all of 36-inch diameter. Propellers A to G form the series on pitch ratio; C, N, I, J the series on thickness of section; and K, M, C, L the series on area. (author)

  9. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  10. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  11. A prevalence-based association test for case-control studies.

    Science.gov (United States)

    Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M

    2008-11-01

    Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
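The core idea, expected genotype frequencies computed from a pooled (estimated population) allele frequency rather than from cases and controls separately, can be sketched as follows. This is an illustrative chi-square-style deviation statistic, not necessarily the exact PRAT statistic of Ryckman et al.:

```python
# Illustrative prevalence-based association statistic: genotype counts in
# cases and controls are each compared to Hardy-Weinberg expectations
# computed from the POOLED allele frequency (an assumption of this sketch,
# mirroring the abstract; the published test may differ in detail).

def hwe_expected(n, p):
    """Expected (AA, Aa, aa) counts for n individuals under HWE."""
    q = 1.0 - p
    return [n * p * p, 2 * n * p * q, n * q * q]

def prat_like_statistic(case_counts, control_counts):
    """case_counts/control_counts are (AA, Aa, aa) observed counts."""
    total = [c + t for c, t in zip(case_counts, control_counts)]
    n = sum(total)
    p = (2 * total[0] + total[1]) / (2 * n)  # pooled frequency of allele A
    stat = 0.0
    for counts in (case_counts, control_counts):
        m = sum(counts)
        for obs, exp in zip(counts, hwe_expected(m, p)):
            stat += (obs - exp) ** 2 / exp
    return stat

# No association: cases and controls have identical genotype distributions.
s_null = prat_like_statistic([30, 50, 20], [30, 50, 20])
# Strong association: genotype distributions differ sharply between groups.
s_assoc = prat_like_statistic([60, 30, 10], [10, 30, 60])
```

With identical groups the pooled expectation fits both well (small statistic); with opposite genotype distributions each group deviates strongly from the pooled HWE expectation.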

  12. Pescara benchmark: overview of modelling, testing and identification

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L

    2011-01-01

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Università e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental data base and subsequent data processing. Special tests were devised to simulate train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly non-linear systems were developed. The lab tests allowed validating the above approaches and the performance of classical modal-based damage indicators.

  13. The Linear Logistic Test Model (LLTM) as the methodological foundation of item generating rules for a new verbal reasoning test

    Directory of Open Access Journals (Sweden)

    HERBERT POINSTINGL

    2009-06-01

    Based on the demand for new verbal reasoning tests to enrich the psychological test inventory, a pilot version of a new test was analysed: the 'Family Relation Reasoning Test' (FRRT; Poinstingl, Kubinger, Skoda & Schechtner, forthcoming), in which several basic cognitive operations (logical rules) have been embedded/implemented. Given family relationships of varying complexity embedded in short stories, testees had to logically conclude the correct relationship between two individuals within a family. Using empirical data, the linear logistic test model (LLTM; Fischer, 1972), a special case of the Rasch model, was used to test the construct validity of the test: the hypothetically assumed basic cognitive operations had to explain the Rasch model's item difficulty parameters. After being shaped into the LLTM's matrix of weights ((q_ij)), none of these operations were corroborated by means of Andersen's Likelihood Ratio Test.
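The LLTM's central assumption is that each Rasch item difficulty is a weighted sum of basic-operation difficulties, β_i = Σ_j q_ij · η_j, with the weights q_ij collected in the matrix ((q_ij)). A sketch of that decomposition with made-up weights and parameters (the actual LLTM is fitted to response data by conditional maximum likelihood, not by least squares on known difficulties):

```python
import numpy as np

# LLTM structure: item difficulties beta are a linear combination of
# basic cognitive operation difficulties eta through the weight matrix Q.
# All numbers below are invented for illustration.

Q = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [1, 1, 1],
              [0, 0, 1]], dtype=float)   # 5 items x 3 cognitive operations

eta_true = np.array([0.5, -0.3, 1.2])    # assumed operation difficulties
beta = Q @ eta_true                      # implied Rasch item difficulties

# If the LLTM holds exactly, eta is recoverable from beta and Q:
eta_hat, *_ = np.linalg.lstsq(Q, beta, rcond=None)
```

Andersen's Likelihood Ratio Test, as used in the paper, instead compares the fit of the constrained LLTM against the unconstrained Rasch model; failure of the test means no such Q reproduces the estimated difficulties.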

  14. Test of the Bank Lending Channel: The Case of Poland

    Directory of Open Access Journals (Sweden)

    Yu HSING

    2013-11-01

    This paper tests the bank lending channel for Poland based on a simultaneous-equation model consisting of demand for and supply of bank loans. The three-stage least squares method is employed in the empirical work. This paper finds support for a bank lending channel in Poland. Expansionary monetary policy, through a lower money market rate or open market purchases of government bonds to increase bank reserves/deposits, would increase bank loan supply.

  15. BDA special care case mix model.

    Science.gov (United States)

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple to use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  16. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  17. Common Occupational Disability Tests and Case Law References: An Ontario MVA perspective on interpretation and best practice methodology supporting a holistic model, Part I of III (Pre-104 IRB).

    Science.gov (United States)

    Salmon, J Douglas; Gouws, Jacques J; Bachmann, Corina Anghel

    2016-05-01

    This three-part paper presents practical holistic models of determining impairment and occupational disability with respect to common "own occupation" and "any occupation" definitions. The models consider physical, emotional and cognitive impairments in unison, and draw upon case law support for empirically based functional assessment of secondary cognitive symptoms arising from psychological conditions, including chronic pain disorders. Case law is presented, primarily in the context of Ontario motor vehicle accident legislation, to demonstrate how triers of fact have addressed occupational disability in the context of chronic pain; and interpreted the "own occupation" and "any occupation" definitions. In interpreting the definitions of "own occupation" and "any occupation", courts have considered various concepts, such as: work as an integrated whole, competitive productivity, demonstrated job performance vs. employment, work adaptation relative to impairment stability, suitable work, retraining considerations, self-employment, and remuneration/socio-economic status. The first segment of the paper reviews the above concepts largely in the context of pre-104 Income Replacement Benefit (IRB) entitlement, while the second segment focuses on post-104 IRB entitlement. In the final segment, the paper presents a critical evaluation of computerized transferable skills analysis (TSAs) in the occupational disability context. By contrast, support is offered for the notion that (neuro) psychovocational assessments and situational work assessments should play a key role in "own occupation" disability determination, even where specific vocational rehabilitation/retraining recommendations are not requested by the referral source (e.g., insurer disability examination).

  18. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required the interpretation of a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  19. Transfer of drug dissolution testing by statistical approaches: Case study

    Science.gov (United States)

    AL-Kamarany, Mohammed Amood; EL Karbane, Miloud; Ridouan, Khadija; Alanazi, Fars K.; Hubert, Philippe; Cherrah, Yahia; Bouklouze, Abdelaziz

    2011-01-01

    The analytical transfer is a complete process that consists in transferring an analytical procedure from a sending laboratory to a receiving laboratory, after having experimentally demonstrated that the receiver also masters the procedure, in order to avoid problems in the future. Method transfers are now commonplace during the life cycle of an analytical method in the pharmaceutical industry. No official guideline exists for a transfer methodology in pharmaceutical analysis, and the regulatory wording on transfer is more ambiguous than that for validation. Therefore, in this study, gauge repeatability and reproducibility (R&R) studies, associated with other appropriate multivariate statistics, were successfully applied to the transfer of the dissolution test of diclofenac sodium, as a case study, from a sending laboratory A (accredited laboratory) to a receiving laboratory B. The HPLC method for the determination of the percent release of diclofenac sodium in solid pharmaceutical forms (one the original product and the other a generic) was validated using the accuracy profile (total error) in the sending laboratory A. The results showed that the receiving laboratory B masters the dissolution test process, using the same HPLC analytical procedure developed in laboratory A. In conclusion, if the sender used the total error to validate its analytical method, the dissolution test can be successfully transferred without the receiving laboratory B repeating the analytical method validation, and the pharmaceutical analysis method state should be maintained to ensure the same reliable results in the receiving laboratory. PMID:24109204
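The gauge R&R logic can be illustrated by splitting the measurement spread into a repeatability component (within-laboratory) and a reproducibility component (between-laboratory). A simplified sketch with hypothetical dissolution values, not the full ANOVA-based R&R study of the paper:

```python
import statistics

# Simplified gauge R&R-style decomposition for a method transfer:
# repeatability  = average within-laboratory variance (replicate scatter),
# reproducibility = variance of the laboratory means (lab-to-lab shift).
# Data below are invented % release values for illustration only.

def r_and_r(measurements_by_lab):
    lab_means = {lab: statistics.mean(v) for lab, v in measurements_by_lab.items()}
    repeatability_var = statistics.mean(
        statistics.variance(v) for v in measurements_by_lab.values())
    reproducibility_var = statistics.variance(lab_means.values())
    return repeatability_var, reproducibility_var

data = {  # hypothetical % release of diclofenac sodium, 4 replicates per lab
    "lab_A": [98.1, 97.9, 98.3, 98.0],
    "lab_B": [97.8, 98.2, 97.9, 98.1],
}
rep_var, repro_var = r_and_r(data)
```

A transfer is judged acceptable when the between-laboratory component is small relative to the replicate scatter and to the specification limits; the paper combines such components with further multivariate criteria.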

  20. Testing an integral conceptual model of frailty.

    Science.gov (United States)

    Gobbens, Robbert J; van Assen, Marcel A; Luijkx, Katrien G; Schols, Jos M

    2012-09-01

    This paper is a report of a study conducted to test three hypotheses derived from an integral conceptual model of frailty.   The integral model of frailty describes the pathway from life-course determinants to frailty to adverse outcomes. The model assumes that life-course determinants and the three domains of frailty (physical, psychological, social) affect adverse outcomes, the effect of disease(s) on adverse outcomes is mediated by frailty, and the effect of frailty on adverse outcomes depends on the life-course determinants. In June 2008 a questionnaire was sent to a sample of community-dwelling people, aged 75 years and older (n = 213). Life-course determinants and frailty were assessed using the Tilburg frailty indicator. Adverse outcomes were measured using the Groningen activity restriction scale, the WHOQOL-BREF and questions regarding healthcare utilization. The effect of seven self-reported chronic diseases was examined. Life-course determinants, chronic disease(s), and frailty together explain a moderate to large part of the variance of the seven continuous adverse outcomes (26-57%). All these predictors together explained a significant part of each of the five dichotomous adverse outcomes. The effect of chronic disease(s) on all 12 adverse outcomes was mediated at least partly by frailty. The effect of frailty domains on adverse outcomes did not depend on life-course determinants. Our finding that the adverse outcomes are differently and uniquely affected by the three domains of frailty (physical, psychological, social), and life-course determinants and disease(s), emphasizes the importance of an integral conceptual model of frailty. © 2011 Blackwell Publishing Ltd.

  1. Pion interferometric tests of transport models

    Energy Technology Data Exchange (ETDEWEB)

    Padula, S.S.; Gyulassy, M.; Gavin, S. (Lawrence Berkeley Lab., CA (USA). Nuclear Science Div.)

    1990-01-08

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.).

  2. Pion interferometric tests of transport models

    International Nuclear Information System (INIS)

    Padula, S.S.; Gyulassy, M.; Gavin, S.

    1990-01-01

    In hadronic reactions, the usual space-time interpretation of pion interferometry often breaks down due to strong correlations between spatial and momentum coordinates. We derive a general interferometry formula based on the Wigner density formalism that allows for arbitrary phase space and multiparticle correlations. Correction terms due to intermediate state pion cascading are derived using semiclassical hadronic transport theory. Finite wave packets are used to reveal the sensitivity of pion interference effects on the details of the production dynamics. The covariant generalization of the formula is shown to be equivalent to the formula derived via an alternate current ensemble formalism for minimal wave packets and reduces in the nonrelativistic limit to a formula derived by Pratt. The final expression is ideally suited for pion interferometric tests of Monte Carlo transport models. Examples involving gaussian and inside-outside phase space distributions are considered. (orig.)

  3. Experimental Tests of the Algebraic Cluster Model

    Science.gov (United States)

    Gai, Moshe

    2018-02-01

    The Algebraic Cluster Model (ACM) of Bijker and Iachello, proposed already in 2000, has recently been applied to 12C and 16O with much success. We review the current status in 12C, with the outstanding observation of the ground state rotational band composed of the spin-parity states 0+, 2+, 3-, 4± and 5-. The observation of the 4± parity doublet is a characteristic of a (tri-atomic) molecular configuration in which the three alpha-particles are arranged in an equilateral triangular configuration of a symmetric spinning top. We discuss future measurements with electron scattering, 12C(e,e'), to test the predicted B(Eλ) of the ACM.

  4. 30 CFR 250.520 - When do I have to perform a casing diagnostic test?

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false When do I have to perform a casing diagnostic... Operations Casing Pressure Management § 250.520 When do I have to perform a casing diagnostic test? (a) You must perform a casing diagnostic test within 30 days after first observing or imposing casing pressure...

  5. Test Automation Process Improvement A case study of BroadSoft

    OpenAIRE

    Gummadi, Jalendar

    2016-01-01

    This master's thesis investigates the improvement of the test automation process at BroadSoft Finland as a case study. A test automation project was recently started at BroadSoft, but the project is not properly integrated into the existing process. The project concerns converting manual test cases into automated test cases. The aim of this thesis is to study the existing BroadSoft test process and different test automation frameworks. In this thesis different test automation processes are studied ...

  6. Two Bayesian tests of the GLOMOsys Model.

    Science.gov (United States)

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
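    As a hedged illustration of the kind of model comparison reported above, a Bayes factor BF01 (evidence for the null) can be approximated from the BIC difference of two fitted nested models. The log-likelihoods and sample size below are invented for illustration; they are not the study's data, and the authors' Bayes factors were not necessarily computed this way.

```python
import math

def bic(log_lik, k, n):
    """Bayesian Information Criterion for a fitted model with k parameters."""
    return k * math.log(n) - 2.0 * log_lik

def bf01_from_bic(bic_null, bic_alt):
    """Approximate Bayes factor BF01 (evidence for the null) from two BICs."""
    return math.exp((bic_alt - bic_null) / 2.0)

# Hypothetical fit results (illustrative only):
n = 100
bic0 = bic(-140.2, k=1, n=n)   # null model: no priming effect
bic1 = bic(-139.9, k=2, n=n)   # alternative model: priming effect
print(round(bf01_from_bic(bic0, bic1), 2))  # → 7.41
```

    With these made-up numbers the extra parameter barely improves the fit, so the BIC penalty tips the evidence toward the null, mirroring the logic of the replication result above.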

  7. Experimental tests of the standard model

    International Nuclear Information System (INIS)

    Nodulman, L.

    1998-01-01

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes, including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the "Universal Fermi Interaction" description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered

  8. Computerized Classification Testing with the Rasch Model

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
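    The SPRT-based classification described above can be sketched as a generic Wald sequential test under the Rasch model. The cut points, error rates, and items below are illustrative assumptions, not the authors' exact procedure:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, items, theta0, theta1, alpha=0.05, beta=0.05):
    """Wald's SPRT: classify an examinee as below theta0 (non-master)
    or above theta1 (master), item by item."""
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr = 0.0  # running log-likelihood ratio of theta1 vs. theta0
    for x, b in zip(responses, items):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "non-master"
    return "undecided"  # test length exhausted without a decision

print(sprt_classify([1] * 10, [0.0] * 10, theta0=-0.5, theta1=0.5))  # → master
```

    Because the decision boundaries depend only on alpha and beta, the test stops as soon as the accumulated evidence is strong enough, which is why such CCTs can be shorter than estimation-based CATs.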

  9. Mathematical modelling with case studies using Maple and Matlab

    CERN Document Server

    Barnes, B

    2014-01-01

    Introduction to Mathematical ModelingMathematical models An overview of the book Some modeling approaches Modeling for decision makingCompartmental Models Introduction Exponential decay and radioactivity Case study: detecting art forgeries Case study: Pacific rats colonize New Zealand Lake pollution models Case study: Lake Burley Griffin Drug assimilation into the blood Case study: dull, dizzy, or dead? Cascades of compartments First-order linear DEs Equilibrium points and stability Case study: money, money, money makes the world go aroundModels of Single PopulationsExponential growth Density-

  10. Pumping tests in nonuniform aquifers - The radially symmetric case

    Science.gov (United States)

    Butler, J.J.

    1988-01-01

    Traditionally, pumping-test-analysis methodology has been limited to applications involving aquifers whose properties are assumed uniform in space. This work attempts to assess the applicability of analytical methodology to a broader class of units with spatially varying properties. An examination of flow behavior in a simple configuration consisting of pumping from the center of a circular disk embedded in a matrix of differing properties is the basis for this investigation. A solution describing flow in this configuration is obtained through Laplace-transform techniques using analytical and numerical inversion schemes. Approaches for the calculation of flow properties in conditions that can be roughly represented by this simple configuration are proposed. Possible applications include a wide variety of geologic structures, as well as the case of a well skin resulting from drilling or development. Of more importance than the specifics of these techniques for analysis of water-level responses is the insight into flow behavior during a pumping test that is provided by the large-time form of the derived solution. The solution reveals that drawdown during a pumping test can be considered to consist of two components that are dependent and independent of near-well properties, respectively. Such an interpretation of pumping-test drawdown allows some general conclusions to be drawn concerning the relationship between parameters calculated using analytical approaches based on curve-matching and those calculated using approaches based on the slope of a semilog straight line plot. The infinite-series truncation that underlies the semilog analytical approaches is shown to remove further contributions of near-well material to total drawdown. In addition, the semilog distance-drawdown approach is shown to yield an expression that is equivalent to the Thiem equation. These results allow some general recommendations to be made concerning observation-well placement for pumping
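    For reference, the Thiem equation mentioned above is conventionally written (standard steady-state form, not transcribed from this paper) as

```latex
s_1 - s_2 = \frac{Q}{2\pi T}\,\ln\!\left(\frac{r_2}{r_1}\right),
```

    where Q is the pumping rate, T the transmissivity, and s_i the drawdown at radial distance r_i from the pumping well.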

  11. Interactive modelling with stakeholders in two cases in flood management

    Science.gov (United States)

    Leskens, Johannes; Brugnach, Marcela

    2013-04-01

    New policies on flood management, called Multi-Level Safety (MLS), demand an integral and collaborative approach. The goal of MLS is to minimize flood risks through a coherent package of protection measures, crisis management and flood resilience measures. To achieve this, various stakeholders, such as water boards, municipalities and provinces, have to collaborate in composing these measures. Besides the many advantages of this integral and collaborative approach, the decision-making environment also becomes more complex. Participants have to consider more criteria than they used to and have to take a wide network of participants into account, all with specific perspectives, cultures and preferences. In response, sophisticated models have been developed to support decision-makers in grasping this complexity. These models provide predictions of flood events and offer the opportunity to test the effectiveness of various measures under different criteria. Recent model advances in computation speed and model flexibility allow stakeholders to interact directly with a hydrological-hydraulic model during meetings. Besides a better understanding of the decision content, these interactive models are supposed to support the incorporation of stakeholder knowledge in modelling and to support mutual understanding of the different perspectives of stakeholders. To explore the support of interactive modelling in integral and collaborative policies such as MLS, we tested a prototype of an interactive flood model (3Di) against a conventional model (Sobek) in two cases. The two cases included the design of flood protection measures in Amsterdam and a flood event exercise in Delft. These case studies yielded two main results. First, we observed that in the exploration phase of a decision-making process, stakeholders participated actively in interactive modelling sessions.
This increased the technical understanding of complex problems and the insight into the effectiveness of various

  12. Inference and testing on the boundary in extended constant conditional correlation GARCH models

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    2017-01-01

    We consider inference and testing in extended constant conditional correlation GARCH models in the case where the true parameter vector is a boundary point of the parameter space. This is of particular importance when testing for volatility spillovers in the model. The large-sample properties...

  13. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.
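    A common flexible specification of this kind (a Hermite-series density with normality as a special case, in the spirit of Gallant and Nychka; the authors' exact parameterization may differ) is

```latex
f(\varepsilon) \;\propto\; \left(1 + \sum_{k=1}^{K} \gamma_k H_k(\varepsilon)\right)^{\!2} \phi(\varepsilon),
```

    where φ is the standard normal density and H_k are Hermite polynomials; setting all γ_k = 0 recovers normality, so the normality test amounts to testing γ = 0.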

  14. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  15. Economic contract theory tests models of mutualism.

    Science.gov (United States)

    Weyl, E Glen; Frederickson, Megan E; Yu, Douglas W; Pierce, Naomi E

    2010-09-07

    Although mutualisms are common in all ecological communities and have played key roles in the diversification of life, our current understanding of the evolution of cooperation applies mostly to social behavior within a species. A central question is whether mutualisms persist because hosts have evolved costly punishment of cheaters. Here, we use the economic theory of employment contracts to formulate and distinguish between two mechanisms that have been proposed to prevent cheating in host-symbiont mutualisms, partner fidelity feedback (PFF) and host sanctions (HS). Under PFF, positive feedback between host fitness and symbiont fitness is sufficient to prevent cheating; in contrast, HS posits the necessity of costly punishment to maintain mutualism. A coevolutionary model of mutualism finds that HS are unlikely to evolve de novo, and published data on legume-rhizobia and yucca-moth mutualisms are consistent with PFF and not with HS. Thus, in systems considered to be textbook cases of HS, we find poor support for the theory that hosts have evolved to punish cheating symbionts; instead, we show that even horizontally transmitted mutualisms can be stabilized via PFF. PFF theory may place previously underappreciated constraints on the evolution of mutualism and explain why punishment is far from ubiquitous in nature.

  16. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  17. Hall Thruster Thermal Modeling and Test Data Correlation

    Science.gov (United States)

    Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren

    2016-01-01

    The life of Hall effect thrusters is primarily limited by plasma erosion and thermal-related failures. NASA Glenn Research Center (GRC), in cooperation with the Jet Propulsion Laboratory (JPL), has recently completed development of a Hall thruster with specific emphasis on mitigating these limitations. Extending the operational life of Hall thrusters makes them more suitable for some of NASA's longer-duration interplanetary missions. This paper documents the thermal model development, refinement and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high-temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and, to a lesser extent, power/heat deposition. Additionally, a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long-term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.

  18. Rupture tests with reactor pressure vessel head models

    International Nuclear Information System (INIS)

    Talja, H.; Keinaenen, H.; Hosio, E.; Pankakoski, P.H.; Rahka, K.

    2003-01-01

    In the LISSAC project (LImit Strains in Severe ACcidents), partly funded by the EC Nuclear Fission and Safety Programme within the 5th Framework programme, an extensive experimental and computational research programme is conducted to study the stress-state and size dependence of ultimate failure strains. The results are aimed especially at making the assessment of severe accident cases more realistic. For the experiments in the LISSAC project, a block of material from the German Biblis C reactor pressure vessel was available. As part of the project, eight reactor pressure vessel head models made of this material (22 NiMoCr 3 7) were tested up to rupture at VTT. The specimens were provided by Forschungszentrum Karlsruhe (FzK). These tests were performed under quasistatic pressure load at room temperature. Two specimen sizes were tested, and in half of the tests the specimens contained holes representing the control rod penetrations of an actual reactor pressure vessel head. These specimens were equipped with an aluminium liner. All six tests with the smaller specimen size were conducted successfully. In the test with the large specimen with holes, the behaviour of the aluminium liner material proved to differ from that of the smaller ones. As a consequence, the experiment ended at the failure of the liner. The specimen without holes yielded results that were in very good agreement with those from the small specimens. (author)

  19. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  20. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  1. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    International Nuclear Information System (INIS)

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases

  2. Sensitivity analysis for near-surface disposal in argillaceous media using NAMMU-HYDROCOIN Level 3-Test case 1

    International Nuclear Information System (INIS)

    Miller, D.R.; Paige, R.W.

    1988-07-01

    HYDROCOIN is an international project for comparing groundwater flow models and modelling strategies. Level 3 of the project concerns the application of groundwater flow models to repository performance assessment with emphasis on the treatment of sensitivity and uncertainty in models and data. Level 3, test case 1 concerns sensitivity analysis of the groundwater flow around a radioactive waste repository situated in a near surface argillaceous formation. Work on this test case has been carried out by Harwell and will be reported in full in the near future. This report presents the results obtained using the computer program NAMMU. (author)

  3. Real-Time Extended Interface Automata for Software Testing Cases Generation

    Directory of Open Access Journals (Sweden)

    Shunkun Yang

    2014-01-01

    Testing and verification of the interface between software components are particularly important due to the large number of complex interactions, which requires the traditional modeling languages to overcome the existing shortcomings in the aspects of temporal information description and software testing input controlling. This paper presents the real-time extended interface automata (RTEIA) which adds clearer and more detailed temporal information description by the application of time words. We also establish the input interface automaton for every input in order to solve the problems of input controlling and interface covering nimbly when applied in the software testing field. Detailed definitions of the RTEIA and the testing cases generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system.

  4. Real-Time Extended Interface Automata for Software Testing Cases Generation

    Science.gov (United States)

    Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin

    2014-01-01

    Testing and verification of the interface between software components are particularly important due to the large number of complex interactions, which requires the traditional modeling languages to overcome the existing shortcomings in the aspects of temporal information description and software testing input controlling. This paper presents the real-time extended interface automata (RTEIA) which adds clearer and more detailed temporal information description by the application of time words. We also establish the input interface automaton for every input in order to solve the problems of input controlling and interface covering nimbly when applied in the software testing field. Detailed definitions of the RTEIA and the testing cases generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080
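    To make the flavor of such automaton-based test generation concrete, here is a minimal, hypothetical sketch of a clock-guarded interface automaton in Python. It illustrates the general idea of timing constraints on interface actions only; the paper's RTEIA formalism (with time words and per-input automata) is considerably richer:

```python
# Illustrative clock-guarded interface automaton (hypothetical structure).
class TimedAutomaton:
    def __init__(self, initial, transitions):
        # transitions: (state, action) -> (guard(clock), reset_clock, next_state)
        self.state, self.clock, self.transitions = initial, 0.0, transitions

    def step(self, action, delay):
        """Advance the clock by `delay`, then try to fire `action`.
        Returns False if the timing guard rejects the step."""
        self.clock += delay
        guard, reset, nxt = self.transitions[(self.state, action)]
        if not guard(self.clock):
            return False  # timing constraint violated: invalid test step
        if reset:
            self.clock = 0.0
        self.state = nxt
        return True

# Toy model loosely inspired by the braking-system example: a 'press'
# must be followed by a 'release' within 2 time units.
ta = TimedAutomaton("idle", {
    ("idle", "press"):      (lambda c: True,     True,  "braking"),
    ("braking", "release"): (lambda c: c <= 2.0, False, "idle"),
})
```

    A test-case generator over such a model would enumerate action/delay sequences and keep those that `step` accepts, which is the covering problem the abstract alludes to.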

  5. Comparison of model propeller tests with airfoil theory

    Science.gov (United States)

    Durand, William F; Lesley, E P

    1925-01-01

    The purpose of the investigation covered by this report was the examination of the degree of approach which may be anticipated between laboratory tests on model airplane propellers and results computed by the airfoil theory, based on tests of airfoils representative of successive blade sections. It is known that the corrections of angles of attack and for aspect ratio, speed, and interference rest either on experimental data or on somewhat uncertain theoretical assumptions. The general situation as regards these four sets of corrections is far from satisfactory, and while it is recognized that occasion exists for the consideration of such corrections, their determination in any given case is a matter of considerable uncertainty. There exists at the present time no theory generally accepted and sufficiently comprehensive to indicate the amount of such corrections, and the application to individual cases of the experimental data available is, at best, uncertain. While the results of this first phase of the investigation are less positive than had been hoped might be the case, the establishment of the general degree of approach between the two sets of results which might be anticipated on the basis of this simpler mode of application seems to have been desirable.

  6. Modeling of environmentally significant interfaces: Two case studies

    International Nuclear Information System (INIS)

    Williford, R.E.

    2006-01-01

    When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of various physical or chemical phenomena that contribute to the overall behavior, thereby enabling the scientist to control and manipulate these phenomena, and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems

  7. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  8. Standard Model theory calculations and experimental tests

    International Nuclear Information System (INIS)

    Cacciari, M.; Hamel de Monchenault, G.

    2015-01-01

    To present knowledge, all the physics at the Large Hadron Collider (LHC) can be described in the framework of the Standard Model (SM) of particle physics. Indeed the newly discovered Higgs boson with a mass close to 125 GeV seems to confirm the predictions of the SM. Thus, besides looking for direct manifestations of the physics beyond the SM, one of the primary missions of the LHC is to perform ever more stringent tests of the SM. This requires not only improved theoretical developments to produce testable predictions and provide experiments with reliable event generators, but also sophisticated analyses techniques to overcome the formidable experimental environment of the LHC and perform precision measurements. In the first section, we describe the state of the art of the theoretical tools and event generators that are used to provide predictions for the production cross sections of the processes of interest. In section 2, inclusive cross section measurements with jets, leptons and vector bosons are presented. Examples of differential cross sections, charge asymmetries and the study of lepton pairs are proposed in section 3. Finally, in section 4, we report studies on the multiple production of gauge bosons and constraints on anomalous gauge couplings

  9. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1990-01-01

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programing model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information
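    The underlying idea of weakly parallel test assembly, matching the information totals of two forms, can be illustrated with a toy greedy heuristic. The paper itself uses mathematical programming and a full test information function; the scalar item information values below are invented:

```python
# Toy greedy sketch: assign items to two test forms so their (scalar)
# information totals match as closely as possible.
def assemble_parallel(item_info):
    forms = {0: [], 1: []}
    totals = [0.0, 0.0]
    # Place high-information items first, always into the weaker form.
    for idx, info in sorted(enumerate(item_info), key=lambda t: -t[1]):
        target = 0 if totals[0] <= totals[1] else 1
        forms[target].append(idx)
        totals[target] += info
    return forms, totals

forms, totals = assemble_parallel([0.9, 0.8, 0.7, 0.6, 0.5, 0.5])
```

    A proper mathematical programming formulation would instead impose the prespecified information function as a constraint at several ability points and maximize a secondary objective, but the greedy pass conveys why balancing item information yields (weakly) parallel forms.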

  10. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the value of these. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
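    The simplest of the listed techniques, the Pearson correlation coefficient (CC), can be sketched in a few lines. The toy model below is invented for illustration, and, as the abstract notes, such linear measures can mislead for non-linear models:

```python
import random, math

def pearson(xs, ys):
    """Pearson product moment correlation coefficient (CC)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Monte Carlo sample of a toy linear model y = 5*x1 + x2:
# the CC should rank x1 as far more influential than x2.
random.seed(1)
x1 = [random.gauss(0, 1) for _ in range(2000)]
x2 = [random.gauss(0, 1) for _ in range(2000)]
y = [5 * a + b for a, b in zip(x1, x2)]
cc1, cc2 = pearson(x1, y), pearson(x2, y)
```

    Rank-based (RCC), partial (PCC), and variance-based (Sobol', FAST/EFAST) measures in the list above address the non-linear and non-monotone cases where this linear coefficient breaks down.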

  11. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the value of these. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several

  12. Modal Identification of a Time-Invariant 6-Storey Model Test RC-Frame from Free Decay Tests using Multi-Variate Models

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Nielsen, Søren R. K.; Kirkegaard, Poul Henning

    1997-01-01

    The scope of the paper is to apply multi-variate time-domain models for identification of eigenfrequencies and mode shapes of a time-invariant model test Reinforced Concrete (RC) frame from measured decays. The frequencies and mode shapes of interest are the two lowest ones, since they are normally ... in the comparison. The data investigated are sampled from a laboratory model of a plane 6-storey, 2-bay RC-frame. The laboratory model is excited at the top storey, where two different types of excitation were considered. In the first case the structure was excited in the first mode and in the second case ...

  14. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (getting the code to run on a computer), calibration (identifying the parameters), and evaluation (confronting

  15. Thermohydraulic tests in nuclear fuel model

    International Nuclear Information System (INIS)

    Ladeira, L.C.D.; Navarro, M.A.

    1984-01-01

    The main experimental works performed in the Thermohydraulics Laboratory of the NUCLEBRAS Nuclear Technology Development Center, in the field of thermofluid dynamics, are briefly described. These works include steady-state flow tests in single-tube test sections, and the design and construction of a rod bundle test section, which will also be used for that kind of test. Mention is made of the work to be performed in the near future, related to steady-state and transient flow tests. (Author) [pt

  16. 30 CFR 250.522 - When do I have to repeat casing diagnostic testing?

    Science.gov (United States)

    2010-07-01

    Operations Casing Pressure Management § 250.522 When do I have to repeat casing diagnostic testing? Casing diagnostic testing must be repeated according to the following table: When * * * you must repeat diagnostic...

  17. The shadow continuum : testing the records continuum model through the Djogdja Documenten and the migrated archives

    NARCIS (Netherlands)

    Karabinos, Michael Joseph

    2015-01-01

    This dissertation tests the universal suitability of the records continuum model by using two cases from the decolonization of Southeast Asia. The continuum model is a new model of records visualization invented in the 1990s that sees records as free to move throughout four ‘dimensions’ rather than

  18. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    Science.gov (United States)

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of several of our fault-based test case selection strategies. The generated test cases are considered fault-based because they aim at the detection of particular faults. For example, when the Boolean expression is in…
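
    The principle behind fault-based test case generation from Boolean expressions can be sketched generically (this is an illustration of the idea, not BEAT's actual selection strategies): a test case detects a hypothesized fault, here an operator reference fault mutating `and` into `or`, exactly when the original expression and the faulty mutant evaluate differently.

    ```python
    from itertools import product

    # Original specification and a hypothetical mutant with an
    # operator reference fault ('and' replaced by 'or')
    spec = lambda a, b, c: (a and b) or c
    mutant = lambda a, b, c: (a or b) or c

    # Fault-based test cases: the truth assignments that distinguish
    # the specification from the mutant, i.e. that detect the fault
    tests = [bits for bits in product([False, True], repeat=3)
             if spec(*bits) != mutant(*bits)]
    ```

    Here only the assignments with exactly one of `a`, `b` true and `c` false distinguish the two expressions, so a test suite must contain at least one of them to detect this particular fault.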

  19. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  20. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper a method for testing rate-control algorithms by use of a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.

  1. Air injection test on a Kaplan turbine: prototype - model comparison

    Science.gov (United States)

    Angulo, M.; Rivetti, A.; Díaz, L.; Liscia, S.

    2016-11-01

    Air injection is a well-known resource for reducing pressure pulsation magnitude in turbines, especially of the Francis type. In the case of large Kaplan designs, even if not so usual, it could be a solution to mitigate the vibrations that arise when the tip vortex cavitation phenomenon becomes erosive and induces structural vibrations. In order to study this alternative, aeration tests were performed on a Kaplan turbine at model and prototype scales. The research focused on the efficiency of different injected air flow rates in reducing vibrations, especially at the draft tube and the discharge ring, and also on the magnitude of the efficiency drop. It was found that results at both scales present the same trend, in particular for vibration levels at the discharge ring. The efficiency drop was overestimated in the model tests, while on the prototype it was less than 0.2 % for all power outputs. On the prototype, air has a beneficial effect in reducing pressure fluctuations up to an air flow rate of 0.2 ‰. On the model, high-speed image processing helped to quantify the volume of tip vortex cavitation, which is strongly correlated with the vibration level. The hydrophone measurements did not capture the cavitation intensity when air was injected; however, on the prototype it was detected by a sonometer installed at the draft tube access gallery.

  2. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  3. Optimization models for flight test scheduling

    Science.gov (United States)

    Holian, Derreck

    As threats around the world increase, with nations developing new generations of warfare technology, the United States is keen on maintaining its position at the top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process, intended to provide a rapid Initial Operating Capability (IOC) release of the 5th-generation fighter. For this reason, many factors go into determining what is to be tested, in what order, and at which time, due to military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in determining what testing can be achieved on an aircraft at a point in time. Furthermore, it defines the optimum allocation of test points to aircraft and determines a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes.
Apart from the data correlation algorithms, the optimization associated

  4. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software, as well as tools and implementation approaches that should address those challenges.

  5. FUMEX cases 1, 2, and 3 calculated pre-test and post-test results

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, S; Vitkova, M; Passage, G; Manolova, M; Simeonova, V [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika; Scheglov, A; Proselkov, V [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation); Kharalampieva, Ts [Kombinat Atomna Energetika, Kozloduj (Bulgaria)

    1994-12-31

    Two versions (modified pre-test and modified post-test) of the PIN-micro code were used to analyse the fuel rod behaviour in three FUMEX experiments. The experience of applying the PIN-micro code, with its simple structure and old conception of steady-state operation, shows significant difficulties in treating complex processes like those in the FUMEX experiments. These difficulties were partially overcome through different model modifications and corrections based on special engineering estimations, and the results obtained as a whole do not seem unreasonable. The calculations were performed by a group from two Bulgarian institutions in collaboration with specialists from the Kurchatov Research Center. 1 tab., 14 figs., 8 refs.

  6. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
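
    The test-first rhythm described above can be made concrete with a minimal example (the function, its Magnus-formula coefficients, and the expected values are illustrative assumptions, and Python's built-in unittest stands in for a framework like pFUnit): in TDD the test cases below would be written first and would fail until the function is implemented.

    ```python
    import math
    import unittest

    def saturation_vapor_pressure(t_celsius):
        """Saturation vapour pressure in hPa via the common Magnus
        approximation (illustrative scientific routine)."""
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    class TestSaturationVaporPressure(unittest.TestCase):
        def test_value_at_freezing_point(self):
            # exp(0) == 1, so the formula returns exactly 6.112 hPa at 0 degC
            self.assertAlmostEqual(saturation_vapor_pressure(0.0), 6.112, places=6)

        def test_monotonically_increasing(self):
            # physical sanity check: warmer air holds more vapour
            self.assertLess(saturation_vapor_pressure(10.0),
                            saturation_vapor_pressure(20.0))
    ```

    Running `python -m unittest` against this module executes the suite; in the TDD cycle one would add the next failing test before extending the routine further.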

  7. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    Full Text Available This work presents the extension of a physical model for the spreading of surface fire at landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire that allows converting the two-dimensional model of fire spread to three dimensions, taking into account spatial information. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate when most wildfires occur.

  8. Yield surface investigation of alloys during model disk spin tests

    Directory of Open Access Journals (Sweden)

    E. P. Kuzmin

    2014-01-01

    Full Text Available Gas-turbine engines operate under heavy, predominantly static loading conditions. Disks of a gas-turbine engine are highly loaded parts of irregular shape with intensive stress concentrators, wherein a 3D stress-strain state occurs. The loss of load-carrying capability or burst of a disk can lead to a severe accident or disaster. Therefore, developing methods to assess deformations and to predict burst is one of the most important problems. Strength assessment approaches are used at all levels of engine creation. In recent years, due to actively developing numerical methods, particularly FEA, it became possible to investigate the load-carrying capability of irregular-shape disks and to use 3D computing schemes including flow theory and different options of force and deformation failure criteria. In spite of wide progress and practical use of strength assessment approaches, there is a lack of detailed research data on the yield surface of disk alloys. The main purpose of this work is to validate the use of the basic hypotheses of flow theory and to investigate the yield surface of disk alloys during disk spin tests. The results of quasi-static numerical simulation of spin tests of a model disk made from a high-temperature forged alloy are presented. To determine the stress-strain state of the disk during loading, finite element analysis is used. Simulation of elastic-plastic strain fields was carried out using the incremental theory of plasticity with isotropic hardening. The hardening function was taken from the results of specimen tensile tests. Specimens were cut from a sinkhead of the model disk. The paper investigates the model sensitivity to the von Mises and Tresca yield criteria as well as the Hosford model. To identify the material model parameters, eddy current sensors were used in the experimental approach to measure rim radial displacements during the load-unload cycles of the spin test. The results of calculations made using different material models were compared with the
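
    The yield criteria compared in the paper differ in how they reduce a 3D stress state to an equivalent stress. A generic sketch in terms of principal stresses (standard textbook definitions, not the paper's calibrated models; the Hosford criterion generalizes both via an exponent):

    ```python
    import math

    def von_mises(s1, s2, s3):
        """von Mises equivalent stress from principal stresses."""
        return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

    def tresca(s1, s2, s3):
        """Tresca equivalent stress: maximum principal stress difference."""
        return max(s1, s2, s3) - min(s1, s2, s3)

    # Uniaxial tension: both criteria give the applied stress (100 MPa here)
    print(von_mises(100.0, 0.0, 0.0))  # 100.0
    print(tresca(100.0, 0.0, 0.0))     # 100.0

    # Pure shear state (s, -s, 0): Tresca is the more conservative criterion,
    # giving a higher equivalent stress (2s) than von Mises (sqrt(3)*s)
    print(tresca(50.0, -50.0, 0.0), von_mises(50.0, -50.0, 0.0))
    ```

    The stress states where the two surfaces diverge most (shear-dominated ones) are exactly where spin-test data helps discriminate between candidate yield models.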

  9. Nuclear code case development of printed-circuit heat exchangers with thermal and mechanical performance testing

    Energy Technology Data Exchange (ETDEWEB)

    Aakre, Shaun R. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering; Jentz, Ian W. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering; Anderson, Mark H. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering

    2018-03-27

    The U.S. Department of Energy has agreed to fund a three-year integrated research project to close technical gaps involved with compact heat exchangers to be used in nuclear applications. This paper introduces the goals of the project, the research institutions, and industrial partners working in collaboration to develop a draft Boiler and Pressure Vessel Code Case for this technology. Heat exchanger testing, as well as non-destructive and destructive evaluation, will be performed by researchers across the country to understand the performance of compact heat exchangers. Testing will be performed using coolants and conditions proposed for Gen IV Reactor designs. Preliminary observations of the mechanical failure mechanisms of the heat exchangers using destructive and non-destructive methods is presented. Unit-cell finite element models assembled to help predict the mechanical behavior of these high-temperature components are discussed as well. Performance testing methodology is laid out in this paper along with preliminary modeling results, an introduction to x-ray and neutron inspection techniques, and results from a recent pressurization test of a printed-circuit heat exchanger. The operational and quality assurance knowledge gained from these models and validation tests will be useful to developers of supercritical CO2 systems, which commonly employ printed-circuit heat exchangers.

  10. A Case for Nebula Scale Mixing Between Non-Carbonaceous and Carbonaceous Chondrite Reservoirs: Testing the Grand Tack Model with Chromium Isotopic Composition of Almahata Sitta Stone 91A

    Science.gov (United States)

    Sanborn, M. E.; Yin, Q.-Z.; Goodrich, C. A.; Zolensky, M.; Fioretti, A. M.

    2017-01-01

    There is an increasing number of Cr-O-Ti isotope studies showing that solar system materials are divided into two main populations, one carbonaceous chondrite (CC)-like and the other non-carbonaceous (NC)-like, with minimal mixing attributed to a gap opened in the protoplanetary disk by Jupiter's formation. The Grand Tack model suggests there should be large-scale mixing between S- and C-type asteroids, an idea supported by our recent work on chondrule (Delta)17O-e54Cr isotope systematics. The Almahata Sitta (AhS) meteorite provides a unique opportunity to test the Grand Tack model. The meteorite fell to Earth in October 2008 and has been linked to the asteroid 2008 TC3, which was discovered just prior to the fall of the AhS stones. The AhS meteorite is composed of up to 700 individual pieces, approximately 140 of which have had some geochemical and/or petrologic studies. Almahata Sitta is an anomalous polymict ureilite with other meteorite components, including enstatite, ordinary, and carbonaceous chondrites, with an approximate abundance of 70% ureilites and 30% chondrites. This observation has led to the suggestion that 2008 TC3 was a loosely aggregated, rubble-pile-like asteroid with the non-ureilite clasts held within the rubble pile. Due to the loosely aggregated nature of AhS, the object disintegrated during atmospheric entry, resulting in the weakly held clasts falling predominantly as individual stones in the AhS collection area. However, recent work has identified one sample of AhS, sample 91A, which may represent two different lithologies coexisting within a single stone. The predominant lithology type in 91A appears to be that of a C2 chondrite based on mineralogy, but 91A also contains olivine, pyroxene, and albite that have ureilite-like compositions. Previous Cr isotope investigations of AhS stones are sparse, and the available data show nearly uniform isotopic composition similar to that of typical ureilites, with negative e54Cr values.

  11. HIV Testing among Canadian Tuberculosis Cases from 1997 to 1998

    Directory of Open Access Journals (Sweden)

    Tara Harris

    2006-01-01

    Full Text Available BACKGROUND: Recent evidence suggests a global rise in adult tuberculosis (TB cases associated with HIV/AIDS. The World Health Organization, the United States Centers for Disease Control and Prevention, and the Public Health Agency of Canada advocate universal screening of all TB cases for HIV. The contribution of HIV to the TB burden in Canada remains unclear.

  12. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
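
    What such a test measures can be illustrated with a classical score-type statistic for excess zeros under a fitted Poisson (the van den Broek-style statistic; this is a generic sketch on simulated data, not the authors' new approach): compare the observed zero count with the count expected under the Poisson fit, normalized by its variance, and refer the result to a chi-square(1) distribution.

    ```python
    import math
    import random

    def rpois(lam):
        """Poisson sample via Knuth's multiplication algorithm."""
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:
            k += 1
            p *= random.random()
        return k - 1

    def zero_inflation_score(y):
        """Score-type statistic for excess zeros under a Poisson fit;
        large values (> 3.84 at the 5% level) suggest zero inflation."""
        n = len(y)
        lam = sum(y) / n                  # Poisson MLE of the mean
        p0 = math.exp(-lam)               # P(Y = 0) under the fitted model
        n0 = sum(1 for v in y if v == 0)  # observed number of zeros
        denom = n * (p0 * (1 - p0) - lam * p0 ** 2)
        return (n0 - n * p0) ** 2 / denom

    random.seed(7)
    poisson_data = [rpois(2.0) for _ in range(5000)]                       # no inflation
    zip_data = [0 if random.random() < 0.3 else rpois(2.0) for _ in range(5000)]

    s_pois = zero_inflation_score(poisson_data)  # should be small
    s_zip = zero_inflation_score(zip_data)       # should greatly exceed 3.84
    ```

    Note that, like the method in the paper and unlike the Vuong test, this check never fits a zero-inflated model: only the Poisson fit is needed.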

  13. System Dynamic Modelling for a Balanced Scorecard: A Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen; Nielsen, Erland Hejn

    Purpose - The purpose of this research is to make an analytical model of the BSC foundation by using a dynamic simulation approach for a 'hypothetical case' model, based on only part of an actual case study of BSC. Design/methodology/approach - The model includes five perspectives and a number...

  14. Testing spatial heterogeneity with stock assessment models

    DEFF Research Database (Denmark)

    Jardim, Ernesto; Eero, Margit; Silva, Alexandra

    2018-01-01

    Considering that the biological components of a population can be partitioned into discrete spatial units, we extended this idea into a property of additivity of sub-populations and applied it to two case studies, North Sea cod (Gadus morhua) and Northeast Atlantic sardine (Sardina pilchardus). ... the better the diffusion process will be detected. On the other hand, it showed that weak to moderate diffusion processes are not easy to identify, and large differences between sub-population productivities may be confounded with weak diffusion processes. The application to North Sea cod and Atlantic sardine exemplified how much insight can be gained. In both cases the results obtained were sufficiently robust to support the regional analysis.

  15. Modelling of ultrasonic nondestructive testing of cracks in claddings

    International Nuclear Information System (INIS)

    Bostroem, Anders; Zagbai, Theo

    2006-05-01

    Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry. To develop and qualify the methods extensive experimental work with test blocks is usually required. This can be very time-consuming and costly and it also requires a good physical intuition of the situation. A reliable mathematical model of the testing situation can, therefore, be very valuable and cost-effective as it can reduce experimental work significantly. A good mathematical model enhances the physical intuition and is very useful for parametric studies, as a pedagogical tool, and for the qualification of procedures and personnel. The present project has been concerned with the modelling of defects in claddings. A cladding is a layer of material that is put on for corrosion protection, in the nuclear power industry this layer is often an austenitic steel that is welded onto the surface. The cladding is usually anisotropic and to some degree it is most likely also inhomogeneous, particularly in that the direction of the anisotropy is varying. This degree of inhomogeneity is unknown but probably not very pronounced so for modelling purposes it may be a valid assumption to take the cladding to be homogeneous. However, another important complicating factor with claddings is that the interface between the cladding and the base material is often corrugated. This corrugation can have large effects on the transmission of ultrasound through the interface and can thus greatly affect the detectability of defects in the cladding. In the present project the only type of defect that is considered is a planar crack that is situated inside the cladding. The investigations are, furthermore, limited to two dimensions, and the crack is then only a straight line. The crack can be arbitrarily oriented and situated, but it must not intersect the interface to the base material. 
The crack can be surface-breaking, and this is often the case of most practical interest, but it should then be

  17. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...
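
    Hydraulic model tests such as these are normally run under Froude similitude, which relates model and prototype quantities through the stated length scale (1:35 here). The conversion can be sketched as follows; the sample wave height and period are hypothetical, not values from the report:

    ```python
    import math

    LENGTH_SCALE = 35.0  # model scale 1:35, as stated in the report

    def to_prototype(h_model_m, t_model_s):
        """Convert model wave height (m) and period (s) to prototype scale
        under Froude similitude: lengths scale with L, times with sqrt(L)."""
        return h_model_m * LENGTH_SCALE, t_model_s * math.sqrt(LENGTH_SCALE)

    # Hypothetical model-scale sea state: H = 0.15 m, T = 1.5 s
    h_p, t_p = to_prototype(0.15, 1.5)
    print(h_p, t_p)  # prototype wave height ~5.25 m, period ~8.9 s
    ```

    Overtopping discharges measured in the flume scale with L^1.5 per unit width under the same similitude, which is why "unless otherwise specified" such reports state whether values are given at model or prototype scale.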

  18. Using a fuzzy comprehensive evaluation method to determine product usability: A test case.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitably vague judgments from the multiple stages of the product evaluation process. In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software package. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and the uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliability of the fuzzy approach with that of two typical conventional methods that combine metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully to combine summative usability testing data into an overall usability quality for the network software evaluated. The greater confidence interval widths of the conventional methods (equally weighted percentage averaging and weighted percentage averages) relative to the fuzzy method verified the strength of the fuzzy approach.
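
    The weighted-synthesis step at the core of fuzzy comprehensive evaluation can be sketched as follows (all weights and membership values are hypothetical, not taken from the study): AHP-derived criterion weights are combined with a fuzzy evaluation matrix via the weighted-average operator to yield membership degrees in each usability grade.

    ```python
    # AHP-style criterion weights (hypothetical):
    # effectiveness, efficiency, satisfaction
    weights = [0.5, 0.3, 0.2]

    # Fuzzy evaluation matrix (hypothetical): each row gives one criterion's
    # membership degrees in the usability grades [good, fair, poor]
    R = [
        [0.6, 0.3, 0.1],  # effectiveness
        [0.4, 0.4, 0.2],  # efficiency
        [0.5, 0.3, 0.2],  # satisfaction
    ]

    # Weighted-average synthesis operator M(*, +): b = w . R
    b = [sum(w * row[j] for w, row in zip(weights, R)) for j in range(3)]

    grades = ["good", "fair", "poor"]
    overall = grades[b.index(max(b))]  # defuzzify by maximum membership
    print(b, overall)
    ```

    Because the rows of R and the weights each sum to 1, the synthesized vector b is itself a valid membership distribution over the grades; the Monte Carlo step in the study perturbs the inputs to put confidence intervals around such outputs.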

  19. Safety assessment of near surface radioactive waste disposal facilities: Model intercomparison using simple hypothetical data (Test Case 1). First report of NSARS. Part of the co-ordinated research programme on the safety assessment of near surface radioactive waste disposal facilities (NSARS)

    International Nuclear Information System (INIS)

    1995-11-01

    In many countries near surface disposal is the preferred option for the comparatively large volumes of low and intermediate level wastes which arise during nuclear power plant operations and nuclear fuel reprocessing, and also for the wastes arising from radionuclide applications in hospitals and research establishments. Near surface disposal is also widely practised in the case of hazardous wastes from chemical industries. It is obviously necessary to show that waste disposal methods are safe and that both man and the environment will be adequately protected. Following a previous related Co-ordinated Research Programme (CRP) on ''Migration and Biological Transfer of Radionuclides from Shallow Land Burial'' during 1985 to 1989 (IAEA-TECDOC-579, Vienna, 1990), the issue of reliability of safety assessments was identified as an important topic for further support and development. A new CRP was formulated with the acronym NSARS (Near Surface Radioactive Waste Disposal Safety Assessment Reliability Study). This technical document is the first report from the CRP and contains the intercomparison of results of the first test exercise (Test Case 1) on modelling of potential radiation exposures as a result of near surface disposal. Test Case 1 is based on entirely hypothetical data and includes consideration of exposures due to leaching and as a result of human intrusion into the site. Refs, figs and tabs

  1. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... models. An application to exchange rate returns is included....

  2. Modelling of ultrasonic nondestructive testing in anisotropic materials - Rectangular crack

    International Nuclear Information System (INIS)

    Bostroem, A.

    2001-12-01

    Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry when searching for defects, in particular cracks. To develop and qualify testing procedures extensive experimental work on test blocks is usually required. This can take a lot of time and therefore be quite costly. A good mathematical model of the testing situation is therefore of great value as it can reduce the experimental work to a great extent. A good model can be very useful for parametric studies and as a pedagogical tool. A further use of a model is as a tool in the qualification of personnel. In anisotropic materials, e.g. austenitic welds, the propagation of ultrasound becomes much more complicated as compared to isotropic materials. Therefore, modelling is even more useful for anisotropic materials, and it in particular has a greater pedagogical value. The present project has been concerned with a further development of the anisotropic capabilities of the computer program UTDefect, which has so far only contained a strip-like crack as the single defect type for anisotropic materials. To be more specific, the scattering by a rectangular crack in an anisotropic component has been studied and the result is adapted to include transmitting and receiving ultrasonic probes. The component under study is assumed to be anisotropic with arbitrary anisotropy. On the other hand, it is assumed to be homogeneous, and this in particular excludes most welds, where it is seldom an adequate approximation to assume homogeneity. The anisotropy may be arbitrarily oriented and the same is true of the rectangular crack. The crack may also be located near a backside of the component. To solve the scattering problem for the crack an integral equation method is used. The probe model has been developed in an earlier project and to compute the signal response in the receiving probe an electromechanical reciprocity argument is employed. 
As a rectangle is a truly 3D scatterer the sizes of the

  3. Stoner–Wohlfarth model for the anisotropic case

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Marcos F. de, E-mail: mcampos@metal.eeimvr.uff.br [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil); Sampaio da Silva, Fernanda A. [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil); Perigo, Elio A. [Laboratory for the Physics of Advanced Materials, University of Luxembourg, L1511 Luxembourg (Luxembourg); Castro, José A. de [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil)

    2013-11-15

    The Stoner–Wohlfarth (SW) model was calculated for the anisotropic case, by assuming crystallographic texture distributions of Gaussian, Lorentzian and Cos{sup n} (alpha) form. All these distributions were tested, and both Gaussian and Cos{sup n} (alpha) give similar results for M{sub r}/M{sub s} above 0.8. However, the use of Cos{sup n} (alpha) makes it easier to find analytical expressions representing texture. The Lorentzian distribution is a suitable choice for poorly aligned magnets, or magnets with a high fraction of misaligned grains. It is discussed how to obtain the alignment degree M{sub r}/M{sub s} directly from two measurements of magnetic remanence, in the directions transverse and parallel to the alignment direction of the magnet. It is demonstrated that even well aligned magnets with M{sub r}/M{sub s}=0.96 present a coercive field of 60–70% of the anisotropy field, depending on the chosen distribution. The anisotropic SW model was used for discussing hysteresis squareness. As the crystallographic texture improves, the loop squareness also increases. - Highlights: • The Stoner–Wohlfarth model was calculated for the anisotropic case. • Different distribution functions for texture description were compared and discussed. • The Lorentzian distribution is adequate for poorly oriented magnets. • Determination of the alignment ratio M{sub r}/M{sub s} from 2 remanence measurements. • Prediction of the coercive field in Stoner–Wohlfarth aligned magnets.
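
For the Cos{sup n} (alpha) texture distribution discussed above, the alignment degree has a simple closed form under common assumptions (axially symmetric easy-axis distribution, non-interacting grains, remanence proportional to the average of cos(alpha) weighted by the sin(alpha) solid-angle factor): M{sub r}/M{sub s} = (n+1)/(n+2). A small numerical sketch checking that relation:

```python
import math

def remanence_ratio_cos_n(n, steps=200_000):
    """Mr/Ms for an axially symmetric cos^n(alpha) easy-axis distribution.

    Computes <cos(alpha)> over the hemisphere with the sin(alpha)
    solid-angle weight via the midpoint rule; analytically this
    equals (n + 1) / (n + 2).
    """
    num = 0.0
    den = 0.0
    d = (math.pi / 2) / steps
    for i in range(steps):
        a = (i + 0.5) * d                      # midpoint of each sub-interval
        w = math.cos(a) ** n * math.sin(a)     # distribution x solid angle
        num += math.cos(a) * w
        den += w
    return num / den
```

For example, n = 8 gives M{sub r}/M{sub s} = 0.9, and large n approaches the perfectly aligned limit.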

  4. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependant CATT. Two robust tests, MAX3 and the genetic model selection (GMS, were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data of 17 most significant SNPs reported in four genome-wide association studies.
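
The additive-model CATT mentioned above is simple enough to sketch directly. The following is a minimal stdlib-only illustration (not the Rassoc implementation), using the standard score statistic and its large-sample variance:

```python
import math

def cochran_armitage_trend(cases, controls, weights=(0, 1, 2)):
    """Cochran-Armitage trend test for a 2x3 case-control genotype table.

    `cases` and `controls` hold counts per genotype column; the default
    weights (0, 1, 2) correspond to the additive genetic model.
    Returns (Z, two-sided asymptotic p-value).
    """
    n_cols = [c + d for c, d in zip(cases, controls)]   # column totals
    N = sum(n_cols)
    R = sum(cases)                                      # total cases
    p = R / N
    sw_n = sum(w * n for w, n in zip(weights, n_cols))
    # Score statistic: observed weighted case count minus its H0 expectation
    U = sum(w * r for w, r in zip(weights, cases)) - p * sw_n
    var = p * (1 - p) * (
        sum(w * w * n for w, n in zip(weights, n_cols)) - sw_n ** 2 / N)
    z = U / math.sqrt(var)
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p_value
```

For a table with no trend, e.g. cases (10, 10, 10) against controls (10, 10, 10), Z is 0 and the p-value is 1.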

  5. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    Full Text Available In this study, some systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been included in order to determine the uncertainties. Experiments conducted within the framework of mathematical and physical rules for the solution of engineering problems, together with the associated measurements and calculations, involve uncertainty. To question the reliability of the obtained values, the existing uncertainties should be expressed as quantities. If the uncertainty of a measurement system is not known, its results do not carry a universal value. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. The resistance of a ship during the design phase cannot be determined precisely and reliably because of the uncertainty sources involved in determining the resistance value. This may make it difficult to meet the required specifications in later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 regulations related to uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.

  6. The Clinical and Economic Benefits of Co-Testing Versus Primary HPV Testing for Cervical Cancer Screening: A Modeling Analysis.

    Science.gov (United States)

    Felix, Juan C; Lacey, Michael J; Miller, Jeffrey D; Lenhart, Gregory M; Spitzer, Mark; Kulkarni, Rucha

    2016-06-01

    Consensus United States cervical cancer screening guidelines recommend use of combination Pap plus human papillomavirus (HPV) testing for women aged 30 to 65 years. An HPV test was approved by the Food and Drug Administration in 2014 for primary cervical cancer screening in women age 25 years and older. Here, we present the results of clinical-economic comparisons of Pap plus HPV mRNA testing including genotyping for HPV 16/18 (co-testing) versus DNA-based primary HPV testing with HPV 16/18 genotyping and reflex cytology (HPV primary) for cervical cancer screening. A health state transition (Markov) model with 1-year cycling was developed using epidemiologic, clinical, and economic data from healthcare databases and published literature. A hypothetical cohort of one million women receiving triennial cervical cancer screening was simulated from ages 30 to 70 years. Screening strategies compared HPV primary to co-testing. Outcomes included total and incremental differences in costs, invasive cervical cancer (ICC) cases, ICC deaths, number of colposcopies, and quality-adjusted life years for cost-effectiveness calculations. Comprehensive sensitivity analyses were performed. In a simulation cohort of one million 30-year-old women modeled up to age 70 years, the model predicted that screening with HPV primary testing instead of co-testing could lead to as many as 2,141 more ICC cases and 2,041 more ICC deaths. In the simulation, co-testing demonstrated a greater number of lifetime quality-adjusted life years (22,334) and yielded $39.0 million in savings compared with HPV primary, thereby conferring greater effectiveness at lower cost. Model results demonstrate that co-testing has the potential to provide improved clinical and economic outcomes when compared with HPV primary. While actual cost and outcome data are evaluated, these findings are relevant to U.S. healthcare payers and women's health policy advocates seeking cost-effective cervical cancer screening
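
A health state transition model with 1-year cycling, as described above, reduces to repeated multiplication of a cohort vector by a transition matrix. The sketch below uses invented transition probabilities and a deliberately tiny state space (the published model is far richer); it only illustrates the cycling mechanics:

```python
# States: well, precancerous lesion (cin), invasive cancer (icc), dead.
# Transition probabilities are purely illustrative, NOT the calibrated
# values used in the study.
P = {
    "well": {"well": 0.97, "cin": 0.02, "icc": 0.00, "dead": 0.01},
    "cin":  {"well": 0.10, "cin": 0.86, "icc": 0.03, "dead": 0.01},
    "icc":  {"well": 0.00, "cin": 0.00, "icc": 0.90, "dead": 0.10},
    "dead": {"well": 0.00, "cin": 0.00, "icc": 0.00, "dead": 1.00},
}

def run_cohort(start, cycles):
    """Advance a cohort through annual Markov cycles."""
    state = dict(start)
    for _ in range(cycles):
        nxt = {s: 0.0 for s in state}
        for s, count in state.items():
            for t, pr in P[s].items():
                nxt[t] += count * pr
        state = nxt
    return state

# One million 30-year-olds followed for 40 annual cycles (to age 70)
cohort = run_cohort({"well": 1_000_000.0, "cin": 0.0,
                     "icc": 0.0, "dead": 0.0}, 40)
```

Screening strategies are then compared by attaching costs and utilities to the state occupancies accumulated over the cycles.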

  7. Testing Pearl Model In Three European Sites

    Science.gov (United States)

    Bouraoui, F.; Bidoglio, G.

    The Plant Protection Product Directive (91/414/EEC) stresses the need for validated models to calculate predicted environmental concentrations. The use of models has become an unavoidable step before pesticide registration. In this context, the European Commission, and in particular DGVI, set up a FOrum for the Co-ordination of pesticide fate models and their USe (FOCUS). In a complementary effort, DG Research supported the APECOP project, one of whose objectives is the validation and improvement of existing pesticide fate models. The main topic of the research presented here is the validation of the PEARL model for different sites in Europe. The PEARL model, currently used in the Dutch pesticide registration procedure, was validated at three well-instrumented sites: Vredepeel (the Netherlands), Brimstone (UK), and Lanna (Sweden). A step-wise procedure was used for the validation of the PEARL model. First the water transport module was calibrated, and then the solute transport module, using tracer measurements while keeping the water transport parameters unchanged. The Vredepeel site is characterised by a sandy soil. Fourteen months of measurements were used for the calibration. Two pesticides were applied on the site: bentazone and ethoprophos. PEARL predictions were very satisfactory for both soil moisture content and pesticide concentration in the soil profile. The Brimstone site is characterised by a cracking clay soil. The calibration was conducted on a time series of 7 years of measurements. The validation consisted of comparing predictions and measurements of soil moisture at different soil depths, and of comparing the predicted and measured concentration of isoproturon in the drainage water. The results, although in good agreement with the measurements, highlighted the limitations of the model when preferential flow becomes a dominant process. PEARL did not reproduce the soil moisture profile well during summer months, and also under-predicted the arrival of

  8. Thermal-Chemical Model Of Subduction: Results And Tests

    Science.gov (United States)

    Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.

    2005-12-01

    Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin for mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.

  9. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  10. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  11. Experimental tests of proton spin models

    International Nuclear Information System (INIS)

    Ramsey, G.P.; Argonne National Lab., IL

    1989-01-01

    We have developed models for the spin-weighted quark and gluon distributions in a longitudinally polarized proton. The model parameters are determined from current algebra sum rules and polarized deep-inelastic scattering data. A number of different scenarios are presented for the fraction of spin carried by the constituent parton distributions. A possible long-range experimental program is suggested for measuring various hard scattering processes using polarized lepton and proton beams. With the knowledge gained from these experiments, we can begin to understand the parton contributions to the proton spin. 28 refs., 5 figs

  12. Idealized tropical cyclone simulations of intermediate complexity: A test case for AGCMs

    Directory of Open Access Journals (Sweden)

    Kevin Reed

    2012-04-01

    Full Text Available The paper introduces a moist, deterministic test case of intermediate complexity for Atmospheric General Circulation Models (AGCMs). We suggest pairing an AGCM dynamical core with simple physical parameterizations to test the evolution of a single, idealized, initially weak vortex into a tropical cyclone. The initial conditions are based on an initial vortex seed that is in gradient-wind and hydrostatic balance. The suggested ``simple-physics'' package consists of parameterizations of bulk aerodynamic surface fluxes for moisture, sensible heat and momentum, boundary layer diffusion, and large-scale condensation. Such a configuration includes the important driving mechanisms for tropical cyclones, and leads to a rapid intensification of the initial vortex over a forecast period of ten days. The simple-physics test paradigm is not limited to tropical cyclones, and can be universally applied to other flow fields. The physical parameterizations are described in detail to foster model intercomparisons. The characteristics of the intermediate-complexity test case are demonstrated with the help of four hydrostatic dynamical cores that are part of the Community Atmosphere Model version 5 (CAM 5) developed at the National Center for Atmospheric Research (NCAR). In particular, these are the Finite-Volume, Spectral Element, and spectral transform Eulerian and semi-Lagrangian dynamical cores that are coupled to the simple-physics suite. The simulations show that despite the simplicity of the physics forcings, the models develop the tropical cyclone at horizontal grid spacings of about 55 km and finer. The simple-physics simulations reveal essential differences in the storm's structure and strength due to the choice of the dynamical core. Similar differences are also seen in complex full-physics aqua-planet experiments with CAM 5 which serve as a motivator for this work. The results suggest that differences in complex full-physics simulations can be, at least
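
The initial vortex seed is specified in gradient-wind balance. As a generic illustration of that balance (not the paper's specific initialization profile), the balanced tangential wind follows from the radial momentum equation v^2/r + f v = (1/rho) dp/dr; all numbers below are hypothetical:

```python
import math

def gradient_wind(r, dpdr, rho=1.15, f=5e-5):
    """Gradient wind speed v satisfying v^2/r + f*v = (1/rho) * dp/dr.

    r:    radius from the storm center (m)
    dpdr: radial pressure gradient (Pa/m, positive outward for a low)
    rho:  air density (kg/m^3); f: Coriolis parameter (1/s)
    """
    # Positive root of the quadratic in v
    return -f * r / 2 + math.sqrt((f * r / 2) ** 2 + r * dpdr / rho)

# e.g. 0.02 Pa/m at r = 100 km yields a wind speed of roughly 39 m/s
v = gradient_wind(r=100_000.0, dpdr=0.02)
```

The cyclostrophic term v^2/r dominates near the core, the Coriolis term at large radius; the quadratic captures both regimes.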

  13. Reference materials and representative test materials: the nanotechnology case

    International Nuclear Information System (INIS)

    Roebben, G.; Rasmussen, K.; Kestens, V.; Linsinger, T. P. J.; Rauscher, H.; Emons, H.; Stamm, H.

    2013-01-01

    An increasing number of chemical, physical and biological tests are performed on manufactured nanomaterials for scientific and regulatory purposes. Existing test guidelines and measurement methods are not always directly applicable to or relevant for nanomaterials. Therefore, it is necessary to verify the use of the existing methods with nanomaterials, thereby identifying where modifications are needed, and where new methods need to be developed and validated. Efforts for verification, development and validation of methods as well as quality assurance of (routine) test results significantly benefit from the availability of suitable test and reference materials. This paper provides an overview of the existing types of reference materials and introduces a new class of test materials for which the term ‘representative test material’ is proposed. The three generic concepts of certified reference material, reference material (non-certified) and representative test material constitute a comprehensive system of benchmarks that can be used by all measurement and testing communities, regardless of their specific discipline. This paper illustrates this system with examples from the field of nanomaterials, including reference materials and representative test materials developed at the European Commission’s Joint Research Centre, in particular at the Institute for Reference Materials and Measurements (IRMM), and at the Institute for Health and Consumer Protection (IHCP).

  14. INTRAVAL Working group 2 summary report on Phase 2 analysis of the Finnsjoen test case

    International Nuclear Information System (INIS)

    Andersson, Peter; Winberg, A.

    1994-01-01

    A comprehensive series of tracer tests on a relatively large scale have been performed by SKB at Finnsjoen, Sweden, to increase understanding of the transport phenomena which govern migration of radionuclides in major fracture zones. The conducted experiments were subsequently selected as a test case in the international INTRAVAL Project, in part because the tests at Finnsjoen lend themselves directly to addressing the validation of geosphere models. This report summarizes the study of the Finnsjoen test case within INTRAVAL Phase 2, which has involved nine project teams from seven countries. Porous media approaches in two dimensions dominated, although some project teams utilized one-dimensional transport models, and even three-dimensional approaches on a larger scale. The dimensionality employed did not appear to be decisive for the ability to reproduce the observed field responses. It was also demonstrated that stochastic approaches can be used in a validation process. Only four out of nine project teams studied more than one process. The general conclusion drawn is that flow and transport in the studied zone are governed by advection and that hydrodynamic dispersion is needed to explain the breakthrough curves. Matrix diffusion is assumed to have a small or negligible effect. The performed analysis is dominated by numerical approaches applied on scales of the order of 1000 m. Taking scale alone into account, the results of most teams are comparable. A variety of validation aspects have been considered. Five teams utilized a model calibrated on one test to predict another, whereas the two teams utilizing stochastic continuum approaches addressed 1) the validity of extrapolating a model calibrated on one transport scale to a larger scale, and 2) the performance assessment implications of the choice of the underlying distribution model for hydraulic conductivity, respectively. 37 refs

  15. Economic Crisis and Marital Problems in Turkey: Testing the Family Stress Model

    Science.gov (United States)

    Aytac, Isik A.; Rankin, Bruce H.

    2009-01-01

    This paper applied the family stress model to the case of Turkey in the wake of the 2001 economic crisis. Using structural equation modeling and a nationally representative urban sample of 711 married women and 490 married men, we tested whether economic hardship and the associated family economic strain on families resulted in greater marital…

  16. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson...

  17. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    . Furthermore, Gurson-Tvergaard-Needleman model predictions from a top-down approach are employed to gain insight into the mechanisms governing crack initiation and subsequent propagation in small punch experiments. An accurate assessment of micromechanical toughness parameters from the SPT

  18. Testing structural stability in macroeconometric models

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Hashimzade, N.; Thornton, M.A.

    2013-01-01

    Since the earliest days of macroeconometric analysis, researchers have been concerned about the appropriateness of the assumption that model parameters remain constant over long periods of time; for example see Tinbergen (1939). This concern is also central to the so-called Lucas (1976) critique

  19. Model Testing - Bringing the Ocean into the Laboratory

    DEFF Research Database (Denmark)

    Aage, Christian

    2000-01-01

    Hydrodynamic model testing, the principle of bringing the ocean into the laboratory to study the behaviour of the ocean itself and the response of man-made structures in the ocean in reduced scale, has been known for centuries. Due to an insufficient understanding of the physics involved, however, the early model tests often gave incomplete or directly misleading results. This keynote lecture deals with some of the possibilities and problems within the field of hydrodynamic and hydraulic model testing.
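
Translating reduced-scale model tests of this kind to full scale conventionally relies on Froude scaling: time and velocity scale with the square root of the geometric scale factor, forces with its cube (times a fresh-to-seawater density correction). A minimal sketch with hypothetical numbers:

```python
import math

def froude_scale(scale, model_wave_period=None, model_force=None,
                 rho_ratio=1.025):
    """Convert model-test measurements to prototype values under Froude
    scaling, with `scale` = L_prototype / L_model.

    Periods scale with sqrt(scale); forces scale with
    rho_ratio * scale**3, where rho_ratio corrects fresh-water
    basin density to seawater.
    """
    out = {}
    if model_wave_period is not None:
        out["wave_period"] = model_wave_period * math.sqrt(scale)
    if model_force is not None:
        out["force"] = model_force * rho_ratio * scale ** 3
    return out

# A 1:50 model: a 1.4 s model wave corresponds to ~9.9 s at full scale
proto = froude_scale(50, model_wave_period=1.4, model_force=2.0)
```

Froude scaling preserves the gravity-to-inertia force ratio but not the Reynolds number, which is one source of the scale effects the lecture alludes to.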

  20. Stress-testing the Standard Model at the LHC

    CERN Document Server

    2016-01-01

    With the high-energy run of the LHC now underway, and clear manifestations of beyond-Standard-Model physics not yet seen in data from the previous run, the search for new physics at the LHC may be a quest for small deviations with big consequences. If clear signals are present, precise predictions and measurements will again be crucial for extracting the maximum information from the data, as in the case of the Higgs boson. Precision will therefore remain a key theme for particle physics research in the coming years. The conference will provide a forum for experimentalists and theorists to identify the challenges and refine the tools for high-precision tests of the Standard Model and searches for signals of new physics at Run II of the LHC. Topics to be discussed include: pinning down Standard Model corrections to key LHC processes; combining fixed-order QCD calculations with all-order resummations and parton showers; new developments in jet physics concerning jet substructure, associated jets and boosted je...

  1. Evaluation of Test-Driven Development : An Industrial Case Study

    NARCIS (Netherlands)

    Wasmus, H.; Gross, H.G.

    2007-01-01

    Test-driven development is a novel software development practice and part of the Extreme Programming paradigm. It is based on the principle that tests should be designed and written for a module iteratively, while the code of the module is devised. This is the opposite of what is usual in current

  2. Building Energy Simulation Test for Existing Homes (BESTEST-EX); Phase 1 Test Procedure: Building Thermal Fabric Cases

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron [National Renewable Energy Lab. (NREL), Golden, CO (United States; Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States; Bianchi, Marcus [National Renewable Energy Lab. (NREL), Golden, CO (United States; Neymark, Joel [J. Neymark & Associates, Golden, CO (United States)

    2010-08-01

    This report documents the initial Phase 1 test process for testing the reliability of software models that predict retrofit energy savings of existing homes, including their associated calibration methods.

  3. Testing the EKC hypothesis by considering trade openness, urbanization, and financial development: the case of Turkey.

    Science.gov (United States)

    Ozatac, Nesrin; Gokmenoglu, Korhan K; Taspinar, Nigar

    2017-07-01

    This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationships among the variables and to avoid omitted variable bias. The results of the bounds test and the error correction model under the autoregressive distributed lag mechanism suggest long-run relationships among the variables as well as evidence of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings have policy implications, including the imposition of a "polluter pays" mechanism such as a carbon tax or pollution trading, raising the urban population's awareness of the importance of adopting renewable energy, and supporting clean, environmentally friendly technology.
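
The EKC's inverted-U shape is typically checked via the sign of a quadratic income term in a reduced-form regression. The noise-free sketch below uses fabricated numbers and plain least squares, deliberately ignoring the ARDL/bounds-testing machinery the study actually applies, purely to show how the turning point falls out of the fitted coefficients:

```python
import numpy as np

# Hypothetical data: log emissions peak at log per-capita income x* = 9
x = np.linspace(6, 12, 61)          # log per-capita income
y = -0.5 * (x - 9.0) ** 2 + 4.0     # inverted-U emissions path, noise-free

# Fit y = b2*x^2 + b1*x + b0; an EKC requires b2 < 0
b2, b1, b0 = np.polyfit(x, y, 2)
turning_point = -b1 / (2 * b2)      # income level where emissions peak
```

In applied work the same turning-point formula is evaluated on the long-run ARDL coefficients, and its position relative to the sample's income range determines whether the economy has passed the peak.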

  4. Cavitating Propeller Performance in Inclined Shaft Conditions with OpenFOAM: PPTC 2015 Test Case

    Science.gov (United States)

    Gaggero, Stefano; Villa, Diego

    2018-05-01

    In this paper, we present our analysis of the non-cavitating and cavitating unsteady performances of the Potsdam Propeller Test Case (PPTC) in oblique flow. For our calculations, we used the Reynolds-averaged Navier-Stokes equation (RANSE) solver from the open-source OpenFOAM libraries. We selected the homogeneous mixture approach to solve for multiphase flow with phase change, using the volume of fluid (VoF) approach to solve the multiphase flow and modeling the mass transfer between vapor and water with the Schnerr-Sauer model. Comparing the model results with the experimental measurements collected during the Second Workshop on Cavitation and Propeller Performance - SMP'15 enabled our assessment of the reliability of the open-source calculations. Comparisons with the numerical data collected during the workshop enabled further analysis of the reliability of different flow solvers from which we produced an overview of recommended guidelines (mesh arrangements and solver setups) for accurate numerical prediction even in off-design conditions. Lastly, we propose a number of calculations using the boundary element method developed at the University of Genoa for assessing the reliability of this dated but still widely adopted approach for design and optimization in the preliminary stages of very demanding test cases.

  5. Business model dynamics: a case survey

    OpenAIRE

    de Reuver, Mark; Bouwman, Harry; Maclnnes, Ian

    2009-01-01

    In the turbulent world of e-commerce, companies can only survive by continuously reinventing their business models. However, because most studies look at business models as snapshots in time, there is little insight into how changing market-related, technological and regulatory conditions generally drive revisions in business models. In this paper, we examine which types of external drivers are strongest in forcing business models to change throughout their life cycle. To do so, we study 45 l...

  6. Model tests in RAMONA and NEPTUN

    International Nuclear Information System (INIS)

    Hoffmann, H.; Ehrhard, P.; Weinberg, D.; Carteciano, L.; Dres, K.; Frey, H.H.; Hayafune, H.; Hoelle, C.; Marten, K.; Rust, K.; Thomauske, K.

    1995-01-01

    In order to demonstrate passive decay heat removal (DHR) in an LMR such as the European Fast Reactor, the RAMONA and NEPTUN facilities, with water as a coolant medium, were used to measure transient flow data corresponding to a transition from forced convection (under normal operation) to natural convection under DHR conditions. The facilities were 1:20 and 1:5 models, respectively, of a pool-type reactor including the IHXs, pumps, and immersed coolers. Important results: The decay heat can be removed from all parts of the primary system by natural convection, even if the primary fluid circulation through the IHX is interrupted. This result could be transferred to liquid metal cooling by experiments in models with thermohydraulic similarity. (orig.)

  7. A magnetorheological actuation system: test and model

    International Nuclear Information System (INIS)

    John, Shaju; Chaudhuri, Anirban; Wereley, Norman M

    2008-01-01

    Self-contained actuation systems, based on frequency rectification of the high frequency motion of an active material, can produce high force and stroke output. Magnetorheological (MR) fluids are active fluids whose rheological properties can be altered by the application of a magnetic field. By using MR fluids as the energy transmission medium in such hybrid devices, a valving system with no moving parts can be implemented and used to control the motion of an output cylinder shaft. The MR fluid-based valves are configured in the form of an H-bridge to produce bi-directional motion in an output cylinder by alternately applying magnetic fields in the two opposite arms of the bridge. The rheological properties of the MR fluid are modeled using both Bingham plastic and bi-viscous models. In this study, the primary actuation is performed using a compact Terfenol-D rod-driven pump, and frequency rectification of the rod motion is achieved using passive reed valves. The pump and reed valve configuration, along with the MR fluidic valves, forms a compact hydraulic actuation system. Actuator design, analysis and experimental results are presented in this paper. A time domain model of the actuator is developed and validated using experimental data.
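
    The two constitutive laws used for the MR fluid differ only in how they treat the pre-yield regime. A minimal sketch of both (parameter values in the test are invented for illustration):

```python
# Bingham plastic vs. bi-viscous shear-stress models for an MR fluid.
# tau_y: field-dependent yield stress [Pa]; mu: post-yield viscosity [Pa s];
# mu_pre: steep pre-yield viscosity [Pa s] used by the bi-viscous model.

def bingham(gamma_dot, tau_y, mu):
    """Bingham plastic: rigid below yield, yield stress plus Newtonian term above."""
    if gamma_dot == 0.0:
        return 0.0
    sign = 1.0 if gamma_dot > 0 else -1.0
    return tau_y * sign + mu * gamma_dot

def biviscous(gamma_dot, tau_y, mu, mu_pre):
    """Bi-viscous: linear pre-yield branch with viscosity mu_pre, joined
    continuously to the Bingham-like branch at the yield strain rate."""
    gd_y = tau_y / (mu_pre - mu)   # strain rate where the two branches meet
    if abs(gamma_dot) <= gd_y:
        return mu_pre * gamma_dot
    sign = 1.0 if gamma_dot > 0 else -1.0
    return tau_y * sign + mu * gamma_dot
```

    The bi-viscous form removes the Bingham model's stress discontinuity at zero strain rate, which is why it is often preferred for time-domain simulation of MR valves.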

  8. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
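
    The core of modeling test statistics directly can be sketched as a two-group mixture: a known null density, an assumed alternative, and a prior null probability give each statistic a posterior null probability (the local fdr), and rejecting the smallest ones while their running average stays below q controls the Bayesian FDR. This toy uses fixed, assumed component densities rather than the fitted models of the paper.

```python
# Two-group mixture sketch for z-statistics: N(0,1) null, assumed N(3,1)
# alternative, prior null probability p0. Bayesian FDR control rejects the
# statistics with the smallest posterior null probabilities.
import math

def norm_pdf(z, mean=0.0):
    return math.exp(-0.5 * (z - mean) ** 2) / math.sqrt(2.0 * math.pi)

def local_fdr(z, p0=0.9, alt_mean=3.0):
    """Posterior probability that z came from the null component."""
    null = p0 * norm_pdf(z)
    alt = (1.0 - p0) * norm_pdf(z, alt_mean)
    return null / (null + alt)

def bayes_fdr_reject(zs, q=0.10, p0=0.9):
    """Indices rejected while the mean posterior null probability stays <= q."""
    scored = sorted(enumerate(local_fdr(z, p0) for z in zs), key=lambda t: t[1])
    rejected, total = [], 0.0
    for idx, fdr in scored:
        if (total + fdr) / (len(rejected) + 1) > q:
            break
        rejected.append(idx)
        total += fdr
    return sorted(rejected)
```

    Extreme statistics get posterior null probabilities near zero and are rejected first; statistics near zero are retained, mirroring the paper's strategy of working with the statistics instead of the full data.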

  9. Cases as Shared Inquiry: A Dialogical Model of Teacher Preparation.

    Science.gov (United States)

    Harrington, Helen L.; Garrison, James W.

    1992-01-01

    A dialogical model is proposed for connecting theory to practice in teacher education by conceiving of cases from case-based pedagogy as problems that initiate shared inquiry. Cases with genuine cognitive and axiological content can initiate self-directed, student-centered inquiry while building democratic dialogical communities. (SLD)

  10. Analysis and model testing of Super Tiger Type B packaging in accident environments

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Romesberg, L.E.; May, R.A.; Joseph, B.J.

    1980-01-01

    Based on previous scale model test results with more rigid systems and the subsystem tests on drums, it is believed that the scaled models realistically replicate full scale system behavior. Future work will be performed to obtain improved stiffness data on the Type A containers. These data will be incorporated into the finite element model, and improved correlation with the test results is expected. Review of the scale model transport system test results indicated that the method of attachment of the Super Tiger to the trailer was the primary cause for detachment of the outer door during the one-eighth scale grade-crossing test. Although the container seal on the scale model of Super Tiger was not adequately modeled to provide a leak-tight seal, loss of the existing seal in a full scale test can be inferred from the results of the one-quarter scale model grade-crossing test. In each test, approximately two-thirds of the model drums were estimated to have deformed sufficiently to predict loss of drum head closure seal, with several partially losing their contents within the overpack. In no case were drums ejected from the overpack, nor was there evidence of material loss in excess of the amount assumed in the WIPP EIS from any of the Super Tiger models tested. 9 figures

  11. Integrating non-animal test information into an adaptive testing strategy - skin sensitization proof of concept case.

    Science.gov (United States)

    Jaworska, Joanna; Harol, Artsiom; Kern, Petra S; Gerberick, G Frank

    2011-01-01

    There is an urgent need to develop data integration and testing strategy frameworks allowing interpretation of results from animal alternative test batteries. To this end, we developed a Bayesian Network Integrated Testing Strategy (BN ITS) with the goal to estimate skin sensitization hazard as a test case of previously developed concepts (Jaworska et al., 2010). The BN ITS combines in silico, in chemico, and in vitro data related to skin penetration, peptide reactivity, and dendritic cell activation, and guides testing strategy by Value of Information (VoI). The approach offers novel insights into testing strategies: there is no one best testing strategy, but the optimal sequence of tests depends on information at hand, and is chemical-specific. Thus, a single generic set of tests as a replacement strategy is unlikely to be most effective. BN ITS offers the possibility of evaluating the impact of generating additional data on the target information uncertainty reduction before testing is commenced.
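
    The Value of Information idea can be illustrated on a toy two-node network: score each candidate assay by the expected reduction in the entropy of the hazard node from observing its outcome, and run the highest-scoring test next. The sensitivities and specificities below are invented for illustration, not taken from the published BN ITS.

```python
# Toy VoI ranking for a binary skin-sensitization hazard node H and binary
# tests characterized by (sensitivity, specificity). VoI = expected entropy
# reduction of H from observing the test result.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0.0)

def voi(prior, sens, spec):
    """Expected entropy reduction of H from a binary test."""
    p_pos = prior * sens + (1.0 - prior) * (1.0 - spec)
    post_pos = prior * sens / p_pos
    post_neg = prior * (1.0 - sens) / (1.0 - p_pos)
    h_after = (p_pos * entropy([post_pos, 1.0 - post_pos])
               + (1.0 - p_pos) * entropy([post_neg, 1.0 - post_neg]))
    return entropy([prior, 1.0 - prior]) - h_after

# With a 50/50 prior, the sharper assay carries more information
sharper_wins = voi(0.5, 0.9, 0.9) > voi(0.5, 0.6, 0.6)
```

    Because VoI depends on the current prior, the ranking changes as evidence accumulates, which is exactly why no single fixed test sequence is optimal for every chemical.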

  12. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Model criticism is an important stage of model building, and goodness-of-fit tests provide a set of tools for diagnostic checking of the fitted model. Several such tests are suggested in the literature. These tests use the autocorrelations or partial autocorrelations of the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure that is not yet explained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
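
    The building block of such portmanteau tests is the Ljung-Box statistic computed from residual autocorrelations; the mixed test adds an analogous sum over partial autocorrelations. A pure-Python sketch of the autocorrelation part:

```python
# Ljung-Box portmanteau statistic Q = n(n+2) * sum_{k=1..m} r_k^2 / (n-k)
# over the first m sample autocorrelations r_k of the model residuals.

def autocorr(res, k):
    """Lag-k sample autocorrelation of a residual series."""
    n = len(res)
    mean = sum(res) / n
    num = sum((res[t] - mean) * (res[t + k] - mean) for t in range(n - k))
    den = sum((x - mean) ** 2 for x in res)
    return num / den

def ljung_box(res, m):
    """Portmanteau statistic over the first m residual autocorrelations."""
    n = len(res)
    return n * (n + 2) * sum(autocorr(res, k) ** 2 / (n - k)
                             for k in range(1, m + 1))
```

    A value of Q that is large relative to a chi-squared distribution (with m minus the number of fitted ARMA parameters degrees of freedom) signals residual dependence the fitted model has not explained.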

  13. Diversity in case management modalities: the Summit model.

    Science.gov (United States)

    Peterson, G A; Drone, I D; Munetz, M R

    1997-06-01

    Though ubiquitous in community mental health agencies, case management suffers from a lack of consensus regarding its definition, essential components, and appropriate application. Meaningful comparisons of various case management models await such a consensus. Global assessments of case management must be replaced by empirical studies of specific interventions with respect to the needs of specific populations. The authors describe a highly differentiated and prescriptive system of case management involving the application of more than one model of service delivery. Such a diversified and targeted system offers an opportunity to study the technology of case management in a more meaningful manner.

  14. Upgraded Analytical Model of the Cylinder Test

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P. Clark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Lauderbach, Lisa [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Garza, Raul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Ferranti, Louis [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center; Vitello, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Energetic Materials Center

    2013-03-15

    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. Finally, the total detonation energy density was locked to the v = 7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.
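
    As a point of reference for the corrections described above, the original cylindrical-geometry Gurney relation gives the wall velocity from the Gurney energy and the metal-to-charge mass ratio. A minimal sketch (the numeric Gurney velocity in the usage line is an assumed, illustrative value, and none of the report's correction terms are included):

```python
# Cylindrical-charge Gurney relation: v = sqrt(2E) / sqrt(M/C + 1/2)
import math

def gurney_wall_velocity(sqrt_2e, m_over_c):
    """Terminal wall velocity of a metal cylinder driven by an explosive.

    sqrt_2e  -- Gurney velocity sqrt(2E) of the explosive, km/s
    m_over_c -- metal-to-charge mass ratio M/C
    """
    return sqrt_2e / math.sqrt(m_over_c + 0.5)

# Illustrative call with an assumed Gurney velocity of 2.44 km/s
v = gurney_wall_velocity(2.44, 1.0)   # km/s
```

    At M/C = 0.5 the expression reduces to v = sqrt(2E), a quick sanity check on the implementation.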

  16. EXPLORATION WELL TEST CASE HISTORY CONFIRMS IMPORTANCE OF DST

    Directory of Open Access Journals (Sweden)

    Dario Damjanić

    2009-12-01

    Drill stem testing of the exploration well consisted of two flow and two pressure build-up periods. Gas was obtained. A modified isochronal test was used when testing the well after completion. Besides gas, a small quantity of condensate and traces of oil and water were obtained. Both pressure build-up analyses showed that formation permeability is low. The DST pressure build-up analysis showed that wellbore damage is present. This was proved later, when an acid treatment was performed, by which the skin was removed and production increased significantly. Data obtained by well testing are very important for future productivity prediction and for determining the optimal well completion and surface facility construction (the paper is published in Croatian).

  17. Convective aggregation in idealised models and realistic equatorial cases

    Science.gov (United States)

    Holloway, Chris

    2015-04-01

    Idealised explicit convection simulations of the Met Office Unified Model are shown to exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen previously in other models in several recent studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapour (CWV) field. To investigate the relevance of this behaviour to the real world, these idealized simulations are compared with five 15-day cases of real organized convection in the tropics, including multiple simulations of each case testing sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. Despite similar large-scale forcing via lateral boundary conditions, systematic differences in mean CWV, CWV distribution shape, and the length scale of CWV features are found between the different sensitivity runs, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations.

  18. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of molten metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct. The domain of use of these coefficients was specified for a low-speed impact. (J.S.)
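
    The SPH discretization mentioned above can be illustrated by its most basic operation, the kernel density summation over neighbouring particles. A minimal 1D sketch with the standard cubic-spline kernel (my illustration, not the code used in the study):

```python
# SPH summation interpolation in 1D: rho_i = sum_j m_j W(x_i - x_j, h),
# using the standard cubic-spline kernel with support radius 2h.

def cubic_spline_w(r, h):
    """Cubic-spline kernel, 1D normalization sigma = 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """Density at every particle from the kernel-weighted neighbour sum."""
    return [sum(m_j * cubic_spline_w(x_i - x_j, h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]
```

    On a uniform particle lattice with mass = density x spacing, interior particles recover the reference density exactly, which is a standard consistency check; the sensitivity to the number of neighbours noted in the abstract corresponds to the choice of the smoothing length h.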

  19. Singularity analysis in nonlinear biomathematical models: Two case studies

    International Nuclear Information System (INIS)

    Meletlidou, E.; Leach, P.G.L.

    2007-01-01

    We investigate the possession of the Painleve Property for certain values of the parameters in two biological models. The first is a metapopulation model for two species (prey and predator) and the second one is a study of a sexually transmitted disease, into which 'education' is introduced. We determine the cases for which the systems possess the Painleve Property, in particular some of the cases for which the equations can be directly integrated. We draw conclusions for these cases

  20. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  1. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  2. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand

    2009-01-01

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates...

  3. A Bootstrap Cointegration Rank Test for Panels of VAR Models

    DEFF Research Database (Denmark)

    Callot, Laurent

    functions of the individual Cointegrated VARs (CVAR) models. A bootstrap based procedure is used to compute empirical distributions of the trace test statistics for these individual models. From these empirical distributions two panel trace test statistics are constructed. The satisfying small sample...

  4. Modelling Monthly Mental Sickness Cases Using Principal ...

    African Journals Online (AJOL)

    The methodology was principal component analysis (PCA), using data obtained from the hospital to estimate the regression coefficients and parameters. The principal component regression model that was derived was found to be a good predictive tool. The model obtained was adequate, and this ...

  5. Comparison of ORSAT and SCARAB Reentry Analysis Tools for a Generic Satellite Test Case

    Science.gov (United States)

    Kelley, Robert L.; Hill, Nicole M.; Rochelle, W. C.; Johnson, Nicholas L.; Lips, T.

    2010-01-01

    Reentry analysis is essential to understanding the consequences of the full life cycle of a spacecraft. Since reentry is a key factor in spacecraft development, NASA and ESA have separately developed tools to assess the survivability of objects during reentry. Criteria such as debris casualty area and impact energy are particularly important to understanding the risks posed to people on Earth. Therefore, NASA and ESA have undertaken a series of comparison studies of their respective reentry codes for verification and improvements in accuracy. The NASA Object Reentry Survival Analysis Tool (ORSAT) and the ESA Spacecraft Atmospheric Reentry and Aerothermal Breakup (SCARAB) reentry analysis tools serve as standard codes for reentry survivability assessment of satellites. These programs predict whether an object will demise during reentry and calculate the debris casualty area of objects determined to survive, establishing the reentry risk posed to the Earth's population by surviving debris. A series of test cases have been studied for comparison; the most recent uses "Testsat," a conceptual satellite composed of generic parts, defined to use numerous simple shapes and various materials for a better comparison of the predictions of these two codes. This study improves on the others in the series through increased consistency in modeling techniques and variables. The overall comparison demonstrated that the two codes arrive at similar results: for most objects modeled the two codes were in close agreement, and where the difference was significant it could be explained as a matter of semantics in the model definitions. This paper presents the main results of ORSAT and SCARAB for the Testsat case and discusses the sources of any discovered differences. The results of previous comparisons are also discussed, summarizing the differences between the codes and the lessons learned from this series of tests.
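
    The debris casualty area criterion mentioned above combines each surviving fragment's projected area with a human cross-section (0.36 m² per NASA-STD-8719.14). A minimal sketch of that bookkeeping, not of ORSAT or SCARAB themselves:

```python
# Debris casualty area: each surviving fragment of projected area A_i
# contributes (sqrt(A_h) + sqrt(A_i))^2, with A_h = 0.36 m^2 for a person.
import math

HUMAN_AREA = 0.36  # m^2, projected human cross-section from the standard

def casualty_area(fragment_areas_m2):
    """Total debris casualty area of the fragments predicted to survive."""
    return sum((math.sqrt(HUMAN_AREA) + math.sqrt(a)) ** 2
               for a in fragment_areas_m2)
```

    Multiplying this area by the population density under the reentry ground track yields the expected-casualty figure that both codes ultimately report, so even small disagreements in which components demise propagate directly into the risk estimate.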

  6. Technology Solutions Case Study: Combustion Safety Simplified Test Protocol

    Energy Technology Data Exchange (ETDEWEB)

    L. Brand, D. Cautley, D. Bohac, P. Francisco, L. Shen, and S. Gloss

    2015-12-01

    Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and produce too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if it is not included in the program. In this report the PARR and NorthernSTAR DOE Building America teams provide a simplified test procedure that is easier to implement and should produce fewer false positives.

  7. Efficiency of color vision tests in hereditary dyschromatopsia: case report

    OpenAIRE

    Fernandes, Luciene Chaves; Urbano, Lúcia Carvalho de Ventura

    2008-01-01

    The authors report two cases of hereditary dyschromatopsia and discuss the efficiency of color vision tests in diagnosing a dyschromatopsia. The patients had failed different federal civil service examinations because they were diagnosed with hereditary dyschromatopsia by the Ishihara test. They underwent ophthalmologic examination, with results within normal limits, and sought a new assessment for better characterization of their dyschromatopsia. There were no symptoms related to the deficiency. The...

  8. Design, test and model of a hybrid magnetostrictive hydraulic actuator

    International Nuclear Information System (INIS)

    Chaudhuri, Anirban; Yoo, Jin-Hyeong; Wereley, Norman M

    2009-01-01

    The basic operation of hybrid hydraulic actuators involves high frequency bi-directional operation of an active material that is converted to uni-directional motion of hydraulic fluid using valves. A hybrid actuator was developed using the magnetostrictive material Terfenol-D as the driving element and hydraulic oil as the working fluid. Two different lengths of Terfenol-D rod, 51 and 102 mm, with the same diameter, 12.7 mm, were used. Tests with no load and with load were carried out to measure the performance for uni-directional motion of the output piston at different pumping frequencies. The maximum no-load flow rates were 24.8 cm³/s and 22.7 cm³/s with the 51 mm and 102 mm long rods respectively, and the peaks were noted around 325 Hz pumping frequency. The blocked force of the actuator was close to 89 N in both cases. A key observation was that, at these high pumping frequencies, the inertial effects of the fluid mass dominate over the viscous effects and the problem becomes unsteady in nature. In this study, we also develop a mathematical model of the hydraulic hybrid actuator in the time domain to show the basic operational principle under varying conditions and to capture phenomena affecting system performance. Governing equations for the pumping piston and output shaft were obtained from force equilibrium considerations, while compressibility of the working fluid was taken into account by incorporating the bulk modulus. Fluid inertia was represented by a lumped parameter approach to the transmission line model, giving rise to strongly coupled ordinary differential equations. The model was then used to calculate the no-load velocities of the actuator at different pumping frequencies and simulation results were compared with experimental data for model validation.
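
    The observation that fluid inertia dominates at high pumping frequency can be made concrete with a lumped-parameter estimate: a line inertance L = ρl/A and a chamber compliance C = V/β form a hydraulic resonator at f = 1/(2π√(LC)). All numbers below are assumed for illustration and are not taken from the paper:

```python
# Back-of-envelope hydraulic resonance from lumped inertance and compliance.
import math

def inertance(rho, length, area):
    """Fluid inertance of a line: L = rho * l / A  [kg/m^4]."""
    return rho * length / area

def compliance(volume, bulk_modulus):
    """Fluid compliance of a chamber: C = V / beta  [m^3/Pa]."""
    return volume / bulk_modulus

def resonance_hz(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Assumed values: hydraulic oil (870 kg/m^3) in a 50 mm line of 13 mm^2 bore,
# feeding a 100 cm^3 chamber with an oil bulk modulus of 1.5 GPa
L = inertance(870.0, 0.05, 1.3e-5)
C = compliance(1.0e-4, 1.5e9)
f = resonance_hz(L, C)   # lands in the few-hundred-hertz range
```

    With these invented dimensions the resonance falls in the same few-hundred-hertz band as the measured flow-rate peak, showing qualitatively why an inertance-compliance description, rather than a purely viscous one, is needed at these pumping frequencies.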

  9. Collider tests of the Renormalizable Coloron Model

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yang; Dobrescu, Bogdan A.

    2018-04-01

    The coloron, a massive version of the gluon present in gauge extensions of QCD, has been searched for at the LHC as a dijet or top quark pair resonance. We point out that in the Renormalizable Coloron Model (ReCoM) with a minimal field content to break the gauge symmetry, a color-octet scalar and a singlet scalar are naturally lighter than the coloron because they are pseudo Nambu-Goldstone bosons. Consequently, the coloron may predominantly decay into scalar pairs, leading to novel signatures at the LHC. When the color-octet scalar is lighter than the singlet, or when the singlet mass is above roughly 1 TeV, the signatures consist of multi-jet resonances of multiplicity up to 12, including topologies with multi-prong jet substructure, slightly displaced vertices, and sometimes a top quark pair. When the singlet is the lightest ReCoM boson and lighter than about 1 TeV, its main decays ($W^+W^-$, $\gamma Z$, $ZZ$) arise at three loops. The LHC signatures then involve two or four boosted electroweak bosons, often originating from highly displaced vertices, plus one or two pairs of prompt jets or top quarks.

  10. Glide back booster wind tunnel model testing

    Science.gov (United States)

    Pricop, M. V.; Cojocaru, M. G.; Stoica, C. I.; Niculescu, M. L.; Neculaescu, A. M.; Persinaru, A. G.; Boscoianu, M.

    2017-07-01

    Affordable space access requires partial or, ideally, full launch vehicle reuse, which is in line with clean-environment requirements. Although the idea is old, its practical use is difficult, requiring very large technology investment for qualification. Rocket gliders like the Space Shuttle have been successfully operated, but the price and correspondingly the energy footprint were found not to be sustainable. For medium launchers, there is finally a very promising platform in Falcon 9. For very small launchers the situation is more complex, because the performance index (payload to start mass) is already small compared with medium and heavy launchers. For partially reusable micro launchers this index is even smaller. However, the challenge has to be taken on, because it is likely that, in a multiyear effort, technology will enable the performance recovery needed to make such a system economically and environmentally feasible. The current paper is devoted to a small unitary glide-back booster which is foreseen to be assembled in a number of possible configurations. Although the level of analysis is not deep, the solution is analyzed from the aerodynamic point of view. A wind tunnel model is designed, with an active canard, to enable a more efficient wind tunnel campaign, a premiere at the national level.

  11. Findings concerning testis, vas deferens, and epididymis in adult cases with nonpalpable testes

    Directory of Open Access Journals (Sweden)

    Coskun Sahin

    2011-12-01

    In this study, we aimed to describe the relationship between the testis, epididymis and vas deferens in adult cases with nonpalpable testes. Between January 1996 and December 2009, we evaluated 154 adult cases with nonpalpable testes. Mean age was 23 years (20-27 years). Explorations were performed by open inguinal incision, by laparoscopy, and by inguinal incision and laparoscopy together in 22, 131 and 1 patients, respectively. Of all the unilateral cases, 32 were accepted as vanishing testis. In five of these cases the vas deferens ended inside the abdomen, and in the others it ended inside the scrotum. In the remaining 99 unilateral and 22 bilateral cases, 143 testes were found in total. Testes were found in the inguinal canal as atrophic in one case, at the level of the right renal pedicle as a dysmorphic testis in one case, and anterior to the internal ring between the bladder and the common iliac vessels, at a smaller than normal size, in 119 cases. One (0.69%) case did not have an epididymis. While the epididymis was attached to the testis only at the head and tail in 88 (61.53%) cases, it was totally attached to the testis in 54 (37.76%) cases. There is an obviously high incidence of testicular and vas deferens anomalies, of which epididymal anomalies are the most frequent. In cases with abdominal testes, this rate is highest for high-localised abdominal testes.

  12. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
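
    One simple way to express such decline (a hedged illustration, not necessarily the parameterization the authors study) is a Rasch-type model whose effective ability decreases with item position, so identical items become harder to answer late in the test:

```python
# 1PL (Rasch) success probability with an optional linear decline in
# effective ability over item position: P = logistic(theta - d*pos - b).
import math

def p_correct(theta, b, position, decline=0.0):
    """Probability of a correct response for ability theta, difficulty b,
    item position, and a per-item decline parameter d >= 0."""
    return 1.0 / (1.0 + math.exp(-(theta - decline * position - b)))

# With decline > 0, the same item is answered correctly less often when it
# appears near the end of the test than near the beginning
early = p_correct(theta=1.0, b=0.0, position=1, decline=0.05)
late = p_correct(theta=1.0, b=0.0, position=30, decline=0.05)
```

    A standard IRT model is the special case decline = 0; fitting the decline parameter separates genuine low ability from end-of-test fatigue or time pressure.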

  13. Testing Model with "Check Technique" for Physics Education

    Science.gov (United States)

    Demir, Cihat

    2016-01-01

    As the number, date and form of the written tests are structured and teacher-oriented, it is considered that it creates fear and anxiety among the students. It has been found necessary and important to form a testing model which will keep the students away from the test anxiety and allows them to learn only about the lesson. For this study,…

  14. LCT-coil design: Mechanical interaction between composite winding and steel casing under various test conditions

    International Nuclear Information System (INIS)

    Dolensky, B.; Messemer, G.; Zehlein, H.; Erb, J.

    1981-01-01

    Finite element computations for the structural design of the large superconducting toroidal field coil contributed by EURATOM to the Large Coil Test Facility (LCTF) at ORNL, USA were performed at KfK, using the ASKA code. The layout of the coil must consider different types of requirements: firstly, an optimal D-shaped contour minimizing circumferential stress gradients under normal operation in the toroidal arrangement must be defined. Secondly, the three-dimensional real design effects due to the actual support conditions, manufacturing tolerances etc. must be mastered for different basic operational and failure load cases. And, thirdly, the design must withstand a single-coil qualification test in the TOSKA facility at KfK, Karlsruhe, FRG, before it is plugged into the LCTF. The emphasis of the paper is three-pronged according to these requirements: i) the 3D magnetic body forces as well as the underlying magnetic fields as computed by the HEDO code are described. ii) The mechanical interaction between casing and winding, as given elsewhere in terms of high stress regions, gaps, slide movements and contact forces for various load cases representing the LCTF test conditions, is illustrated here by a juxtaposition of the operational deformations and stresses within the LCTF and the TOSKA. iii) Particular effects like the restraint imposed by a corset-type reinforcement of the coil in the TOSKA test facility to limit the breathing deformation are parametrically studied. Moreover, the possibility of deriving scaling laws that make essential results transferable to larger coils by extracting a 1D mechanical response from the 3D finite element model is also demonstrated. (orig./GG)

  15. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation and compared with static test results. A scale-model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.

  16. Test case specifications for coupled neutronics-thermal hydraulics calculation of Gas-cooled Fast Reactor

    Science.gov (United States)

    Osuský, F.; Bahdanovich, R.; Farkas, G.; Haščík, J.; Tikhomirov, G. V.

    2017-01-01

    The paper focuses on the development of a coupled neutronics-thermal hydraulics model for the Gas-cooled Fast Reactor. Coupled calculations of new concepts must be investigated carefully to avoid recriticality scenarios, as it is not possible to ensure a sub-critical state for a fast reactor core under core disruptive accident conditions. Such calculations are also well suited to the development of new passive or inherent safety systems that can mitigate the occurrence of recriticality scenarios. The paper describes the most promising fuel material compositions, together with a geometry model, for the Gas-cooled Fast Reactor. A seven-pin fuel assembly geometry is proposed as a test case for the coupled calculation, with three different enrichments of fissile material in the form of Pu-UC. A reflective boundary condition is used in the radial directions of the test case and a vacuum boundary condition in the axial directions. Under these conditions the nuclear system is in a super-critical state, and to achieve a stable state (a numerical representation of operational conditions) it is necessary to decrease the reactivity of the system. An iteration scheme is proposed in which the SCALE code system collapses macroscopic cross sections into a few-group representation that serves as input for the coupled code NESTLE.
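
    The iteration scheme described above can be sketched as a fixed-point (Picard) coupling loop. This is a hypothetical toy illustration of the idea only: both "solver" functions and all constants below are invented stand-ins, not SCALE or NESTLE.

```python
# Toy Picard coupling between a neutronics solve and a thermal-hydraulics
# solve. All models and numbers are illustrative, not from the paper.

def solve_neutronics(fuel_temp):
    """Toy model: multiplication factor drops as fuel temperature rises (Doppler)."""
    return 1.05 - 2.0e-5 * (fuel_temp - 900.0)

def solve_thermal_hydraulics(k_eff):
    """Toy model: fuel temperature rises with power, taken proportional to k_eff."""
    return 900.0 + 4000.0 * (k_eff - 1.0)

def coupled_iteration(tol=1e-8, max_iter=100):
    fuel_temp = 900.0  # initial guess [K]
    k_prev = 0.0
    for _ in range(max_iter):
        k_eff = solve_neutronics(fuel_temp)          # neutronics step
        fuel_temp = solve_thermal_hydraulics(k_eff)  # feedback step
        if abs(k_eff - k_prev) < tol:                # converged?
            return k_eff, fuel_temp
        k_prev = k_eff
    raise RuntimeError("coupling iteration did not converge")

k_eff, T = coupled_iteration()
print(f"converged k_eff={k_eff:.5f}, fuel T={T:.1f} K")
```

    The loop alternates the two solves until the multiplication factor stops changing, which is the usual operator-splitting pattern for such couplings.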

  17. The Florida State Initial Teacher Certification Test: A Case Study.

    Science.gov (United States)

    Dorn, Charles M.

    1989-01-01

    Describes the development of the art certification examination which was designed for the Florida State Initial Teacher Certification Test. Discusses problems of subjectivity, content, and question format. Suggests criteria which can guide the development of viable college art education programs that can adequately prepare teachers in the areas of…

  18. Anterior abdominal wall ectopic testes: A report of two cases ...

    African Journals Online (AJOL)

    Undescended testis (UDT) is a common anomaly of the male reproductive system affecting about 2% to 4% of male infants more commonly preterms. If the testis remains in the line of normal descent, it is classified as an UDT. If it is not in the line of normal descent, it is termed an ectopic testis. Common sites of ectopic testes ...

  19. Functional Literacy Tests: A Case of Anticipatory Validity?

    Science.gov (United States)

    Anderson, Lorin W.; Anderson, Jo Craig

    1981-01-01

    Development of the mathematics functional literacy test (MFLT) is described, issues of predictive and content validity are discussed, and implications for educational policy are presented. Ten basic skill areas identified by the National Council of Supervisors of Mathematics were used as the basis for the development of the MFLT. (RL)

  20. Bankruptcy risk model and empirical tests

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Petersen, Alexander M.; Urošević, Branko; Stanley, H. Eugene

    2010-01-01

    We analyze the size dependence and temporal stability of firm bankruptcy risk in the US economy by applying Zipf scaling techniques. We focus on a single risk factor—the debt-to-asset ratio R—in order to study the stability of the Zipf distribution of R over time. We find that the Zipf exponent increases during market crashes, implying that firms go bankrupt with larger values of R. Based on the Zipf analysis, we employ Bayes’s theorem and relate the conditional probability that a bankrupt firm has a ratio R with the conditional probability of bankruptcy for a firm with a given R value. For 2,737 bankrupt firms, we demonstrate size dependence in assets change during the bankruptcy proceedings. Prepetition firm assets and petition firm assets follow Zipf distributions but with different exponents, meaning that firms with smaller assets adjust their assets more than firms with larger assets during the bankruptcy process. We compare bankrupt firms with nonbankrupt firms by analyzing the assets and liabilities of two large subsets of the US economy: 2,545 Nasdaq members and 1,680 New York Stock Exchange (NYSE) members. We find that both assets and liabilities follow a Pareto distribution. The finding is not a trivial consequence of the Zipf scaling relationship of firm size quantified by employees—although the market capitalization of Nasdaq stocks follows a Pareto distribution, the same distribution does not describe NYSE stocks. We propose a coupled Simon model that simultaneously evolves both assets and debt with the possibility of bankruptcy, and we also consider the possibility of firm mergers. PMID:20937903
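
    The Bayes step described above can be sketched numerically. This is a hypothetical illustration of the relation between the two conditional probabilities; the input values are invented, not taken from the paper.

```python
# Bayes' theorem relating P(R | bankrupt) to P(bankrupt | R).
# All probabilities below are illustrative placeholders.

def p_bankrupt_given_r(p_r_given_bankrupt, p_bankrupt, p_r):
    """P(bankrupt | R) = P(R | bankrupt) * P(bankrupt) / P(R)."""
    return p_r_given_bankrupt * p_bankrupt / p_r

# Illustrative values: 10% of bankrupt firms have this R value,
# 1% overall bankruptcy rate, 2% of all firms have this R value.
print(p_bankrupt_given_r(0.10, 0.01, 0.02))  # ≈ 0.05
```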

  1. Safety Case Development as an Information Modelling Problem

    Science.gov (United States)

    Lewis, Robert

    This paper considers the benefits from applying information modelling as the basis for creating an electronically-based safety case. It highlights the current difficulties of developing and managing large document-based safety cases for complex systems such as those found in Air Traffic Control systems. After a review of current tools and related literature on this subject, the paper proceeds to examine the many relationships between entities that can exist within a large safety case. The paper considers the benefits to both safety case writers and readers from the future development of an ideal safety case tool that is able to exploit these information models. The paper also introduces the idea that the safety case has formal relationships between entities that directly support the safety case argument using a methodology such as GSN, and informal relationships that provide links to direct and backing evidence and to supporting information.

  2. Matrix diffusion model. In situ tests using natural analogues

    Energy Technology Data Exchange (ETDEWEB)

    Rasilainen, K. [VTT Energy, Espoo (Finland)

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories. 98 refs. The thesis also includes eight previous publications by the author.

  3. Matrix diffusion model. In situ tests using natural analogues

    International Nuclear Information System (INIS)

    Rasilainen, K.

    1997-11-01

    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories

  4. Path generation algorithm for UML graphic modeling of aerospace test software

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software test engineers rely on their own experience and on communication with the development staff to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and leaves gaps. With the high-reliability MBT tool developed by our company, a single modeling pass can automatically generate test case documents, which is efficient and accurate. For a UML model to describe a process accurately, the reachable paths must be expressed; existing path generation algorithms are either too simple, unable to combine branch paths and loops into a path, or so cumbersome that they generate meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our aerospace experience, we developed a tailored path generation algorithm for UML graphical modeling of aerospace test software.

  5. Digital test assembly of truck parts with the IMMA-tool--an illustrative case.

    Science.gov (United States)

    Hanson, L; Högberg, D; Söderholm, M

    2012-01-01

    Several digital human modelling (DHM) tools have been developed for simulation and visualisation of human postures and motions. In 2010 the DHM tool IMMA (Intelligently Moving Manikins) was introduced as a DHM tool that uses advanced path planning techniques to generate collision free and biomechanically acceptable motions for digital human models (as well as parts) in complex assembly situations. The aim of the paper is to illustrate how the IPS/IMMA tool is used at Scania CV AB in a digital test assembly process, and to compare the tool with other DHM tools on the market. The illustrated case of using the IMMA tool, here combined with the path planner tool IPS, indicates that the tool is promising. The major strengths of the tool are its user friendly interface, the motion generation algorithms, the batch simulation of manikins and the ergonomics assessment methods that consider time.

  6. Simulation of Simple Test Case 2D1

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter Vilhelm

    The turbulent flow pattern is calculated with a low-Reynolds-number version of the k-ε model in a room with two-dimensional isothermal flow. The results are compared both to LDA measurements obtained in a scale model and to other data obtained by numerical simulation. The overall performance is good and indeed satisfactory. With respect to maximum velocity and turbulence level in the occupied zone the results are very good, and with respect to the decay of the maximum velocity in the wall jet and the growth of the jet width small discrepancies are found.

  7. DKIST enclosure modeling and verification during factory assembly and testing

    Science.gov (United States)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka

    2014-08-01

    The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.

  8. The case for bilingual language tests: a study of test adaptation and ...

    African Journals Online (AJOL)

    The justification for the use of language tests in education in multilingual and multicultural societies needs to include both the aims of bilingual education, and evidence that the international standards for tests that are available in two or more languages are being met. In multilingual and multicultural societies, language tests ...

  9. Proposing and testing SOA governance process: A case study approach

    DEFF Research Database (Denmark)

    Koumaditis, Konstantinos; Themistocleous, Marinos

    2015-01-01

    Longstanding Healthcare Information Systems (HIS) integration challenges drove healthcare organisations to invest in new paradigms like Service Oriented Architecture (SOA). Yet, SOA holds challenges of its own, with SOA Governance surfacing at the top. This research depicts the development…, grounded in the normative literature and further developed to include healthcare aspects. The proposition is tested in a large Greek hospital utilising qualitative methods, and the findings are presented herein. This proposal aims to pinpoint attributes and guidelines for the SOA Governance Process, required…

  10. Making System Dynamics Cool II : New Hot Teaching and Testing Cases of Increasing Complexity

    NARCIS (Netherlands)

    Pruyt, E.

    2010-01-01

    This follow-up paper presents several actual cases for testing and teaching System Dynamics. The cases were developed between April 2009 and January 2010 for the Introductory System Dynamics courses at Delft University of Technology in the Netherlands. They can be used for teaching and testing

  11. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  12. Geometrical optics modeling of the grating-slit test.

    Science.gov (United States)

    Liang, Chao-Wen; Sasian, Jose

    2007-02-19

    A novel optical testing method termed the grating-slit test is discussed. This test uses a grating and a slit, as in the Ronchi test, but the grating-slit test is different in that the grating is used as the incoherent illuminating object instead of the spatial filter. The slit is located at the plane of the image of a sinusoidal intensity grating. An insightful geometrical-optics model for the grating-slit test is presented and the fringe contrast ratio with respect to the slit width and object-grating period is obtained. The concept of spatial bucket integration is used to obtain the fringe contrast ratio.

  13. Conducting field studies for testing pesticide leaching models

    Science.gov (United States)

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
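
    The quantitative model testing the paper calls for boils down to comparing paired predicted and observed values with summary statistics. A minimal sketch, with made-up data, of three commonly used measures (bias, RMSE, Nash-Sutcliffe model efficiency); the paper itself does not prescribe these exact statistics.

```python
# Summary statistics for comparing model predictions against field measurements.
# The data below are invented for illustration.
import math

def validation_stats(observed, predicted):
    n = len(observed)
    bias = sum(p - o for o, p in zip(observed, predicted)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    mean_obs = sum(observed) / n
    ss_res = sum((p - o) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    nse = 1.0 - ss_res / ss_tot  # 1 = perfect, < 0 = worse than the observed mean
    return bias, rmse, nse

obs = [0.9, 1.4, 2.1, 3.0, 4.2]   # e.g. measured concentrations [ug/L]
pred = [1.0, 1.2, 2.4, 2.8, 4.5]  # corresponding model predictions
print(validation_stats(obs, pred))
```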

  14. The universe of ANA testing: a case for point-of-care ANA testing.

    Science.gov (United States)

    Konstantinov, Konstantin N; Rubin, Robert L

    2017-12-01

    Testing for total antinuclear antibodies (ANA) is a critical tool for diagnosis and management of autoimmune diseases at both the primary care and subspecialty settings. Repurposing of ANA from a test for lupus to a test for any autoimmune condition has driven the increase in ANA requests. Changes in ANA referral patterns include early or subclinical autoimmune disease detection in patients with low pre-test probability and use of negative ANA results to rule out underlying autoimmune disease. A positive result can lead to further diagnostic considerations. Currently, ANA tests are performed in centralized laboratories; an alternative would be ANA testing at the clinical point-of-care (POC). By virtue of its near real-time data collection capability, low cost, and ease of use, we believe the POC ANA has the potential to enable a new paradigm shift in autoimmune serology testing.

  15. RELAP5 kinetics model development for the Advanced Test Reactor

    International Nuclear Information System (INIS)

    Judd, J.L.; Terry, W.K.

    1990-01-01

    A point-kinetics model of the Advanced Test Reactor has been developed for the RELAP5 code. Reactivity feedback parameters were calculated by a three-dimensional analysis with the PDQ neutron diffusion code. Analyses of several hypothetical reactivity insertion events by the new model and two earlier models are discussed. 3 refs., 10 figs., 6 tabs
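
    For reference, the equations underlying a point-kinetics model of this kind are the standard ones (this restatement is not from the abstract above), with reactivity feedback entering through ρ:

```latex
\frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t),
\qquad
\frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t),
\qquad
\beta = \sum_{i=1}^{6} \beta_i
```

    where n is the neutron population, C_i the delayed-neutron precursor concentrations, β_i and λ_i the precursor yields and decay constants, and Λ the prompt-neutron generation time.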

  16. Tests of the single-pion exchange model

    International Nuclear Information System (INIS)

    Treiman, S.B.; Yang, C.N.

    1983-01-01

    The single-pion exchange model (SPEM) of high-energy particle reactions provides an attractively simple picture of seemingly complex processes and has accordingly been much discussed in recent times. The purpose of this note is to call attention to the possibility of subjecting the model to certain tests precisely in the domain where the model stands the best chance of making sense

  17. A Dutch test with the NewProd-model

    NARCIS (Netherlands)

    Bronnenberg, J.J.A.M.; van Engelen, M.L.

    1988-01-01

    The paper contains a report of a test of Cooper's NewProd model for predicting success and failure of product development projects. Based on Canadian data, the model has been shown to make predictions which are 84% correct. Having reservations on the reliability and validity of the model on

  18. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    SYSTEMS ENGINEERING AND TECHNOLOGY SYMPOSIUM, MODELING & SIMULATION, TESTING AND VALIDATION (MSTV) MINI-SYMPOSIUM, AUGUST 9-11, DEARBORN, MICHIGAN. Motor speed can be either positive or negative depending upon the propelling or regenerative braking scenario. The simulation provides three…the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking…

  19. Spherical symmetry as a test case for unconstrained hyperboloidal evolution

    International Nuclear Information System (INIS)

    Vañó-Viñuales, Alex; Husa, Sascha; Hilditch, David

    2015-01-01

    We consider the hyperboloidal initial value problem for the Einstein equations in numerical relativity, motivated by the goal to evolve radiating compact objects such as black hole binaries with a numerical grid that includes null infinity. Unconstrained evolution schemes promise optimal efficiency, but are difficult to regularize at null infinity, where the compactified Einstein equations are formally singular. In this work we treat the spherically symmetric case, which already poses nontrivial problems and constitutes an important first step. We have carried out stable numerical evolutions with the generalized BSSN and Z4 equations coupled to a scalar field. The crucial ingredients have been to find an appropriate evolution equation for the lapse function and to adapt constraint damping terms to handle null infinity. (paper)

  20. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
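
    The two-slice structure described above can be illustrated with a minimal filtering sketch: a hidden "defect present" state evolving over test cycles, observed through noisy test outcomes. This is a hypothetical toy instance of a dynamic Bayesian network, not the paper's model; all probabilities are invented.

```python
# Minimal two-slice DBN: one binary hidden state, one binary observation.

TRANSITION = {  # P(defect_t = True | defect_{t-1})
    True:  {True: 0.90, False: 0.10},   # a defect tends to persist until fixed
    False: {True: 0.05, False: 0.95},   # new defects appear occasionally
}
EMISSION = {   # P(test fails | defect state)
    True: 0.80,   # a present defect is caught 80% of the time
    False: 0.02,  # false-alarm rate
}

def filter_step(belief, test_failed):
    """One forward (filtering) step over the two-slice network."""
    # Predict: propagate the belief through the transition model.
    predicted = sum(belief[s] * TRANSITION[s][True] for s in (True, False))
    # Update: weight by the likelihood of the observed test outcome.
    like_true = EMISSION[True] if test_failed else 1 - EMISSION[True]
    like_false = EMISSION[False] if test_failed else 1 - EMISSION[False]
    num = like_true * predicted
    den = num + like_false * (1 - predicted)
    p = num / den
    return {True: p, False: 1 - p}

belief = {True: 0.05, False: 0.95}   # prior: defect unlikely
for outcome in [True, True, False]:  # two failing runs, then a pass
    belief = filter_step(belief, outcome)
print(f"P(defect) = {belief[True]:.3f}")
```

    Each `filter_step` is one time slice; chaining them over test cycles is what makes the network dynamic.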

  1. Stable-label intravenous glucose tolerance test minimal model

    International Nuclear Information System (INIS)

    Avogaro, A.; Bristow, J.D.; Bier, D.M.; Cobelli, C.; Toffolo, G.

    1989-01-01

    The minimal model approach to estimating insulin sensitivity (SI) and glucose effectiveness in promoting its own disposition at basal insulin (SG) is a powerful tool that has been underutilized given its potential applications. In part, this has been due to its inability to separate insulin and glucose effects on peripheral uptake from their effects on hepatic glucose inflow. Prior enhancements, with radiotracer labeling of the dosage, permit this separation but are unsuitable for use in pregnancy and childhood. In this study, we labeled the intravenous glucose tolerance test (IVGTT) dosage with [6,6-2H2]glucose, [2-2H]glucose, or both stable isotopically labeled glucose tracers and modeled glucose kinetics in six postabsorptive, nonobese adults. As previously found with the radiotracer model, the tracer-estimated S*I derived from the stable-label IVGTT was greater than SI in each case except one, and the tracer-estimated S*G was less than SG in each instance. More importantly, however, the stable-label IVGTT estimated each parameter with an average precision of +/- 5% (range 3-9%), compared to average precisions of +/- 74% (range 7-309%) for SG and +/- 22% (range 3-72%) for SI. In addition, because of the different metabolic fates of the two deuterated tracers, there were minor differences in basal insulin-derived measures of glucose effectiveness, but these differences were negligible for parameters describing insulin-stimulated processes. In conclusion, the stable-label IVGTT is a simple, highly precise means of assessing insulin sensitivity and glucose effectiveness at basal insulin that can be used to measure these parameters in individuals of all ages, including children and pregnant women
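
    For reference, the glucose kinetics in this family of analyses follow the standard Bergman minimal model (not restated in the abstract above), in which the two parameters of interest are defined as:

```latex
\frac{dG}{dt} = -\left[p_1 + X(t)\right] G(t) + p_1 G_b, \qquad
\frac{dX}{dt} = -p_2 X(t) + p_3 \left[I(t) - I_b\right],
\qquad
S_G = p_1, \qquad S_I = \frac{p_3}{p_2}
```

    where G is plasma glucose, I plasma insulin, X the remote insulin action, and G_b, I_b the basal levels.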

  2. Situation testing: the case of health care refusal.

    Science.gov (United States)

    Després, C; Couralet, P-E

    2011-04-01

    Situation testing to assess physicians' refusal to provide healthcare is increasingly used in research studies. This paper aims to explain the relevance and limits of this method. Conducted in 2008-2009, this study was designed to assess the rate of healthcare refusal among several categories of private practitioners toward patients covered by the French public means-tested complementary health insurance (CMUc) when they requested a first appointment by phone. The other objectives were to study the determinants of healthcare refusal and to assess the method. The study was conducted on a representative sample of Paris-based dentists and physicians in five categories: general practitioners, medical gynecologists, ophthalmologists, radiologists, and dentists. The method was based on two protocols. In the first scenario, an actor pretended to be a CMUc beneficiary; in the second, he gave no information about his health coverage but hinted at a low socioeconomic status. The two protocols were compared, and procedures checking the relation between refusal and CMUc coverage were implemented in each of them. In the scenario in which the patient declared being a CMUc beneficiary, the results showed different refusal rates depending on the type of practitioner, physician or dentist, their specialty, and whether or not they charge more than the standard set fee. In the second scenario, refusal rates were much lower. The comparison of the two protocols seems to confirm the existence of discrimination based on CMUc affiliation rather than on patients' socioeconomic status. The discussion presents the limits of situation testing, which remains an experimental instrument because it does not observe reality but reveals behaviors in situation. The findings cannot be extrapolated and are limited in time. The statistical analysis is only valid if the procedure followed is precise and applied consistently using a preset scenario. In addition, the discriminatory nature of the…

  3. On a special case of model matching

    Czech Academy of Sciences Publication Activity Database

    Zagalak, Petr

    2004-01-01

    Roč. 77, č. 2 (2004), s. 164-172 ISSN 0020-7179 R&D Projects: GA ČR GA102/01/0608 Institutional research plan: CEZ:AV0Z1075907 Keywords : linear systems * state feedback * model matching Subject RIV: BC - Control Systems Theory Impact factor: 0.702, year: 2004

  4. A kinetic model that explains the dependence of magnetic susceptibility of sediment on grain size and organic matter content in transitional marine environments. Testing case studies in estuarine-like environments of NW Iberia

    Science.gov (United States)

    Rey, D.; Mohamed, K. J.; Andrade, A.; Rubio, B.; Bernabeu, A. M.

    2017-12-01

    The wide use of magnetic proxies to study pollution, sedimentological processes, and environmental and paleoclimatic changes is currently limited by the lack of transfer functions that correlate closely with the unmeasurable variables. Among these proxies, magnetic susceptibility (MS) is the oldest and most popular, but it has yet to live up to expectations. This paper explores and quantifies how the MS values of surficial sediments in transitional environments depend on grain size, what can be said about the spatial distribution of hydrodynamic forces, and the potential modulation of MS by sediment and organic matter provenance. The concentration of (oxyhydr)oxides in sands (d50 > 63 microns) is primarily controlled by their degree of dilution in the diamagnetic framework, which is larger for coarser grain sizes. In contrast, the concentration of (oxyhydr)oxides in muddy sediments is controlled by their dissolution rate during very early diagenesis, which is governed by their organic matter content (TOC), inversely dependent on grain size. In the study area, the balance between both components results in sands of d50 = 68 microns displaying the maximum MS values. The influence of organic matter on the dissolution of magnetite in surficial sediments can be quantified using a simple kinetic model. The model reveals a negative exponential relationship between magnetic susceptibility and grain size that depends on the TOC of the fine-grained fraction. The model accurately predicts that a TOC increase of 0.35% results in a 50% reduction in the concentration of magnetite in the sediments of the Ría de Muros. We have also found that this relationship is not universal in this form, as its quantification is strongly modulated by coarse sediment mineralogy, TOC lability, and other factors such as wave climate, depth, and sediment oxygenation. Better understanding and quantification of the role that TOC, hydrodynamics, and changes in the geochemical…
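
    The quoted prediction, that a 0.35% TOC increase halves the magnetite concentration, corresponds to an exponential decay with a "half-TOC" of 0.35%. The sketch below infers this functional form from that single statement; the form and constant are an assumption for illustration, not the paper's actual kinetic model.

```python
# Hypothetical exponential TOC dependence inferred from the stated halving:
# C(dTOC) = C0 * 2**(-dTOC / 0.35), i.e. magnetite halves every 0.35% TOC.

def magnetite_fraction(delta_toc_percent, c0=1.0, half_toc=0.35):
    """Relative magnetite concentration after a TOC increase of delta_toc_percent."""
    return c0 * 2.0 ** (-delta_toc_percent / half_toc)

print(magnetite_fraction(0.35))  # 0.5: the 50% reduction quoted above
print(magnetite_fraction(0.70))  # 0.25: two halvings
```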

  5. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  6. Creating a Business Case from a Business Model

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, Eelco; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2014-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  7. Computerised modelling for developmental biology : an exploration with case studies

    NARCIS (Netherlands)

    Bertens, Laura M.F.

    2012-01-01

    Many studies in developmental biology rely on the construction and analysis of models. This research presents a broad view of modelling approaches for developmental biology, with a focus on computational methods. An overview of modelling techniques is given, followed by several case studies. Using

  8. Generating custom test plans for CASE*Dictionary 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Atkins, K.D. [Boeing Computer Services, Richland, WA (United States)

    1994-04-01

    Most database development organizations use a formal software development methodology that requires a certain amount of formal testing. The amount of formal testing performed will vary from methodology to methodology and from site to site. If a very detailed formal test plan is required for each module in a system, the work involved in producing the test plan can be tedious and costly. After a system has been designed and developed using Oracle*CASE, there is much useful information in the CASE*Dictionary repository. If this information could be tied to specific test requirements, a test plan could be generated automatically, saving much time and resources. This paper shows how CASE*Dictionary can be used to store test plan information that can then be used to generate a specific test plan for each module based on its detailed data usage.

  9. 30 CFR 250.523 - How long do I keep records of casing pressure and diagnostic tests?

    Science.gov (United States)

    2010-07-01

    ... and diagnostic tests? 250.523 Section 250.523 Mineral Resources MINERALS MANAGEMENT SERVICE... casing pressure and diagnostic tests? Records of casing pressure and diagnostic tests must be kept at the field office nearest the well for a minimum of 2 years. The last casing diagnostic test for each casing...

  10. Using Virtual ATE Model to Migrate Test Programs

    Institute of Scientific and Technical Information of China (English)

    王晓明; 杨乔林

    1995-01-01

    Because of the high development costs of IC (Integrated Circuit) test programs, recycling existing test programs from one kind of ATE (Automatic Test Equipment) to another, or generating them directly from CAD simulation modules, is more and more valuable. In this paper, a new approach to migrating test programs is presented. A virtual ATE model based on an object-oriented paradigm is developed; it runs Test C++ (an intermediate test control language) programs and TeIF (Test Intermediate Format, an intermediate pattern format), migrates test programs among three kinds of ATE (Ando DIC8032, Schlumberger S15 and GenRad 1732) and automatically generates test patterns from two kinds of CAD tools (Daisy and Panda).

  11. Experimental tests of transport models using modulated ECH

    International Nuclear Information System (INIS)

    DeBoo, J.C.; Kinsey, J.E.; Bravenec, R.

    1998-12-01

    Both the dynamic and equilibrium thermal responses of an L-mode plasma to repetitive ECH heat pulses were measured and compared to predictions from several thermal transport models. While no model consistently agreed with all observations, the GLF23 model was most consistent with the perturbed electron and ion temperature responses for one of the cases studied, which may indicate a key role played by electron modes in the core of these discharges. Generally, the IIF and MM models performed well for the perturbed electron response, while the GLF23 and IFS/PPPL models agreed with the perturbed ion response for all three cases studied. No single model agreed well with the measured equilibrium temperature profiles.

  12. Large-scale column experiment: study of CO₂, pore water rock reactions and model test case; Experimentation de longue duree sur grandes colonnes, dans le contexte du stockage geologique de CO₂: etude des interactions eau-roche et modelisation

    Energy Technology Data Exchange (ETDEWEB)

    Bateman, K.; Turner, G.; Pearce, J.M.; Noy, D.J.; Birchall, D.; Rochelle, C.A. [British Geological Survey, Kingsley Dunham Centre, Keyworth (United Kingdom)

    2005-07-01

    During underground carbon dioxide (CO₂) storage operations in deep reservoirs, the CO₂ can be trapped in three ways: as 'free' CO₂, most likely as a supercritical phase (physical trapping); dissolved in formation water (hydrodynamic trapping); or precipitated in carbonate phases such as calcite (mineral trapping). This study focuses on the reactions between CO₂, pore-water and host rock. The aim of this work was to provide a well-constrained long-term laboratory experiment reacting known quantities of minerals with CO₂-rich fluids, in order to try and represent situations where CO₂ is being injected into lithologies deep underground. The experimental results can then be used as a test case with which to help validate predictive geochemical computer models. These will help improve our ability to predict the long-term fate of carbon dioxide (CO₂) stored underground. The experiment, though complex in terms of equipment, ran for approximately 7.5 months. The reacted material was then examined for mineralogical changes, and the collected fluids were analysed to provide data on the fate of the dissolved species. Changes were readily observable on the carbonates present in the starting material, which matches well with the observed trends in the fluid chemistry. However, although changes in silica concentrations were seen in the fluid chemistry, no evidence for pitting or etching was noted in the silica-bearing phases. Modelling of the experimental systems was performed using the BGS coupled code, PRECIP. As a general conclusion, the model predictions tend to overestimate the degree of reaction compared with the results from the experiment. In particular, some mineral phases (e.g. dawsonite) that are predicted to form in large quantities by the model are not seen at all in the experimental system. The differences between the model predictions and the experimental observations highlight the need for thermodynamic and kinetic…

  13. Distributed storage and cloud computing: a test case

    International Nuclear Information System (INIS)

    Piano, S; Ricca, G Delia

    2014-01-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste supports the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 GB/s) and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions as cloud computing.

  14. Field Tested Service Oriented Robotic Architecture: Case Study

    Science.gov (United States)

    Flueckiger, Lorenzo; Utz, Hanz

    2012-01-01

    This paper presents the lessons learned from six years of experiments with planetary rover prototypes running the Service Oriented Robotic Architecture (SORA) developed by the Intelligent Robotics Group (IRG) at NASA Ames Research Center. SORA relies on proven software methods and technologies applied to the robotic world. Based on a Service Oriented Architecture and robust middleware, SORA extends its reach beyond the on-board robot controller and supports the full suite of software tools used during mission scenarios from ground control to remote robotic sites. SORA has been field tested in numerous scenarios of robotic lunar and planetary exploration. The results of these high fidelity experiments are illustrated through concrete examples that have shown the benefits of using SORA as well as its limitations.

  15. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test
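For a constant-hazard (exponential) lifetime model, the martingale residuals the abstract refers to reduce to M_i = delta_i − rate·t_i. The following is a minimal pure-Python sketch with made-up data, not the paper's Bayes construction; all names and values are illustrative:

```python
def mle_rate(times, events):
    """MLE of a constant hazard: observed failures / total time at risk."""
    return sum(events) / sum(times)

def martingale_residuals(times, events, rate):
    """M_i = delta_i - Lambda(t_i), with cumulative hazard
    Lambda(t) = rate * t for the exponential model; events[i] is 1
    for an observed failure and 0 for a censored observation."""
    return [d - rate * t for t, d in zip(times, events)]

times = [2.0, 3.0, 1.0, 4.0]   # made-up lifetimes
events = [1, 1, 0, 1]          # the third unit is censored
lam = mle_rate(times, events)  # 3 failures over 10.0 time units -> 0.3
res = martingale_residuals(times, events, lam)
print(lam, sum(res))           # residuals sum to zero at the MLE
```

The zero-sum property at the fitted rate is the elementary version of the diagnostic idea: systematic patterns in the residuals, rather than their overall level, indicate lack of fit.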

  16. Earthquake induced rock shear through a deposition hole - modelling of three scale tests for validation of models

    International Nuclear Information System (INIS)

    Boergesson, Lennart; Hernelind, Jan

    2012-01-01

    Document available in extended abstract form only. Three model shear tests of very high quality simulating a horizontal rock shear through a KBS-3V deposition hole in the centre of a canister were performed 1986. The tests simulated a deposition hole in the scale 1:10 with reference density of the buffer, very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical with exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. with a factor of more than 5000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper and the movement of the top of the simulated canister were measured continuously during the shear. After finished shear the equipment was dismantled and careful sampling of the bentonite with measurement of water ratio and density were made. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus with the same models and techniques that were used for the full scale cases in the Swedish safety assessment SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevancy of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite where the stress-strain relations have been derived from laboratory tests. The material model is also described in another article to this conference. The material model is made a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density is not changed during the tests. However, strain rate varies largely with both the location of the elements and time. 
This can be taken into account in Abaqus by making the material model a function of the strain

  17. Assuring consumer safety without animal testing: a feasibility case study for skin sensitisation.

    Science.gov (United States)

    Maxwell, Gavin; Aleksic, Maja; Aptula, Aynur; Carmichael, Paul; Fentem, Julia; Gilmour, Nicola; Mackay, Cameron; Pease, Camilla; Pendlington, Ruth; Reynolds, Fiona; Scott, Daniel; Warner, Guy; Westmoreland, Carl

    2008-11-01

    Allergic Contact Dermatitis (ACD; chemical-induced skin sensitisation) represents a key consumer safety endpoint for the cosmetics industry. At present, animal tests (predominantly the mouse Local Lymph Node Assay) are used to generate skin sensitisation hazard data for use in consumer safety risk assessments. An animal testing ban on chemicals to be used in cosmetics will come into effect in the European Union (EU) from March 2009. This animal testing ban is also linked to an EU marketing ban on products containing any ingredients that have been subsequently tested in animals, from March 2009 or March 2013, depending on the toxicological endpoint of concern. Consequently, the testing of cosmetic ingredients in animals for their potential to induce skin sensitisation will be subject to an EU marketing ban, from March 2013 onwards. Our conceptual framework and strategy to deliver a non-animal approach to consumer safety risk assessment can be summarised as an evaluation of new technologies (e.g. 'omics', informatics), leading to the development of new non-animal (in silico and in vitro) predictive models for the generation and interpretation of new forms of hazard characterisation data, followed by the development of new risk assessment approaches to integrate these new forms of data and information in the context of human exposure. Following the principles of the conceptual framework, we have been investigating existing and developing new technologies, models and approaches, in order to explore the feasibility of delivering consumer safety risk assessment decisions in the absence of new animal data. We present here our progress in implementing this conceptual framework, with the skin sensitisation endpoint used as a case study. 2008 FRAME.

  18. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program of the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting-up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system and a testing program with facility for testing system parts and subsystems at JPL. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis and JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost effective approach to achieving a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization, performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  19. Modeling dust growth in protoplanetary disks: The breakthrough case

    Science.gov (United States)

    Drążkowska, J.; Windmark, F.; Dullemond, C. P.

    2014-07-01

    Context. Dust coagulation in protoplanetary disks is one of the initial steps toward planet formation. Simple toy models are often not sufficient to cover the complexity of the coagulation process, and a number of numerical approaches are therefore used, among which integration of the Smoluchowski equation and various versions of the Monte Carlo algorithm are the most popular. Aims: Recent progress in understanding the processes involved in dust coagulation have caused a need for benchmarking and comparison of various physical aspects of the coagulation process. In this paper, we directly compare the Smoluchowski and Monte Carlo approaches to show their advantages and disadvantages. Methods: We focus on the mechanism of planetesimal formation via sweep-up growth, which is a new and important aspect of the current planet formation theory. We use realistic test cases that implement a distribution in dust collision velocities. This allows a single collision between two grains to have a wide range of possible outcomes but also requires a very high numerical accuracy. Results: For most coagulation problems, we find a general agreement between the two approaches. However, for the sweep-up growth driven by the "lucky" breakthrough mechanism, the methods exhibit very different resolution dependencies. With too few mass bins, the Smoluchowski algorithm tends to overestimate the growth rate and the probability of breakthrough. The Monte Carlo method is less dependent on the number of particles in the growth timescale aspect but tends to underestimate the breakthrough chance due to its limited dynamic mass range. Conclusions: We find that the Smoluchowski approach, which is generally better for the breakthrough studies, is sensitive to low mass resolutions in the high-mass, low-number tail that is important in this scenario. To study the low number density features, a new modulation function has to be introduced to the interaction probabilities. As the minimum resolution
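A minimal explicit-Euler step of the discrete Smoluchowski equation illustrates the binned approach being benchmarked against Monte Carlo. This is an illustrative sketch with a constant kernel and made-up values, not the authors' code:

```python
def smoluchowski_step(n, kernel, dt):
    """One explicit Euler step of the discrete Smoluchowski equation:
    dn_k/dt = 1/2 * sum_{i+j=k} K_ij n_i n_j - n_k * sum_j K_kj n_j.
    n[m-1] is the number density of grains of integer mass m; mergers
    heavier than the grid simply leave it (their mass is then lost)."""
    size = len(n)
    dn = [0.0] * size
    for i in range(1, size + 1):
        for j in range(1, size + 1):
            rate = kernel(i, j) * n[i - 1] * n[j - 1]
            dn[i - 1] -= rate                # i is consumed by the collision
            if i + j <= size:
                dn[i + j - 1] += 0.5 * rate  # ordered pairs counted twice
    return [v + dt * d for v, d in zip(n, dn)]

kernel = lambda i, j: 1.0                    # made-up constant kernel
n0 = [1.0] + [0.0] * 7                       # monomers only
n1 = smoluchowski_step(n0, kernel, 0.1)
mass = lambda n: sum(m * v for m, v in enumerate(n, start=1))
print(n1[0], n1[1], mass(n1))                # total mass stays 1.0
```

The mass-grid truncation visible in the final `if` is precisely where the resolution effects discussed in the abstract enter: too few bins and the high-mass tail that drives breakthrough growth is misrepresented.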

  20. Deformation modeling and the strain transient dip test

    International Nuclear Information System (INIS)

    Jones, W.B.; Rohde, R.W.; Swearengen, J.C.

    1980-01-01

    Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity with a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experimentts, data from a dip test machine may be used to generate model parameters

  1. Computer aided system engineering and analysis (CASE/A) modeling package for ECLS systems - An overview

    Science.gov (United States)

    Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.

    1990-01-01

    An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has supplied engineering productivity accomplishments during ECLSS design activities. A components verification program was performed to assure component modeling validity based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or get hard copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.

  2. Modelling Brazilian tests with FRACOD2D (FRActure propagation CODe)

    International Nuclear Information System (INIS)

    Lanaro, Flavio; Sato, Toshinori; Rinne, Mikael; Stephansson, Ove

    2008-01-01

    This study focuses on the influence of initiated cracks on the stress distribution within rock samples subjected to tensile loading by traditional Brazilian testing. The numerical analyses show that the stress distribution is only marginally affected by the considered loading boundary conditions. On the other hand, the initiation and propagation of cracks produce a stress field that is very different from that assumed by considering the rock material as continuous, homogeneous, isotropic and elastic. In the models, stress concentrations at the bridges between the cracks were found to have tensile stresses much higher than the macroscopic direct tensile strength of the intact rock. This was possible thanks to the development of large stress gradients that can be carried by the rock between the cracks. The analysis of the deformation along the sample diameter perpendicular to the loading direction might enable one to determine the macroscopic direct tensile strength of the rock or, in a real case, of the weakest grains. The strength is indicated by the point where the stress-strain curves depart from linearity. (author)
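The macroscopic tensile strength targeted by the Brazilian test comes from the continuous-elastic solution sigma_t = 2P/(pi·D·t); the abstract's point is that crack-induced local stresses deviate strongly from this idealisation. A sketch with illustrative numbers, not the paper's data:

```python
import math

def brazilian_tensile_strength(load, diameter, thickness):
    """Macroscopic indirect tensile strength of a disc specimen,
    sigma_t = 2P / (pi * D * t), from the continuous-elastic solution."""
    return 2.0 * load / (math.pi * diameter * thickness)

# Illustrative numbers, not the paper's data: 25 kN failure load on a
# 50 mm diameter, 25 mm thick disc.
sigma_t = brazilian_tensile_strength(25e3, 0.050, 0.025)   # in Pa
print(round(sigma_t / 1e6, 2))                             # ~12.73 MPa
```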

  3. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
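For intuition, the classical two-point LOD score is the log10 likelihood ratio against no linkage (theta = 0.5). The paper's one-parameter allele-sharing model is more elaborate, so the following is only a textbook sketch with made-up counts:

```python
import math

def lod(theta, recomb, total):
    """Two-point LOD score: log10 likelihood ratio of recombination
    fraction `theta` against the null of no linkage (theta = 0.5),
    for `recomb` recombinants in `total` informative meioses."""
    assert 0.0 < theta < 1.0
    non = total - recomb
    l_theta = recomb * math.log10(theta) + non * math.log10(1.0 - theta)
    l_null = total * math.log10(0.5)
    return l_theta - l_null

score = lod(0.1, 1, 10)     # MLE theta = 1/10 for this made-up pedigree
print(round(score, 4))
```

By construction the score is zero at theta = 0.5, so positive values quantify the evidence for linkage on the familiar LOD scale.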

  4. IDC Use Case Model Survey Version 1.1.

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model. Revisions: V1.0 (12/2014, SNL IDC Reengineering Project Team): initial delivery; authorized by M. Harris. V1.1 (2/2015, SNL IDC Reengineering Project Team): Iteration I2 review comments; authorized by M. Harris.

  5. IDC Use Case Model Survey Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Dorthe B.; Harris, James M.

    2014-12-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey. Revisions: V1.0 (12/2014, IDC Re-engineering Project Team): initial delivery; authorized by M. Harris.

  6. Service-oriented enterprise modelling and analysis: a case study

    NARCIS (Netherlands)

    Iacob, Maria Eugenia; Jonkers, H.; Lankhorst, M.M.; Steen, M.W.A.

    2007-01-01

    In order to validate the concepts and techniques for service-oriented enterprise architecture modelling, developed in the ArchiMate project (Lankhorst, et al., 2005), we have conducted a number of case studies. This paper describes one of these case studies, conducted at the Dutch Tax and Customs

  7. Sensitivity Analysis in Structural Equation Models: Cases and Their Influence

    Science.gov (United States)

    Pek, Jolynn; MacCallum, Robert C.

    2011-01-01

    The detection of outliers and influential observations is routine practice in linear regression. Despite ongoing extensions and development of case diagnostics in structural equation models (SEM), their application has received limited attention and understanding in practice. The use of case diagnostics informs analysts of the uncertainty of model…

  8. A Case-Based Learning Model in Orthodontics.

    Science.gov (United States)

    Engel, Francoise E.; Hendricson, William D.

    1994-01-01

    A case-based, student-centered instructional model designed to mimic orthodontic problem solving and decision making in dental general practice is described. Small groups of students analyze case data, then record and discuss their diagnoses and treatments. Students and instructors rated the seminars positively, and students reported improved…

  9. Development of a Medicaid Behavioral Health Case-Mix Model

    Science.gov (United States)

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  10. An Innovative Physical Model for Testing Bucket Foundations

    DEFF Research Database (Denmark)

    Foglia, Aligi; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard

    2012-01-01

    …(kPa), 20 (kPa), and 30 (kPa), respectively. The comparison between the tests conducted at a stress level of 0 (kPa) and the tests with increased stress level shows remarkable differences. The relationship between scaled overturning moment and rotation is well represented by a power law. The exponent of the power law is consistent among all tests carried out with increased stress level. Besides, attention is given to the distribution of the instantaneous centre of rotation. To validate the model, the tests are compared with a large-scale test by means of a scaling moment. The validation of the model is only…
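Fitting a power law M = a·r^b to a moment-rotation curve can be done by ordinary least squares in log-log space. A sketch with synthetic data, not the test results reported above; all values are illustrative:

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on (log x, log y)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

rotation = [0.5, 1.0, 2.0, 4.0]             # synthetic, exact power law
moment = [2.0 * r ** 0.35 for r in rotation]
a, b = fit_power_law(rotation, moment)
print(a, b)                                  # recovers a = 2.0, b = 0.35
```

On noise-free power-law data the regression recovers the coefficients exactly; with real test data the fitted exponent b is the quantity reported as consistent across tests.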

  11. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze … as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach to characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle …

  12. THE MISHKIN TEST: AN ANALYSIS OF MODEL EXTENSIONS

    Directory of Open Access Journals (Sweden)

    Diana MURESAN

    2015-04-01

    This paper reviews empirical research applying the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test is a test used in macro-econometrics for the rational expectations hypothesis, which tests for market efficiency. Starting with Sloan (1996), the model has been applied to the accruals anomaly literature. Since Sloan (1996), the model has seen various improvements and has been the subject of many debates in the literature regarding its efficacy. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of the extended studies on the Mishkin test highlights that adding additional variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.

  13. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
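Prediction-quality metrics of the kind such protocols rely on, e.g. CV(RMSE) and NMBE, can be computed from metered and predicted energy alone, without any knowledge of the model internals. A generic sketch with made-up meter data; the LBNL protocol defines its own exact variants:

```python
import math

def cvrmse_nmbe(measured, predicted):
    """CV(RMSE) and NMBE as fractions of the measured mean: two
    goodness-of-fit statistics commonly quoted for baseline models."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = math.sqrt(sum((m - p) ** 2
                         for m, p in zip(measured, predicted)) / n)
    nmbe = sum(m - p for m, p in zip(measured, predicted)) / (n * mean)
    return rmse / mean, nmbe

meter = [100.0, 120.0, 110.0, 130.0]   # made-up monthly energy use
model = [105.0, 115.0, 110.0, 126.0]   # made-up baseline predictions
cv, nmbe = cvrmse_nmbe(meter, model)
print(round(cv, 4), round(nmbe, 4))
```

Because both statistics depend only on the prediction errors, they fit the report's goal of assessing proprietary software without requiring vendors to disclose their algorithms.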

  14. Revisiting the advection-dispersion model - Testing an alternative

    International Nuclear Information System (INIS)

    Neretnieks, I.

    2001-01-01

    for strongly sorbing solutes. The conditions are explored for when in-filling particles in the fracture will not be equilibrated but will act as if there was seemingly a much larger FWS. It is found that for strongly sorbing tracers, relatively small particles can act in this way for systems and conditions that are typical of many tracer tests. Conditions for when uptake into stagnant zones of water in the fracture itself could be important are also explored. It is found that this mechanism can be quite important in fractures with large apertures. The assumption that the tracer residence time found by cautiously injecting a small stream of traced water represents the residence time in the whole fracture is explored. It is found that the traced stream can potentially sample a much larger fraction of the fracture than the ratio between the traced flowrate and the total pumped flowrate. In some recent field experiments the visually observed fracture apertures indicate that this may well be the case and possibly a more than two times larger fraction of the fracture is sampled than the flow rate ratio would indicate. This of course has an impact on the Flow Wetted Surface the traced stream contacts. The MCh-model was used to simulate some recent tracer tests in what has been proposed to be a 'single' fracture at the Aespoe Hard rock laboratory in Sweden. Non-sorbing tracers, HTO and Uranine were used to determine the mean residence time and its variance. Laboratory data on diffusion and sorption properties were used to 'predict' the RTD of the sorbing tracers. It was found that if all the flow occurs in a single fracture, diffusion into stagnant zones of water in the fracture could be 10 to 300 times larger than the uptake into the rock matrix in these experiments. This was also found to give good agreement with the experiments for the non-sorbing and weakly sorbing tracers. 
For the strongly sorbing tracers it was necessary to invoke a 2-6 times stronger interaction with

  15. Modification of Concrete Damaged Plasticity model. Part II: Formulation and numerical tests

    Directory of Open Access Journals (Sweden)

    Kamińska Inez

    2017-01-01

    A refined model for an elastoplastic damaged material is formulated, based on the plastic potential introduced in Part I [1]. The considered model is an extension of the Concrete Damaged Plasticity material implemented in Abaqus [2]. In the paper, the stiffness tensor for elastoplastic damaged behaviour is derived. In order to validate the model, computations for the uniaxial tests are performed. The response of the model for various choices of parameters is shown and compared to the response of the CDP model.

  16. Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.

    The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The different results are compared with the existing experimental data. A similar study has already been performed by Rong…

  17. Dropouts and Budgets: A Test of a Dropout Reduction Model among Students in Israeli Higher Education

    Science.gov (United States)

    Bar-Am, Ran; Arar, Osama

    2017-01-01

    This article deals with the problem of student dropout during the first year in a higher education institution. To date, no model on a budget has been developed and tested to prevent dropout among engineering students. This case study was conducted among first-year students taking evening classes in two practical engineering colleges in Israel.…

  18. Calibration of a Chemistry Test Using the Rasch Model

    Directory of Open Access Journals (Sweden)

    Nancy Coromoto Martín Guaregua

    2011-11-01

    Full Text Available The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
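
    As a minimal sketch of the two ideas in this abstract, the Rasch response probability and the search for gaps on the item difficulty scale can be illustrated as follows (the logit difficulties and the 1-logit gap threshold are hypothetical values of ours, not the study's data):

    ```python
    import math

    def rasch_prob(theta, b):
        """Rasch model: probability of a correct response for a person
        of ability theta on an item of difficulty b (both in logits)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def scale_gaps(difficulties, threshold=1.0):
        """Adjacent gaps on the calibrated difficulty scale larger than
        `threshold`; such gaps are where new items would be needed."""
        ds = sorted(difficulties)
        return [(lo, hi) for lo, hi in zip(ds, ds[1:]) if hi - lo > threshold]

    # A person whose ability equals an item's difficulty answers it
    # correctly with probability 0.5 -- the anchor point of the scale.
    print(rasch_prob(0.0, 0.0))  # 0.5
    # Hypothetical calibrated item difficulties: the jump from 0.4 to
    # 2.1 logits is a gap on the scale.
    print(scale_gaps([-1.3, -0.5, 0.1, 0.4, 2.1, 2.4]))  # [(0.4, 2.1)]
    ```

    The same gap check applies to any calibrated difficulty vector, whatever estimation method produced it.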

  19. Latent Trait Model Contributions to Criterion-Referenced Testing Technology.

    Science.gov (United States)

    1982-02-01

    levels of ability (ranging from very low to very high). The steps in the research were as follows: 1. Specify the characteristics of a "typical" pool...conventional testing methodologies displayed good fit to both of the latent trait models. The one-parameter model compared favorably with the three-parameter... Methodological developments: New directions for testing and measurement (No. 4). San Francisco: Jossey-Bass, 1979. Hambleton, R. K. Advances in

  20. Instrumentation and testing of a prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Hessheimer, M.F.; Pace, D.W.; Klamerus, E.W.

    1997-01-01

    Static overpressurization tests of two scale models of nuclear containment structures - a steel containment vessel (SCV) representative of an improved, boiling water reactor (BWR) Mark II design and a prestressed concrete containment vessel (PCCV) for pressurized water reactors (PWR) - are being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the U.S. Nuclear Regulatory Commission. This paper discusses plans for instrumentation and testing of the PCCV model. 6 refs., 2 figs., 2 tabs

  1. Testing the Structure of Hydrological Models using Genetic Programming

    Science.gov (United States)

    Selle, B.; Muttil, N.

    2009-04-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
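
    The core idea, searching over alternative model structures and letting the data pick the dominant process, can be sketched in a toy form. This is not the study's algorithm: random tree generation stands in for full GP with crossover and selection, and the synthetic "lysimeter" data (percolation driven by ponding time only) is invented for illustration:

    ```python
    import random

    random.seed(1)

    # Hypothetical data: percolation depends on ponding time p and,
    # supposedly, watertable depth w -- but the synthetic ground truth
    # uses p only (perc = 0.8 * p), mimicking a dominant process.
    xs = [(p, w) for p in range(1, 11) for w in (1.0, 2.0)]
    ys = [0.8 * p for p, w in xs]

    OPS = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}

    def random_tree(depth=2):
        """Random expression tree over the inputs and small constants."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(['p', 'w', round(random.uniform(0, 1), 1)])
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, p, w):
        if tree == 'p':
            return p
        if tree == 'w':
            return w
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, p, w), evaluate(right, p, w))

    def fitness(tree):
        """Sum of squared errors of a candidate structure on the data."""
        return sum((evaluate(tree, p, w) - y) ** 2
                   for (p, w), y in zip(xs, ys))

    # Structure search; the pool is seeded with the bare inputs so the
    # winner is at least as good as either single-input model.
    candidates = ['p', 'w'] + [random_tree() for _ in range(3000)]
    best = min(candidates, key=fitness)
    print(best, fitness(best))
    ```

    A structure like `('*', 'p', 0.8)` fits the synthetic data exactly, and the evolved winner typically involves `p` but not `w`, which is how such a search flags the dominant process.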

  2. Improved animal models for testing gene therapy for atherosclerosis.

    Science.gov (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A

    2014-04-01

    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  3. Method of case hardening depth testing by using multifunctional ultrasonic testing instrument

    International Nuclear Information System (INIS)

    Salchak, Y A; Sednev, D A; Ardashkin, I B; Kroening, M

    2015-01-01

    The paper describes the usability of ultrasonic case hardening depth control using a standard instrument for ultrasonic inspection. An ultrasonic method for measuring the depth of the hardened layer is proposed. Experimental series with the specified and multifunctional ultrasonic equipment are performed. The obtained results are compared with the results of a reference method of analysis. (paper)
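
    The record does not give the paper's actual algorithm; as background, the basic pulse-echo relation that ultrasonic depth measurements build on is depth = v * t / 2, since the echo travels down and back. A minimal sketch (the 5900 m/s longitudinal wave speed is an assumed typical value for steel, which calibration against a reference method would refine):

    ```python
    def case_depth_mm(time_of_flight_us, velocity_m_per_s=5900.0):
        """Pulse-echo depth estimate: depth = v * t / 2.

        time_of_flight_us: round-trip delay of the echo in microseconds.
        velocity_m_per_s: assumed longitudinal wave speed (~5900 m/s in
        steel); the hardened layer shifts this slightly, which is what
        calibration corrects for.
        """
        t = time_of_flight_us * 1e-6           # microseconds -> seconds
        return velocity_m_per_s * t / 2 * 1e3  # metres -> millimetres

    # A 0.5 us round-trip delay corresponds to ~1.475 mm of depth.
    print(case_depth_mm(0.5))
    ```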

  4. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.

    1994-01-01

    A numerical model and measurements of flow and settling in activated sludge suspension are presented. The numerical model is an attempt to describe the complex and interrelated hydraulic and sedimentation phenomena by describing the turbulent flow field and the transport/dispersion of suspended sludge. Phenomena such as free and hindered settling and the Bingham plastic characteristic of activated sludge suspensions are included in the numerical model. Further, characterisation and test tank experiments are described. The characterisation experiments were designed to measure calibration parameters for the model description of settling and density differences. In the test tank experiments, flow velocities and suspended sludge concentrations were measured with different tank inlet geometry and hydraulic and sludge loads. The test tank experiments provided results for the calibration of the numerical model...

  5. A tutorial on testing the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias; Minakata, Katsumi

    2016-01-01

    ...to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and current statistical procedures used to test the race model inequality during the period between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations and the interpretation of violations of the race model inequality. We make detailed recommendations on the design of redundant signals experiments...
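
    Miller's race model inequality named in the abstract, F_AB(t) <= F_A(t) + F_B(t), can be checked directly on empirical distribution functions. A minimal sketch with hypothetical reaction times (the function names and data are ours; a real test would add the statistical machinery the tutorial reviews):

    ```python
    def ecdf(sample, t):
        """Empirical cumulative distribution function at time t."""
        return sum(rt <= t for rt in sample) / len(sample)

    def race_model_violations(rt_a, rt_b, rt_ab, ts):
        """Times t where Miller's inequality F_AB(t) <= F_A(t) + F_B(t)
        fails. A violation means redundant-signal responses are faster
        than any race between the two single-signal channels allows."""
        return [t for t in ts
                if ecdf(rt_ab, t) > ecdf(rt_a, t) + ecdf(rt_b, t)]

    # Hypothetical RTs (ms): the redundant condition is much faster
    # early on, violating the race model at small t.
    rt_a = [300, 320, 340, 360, 380]
    rt_b = [310, 330, 350, 370, 390]
    rt_ab = [250, 255, 260, 265, 270]
    print(race_model_violations(rt_a, rt_b, rt_ab, ts=[260, 300, 340]))
    ```

    In practice the comparison is made at fixed quantiles of the pooled distribution and paired with a test that controls Type I error, which is exactly the concern the tutorial addresses.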

  6. Model tests on overall forces on the SSG pilot plant

    DEFF Research Database (Denmark)

    Margheritini, Lucia; Morris, Alex

    ...The tests were carried out at the Department of Civil Engineering, AAU, in the 3D deep water tank with a 1:60 scale model and a reproduced bathymetry of the selected location at the time of the experiments. Overall forces and moments were measured during the tests. The results are given...

  7. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to

  8. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  9. An integrated service excellence model for military test and ...

    African Journals Online (AJOL)

    The purpose of this article is to introduce an Integrated Service Excellence Model (ISEM) for empowering the leadership core of the capital-intensive military test and evaluation facilities to provide strategic military test and evaluation services and to continuously improve service excellence by ensuring that all activities ...

  10. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.; Arbab, F.; Sirjani, M.

    2012-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the

  11. Towards model-based testing of electronic funds transfer systems

    NARCIS (Netherlands)

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the

  12. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power...

  13. Testing for time-varying loadings in dynamic factor models

    DEFF Research Database (Denmark)

    Mikkelsen, Jakob Guldbæk

    In this paper we develop a test for time-varying factor loadings in factor models. The test is simple to compute and is constructed from estimated factors and residuals using the principal components estimator. The hypothesis is tested by regressing the squared residuals on the squared... there is evidence of time-varying loadings on the risk factors underlying portfolio returns for around 80% of the portfolios...
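
    The testing idea described in the abstract, regressing squared residuals on squared factors and forming an LM-type statistic of the T * R-squared kind, can be sketched for the one-factor case (this is our simplified reading, not the paper's exact statistic):

    ```python
    def lm_stat(resid_sq, factor_sq):
        """LM-type statistic: regress squared residuals on squared
        factors and return n * R^2. Under constant loadings (and with
        one regressor, as here) this is approximately chi-squared(1)."""
        n = len(resid_sq)
        mx = sum(factor_sq) / n
        my = sum(resid_sq) / n
        sxx = sum((x - mx) ** 2 for x in factor_sq)
        sxy = sum((x - mx) * (y - my)
                  for x, y in zip(factor_sq, resid_sq))
        syy = sum((y - my) ** 2 for y in resid_sq)
        r2 = (sxy ** 2) / (sxx * syy) if sxx > 0 and syy > 0 else 0.0
        return n * r2

    # Perfect linear dependence of squared residuals on squared factors
    # gives R^2 = 1, so the statistic equals the sample size.
    print(lm_stat([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))  # 4.0
    ```

    A large value relative to the chi-squared critical value would point to time variation in the loadings.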

  14. Model-Driven Test Generation of Distributed Systems

    Science.gov (United States)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin

    2012-01-01

    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  15. Comparison of vibration test results for Atucha II NPP and large scale concrete block models

    International Nuclear Information System (INIS)

    Iizuka, S.; Konno, T.; Prato, C.A.

    2001-01-01

    In order to study the soil-structure interaction of a reactor building that could be constructed on Quaternary soil, a comparison study of the soil-structure interaction springs was performed between full scale vibration test results of Atucha II NPP and vibration test results of large scale concrete block models constructed on Quaternary soil. This comparison study provides case data on soil-structure interaction springs on Quaternary soil with different foundation sizes and stiffnesses. (author)

  16. The microelectronics and photonics test bed (MPTB) space, ground test and modeling experiments

    International Nuclear Information System (INIS)

    Campbell, A.

    1999-01-01

    This paper is an overview of the MPTB (microelectronics and photonics test bed) experiment, a combination of a space experiment, ground test and modeling programs looking at the response of advanced electronic and photonic technologies to the natural radiation environment of space. (author)

  17. Testing and reference model analysis of FTTH system

    Science.gov (United States)

    Feng, Xiancheng; Cui, Wanlong; Chen, Ying

    2009-08-01

    With the rapid development of the Internet and broadband access networks, technologies such as xDSL, FTTx+LAN and WLAN have found wider application, and new network services emerge in an endless stream, especially network gaming, meeting TV and video on demand. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication, data and TV services as well as future digital TV and VOD. With its huge bandwidth, FTTH is the final solution for broadband access and has become the goal of optical access network development. Fiber to the Home (FTTH) will be the goal of telecommunications cable broadband access. In accordance with the development trend of telecommunication services, to enhance the capacity of the integrated access network and to achieve triple-play (voice, data, image), the optical fiber can be extended to the end user of an FTTH system by using EPON technology, building on the existing Fiber To The Curb (FTTC), Fiber To The Zone (FTTZ) and Fiber To The Building (FTTB) optical cable networks. The article first introduces the basic components of an FTTH system, and then explains the reference model and reference points for testing of the FTTH system. Finally, through test connection diagrams, testing procedures and expected results, it primarily analyses SNI interface testing, PON interface testing, Ethernet performance testing, UNI interface testing, Ethernet functional testing, PON functional testing, equipment functional testing, telephone functional testing, operational support capability testing and other tests of the FTTH system.

  18. Estrogen receptor testing and 10-year mortality from breast cancer: A model for determining testing strategy

    Directory of Open Access Journals (Sweden)

    Christopher Naugler

    2012-01-01

    Full Text Available Background: The use of adjuvant tamoxifen therapy in the treatment of estrogen receptor (ER)-expressing breast carcinomas represents a major advance in personalized cancer treatment. Because there is no benefit (and indeed there is increased morbidity and mortality) associated with the use of tamoxifen therapy in ER-negative breast cancer, its use is restricted to women with ER-expressing cancers. However, correctly classifying cancers as ER positive or negative has been challenging given the high reported false negative test rates for ER expression in surgical specimens. In this paper I model practice recommendations using published information from clinical trials to address the question of whether there is a false negative test rate above which it is more efficacious to forgo ER testing and instead treat all patients with tamoxifen regardless of ER test results. Methods: I used data from randomized clinical trials to model two different hypothetical treatment strategies: (1) the current strategy of treating only ER-positive women with tamoxifen and (2) an alternative strategy where all women are treated with tamoxifen regardless of ER test results. The variables used in the model are literature-derived survival rates of the different combinations of ER positivity and treatment with tamoxifen, varying true ER positivity rates and varying false negative ER testing rates. The outcome variable was hypothetical 10-year survival. Results: The model predicted that there is a range of true ER rates and false negative test rates above which it would be more efficacious to treat all women with breast cancer with tamoxifen and forgo ER testing. This situation occurred with high true positive ER rates and false negative ER test rates in the range of 20-30%. Conclusions: It is hoped that this model will provide an example of the potential importance of diagnostic error on clinical outcomes and furthermore will give an example of how the effect of that
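
    The comparison of the two strategies is a simple expected-value calculation over the ER-status and treatment combinations. A sketch with placeholder survival rates (the numbers below are invented for illustration and are not the literature-derived values the paper uses, so the crossover point differs from the study's 20-30% finding):

    ```python
    def survival(strategy, p_er_pos, fn_rate,
                 s_pos_tam=0.70, s_pos_no=0.60,
                 s_neg_tam=0.48, s_neg_no=0.50):
        """Expected 10-year survival under a strategy.

        Survival parameters are hypothetical placeholders: tamoxifen
        helps ER-positive patients and slightly harms ER-negative ones.
        """
        if strategy == 'treat_all':
            # Everyone gets tamoxifen; no testing, no false negatives.
            return p_er_pos * s_pos_tam + (1 - p_er_pos) * s_neg_tam
        # 'test': true positives get tamoxifen; false negatives miss it.
        return (p_er_pos * ((1 - fn_rate) * s_pos_tam + fn_rate * s_pos_no)
                + (1 - p_er_pos) * s_neg_no)

    # With a high true ER-positive rate, treating everyone overtakes
    # testing once the false-negative rate grows large enough.
    for fn in (0.0, 0.1, 0.3):
        print(fn, survival('treat_all', 0.8, fn) > survival('test', 0.8, fn))
    ```

    Sweeping the false-negative rate in this way locates the crossover above which testing stops paying for itself, which is the question the paper formalizes.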

  19. Building Energy Simulation Test for Existing Homes (BESTEST-EX); Phase 1 Test Procedure: Building Thermal Fabric Cases

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.

    2010-08-01

    The U.S. Department of Energy tasked NREL to develop a process for testing the reliability of models that predict retrofit energy savings, including their associated calibration methods. DOE asked NREL to conduct the work in phases so that a test procedure would be ready should DOE need it to meet legislative requirements related to residential retrofits in FY 2010. This report documents the initial 'Phase 1' test procedure.

  20. Towards model-based testing of electronic funds transfer systems

    OpenAIRE

    Asaadi, H.R.; Khosravi, R.; Mousavi, M.R.; Noroozi, N.

    2010-01-01

    We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalization of the transaction flows specified in the ISO 8583 standard in terms of a Labeled Transition System (LTS). This formalization paves the way for model-based testing based on the formal notion of Input-Outpu...

  1. Data Modeling for Measurements in the Metrology and Testing Fields

    CERN Document Server

    Pavese, Franco

    2009-01-01

    Offers a comprehensive set of modeling methods for data and uncertainty analysis. This work develops methods and computational tools to address general models that arise in practice, allowing for a more valid treatment of calibration and test data and providing an understanding of complex situations in measurement science

  2. A Preliminary Field Test of an Employee Work Passion Model

    Science.gov (United States)

    Zigarmi, Drea; Nimon, Kim; Houson, Dobie; Witt, David; Diehl, Jim

    2011-01-01

    Four dimensions of a process model for the formulation of employee work passion, derived from Zigarmi, Nimon, Houson, Witt, and Diehl (2009), were tested in a field setting. A total of 447 employees completed questionnaires that assessed the internal elements of the model in a corporate work environment. Data from the measurements of work affect,…

  3. Testing static tradeoff theory against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  4. MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY

    Directory of Open Access Journals (Sweden)

    V. K. Nedbalsky

    2007-01-01

    Full Text Available A design of a low pressure turbine has been developed; it is covered by an invention patent and a useful model patent. Testing of the hydraulic turbine model has been carried out with the model installed on a vertical shaft. The efficiency was equal to 76-78 %, which exceeds the efficiency of known low pressure blade turbines.

  5. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  6. Animal models for testing anti-prion drugs.

    Science.gov (United States)

    Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín

    2013-01-01

    Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models, without hopeful results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which were previously tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before an effective therapy against human prion diseases is available. This review is intended to display the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures when handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.

  7. An Empirical Test of a Model of Resistance to Persuasion.

    Science.gov (United States)

    And Others; Burgoon, Michael

    1978-01-01

    Tests a model of resistance to persuasion based upon variables not considered by earlier congruity and inoculation models. Supports the prediction that the kind of critical response set induced and the target of the criticism are mediators of resistance to persuasion. (JMF)

  8. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    The term "model-driven" is not at all a new buzzword within the ranks of the system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  9. Case Studies in Modelling, Control in Food Processes.

    Science.gov (United States)

    Glassey, J; Barone, A; Montague, G A; Sabou, V

    This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector, and the latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.

  10. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests, motivating the introduction of bootstrap...

  11. Testing and modelling autoregressive conditional heteroskedasticity of streamflow processes

    Directory of Open Access Journals (Sweden)

    W. Wang

    2005-01-01

    Full Text Available Conventional streamflow models operate under the assumption of constant variance or season-dependent variances (e.g. ARMA (AutoRegressive Moving Average) models for deseasonalized streamflow series and PARMA (Periodic AutoRegressive Moving Average) models for seasonal streamflow series). However, with the McLeod-Li test and Engle's Lagrange Multiplier test, clear evidence is found for the existence of autoregressive conditional heteroskedasticity (i.e. the ARCH (AutoRegressive Conditional Heteroskedasticity) effect), a nonlinear phenomenon of the variance behaviour, in the residual series from linear models fitted to daily and monthly streamflow processes of the upper Yellow River, China. It is shown that the major cause of the ARCH effect is the seasonal variation in variance of the residual series. However, while the seasonal variation in variance can fully explain the ARCH effect for monthly streamflow, it is only a partial explanation for daily flow. It is also shown that while the periodic autoregressive moving average model is adequate in modelling monthly flows, no model is adequate in modelling daily streamflow processes because none of the conventional time series models takes into account the seasonal variation in variance, as well as the ARCH effect in the residuals. Therefore, an ARMA-GARCH (Generalized AutoRegressive Conditional Heteroskedasticity) error model is proposed to capture the ARCH effect present in daily streamflow series, as well as to preserve the seasonal variation in variance in the residuals. The ARMA-GARCH error model combines an ARMA model for modelling the mean behaviour and a GARCH model for modelling the variance behaviour of the residuals from the ARMA model. Since the GARCH model is not followed widely in statistical hydrology, this work can be a useful addition in terms of statistical modelling of daily streamflow processes for the hydrological community.
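
    Engle's Lagrange Multiplier test mentioned in the abstract regresses squared residuals on their own lags and forms an n * R-squared statistic, asymptotically chi-squared with one degree of freedom per lag. A minimal one-lag sketch on toy data (the residual series below is ours, chosen so the lag-1 dependence is exact):

    ```python
    def arch_lm(residuals):
        """Engle's LM test for ARCH(1): regress e_t^2 on e_{t-1}^2 and
        return n * R^2, where n is the number of usable observations.
        Under the null of no ARCH effect this is ~ chi-squared(1)."""
        e2 = [e * e for e in residuals]
        y, x = e2[1:], e2[:-1]
        n = len(y)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        syy = sum((b - my) ** 2 for b in y)
        r2 = (sxy ** 2) / (sxx * syy) if sxx > 0 and syy > 0 else 0.0
        return n * r2

    # Alternating residuals have perfectly predictable squared values
    # (R^2 = 1), so the statistic equals n; constant residuals give 0.
    print(arch_lm([1, 2] * 3))    # close to 5: strong lag-1 dependence
    print(arch_lm([1, 1, 1, 1]))  # 0.0: nothing to explain
    ```

    A full implementation would include several lags and compare the statistic against the chi-squared critical value, exactly as done for the Yellow River residual series.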

  12. Testing MODFLOW-LGR for simulating flow around Buried Quaternary valleys - synthetic test cases

    DEFF Research Database (Denmark)

    Vilhelmsen, Troels Norvin; Christensen, Steen

    In Denmark the water supply is entirely based on ground water. In some parts of the country these resources are found in buried Quaternary tunnel valleys. Intensive mapping has shown that the valleys typically have a complex internal hydrogeology with multiple cut-and-fill structures... The administration of groundwater resources has been based on simulations using regional scale groundwater models. However, regional scale models have difficulties with accurately resolving the complex geology of the buried valleys, which bears the risk of poor model predictions of local scale effects of groundwater...

  13. Testing the structure of a hydrological model using Genetic Programming

    Science.gov (United States)

    Selle, Benny; Muttil, Nitin

    2011-01-01

    Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.

  14. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.

  15. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The dynamic mechanical response of a resistance welding machine is very important to the weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up called a breaking test set-up are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient, and stiffness for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters considering the machine factors. The method is straightforward and easy to apply in industry since the whole procedure is based on tests with no requirements…
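A lumped-parameter model with equivalent mass, damping coefficient, and stiffness is a standard mass-spring-damper system, m·x″ + c·x′ + k·x = F. The sketch below integrates such a system numerically; the parameter values are made up for illustration, not measured machine parameters.

```python
# Hedged sketch of the lumped-parameter electrode model: a mass-spring-
# damper m*x'' + c*x' + k*x = F. Parameter values are illustrative only.

def step_response(m, c, k, F, dt=1e-4, t_end=1.0):
    """Semi-implicit Euler integration of the step response from rest."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (F - c * v - k * x) / m  # Newton's second law
        v += a * dt
        x += v * dt
    return x

# After transients damp out, displacement approaches static equilibrium F/k.
x_final = step_response(m=50.0, c=2000.0, k=1e5, F=500.0)
print(x_final)  # close to 500 / 1e5 = 0.005 m
```

Fitting m, c, and k so that the simulated transient matches the measured breaking-test response is, in essence, the parameter-identification step the abstract describes.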

  16. Reaction times to weak test lights. [psychophysics biological model

    Science.gov (United States)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs there is some probability - increasing with the magnitude of the sampled response - that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper we test the hypothesis that reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.
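The sampling model's qualitative prediction — stronger lights produce earlier first detection events, hence shorter reaction times — can be checked with a toy Monte Carlo simulation. The sampling rate and the detection-probability function below are invented for illustration; they are not the paper's fitted quantities.

```python
# Toy simulation of the sampling model: the channel response is sampled at
# discrete times, and each sample triggers a detection event with a
# probability that grows with stimulus intensity. Reaction time is taken as
# the time of the first detection event. All rates are invented.
import random

random.seed(1)

def first_detection_time(intensity, dt=0.001, max_t=5.0):
    p = min(1.0, 0.02 * intensity)  # detection probability per sample
    t = 0.0
    while t < max_t:
        if random.random() < p:
            return t
        t += dt
    return max_t  # no detection within the trial window

def mean_rt(intensity, n=2000):
    return sum(first_detection_time(intensity) for _ in range(n)) / n

# Stronger test lights should yield shorter mean reaction times.
print(mean_rt(1.0), mean_rt(5.0))
```

With a constant per-sample probability the first-detection time is geometrically distributed, which is the kind of detection-event statistic Maloney and Wandell derive in closed form.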

  17. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU). The objective of the tests was to investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up of piles. The measurements should be used to design access platforms on piles. The model tests include: calibration of regular and irregular sea states at the location of the pile (without the structure in place), and measurement of wave run-up for the calibrated sea states on the front side of the pile (0 to 90 degrees). These tests were conducted at Aalborg University from 9 October 2006 to 8 November 2006. Unless otherwise mentioned, all values given in this report are in model scale.

  18. Creating a simulation model of software testing using Simulink package

    Directory of Open Access Journals (Sweden)

    V. M. Dubovoi

    2016-12-01

    Full Text Available Determining a model of software testing that allows prediction of both the whole process and its specific stages is relevant for the IT industry. The article focuses on solving this problem; its aim is to predict the duration and improve the quality of software testing. Analysis of the software testing process shows that it can be classified among branched cyclic technological processes, because it is cyclical with decision-making on control operations. The investigation builds on the authors' previous works and a software testing process method based on a Markov model. The proposed method enables prediction for each software module, which leads to better decision-making for each controlled suboperation of all processes. A Simulink simulation model demonstrates the implementation and verification of the proposed technique. The results of the research have been practically implemented in the IT industry.
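A minimal version of such a cyclic Markov model: from a "test" state a module is accepted with probability p, or else returns via a "fix" state and is retested. The expected number of test runs before acceptance satisfies E = 1 + (1 − p)·E, i.e. E = 1/p. The transition probability below is illustrative, not from the article.

```python
# Minimal Markov sketch of a cyclic test process: pass with probability
# p_pass, otherwise fix and retest. The probability value is illustrative.

def expected_test_runs(p_pass, max_iter=10000, tol=1e-12):
    """Expected visits to the 'test' state before absorption in 'done'.

    Solves the fixed-point equation E = 1 + (1 - p_pass) * E iteratively;
    the closed form is simply 1 / p_pass.
    """
    e = 0.0
    for _ in range(max_iter):
        new_e = 1.0 + (1.0 - p_pass) * e
        if abs(new_e - e) < tol:
            return new_e
        e = new_e
    return e

print(expected_test_runs(0.5))  # converges to 1 / 0.5 = 2.0
```

Per-module estimates of this kind are what lets the method predict testing time for each module and schedule the control operations accordingly.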

  19. Testing R&D-Based Endogenous Growth Models

    DEFF Research Database (Denmark)

    Kruse-Andersen, Peter Kjær

    2017-01-01

    R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue. Particularly, the annual long…

  20. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius Test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t… for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.
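For reference, the baseline the abstract compares against is the standard Miller-Rabin probable-prime test, whose worst-case error probability is at most 4^-t for t rounds. The sketch below is that standard test, not the EQFT itself.

```python
# Standard Miller-Rabin probable-prime test (the baseline EQFT is compared
# against); worst-case error probability at most 4^-t for t random rounds.
import random

def miller_rabin(n, t=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):  # each round cuts the error probability by >= 4x
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' is a witness: n is composite
    return True  # probably prime

print(miller_rabin(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
```

The abstract's point is that EQFT spends the time of roughly two such rounds but achieves an error bound comparable to nine of them in the average case.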