GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL
Hesham A. Hassan; Zahraa E. Yousif
2010-01-01
Model-based testing refers to testing and test-case generation based on a model that describes the behavior of the system. Extensive use of models throughout all phases of software development, starting from the requirements engineering phase, has increased the importance of model-based testing. The OMG initiative MDA has revolutionized the way models are used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...
The Couplex test cases: models and lessons
International Nuclear Information System (INIS)
Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.
2003-01-01
The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)
Automatic Generation of Test Cases from UML Models
Directory of Open Access Journals (Sweden)
Constanza Pérez
2018-04-01
[Context] The growing demand for high-quality software has pushed industry to adopt processes that let it comply with quality standards, but at increased development cost. A strategy to reduce this cost is to incorporate quality evaluations from the early stages of software development. A technique that facilitates this evaluation is model-based testing, which allows test cases to be generated in early phases using the conceptual models of the system as input. [Objective] In this paper, we introduce TCGen, a tool that enables the automatic generation of abstract test cases starting from UML conceptual models. [Method] The design and implementation of TCGen, a technique that applies different testing criteria to class diagrams and state transition diagrams to generate test cases, is presented as a model-based testing approach. To do that, TCGen uses UML models, which are widely used in industry, and a set of algorithms that recognize the concepts in the models in order to generate abstract test cases. [Results] An exploratory experimental evaluation has been performed to compare the TCGen tool with traditional testing. [Conclusions] Even though the exploratory evaluation shows promising results, more empirical evaluations are needed in order to generalize them.
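As a rough illustration of what generating abstract test cases from a state transition diagram can mean in practice, the sketch below derives one event sequence per transition of a toy state machine, so that every transition is exercised at least once. This is a generic transition-coverage idea, not TCGen's actual algorithm, and the ATM-like states and events are invented for the example:

```python
from collections import deque

def transition_coverage_tests(transitions, initial):
    """Generate abstract test cases (event sequences) that together
    cover every transition at least once.
    `transitions` maps (state, event) -> next_state."""
    # BFS: shortest event path from the initial state to every state
    paths = {initial: []}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [event]
                queue.append(dst)
    # one abstract test per transition: reach src, then fire the event
    tests = []
    for (src, event), dst in sorted(transitions.items()):
        if src in paths:
            tests.append(paths[src] + [event])
    return tests

# hypothetical ATM-like state machine
t = {
    ("idle", "insert"): "active",
    ("active", "pin_ok"): "menu",
    ("active", "eject"): "idle",
    ("menu", "eject"): "idle",
}
cases = transition_coverage_tests(t, "idle")
```

Each returned case is "abstract" in the paper's sense: a platform-independent event sequence that still has to be mapped to concrete inputs.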
Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction
Calamé, Jens R.; Ioustinova, Natalia; Romijn, J.M.T.; Smith, G.; van de Pol, Jan Cornelis
2007-01-01
Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to
Energy Technology Data Exchange (ETDEWEB)
Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-07-31
This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.
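The acceleration mechanism described above can be illustrated with textbook equilibrium-partitioning retardation factors: strong radionuclide partitioning to mobile colloids lowers the effective retardation relative to the colloid-free case. This is a generic sketch, not the GDSA-PFLOTRAN formulation, and all parameter values are illustrative:

```python
def retardation(rho_b, theta, kd):
    """Classical retardation factor for linear equilibrium sorption to
    immobile surfaces, no colloids: R = 1 + (rho_b / theta) * Kd.
    rho_b: bulk density [kg/L], theta: porosity [-], kd: [L/kg]."""
    return 1.0 + (rho_b / theta) * kd

def retardation_with_colloids(rho_b, theta, kd, kc, c_col):
    """Effective retardation when part of the radionuclide partitions
    onto mobile colloids (linear equilibrium isotherms).
    kc: radionuclide-colloid partition coefficient [L/kg]
    c_col: mobile colloid concentration [kg/L]"""
    return (1.0 + (rho_b / theta) * kd) / (1.0 + kc * c_col)
```

With illustrative values (rho_b = 1.5 kg/L, theta = 0.3, Kd = 10 L/kg, Kc = 1e4 L/kg, colloids at 1e-3 kg/L), the effective retardation drops by an order of magnitude, which is the accelerated-transport effect the test cases are designed to exercise.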
Class hierarchical test case generation algorithm based on expanded EMDPN model
Institute of Scientific and Technical Information of China (English)
LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang
2006-01-01
A new model of event- and message-driven Petri networks (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems arising from the class inheritance mechanism in object-oriented software testing, such as oracle construction, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases produced by the newly proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria; the proposed algorithm therefore achieves a good coverage rate.
Tests of spinning turbine fragment impact on casing models
International Nuclear Information System (INIS)
Wilbeck, J.S.
1984-01-01
Ten 1/11-scale model turbine missile impact tests were conducted at a Naval spin chamber test facility to assess turbine missile effects in nuclear plant design. The objective of the tests was to determine the effects of missile spin, blade crush, and target edge conditions on the impact of turbine disk fragments on the steel casing. The results were intended for use in making realistic estimates for the initial conditions of fragments that might escape the casing in the event of a disk burst in a nuclear plant. The burst of a modified gas turbine rotor in a high-speed spin chamber provided three missiles with the proper rotational and translational velocities of actual steam turbine fragments. Tests of bladed, spinning missiles were compared with previous tests of unbladed, nonspinning missiles. The total residual energy of the spinning missiles, as observed from high-speed photographs of disk burst, was the same as that of the nonspinning missiles launched in a piercing orientation. Tests with bladed missiles showed that for equal burst speeds, the residual energy of bladed missiles is less than that of unbladed missiles. Impacts of missiles near the edge of targets resulted in residual missile velocities greater than for central impact. (orig.)
DECOVALEX I - Test Case 1: Coupled stress-flow model
International Nuclear Information System (INIS)
Rosengren, L.; Christianson, M.
1995-12-01
This report presents the results of the coupled stress-flow model, test case 1 of DECOVALEX. The model simulates the fourth loading cycle of a coupled stress-flow test and subsequent shearing up to and beyond peak shear resistance. The first loading sequence (A) consists of seven normal loading steps: 0, 5, 15, 25, 15, 5, 0 MPa. The second loading sequence (B) consists of the following eight steps: unstressed state, normal boundary loading of 25 MPa (no shearing), and then shearing of 0.5, 0.8, 2, 4, 2, 0 mm. Two different options regarding the rock joint behaviour were modeled in accordance with the problem definition. In option 1 a linear elastic joint model with a Coulomb slip criterion was used. In option 2 a non-linear empirical (i.e. Barton-Bandis) joint model was used. The hydraulic condition during both load sequences A and B was a constant head of 5 m at the inlet point and 0 m at the outlet point. All model runs presented in this report were performed using the two-dimensional distinct element computer code UDEC, version 1.8. 30 refs, 36 figs
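The Coulomb slip criterion of option 1 can be sketched as a simple check: the joint slips once the shear stress reaches the cohesion plus the frictional resistance. This is the generic Mohr-Coulomb form, not the UDEC implementation, and the stress and friction values below are illustrative:

```python
import math

def coulomb_slips(tau, sigma_n, cohesion, friction_deg):
    """True if a rock joint slips under shear stress tau [MPa] and
    normal (compressive) stress sigma_n [MPa] by the Coulomb
    criterion: |tau| >= c + sigma_n * tan(phi)."""
    strength = cohesion + sigma_n * math.tan(math.radians(friction_deg))
    return abs(tau) >= strength
```

For example, with zero cohesion and a 30-degree friction angle, a joint under 5 MPa normal stress slips at 10 MPa shear, while the same joint under 25 MPa normal stress (sequence B's boundary load) resists 2 MPa shear.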
Time-Optimal Real-Time Test Case Generation using UPPAAL
DEFF Research Database (Denmark)
Hessel, Anders; Larsen, Kim Guldstrand; Nielsen, Brian
2004-01-01
Testing is the primary software validation technique used by industry today, but remains ad hoc, error prone, and very expensive. A promising improvement is to automatically generate test cases from formal models of the system under test. We demonstrate how to automatically generate real-time conformance test cases from timed automata specifications. Specifically, we demonstrate how to efficiently generate real-time test cases with optimal execution time, i.e. test cases that are the fastest possible to execute. Our technique allows time-optimal test cases to be generated using manually formulated test purposes or generated automatically from various coverage criteria of the model.
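Time-optimal test generation can be pictured, in a heavily simplified analogue of what UPPAAL does on timed automata, as a shortest-time search over a transition graph: among all traces that reach the test goal, pick the one with the least total duration. The states, actions, and durations below are invented for illustration:

```python
import heapq

def fastest_trace(edges, start, goal):
    """Return (total_time, action_sequence) of the fastest trace from
    `start` to `goal`. `edges` maps (src_state, action, duration) to
    the destination state; Dijkstra's algorithm picks the
    time-optimal path -- a toy, untimed-graph analogue of
    time-optimal test-case generation."""
    pq = [(0.0, start, [])]
    settled = set()
    while pq:
        elapsed, state, trace = heapq.heappop(pq)
        if state == goal:
            return elapsed, trace
        if state in settled:
            continue
        settled.add(state)
        for (src, action, dt), dst in edges.items():
            if src == state and dst not in settled:
                heapq.heappush(pq, (elapsed + dt, dst, trace + [action]))
    return None

# hypothetical system model: two ways to reach the tested behaviour
edges = {
    ("off", "boot", 5.0): "idle",
    ("off", "quick_boot", 2.0): "idle",
    ("idle", "run_test", 1.0): "done",
}
time_needed, trace = fastest_trace(edges, "off", "done")
```

The fastest trace here takes the 2-second quick boot rather than the 5-second boot, mirroring the paper's goal of emitting the test case that is fastest to execute.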
Comparative Test Case Specification
DEFF Research Database (Denmark)
Kalyanova, Olena; Heiselberg, Per
This document includes the specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: a comparative one and an empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases. The comparative test cases include: ventilation, shading and geometry.
Energy Technology Data Exchange (ETDEWEB)
Neymark, J. [J. Neymark & Associates, Golden, CO (United States); Kennedy, M. [Mike D. Kennedy, Inc., Townsend, WA (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gall, J. [AAON, Inc., Tulsa, OK (United States); Knebel, D. [AAON, Inc., Tulsa, OK (United States); Henninger, R. [GARD Analytics, Inc., Arlington Heights, IL (United States); Witte, M. [GARD Analytics, Inc., Arlington Heights, IL (United States); Hong, T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McDowell, T. [Thermal Energy System Specialists, Madison, WI (United States); Yan, D. [Tsinghua Univ., Beijing (China); Zhou, X. [Tsinghua Univ., Beijing (China)
2016-03-01
This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.
Testing the Market Model – A Case Study of Fondul Proprietatea (FP)
Sorin Claudiu Radu
2014-01-01
The financial theory related to portfolio analysis was coined by Harry Markowitz, an authentic pioneer of modern portfolio theory; his well-thought-out interpretation of the portfolio selection model may be found in his research papers “Portfolio Selection” (Markowitz M. Harry, 1952) and “Portfolio Selection: Efficient Diversification of Investments” (Markowitz M. Harry, 1960). This paper sets out to test the market model on the Romanian stock market, in the case of the Property Fund (Fondul Proprietatea).
Test case for a near-surface repository
International Nuclear Information System (INIS)
Elert, M.; Jones, C.; Nilsson, L.B.; Skagius, K.; Wiborgh, M.
1998-01-01
A test case is presented for assessment of a near-surface disposal facility for radioactive waste. The case includes waste characterization and repository design, requirements and constraints in an assessment context, scenario development, model description and test calculations.
Directory of Open Access Journals (Sweden)
Ina Schieferdecker
2012-02-01
Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.
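One ingredient mentioned above, model-based fuzzing, can be sketched as taking a message the protocol model declares valid and applying small structural mutations to probe the implementation's robustness. The message format and the three mutation operators below are hypothetical, not taken from the DIAMONDS tooling:

```python
import random

def mutate(message, rng):
    """Return a fuzzed variant of a valid protocol message by
    flipping, dropping, or duplicating one byte (single-mutation
    fuzzing sketch)."""
    data = bytearray(message)
    i = rng.randrange(len(data))
    op = rng.choice(["flip", "drop", "dup"])
    if op == "flip":
        data[i] ^= 0xFF          # always changes the byte
    elif op == "drop":
        del data[i]              # shortens the message
    else:
        data.insert(i, data[i])  # duplicates the byte
    return bytes(data)

rng = random.Random(0)                  # fixed seed for reproducibility
valid = b"LOGIN alice 1234"             # hypothetical model-derived message
fuzzed = [mutate(valid, rng) for _ in range(5)]
```

Every mutation changes either a byte value or the message length, so each fuzzed input is guaranteed to differ from the valid one.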
Xie, Qin; Andrews, Stephen
2013-01-01
This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…
Strategy for a Rock Mechanics Site Descriptive Model. A test case based on data from the Aespoe HRL
International Nuclear Information System (INIS)
Hudson, John A
2002-06-01
In anticipation of the SKB Site Investigations for radioactive waste disposal, an approach has been developed for the Rock Mechanics Site Descriptive Model. This approach was tested by predicting the rock mechanics properties of a 600 m x 180 m x 120 m rock volume at the Aespoe Hard Rock Laboratory (HRL) using limited borehole data of the type typically obtained during a site investigation. These predicted properties were then compared with 'best estimate' properties obtained from a study of the test rock volume using additional information, mainly tunnel data. The exercise was known as the Test Case, and is the subject of this Report. Three modelling techniques were used to predict the rock properties: the 'empirical approach' - the rock properties were estimated using rock mass classification schemes and empirical correlation formulae; the 'theoretical approach' - the rock properties were estimated using numerical modelling techniques; and the 'stress approach' - the rock stress state was estimated using primary data and numerical modelling. These approaches are described separately and respectively. Following an explanation of the context for the Test Case within the strategy for developing the Rock Mechanics Site Descriptive Model, conditions at the Aespoe HRL are described in Chapter 2. The Test Case organization and the suite of nine Protocols used to ensure that the work was appropriately guided and co-ordinated are described in Chapter 3. The methods for predicting the rock properties and the rock stress, and comparisons with the 'best estimate' properties of the actual conditions, are presented in Chapters 4 and 5. Finally, the conclusions from this Test Case exercise are given in Chapter 6. General recommendations for the management of this type of Test Case are also included
Modeling of a Parabolic Trough Solar Field for Acceptance Testing: A Case Study
Energy Technology Data Exchange (ETDEWEB)
Wagner, M. J.; Mehos, M. S.; Kearney, D. W.; McMahan, A. C.
2011-01-01
As deployment of parabolic trough concentrating solar power (CSP) systems ramps up, the need for reliable and robust performance acceptance test guidelines for the solar field is also amplified. Project owners and/or EPC contractors often require extensive solar field performance testing as part of the plant commissioning process in order to ensure that actual solar field performance satisfies both technical specifications and performance guarantees between the involved parties. Performance test code work is currently underway at the National Renewable Energy Laboratory (NREL) in collaboration with the SolarPACES Task-I activity, and within the ASME PTC-52 committee. One important aspect of acceptance testing is the selection of a robust technology performance model. NREL has developed a detailed parabolic trough performance model within the SAM software tool. This model is capable of predicting solar field, sub-system, and component performance. It has further been modified for this work to support calculation at subhourly time steps. This paper presents the methodology and results of a case study comparing actual performance data for a parabolic trough solar field to the predicted results using the modified SAM trough model. Due to data limitations, the methodology is applied to a single collector loop, though it applies to larger subfields and entire solar fields. Special consideration is provided for the model formulation, improvements to the model formulation based on comparison with the collected data, and uncertainty associated with the measured data. Additionally, this paper identifies modeling considerations that are of particular importance in the solar field acceptance testing process and uses the model to provide preliminary recommendations regarding acceptable steady-state testing conditions at the single-loop level.
Casing pull tests for directionally drilled environmental wells
International Nuclear Information System (INIS)
Staller, G.E.; Wemple, R.P.; Layne, R.R.
1994-11-01
A series of tests to evaluate several types of environmental well casings has been conducted by Sandia National Laboratories (SNL) and its industrial partner, The Charles Machine Works, Inc. (CMW). A test bed was constructed at the CMW test range to model a typical shallow, horizontal, directionally drilled wellbore. Four different types of casings were pulled through this test bed. The loads required to pull the casings through the test bed and the condition of the casing material were documented during the pulling operations. An additional test was conducted to compare test bed and actual wellbore casing pull loads. A directionally drilled well was emplaced by CMW to closely match the test bed. An instrumented casing was installed in the well and the pull loads recorded. The completed tests are reviewed and the results reported.
Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine
2017-11-02
This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.
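Once dissolution profiles can be predicted from PAT measurements as described above, a standard way to judge whether a predicted and a measured profile agree is the f2 similarity factor used in FDA/EMA regulatory guidance, where f2 >= 50 is conventionally read as "similar". A minimal sketch follows (the profile values are illustrative, and this is not the modeling approach of the study itself):

```python
import math

def f2_similarity(ref, test):
    """f2 similarity factor between two dissolution profiles
    (percent dissolved at matched time points):
    f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    if len(ref) != len(test):
        raise ValueError("profiles must share the same time points")
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))
```

Identical profiles give f2 = 100, while a uniform 10-percentage-point offset at every time point drops f2 just below the conventional 50 threshold.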
A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models
Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.
2016-12-01
Due to the wide variety of choices that need to be made during the development of dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf sea scales at midlatitude. The bathymetry, initial conditions and external forcings are defined analytically, so that any model developer or user could reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and CROCO (Coastal and Regional Ocean Community model) models. Those two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-lagrangian vertical coordinate, a coupled space-time approach for advective terms, and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-eulerian vertical coordinate, uses a method of lines for advective terms, and its tracer advection satisfies the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies: 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case is an interesting experiment for continuing research on numerical approaches, as well as an efficient tool for intercomparison between structured-grid and unstructured-grid approaches. Reference: Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F.: The numerics of hydrostatic structured-grid coastal ocean models: state of
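The analytically defined tidal forcing mentioned above can be sketched with the textbook coastal Kelvin wave solution: a shallow-water wave travelling along a coastal wall, decaying offshore over the Rossby radius. The parameter values below are illustrative defaults, not the test case's actual specification:

```python
import math

def kelvin_wave_eta(x, y, t, amp=1.0, depth=50.0, f=1e-4, period=44712.0):
    """Sea-surface elevation [m] of a coastal Kelvin wave propagating
    along a wall at y=0. Default period ~ the M2 tide (44712 s);
    depth, amplitude and Coriolis parameter are illustrative."""
    c = math.sqrt(9.81 * depth)      # shallow-water phase speed [m/s]
    omega = 2.0 * math.pi / period   # tidal frequency [rad/s]
    k = omega / c                    # non-dispersive: omega = c * k
    rossby = c / f                   # offshore decay scale [m]
    return amp * math.exp(-y / rossby) * math.cos(k * x - omega * t)
```

At the coast (y = 0) the elevation oscillates with the full tidal amplitude, and 50 km offshore it is already noticeably attenuated, which is the structure a propagating-Kelvin-wave boundary forcing imposes.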
Comparative Test Case Specification
DEFF Research Database (Denmark)
Kalyanova, Olena; Heiselberg, Per
This document includes a definition of the comparative test cases DSF200_3 and DSF200_4, which were previously described in the comparative test case specification for the test cases DSF100_3 and DSF200_3 [Ref.1].
Awédikian , Roy; Yannou , Bernard
2012-01-01
With the growing complexity of industrial software applications, industry is looking for efficient and practical methods to validate its software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...
Model-based testing for software safety
Gurbuz, Havva Gulay; Tekinerdogan, Bedir
2017-01-01
Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a
Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin
2012-01-01
Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security...
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
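The clinical-utility measures the hybrid model targets follow directly from Bayes' rule once prevalence is available alongside sensitivity and specificity. The sketch below shows the generic formulas only, not the composite-likelihood machinery of the paper:

```python
def predictive_values(sens, spec, prev):
    """Population-averaged positive and negative predictive values
    from test sensitivity, specificity and disease prevalence, via
    Bayes' rule -- the quantities that become estimable once
    prevalence is modeled jointly with accuracy."""
    ppv = sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))
    npv = spec * (1.0 - prev) / (spec * (1.0 - prev) + (1.0 - sens) * prev)
    return ppv, npv
```

For instance, a test with 90% sensitivity and 90% specificity applied at 10% prevalence yields a PPV of only 0.5, illustrating why sensitivity and specificity alone understate what matters clinically.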
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2018-04-01
Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. In particular, molecular (Rayleigh) scattering polarizes electromagnetic radiation, but scattering by aerosols, cloud droplets (Mie scattering) and ice crystals also polarizes it. Each atmospheric constituent produces a characteristic polarization signal, thus spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers. This makes it possible to solve the vector radiative transfer equation (VRTE) efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field-of-view of the instrument, and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). In order to estimate the errors due to the IPA, three-dimensional (3D) vector radiative transfer models are required. So far, only a few such models exist. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http
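The characteristic polarization signal of molecular scattering mentioned above can be sketched with the single-scattering Rayleigh formula for the degree of linear polarization of initially unpolarized light, which vanishes in the forward direction and peaks at a 90-degree scattering angle:

```python
import math

def rayleigh_dolp(theta_deg):
    """Degree of linear polarization of singly Rayleigh-scattered,
    initially unpolarized light at scattering angle theta:
    P = (1 - cos^2 theta) / (1 + cos^2 theta)."""
    mu = math.cos(math.radians(theta_deg))
    return (1.0 - mu * mu) / (1.0 + mu * mu)
```

This is the idealized single-scattering limit; the multiple scattering handled by the benchmarked VRTE solvers reduces and redistributes this signal, which is exactly what the intercomparison quantifies.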
International Nuclear Information System (INIS)
Jackson, C.P.; Lever, D.A.; Sumner, P.J.
1991-03-01
We present our views on validation. We consider that validation is slightly different for general models and specific models. We stress the importance of presenting for review the case for (or against) a model. We outline a formal framework for validation, which helps to ensure that all the issues are addressed. Our framework includes calibration, testing predictions, comparison with alternative models, which we consider particularly important, analysis of discrepancies, presentation, consideration of implications and suggested improved experiments. We illustrate the approach by application to an INTRAVAL test case based on laboratory experiments. Three models were considered: a simple model that included the effects of advection, dispersion and equilibrium sorption, a model that also included the effects of rock-matrix diffusion, and a model with kinetic sorption. We show that the model with rock-matrix diffusion is the only one to provide a good description of the data. We stress the implications of extrapolating to larger length and time scales for repository performance assessments. (author)
Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves
2016-04-01
The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods, or concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for their ability to simulate a key oceanic process in a controlled environment. Idealized test cases make it possible to verify properties of numerical schemes (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) and to bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). When complexity in the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to measure model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting
Directory of Open Access Journals (Sweden)
Laszlo A. Marosi
2013-01-01
Full Text Available We present a new redshift (RS) versus photon travel time test including 171 supernovae RS data points. We extended the Hubble diagram to the range z = 0.0141–8.1 in the hope that, at high RSs, the fitting of the calculated RS/travel-time diagrams to the observed RS data would, as predicted by different cosmological models, set constraints on alternative cosmological models. The Lambda cold dark matter (ΛCDM) model, the static universe model, and the case of a slowly expanding flat universe (SEU) are considered. We show that on the basis of the Hubble diagram test, the static and slowly expanding models are favored.
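The redshift versus photon-travel-time relation described above can be sketched numerically. The snippet below computes the lookback (photon travel) time in flat ΛCDM by integrating dt = dz / [(1+z) H(z)]; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3) are common illustrative choices, not values taken from the paper.

```python
import math

# Assumed flat Lambda-CDM parameters (illustrative, not from the abstract).
H0 = 70.0                       # Hubble constant, km/s/Mpc
OMEGA_M = 0.3                   # matter density parameter
OMEGA_L = 1.0 - OMEGA_M         # dark-energy density (flatness)
KM_PER_MPC = 3.0857e19          # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16         # seconds in one gigayear
H0_PER_GYR = H0 / KM_PER_MPC * SEC_PER_GYR  # H0 expressed in 1/Gyr

def hubble(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def travel_time_gyr(z, steps=10000):
    """Photon travel time (lookback time) to redshift z, in Gyr:
    t(z) = (1/H0) * int_0^z dz' / [(1+z') E(z')], via the trapezoid rule."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        z0, z1 = i * dz, (i + 1) * dz
        f0 = 1.0 / ((1.0 + z0) * hubble(z0))
        f1 = 1.0 / ((1.0 + z1) * hubble(z1))
        total += 0.5 * (f0 + f1) * dz
    return total / H0_PER_GYR

# Evaluate at the endpoints of the extended Hubble diagram range.
for z in (0.0141, 1.0, 8.1):
    print(f"z = {z:6.4f}  ->  t = {travel_time_gyr(z):5.2f} Gyr")
```

Comparing such calculated curves for competing models (ΛCDM, static, slowly expanding) against observed supernova redshifts is the essence of the test the abstract describes.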
Model Based Analysis and Test Generation for Flight Software
Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep
2009-01-01
We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines Math- Works and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
Empirical Test Case Specification
DEFF Research Database (Denmark)
Kalyanova, Olena; Heiselberg, Per
This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: one comparative and one empirical. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.
Automation of Test Cases for Web Applications : Automation of CRM Test Cases
Seyoum, Alazar
2012-01-01
The main theme of this project was to design a test automation framework for automating web-related test cases. Automating test cases designed for testing a web interface provides a means of improving the software development process by shortening the testing phase in the software development life cycle. In this project an existing AutoTester framework and the iMacros test automation tool were used. The CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...
Theoretical Models, Assessment Frameworks and Test Construction.
Chalhoub-Deville, Micheline
1997-01-01
Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…
International Nuclear Information System (INIS)
Beauheim, R.L.
1997-05-01
This report describes the WIPP 1 test case studied as part of INTRAVAL, an international project to study validation of geosphere transport models. The WIPP 1 test case involved simulation of measured brine-inflow rates to boreholes drilled into the halite strata surrounding the Waste Isolation Pilot Plant repository. The goal of the test case was to evaluate the use of Darcy's law to describe brine flow through halite. The general approach taken was to try to obtain values of permeability and specific capacitance that would be: (1) consistent with other available data and (2) able to provide reasonable simulations of all of the brine-inflow experiments performed in the Salado Formation. All of the teams concluded that the average permeability of the halite strata penetrated by the holes was between approximately 10⁻²² and 10⁻²¹ m². Specific capacitances greater than 10⁻¹⁰ Pa⁻¹ are inconsistent with the known constitutive properties of halite and are attributed to deformation, possibly ongoing, of the halite around the WIPP excavations. All project teams found that Darcy-flow models could replicate the experimental data in a consistent and reasonable manner. Discrepancies between the data and simulations are attributed to inadequate representation in the models of processes modifying the pore-pressure field in addition to the experiments themselves, such as ongoing deformation of the rock around the excavations. Therefore, the conclusion from the test case is that Darcy-flow models can reliably be used to predict brine flow to WIPP excavations, provided that the flow modeling is coupled with measurement and realistic modeling of the pore-pressure field around the excavations. This realistic modeling of the pore-pressure field would probably require coupling to a geomechanical model of the stress evolution around the repository.
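To give a sense of the magnitudes involved in the Darcy-flow test above, the sketch below evaluates one-dimensional Darcy's law, Q = (k/μ)·A·Δp/L. Only the permeability range (10⁻²² to 10⁻²¹ m²) comes from the abstract; the geometry, pressure difference, and brine viscosity are illustrative assumptions.

```python
# Back-of-the-envelope Darcy-flow estimate in the spirit of the WIPP 1
# test case. All values except the permeability are assumed for illustration.
PERMEABILITY = 1e-21   # m^2, upper end of the halite range reported above
VISCOSITY = 1.8e-3     # Pa.s, assumed brine viscosity
AREA = 2.0             # m^2, assumed borehole wall area open to flow
DP = 5.0e6             # Pa, assumed far-field minus borehole pressure
LENGTH = 10.0          # m, assumed radial distance over which DP acts

def darcy_flow_rate(k, mu, area, dp, length):
    """Volumetric flow rate Q = (k/mu) * A * (dp/L), one-dimensional Darcy's law."""
    return (k / mu) * area * (dp / length)

q = darcy_flow_rate(PERMEABILITY, VISCOSITY, AREA, DP, LENGTH)
print(f"Estimated brine inflow: {q:.2e} m^3/s "
      f"(~{q * 86400 * 1e6:.2f} mL/day)")
```

Even with generous assumptions, permeabilities of order 10⁻²¹ m² yield inflows of well under a millilitre per day, which is consistent with the very slow brine seepage the experiments were designed to measure.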
Case Study: Testing with Case Studies
Herreid, Clyde Freeman
2015-01-01
This column provides original articles on innovations in case study teaching, assessment of the method, as well as case studies with teaching notes. This month's issue discusses using case studies to test for knowledge or lessons learned.
Integrated socio-environmental modelling: A test case in coastal Bangladesh
Lazar, Attila
2013-04-01
…through the coastal Bangladesh test case.
A new fit-for-purpose model testing framework: Decision Crash Tests
Tolson, Bryan; Craig, James
2016-04-01
Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt, because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have noted that the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed KCTs, have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests, or DCTs. Key DCT elements are: i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions; ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model-building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or do not upgrade the existing flood control structure) under two different sets of model building
Modeling erosion of unsaturated compacted bentonite by groundwater flow; pinhole erosion test case
International Nuclear Information System (INIS)
Laurila, T.; Sane, P.; Olin, M.; Koskinen, K.
2012-01-01
Document available in extended abstract form only. Erosion of compacted clay material by water flow is a critical factor affecting the performance of radioactive waste confinement. Our emphasis in this work is the buffer of the KBS-3V concept, proposed to be compacted MX-80 bentonite. Unsaturated erosion occurs during the saturation phase of the EBS, and the main quantity of interest is the total buffer mass carried away by a groundwater flow that induces erosion by forming piping channels near the buffer/rock interface. The purpose of this work is to provide modeling tools to support erosion experiments. The role of modeling is first to interpret experimental observations in terms of processes and to estimate the robustness of experimental results. Secondly, we seek to scale up results from the laboratory scale, particularly to time scales longer than those experimentally accessible. We have performed modeling and data analysis pertaining to tests of unsaturated clay erosion. Pinhole experiments were used to study this erosion case. The main differences from well-understood pinhole erosion tests are that the material is strongly swelling and that the water flow is determined not by the pressure head but by the total flux. Groundwater flow in the buffer is determined by the flux because pressure losses occur overwhelmingly in the surrounding rock, not in the piping channel. We formulate a simple model that links an effective-solid-diffusivity-based swelling model to erosion by flow at the solid/liquid interface. The swelling model is similar in concept to that developed at KTH, but simpler. Erosion in the model is caused by laminar flow in the pinhole and happens in a narrow region at the solid/liquid interface where velocity and solid volume fraction overlap. The erosion model can be mapped to erosion by wall shear, and can thus be considered an extension of that classic erosion model. The main quantity defining the behavior of clay erosion in the model is the ratio of
Test-driven modeling of embedded systems
DEFF Research Database (Denmark)
Munck, Allan; Madsen, Jan
2015-01-01
To benefit maximally from model-based systems engineering (MBSE), trustworthy high-quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub-disciplines such as modeling of requirements, use cases… The results suggest that our method provides a sound foundation for rapid development of high-quality system models.
Eukaryotic Cell Cycle as a Test Case for Modeling Cellular Regulation in a… (AFRL-IF-RS-TR-2007-69, Final Technical Report)
2007-03-01
…computer models of cell cycle regulation in a variety of organisms, including yeast cells, amphibian embryos, bacterial cells and human cells. … (and meiosis), but they do not nullify the central role played by irreversible, alternating START and FINISH transitions in the cell cycle.
Case Report: HIV test misdiagnosis
African Journals Online (AJOL)
Case Report: HIV test misdiagnosis. A positive rapid HIV test does not require… College of Medicine - Johns Hopkins Research Project, Blantyre, Malawi… test results: a pilot study of three community testing sites.
Predicate Argument Structure Analysis for Use Case Description Modeling
Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira
In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on the requirements documents such as use cases and consider how to create models from the use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them for a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources such as case frame dictionaries and design a less language-dependent model construction architecture. By using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of the existing use cases and generated test case steps automatically with the proposed prototype system from real-world use cases in the development of a system using a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through the manual improvement of the descriptions based on the feedback from the quality analysis system.
Test cases for interface tracking methods: methodology and current status
International Nuclear Information System (INIS)
Lebaigue, O.; Jamet, D.; Lemonnier, E.
2004-01-01
Full text of publication follows: In the past decade, a large number of new methods have been developed to deal with interfaces in the numerical simulation of two-phase flows. We have collected a set of 36 test cases, which can be seen as a tool to help engineers and researchers select the most appropriate method(s) for their specific fields of application. This set can be used: - To perform an initial evaluation of the capabilities of available methods with regard to the specificity of the final application and the most important features to be recovered from the simulation. - To measure the maximum mesh size to be used for a given physical problem in order to obtain an accurate enough solution. - To assess and quantify the performance of a selected method equipped with its set of physical models. The computation of a well-documented test case allows estimating the error due to the numerical technique by comparison with reference solutions. This process is compulsory to gain confidence in, and credibility for, the prediction capabilities of a numerical method and its physical models. - To broaden the capabilities of a given numerical technique. The test cases may be used to identify the need for improvement of the overall numerical scheme or to determine the physical part of the model that is responsible for the observed limitations. Each test case falls within one of the following categories: - Analytical solutions of well-known sets of equations corresponding to simple geometrical situations. - Reference numerical solutions of moderately complex problems, produced by accurate methods (e.g., the boundary-fitted coordinate method) on refined meshes. - Separate-effects analytical experiments. The presentation will suggest how to use the test cases for assessing the physical models and the numerical methods. The expected fallout of using test cases is indeed on the one hand to identify the merits of existing methods and on the other hand to orient further research towards
Theory testing using case studies
DEFF Research Database (Denmark)
Dissing Sørensen, Pernille; Løkke Nielsen, Ann-Kristina
2006-01-01
Case studies may have different research goals. One such goal is the testing of small-scale and middle-range theories. Theory testing refers to the critical examination, observation, and evaluation of the 'why' and 'how' of a specified phenomenon in a particular setting. In this paper, we focus on the strengths of theory-testing case studies. We specify research paths associated with theory testing in case studies and present a coherent argument for the logic of theoretical development and refinement using case studies. We emphasize different uses of rival explanations and their implications for research design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive…
Automated Test Case Generation
CERN. Geneva
2015-01-01
I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...
Theory Testing Using Case Studies
DEFF Research Database (Denmark)
Møller, Ann-Kristina Løkke; Dissing Sørensen, Pernille
2014-01-01
The appropriateness of case studies as a tool for theory testing is still a controversial issue, and discussions about the weaknesses of such research designs have previously taken precedence over those about their strengths. The purpose of the paper is to examine and revive the approach of theory testing using case studies, including the associated research goal, analysis, and generalisability. We argue that research designs for theory testing using case studies differ from theory-building case study research designs because different research projects serve different purposes and follow different research paths.
Piety, Philip John
With science education in the United States entering a period of greater accountability, this study investigated how student learning in science was assessed by educators within one state, asking what systemic assessment approaches existed and how the information from them was used. Conducted during the 2006-2007 school year, this research developed and piloted a network-model case study design that included teachers, principals, administrators, and the state test development process, as well as several state-level professional associations. The data analyzed included observations, interviews, surveys, and both public and private documents. Some data were secondary. This design produced an empirical depiction of practice with a web of related cases. The network model expands on the hierarchical (nested) models often assumed in the growing literature on how information is used in educational contexts by showing multiple ways in which individuals are related through organizational structures. Seven case study teachers, each employing assessment methods largely unique and invisible to others in their schools, illustrate one set of assessment practices. The only alternative to classroom assessments that could be documented was the annual state accountability test. These two assessment species were neither tightly coupled nor distinct. Some teachers were partners in developing state test instruments, and in some cases the annual test could be seen as a school management resource. Boundary practices---activities where these two systems connected---were opportunities to identify challenges to policy implementation in science education. The challenges include standards, cognition, vocabulary, and classroom equipment. The boundary practices, along with the web of connections, provide the outlines of potential (and often unrealized) synergistic relationships. This model shows diverse indigenous practices and adaptations by actors responding to pressures of change and
Comparison of Critical Flow Models' Evaluations for SBLOCA Tests
International Nuclear Information System (INIS)
Kim, Yeon Sik; Park, Hyun Sik; Cho, Seok
2016-01-01
A comparison of critical flow models between the Trapp-Ransom and Henry-Fauske models for all SBLOCA (small break loss of coolant accident) scenarios of the ATLAS (Advanced thermal-hydraulic test loop for accident simulation) facility was performed using the MARS-KS code. For the comparison of the two critical flow models, the accumulated break mass was selected as the main parameter for comparison between the analyses and the tests. Four cases showed the same respective discharge coefficients between the two critical models, e.g., the 6' CL (cold leg) break and the 25%, 50%, and 100% DVI (direct vessel injection) breaks. In the case of the 4' CL break, no reasonable results were obtained with any possible Cd values. In addition, typical system behaviors, e.g., PZR (pressurizer) pressure and collapsed core water level, were also compared between the two critical models. From the comparison between the two critical models for the CL breaks, the Trapp-Ransom model predicted quite well with respect to the other model for the smallest and larger breaks, e.g., the 2', 6', and 8.5' CL breaks. From the corresponding comparison for the DVI breaks, the Trapp-Ransom model likewise predicted quite well for the smallest and larger breaks, e.g., the 5%, 50%, and 100% DVI breaks. In the case of the 50% and 100% breaks, the two critical models predicted the test data quite well.
Testing Affine Term Structure Models in Case of Transaction Costs
Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.
1999-01-01
In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is
Making System Dynamics Cool IV : Teaching & Testing with Cases & Quizzes
Pruyt, E.
2012-01-01
This follow-up paper presents cases and multiple choice questions for teaching and testing System Dynamics modeling. These cases and multiple choice questions were developed and used between January 2012 and April 2012 in a large System Dynamics course (250+ 2nd-year BSc and 40+ MSc students per year).
Conformance test development with the Java modeling language
DEFF Research Database (Denmark)
Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.
2017-01-01
In order to claim conformance with a Java Specification Request, a Java implementation has to pass all tests in an associated Technology Compatibility Kit (TCK). This paper presents a model-based development of a TCK test suite and a test execution tool for the draft Safety-Critical Java (SCJ) profile specification. The Java Modeling Language (JML) is used to model conformance constraints for the profile. JML annotations define contracts for classes and interfaces. The annotations are translated by a tool into runtime assertion checks. Hereby the design and elaboration of the concrete test cases…
Testing the compounding structure of the CP-INARCH model
Weiß, Christian H.; Gonçalves, Esmeralda; Lopes, Nazaré Mendes
2017-01-01
A statistical test to distinguish between a Poisson INARCH model and a Compound Poisson INARCH model is proposed, based on the form of the probability generating function of the compounding distribution of the conditional law of the model. For first-order autoregression, the normality of the test statistics’ asymptotic distribution is established, either in the case where the model parameters are specified, or when such parameters are consistently estimated. As the test statistics’ law involv...
Prioritizing Test Cases for Memory Leaks in Android Applications
Institute of Scientific and Technical Information of China (English)
Ju Qian; Di Zhou
2016-01-01
Mobile applications usually can only access a limited amount of memory. Improper use of the memory can cause memory leaks, which may lead to performance slowdowns or even cause applications to be unexpectedly killed. Although a large body of research has been devoted to memory leak diagnosing techniques after leaks have been discovered, it is still challenging to find the memory leak phenomena in the first place. Testing is the most widely used technique for failure discovery. However, traditional testing techniques are not directed at the discovery of memory leaks. They may spend lots of time testing executions that are unlikely to leak and can therefore be inefficient. To address the problem, we propose a novel approach to prioritize the test cases in a given test suite according to their likelihood of causing memory leaks. It first builds a prediction model, through machine learning on selected code features, to determine whether each test can potentially lead to memory leaks. Then, for each input test case, we partly run it to get its code features and predict its likelihood of causing leaks. The most suspicious test cases are suggested to run first in order to reveal memory leak faults as soon as possible. Experimental evaluation on several Android applications shows that our approach is effective.
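The prioritization idea above can be sketched in a few lines: score each test by a leak-likelihood model and run the highest-scoring tests first. In this sketch the feature names and the hand-set weights stand in for the paper's learned prediction model and are purely illustrative.

```python
# Minimal sketch of leak-likelihood-based test prioritization. A real system
# would learn the weights from code features via machine learning; here they
# are hand-set stand-ins for illustration.
def leak_score(features, weights):
    """Linear score: higher means more likely to trigger a memory leak."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical feature weights (e.g., bitmap allocations often leak on Android).
WEIGHTS = {"bitmap_allocs": 0.8, "listener_registrations": 0.6, "activity_restarts": 0.4}

# Hypothetical test suite with extracted code features per test case.
test_suite = {
    "test_scroll_gallery": {"bitmap_allocs": 12, "listener_registrations": 1, "activity_restarts": 0},
    "test_rotate_screen":  {"bitmap_allocs": 2,  "listener_registrations": 3, "activity_restarts": 5},
    "test_open_settings":  {"bitmap_allocs": 0,  "listener_registrations": 1, "activity_restarts": 0},
}

# Run the most leak-suspicious tests first.
prioritized = sorted(test_suite, key=lambda t: leak_score(test_suite[t], WEIGHTS), reverse=True)
print(prioritized)
```

The ordering places the bitmap-heavy gallery test first, matching the intuition that allocation-intensive executions are the most promising places to look for leaks early in a test run.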
Shin, Youngsul; Choi, Yunja; Lee, Woo Jin
2013-06-01
As medical software becomes larger, more complex, and more connected with other devices, finding faults in integrated software modules becomes more difficult and time-consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering the internal behavior of each software module. Though it can be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work anticipates that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software, and proposes a new integration testing method that reuses test cases from unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained in the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge of the internal behavior of each unit, and extracts test cases for integration testing from the unit test cases for given test criteria. The extracted representative test cases are connected with the functions under test using the state domain, and a single test sequence covering the test cases is produced. By reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can have test coverage as high as the unit testing coverage, and its length is close to the length of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Neymark, J.; Judkoff, R.
2002-01-01
This report describes the Building Energy Simulation Test for Heating, Ventilating, and Air-Conditioning Equipment Models (HVAC BESTEST) project conducted by the Tool Evaluation and Improvement International Energy Agency (IEA) Experts Group. The group was composed of experts from the Solar Heating and Cooling (SHC) Programme, Task 22, Subtask A. The current test cases, E100-E200, represent the beginning of work on mechanical equipment test cases; additional cases that would expand the current test suite have been proposed for future development.
A prevalence-based association test for case-control studies.
Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M
2008-11-01
Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
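The core idea, comparing observed case genotype counts against HWE expectations computed from a pooled allele-frequency estimate, can be sketched as follows. This is a simplified illustration, not the authors' exact PRAT statistic; genotype counts are ordered AA, Aa, aa.

```python
def hwe_expected(p, n):
    """Expected genotype counts (AA, Aa, aa) for n people under HWE."""
    q = 1.0 - p
    return [n * p * p, n * 2.0 * p * q, n * q * q]

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def prat_like_statistic(case_counts, control_counts):
    """Estimate the allele frequency from the pooled sample, then measure
    how far the case genotype counts deviate from the HWE expectation at
    that pooled frequency (counts ordered AA, Aa, aa)."""
    pooled = [c + k for c, k in zip(case_counts, control_counts)]
    p = (2 * pooled[0] + pooled[1]) / (2.0 * sum(pooled))
    return chi_square(case_counts, hwe_expected(p, sum(case_counts)))
```

Using the pooled frequency rather than separate case and control frequencies is what distinguishes this family of tests from the usual genotypic chi-square.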
Developing and Testing a 3d Cadastral Data Model a Case Study in Australia
Aien, A.; Kalantari, M.; Rajabifard, A.; Williamson, I. P.; Shojaei, D.
2012-07-01
and physical extent of 3D properties and associated interests. The data model extends the traditional cadastral requirements to cover other applications such as urban planning and land valuation and taxation. A demonstration of a test system on the proposed data model is also presented. The test is based on a case study in Victoria, Australia to evaluate the effectiveness of the data model.
Unit testing, model validation, and biological simulation.
Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C
2016-01-01
The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models.
Directory of Open Access Journals (Sweden)
J. Kent
2012-12-01
The accurate modeling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper aims to investigate the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse resolution grids to ensure the downscale cascade of tracer variance. These results are compared with a high resolution reference solution, which is simulated on a resolution fine enough to resolve the tracer during the test. The fourth test has two separate flow cells, and is designed so that any tracer in the western hemisphere should not pass into the eastern hemisphere. This is to test whether the diffusion in transport schemes, often in the form of explicit hyper-diffusion terms or implicit through monotonic limiters, contains unphysical mixing.
An intercomparison of three of the dynamical cores of the National Center for Atmospheric Research's Community Atmosphere Model version 5 is performed. The results show that the finite-volume (CAM-FV) and spectral element (CAM-SE) dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL). Each scheme tested produces unphysical mass in the eastern hemisphere of the separate cells test.
A case study testing the cavity mode model of the magnetosphere
Directory of Open Access Journals (Sweden)
D. V. Sarafopoulos
2005-07-01
Based on a case study we test the cavity mode model of the magnetosphere, looking for eigenfrequencies via multi-satellite and multi-instrument measurements. Geotail and ACE provide information on the interplanetary medium that dictates the input parameters of the system; the four Cluster satellites monitor the magnetopause surface waves; the POLAR (L=9.4) and LANL 97A (L=6.6) satellites reveal two in-situ monochromatic field line resonances (FLRs) with T=6 and 2.5 min, respectively; and the IMAGE ground magnetometers demonstrate latitude-dependent delays in signature arrival times, as inferred by Sarafopoulos (2004b). Similar dispersive structures showing systematic delays are also extensively scrutinized by Sarafopoulos (2005) and interpreted as tightly associated with the so-called pseudo-FLRs, which show almost the same observational characteristics as an authentic FLR. In particular for this episode, successive solar wind pressure pulses produce recurring ionosphere twin vortex Hall currents which are identified on the ground as pseudo-FLRs. The BJN ground magnetometer records the pseudo-FLR (like the other IMAGE station responses) associated with an intense power spectral density ranging from 8 to 12 min and, in addition, two discrete resonant lines with T=3.5 and 7 min. In this case study, even though the magnetosphere is evidently affected by a broad-band compressional wave originating upstream of the bow shock, we do not identify any cavity mode oscillation within the magnetosphere. We also fail to identify any of the cavity mode frequencies proposed by Samson (1992).
Keywords. Magnetospheric physics (Magnetosphere-ionosphere interactions; Solar wind-magnetosphere interactions; MHD waves and instabilities)
Continuous validation of ASTEC containment models and regression testing
International Nuclear Information System (INIS)
Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin
2014-01-01
The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities by calculations of appropriate experiments done by external users different from the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results remain valid in succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test case under consideration. In the case of post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high-quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test case of the THAI IOD-11 experiment. Within this test case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing procedure.
DEFF Research Database (Denmark)
Skjærbæk, P. S.; Nielsen, Søren R. K.; Kirkegaard, Poul Henning
1997-01-01
The scope of the paper is to apply multi-variate time-domain models for identification of eigenfrequencies and mode shapes of a time-invariant model test Reinforced Concrete (RC) frame from measured decays. The frequencies and mode shapes of interest are the two lowest ones, since they are normally … in the comparison. The data investigated are sampled from a laboratory model of a plane 6-storey, 2-bay RC-frame. The laboratory model is excited at the top storey, where two different types of excitation were considered. In the first case the structure was excited in the first mode and in the second case …
Model-Based GUI Testing Using Uppaal at Novo Nordisk
Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
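Generating a suite that satisfies edge coverage of a state machine can be sketched roughly as follows. This is an illustrative stand-in for the Uppaal-based generator described above, not its actual algorithm: it emits one test case per transition, each reached by a BFS-shortest prefix of events from the start state.

```python
from collections import deque

def edge_coverage_suite(transitions, start):
    """One test case per transition: the shortest event sequence from the
    start state that ends by firing the target transition (edge coverage)."""
    graph = {}
    for src, event, dst in transitions:
        graph.setdefault(src, []).append((event, dst))

    def shortest_events(frm, to):
        # BFS over states, returning the event sequence from `frm` to `to`
        queue = deque([(frm, [])])
        seen = {frm}
        while queue:
            state, events = queue.popleft()
            if state == to:
                return events
            for event, nxt in graph.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, events + [event]))
        return None  # transition source unreachable from the start state

    suite = []
    for src, event, dst in transitions:
        prefix = shortest_events(start, src)
        if prefix is not None:
            suite.append(prefix + [event])
    return suite
```

A real tool would additionally merge overlapping cases to shrink the suite, which is the script-count reduction the abstract reports.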
Sensitivity analysis methods and a biosphere test case implemented in EIKOS
Energy Technology Data Exchange (ETDEWEB)
Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)
2006-05-15
Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several
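One of the simpler methods listed, the Spearman rank correlation coefficient (RCC), fits in a few lines and shows why rank-based measures cope with non-linear but monotone models. This is an illustrative sketch only (no tie handling; EIKOS itself is a Matlab package):

```python
def ranks(values):
    """Rank values 1..n (ties are not handled in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    """Plain (linear) product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman_rcc(x, y):
    """Rank correlation: invariant under monotone non-linear responses."""
    return pearson(ranks(x), ranks(y))

# A cubic response weakens the linear Pearson coefficient as a sensitivity
# measure, but the rank correlation still reports a perfect monotone link:
x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]  # non-linear but monotone model response
```

Here `spearman_rcc(x, y)` returns exactly 1.0 while `pearson(x, y)` is about 0.94, which is the non-linearity problem the abstract describes for linear techniques.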
Goodness-of-fit tests in mixed models
Claeskens, Gerda
2009-05-12
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
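As a point of contrast with the nonparametric tests proposed in this paper, a classical moment-based normality check on estimated residuals or random effects can be sketched with the Jarque-Bera statistic (not one of the paper's tests; shown only to illustrate what a formal normality test computes):

```python
def moments(xs):
    """Central sample moments m2, m3, m4 of a list of numbers."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m2, m3, m4

def jarque_bera(xs):
    """JB = n/6 * (S^2 + K^2/4), where S is sample skewness and K is excess
    kurtosis; large values suggest departure from normality."""
    n = len(xs)
    m2, m3, m4 = moments(xs)
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0
    return n / 6.0 * (skew ** 2 + kurt ** 2 / 4.0)
```

A moment-based test like this only detects skewness and kurtosis deviations, whereas the paper's nonparametric tests are designed to detect virtually any alternative to normality.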
Loke Mun Sei
2015-01-01
Software testing has become a mandatory process in assuring software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses the challenges faced in the software test life cycle, and how the test processes and test activities, mainly test case creation, test execution, and test reporting, are managed and automated using several test automation tools, i.e. Jira, ...
Test-Driven, Model-Based Systems Engineering
DEFF Research Database (Denmark)
Munck, Allan
Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central… This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known…
Barsi, Alpar; Jager, Tjalling; Collinet, Marc; Lagadic, Laurent; Ducrot, Virginie
2014-07-01
Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information is provided regarding test design for TKTD modeling. A case study is presented for the pond snail Lymnaea stagnalis exposed to the narcotic compound acetone, in which the data on all endpoints were analyzed together using a relatively simple TKTD model called DEBkiss. Furthermore, the influence of the data used for calibration on accuracy and precision of model parameters is discussed. The DEBkiss model described toxic effects on survival, growth, and reproduction over time well, within a single integrated analysis. Regarding the parameter estimates (e.g., no-effect concentration), precision rather than accuracy was affected depending on which data set was used for model calibration. In addition, the present study shows that the intrinsic sensitivity of snails to acetone stays the same across different life stages, including the embryonic stage. In fact, the data on egg development allowed for selection of a unique metabolic mode of action for the toxicant. Practical and theoretical considerations for test design to accommodate TKTD modeling are discussed in the hope that this information will aid other researchers to make the best possible use of their test animals. © 2014 SETAC.
Energy Technology Data Exchange (ETDEWEB)
Neymark J.; Judkoff, R.
2004-12-01
This report documents an additional set of mechanical system test cases that are planned for inclusion in ANSI/ASHRAE Standard 140. The cases test a program's modeling capabilities on the working-fluid side of the coil, but in an hourly dynamic context over an expanded range of performance conditions. These cases help to scale the significance of disagreements that are less obvious in the steady-state cases. This report is Volume 2 of HVAC BESTEST; Volume 1 was limited to steady-state test cases that could be solved with analytical solutions, whereas Volume 2 includes hourly dynamic effects and other cases that cannot be solved analytically. NREL conducted this work in collaboration with the Tool Evaluation and Improvement Experts Group under the International Energy Agency (IEA) Solar Heating and Cooling Programme Task 22.
RSG Deployment Case Testing Results
Energy Technology Data Exchange (ETDEWEB)
Owsley, Stanley L.; Dodson, Michael G.; Hatchell, Brian K.; Seim, Thomas A.; Alexander, David L.; Hawthorne, Woodrow T.
2005-09-01
The RSG deployment case design is centered on taking the RSG system and producing a transport case that houses the RSG in a safe and controlled manner for transport. The transport case design was driven by two conflicting constraints: first, that the case be as light as possible, and second, that it meet a stringent list of military-specified requirements. The design team worked to extract every bit of weight from the design while striving to meet the rigorous Mil-Spec constraints. In the end, compromises were made primarily on the specification side to control the overall weight of the transport case. This report outlines the case testing results.
Collaborative testing of turbulence models
Bradshaw, P.
1992-12-01
This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from lab. experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters')--over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has appeared, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.
Test case preparation using a prototype
Treharne, Helen; Draper, J.; Schneider, Steve A.
1998-01-01
This paper reports on the preparation of test cases using a prototype within the context of a formal development. It describes an approach to building a prototype using an example. It discusses how a prototype contributes to the testing activity as part of a lifecycle based on the use of formal methods. The results of applying the approach to an embedded avionics case study are also presented.
Path generation algorithm for UML graphic modeling of aerospace test software
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software testing engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases manually, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability MBT tool developed by our company, one-time modeling can automatically generate test case documents, which is efficient and accurate. Accurately expressing the process described by a UML model depends on generating the paths to be covered. Existing path generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or too cumbersome, generating elaborately arranged but meaningless paths that are superfluous for aerospace software testing. Drawing on our experience from ten aerospace payload projects, we developed a tailored path generation algorithm for UML graphic models of aerospace software.
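A path generation scheme of the kind described, covering branches while traversing each loop a bounded number of times, can be sketched as a depth-first search that caps node revisits. This is an illustrative algorithm under that assumption, not the authors' proprietary one:

```python
def generate_paths(edges, start, end, max_visits=2):
    """Enumerate start-to-end paths, allowing each node to appear at most
    `max_visits` times so that loops are traversed once but the search
    still terminates."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    paths = []

    def dfs(node, path, visits):
        if node == end:
            paths.append(path[:])
            return
        for nxt in graph.get(node, []):
            if visits.get(nxt, 0) < max_visits:
                visits[nxt] = visits.get(nxt, 0) + 1
                dfs(nxt, path + [nxt], visits)
                visits[nxt] -= 1

    dfs(start, [start], {start: 1})
    return paths
```

On a graph with one loop (A→B, B→C, C→B, B→D) this yields both the loop-free path and the path that takes the loop exactly once, which is the "branch path with loop" combination the abstract says simpler algorithms miss.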
Empirical tests of the Chicago model and the Easterlin hypothesis: a case study of Japan.
Ohbuchi, H
1982-05-01
The objective of this discussion is to test the applicability of economic theory of fertility with special reference to postwar Japan and to find a clue for forecasting the future trend of fertility. The theories examined are the "Chicago model" and the "Easterlin hypothesis." The major conclusion common among the leading economic theories of fertility, which have their origin with Gary S. Becker (1960, 1965) and Richard A. Easterlin (1966), is the positive income effect, i.e., that the relationship between income and fertility is positive despite the evidence that higher income families have fewer children and that fertility has declined with economic development. To bridge the gap between theory and fact is the primary purpose of the economic theory of fertility, and each offers a different interpretation for it. The point of the Chicago model, particularly of the household decision making model of the "new home economics," is the mechanism by which a positive effect of husband's income growth on fertility is offset by a negative price effect caused by the opportunity cost of wife's time. While the opportunity cost of wife's time is independent of the female wage rate for an unemployed wife, it is directly associated with the wage rate for a gainfully employed wife. Thus, the fertility response to female wages occurs only among families with an employed wife. The primary concern of empirical efforts to test the Chicago model has been with the determination of income and price elasticities. An attempt is made to test the relevance of the Chicago model and the Easterlin hypothesis in explaining the fertility movement in postwar Japan. In the case of the Chicago model, the statistical results appeared fairly successful but did not match the theory. The effect on fertility of a rise in women's real wage (and, therefore, in the opportunity cost of mother's time) and of a rise in labor force participation rate of married women of childbearing age in recent years could not
Gifted and Talented Education: A National Test Case in Peoria.
Fetterman, David M.
1986-01-01
This article presents a study of a program in Peoria, Illinois, for the gifted and talented that serves as a national test case for gifted education and minority enrollment. It was concluded that referral, identification, and selection were appropriate for the program model but that inequalities resulted from socioeconomic variables. (Author/LMO)
Measuring Test Case Similarity to Support Test Suite Understanding
Greiler, M.S.; Van Deursen, A.; Zaidman, A.E.
2012-01-01
Preprint of paper published in: TOOLS 2012 - Proceedings of the 50th International Conference, Prague, Czech Republic, May 29-31, 2012; doi:10.1007/978-3-642-30561-0_8 In order to support test suite understanding, we investigate whether we can automatically derive relations between test cases. In
Validation test case generation based on safety analysis ontology
International Nuclear Information System (INIS)
Fan, Chin-Feng; Wang, Wen-Shing
2012-01-01
Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
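The single parameter coverage criterion, for instance, can be illustrated with a small sketch that derives boundary test cases from parameter ranges. The flat dictionary input here is a hypothetical stand-in; the paper's toolset extracts such ranges from the ontology-annotated SAR:

```python
def single_parameter_cases(params):
    """For each parameter range, emit the in-range boundary values and the
    just-out-of-range values, together with the expected verdict."""
    cases = []
    for name, (lo, hi) in sorted(params.items()):
        for value in (lo, hi, lo - 1, hi + 1):
            verdict = "accept" if lo <= value <= hi else "reject"
            cases.append({"parameter": name, "value": value, "expect": verdict})
    return cases
```

Each parameter thus contributes four cases: two that the system must accept at the limits of its safe range and two abnormal values it must reject, giving a systematic rather than ad hoc origin for every validation test.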
Automated Test Case Generation for an Autopilot Requirement Prototype
Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael
2011-01-01
Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
Modelling, simulation and visualisation for electromagnetic non-destructive testing
International Nuclear Information System (INIS)
Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah
2010-01-01
This paper reviews the state-of-the-art and recent developments in modelling, simulation and visualisation for the eddy current Non-Destructive Testing (NDT) technique. Simulation and visualisation have aided the design and development of electromagnetic sensors, imaging techniques and systems for Electromagnetic Non-Destructive Testing (ENDT), as well as feature extraction and inverse problems for Quantitative Non-Destructive Testing (QNDT). After reviewing the state-of-the-art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)
Thermal Testing and Model Correlation of the Magnetospheric Multiscale (MMS) Observatories
Kim, Jong S.; Teti, Nicholas M.
2015-01-01
The Magnetospheric Multiscale (MMS) mission is a Solar Terrestrial Probes mission comprising four identically instrumented spacecraft that will use Earth's magnetosphere as a laboratory to study the microphysics of three fundamental plasma processes: magnetic reconnection, energetic particle acceleration, and turbulence. This paper presents the complete thermal balance (TB) test performed on the first of four observatories to go through thermal vacuum (TV) and the mini-balance testing that was performed on the subsequent observatories to provide a comparison of all four. The TV and TB tests were conducted in a thermal vacuum chamber at the Naval Research Laboratory (NRL) in Washington, D.C., with the vacuum level higher than 1.3 × 10⁻⁴ pascals (10⁻⁶ torr) and the surrounding temperature reaching -180 degrees Centigrade. Three TB test cases were performed: hot operational science, cold operational science, and a cold survival case. In addition to the three balance cases, a two-hour eclipse and a four-hour eclipse simulation were performed during the TV test to provide additional transient data points that represent the orbit in eclipse (or Earth's shadow). The goal was to perform testing such that the flight orbital environments could be simulated as closely as possible. A thermal model correlation between the thermal analysis and the test results was completed. Over 400 1-Wire temperature sensors, 200 thermocouples and 125 flight thermistor temperature sensors recorded data during TV and TB testing. These temperature versus time profiles and their agreements with the analytical results obtained using Thermal Desktop and SINDA/FLUINT are discussed. The model correlation for the thermal mathematical model (TMM) is conducted based on the numerical analysis results and the test data. The philosophy of model correlation was to correlate the model to within 3 degrees Centigrade of the test data using the standard deviation and mean deviation error
PREDICTABILITY OF FINANCIAL CRISES: TESTING K.R.L. MODEL IN THE CASE OF TURKEY
Directory of Open Access Journals (Sweden)
Zeynep KARACOR
2012-06-01
Full Text Available The aim of this study is to test the predictability of the 2007 Global Economic Crisis, which hit Turkey, with the help of Turkish macroeconomic data. The K.R.L. model is used to test predictability. By analyzing various leading early-warning indicators, the success of the model in forecasting the crisis is surveyed. The findings do not support the K.R.L. model. Possible reasons for this are stated in the article.
Directory of Open Access Journals (Sweden)
Wing Kam Fung
2010-02-01
Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data of the 17 most significant SNPs reported in four genome-wide association studies.
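The CATT mentioned above has a closed form for a 2 x 3 case-control genotype table. A minimal sketch, using the standard CATT statistic and variance (variable names are ours; the abstract's R package Rassoc is the real implementation):

```python
# Hedged sketch of the Cochran-Armitage trend test (CATT) for genotype
# counts (AA, Aa, aa) in cases and controls. Scores (0, 0.5, 1) correspond
# to an additive genetic model; other modes of inheritance use other scores.
import math

def catt_z(cases, controls, scores=(0, 0.5, 1)):
    """Return the CATT Z statistic for a 2 x 3 case-control table."""
    n1, n2 = sum(cases), sum(controls)            # totals of cases, controls
    n = n1 + n2
    cols = [r + s for r, s in zip(cases, controls)]  # genotype column totals
    t = scores
    stat = sum(t[i] * (cases[i] * n2 - controls[i] * n1) for i in range(3))
    var = (n1 * n2 / n) * (
        sum(t[i] ** 2 * cols[i] * (n - cols[i]) for i in range(3))
        - 2 * sum(t[i] * t[j] * cols[i] * cols[j]
                  for i in range(3) for j in range(i + 1, 3))
    )
    return stat / math.sqrt(var)
```

Under the null (equal genotype proportions in cases and controls) the statistic is approximately standard normal; MAX3 takes the maximum of |Z| over the recessive, additive and dominant score choices.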
Pescara benchmark: overview of modelling, testing and identification
International Nuclear Information System (INIS)
Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L
2011-01-01
The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to obtain a consistent and large experimental database, and of subsequent data processing. Special tests were devised to simulate the train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performance of classical modal-based damage indicators.
International Nuclear Information System (INIS)
Miller, D.R.; Paige, R.W.
1988-07-01
HYDROCOIN is an international project for comparing groundwater flow models and modelling strategies. Level 3 of the project concerns the application of groundwater flow models to repository performance assessment with emphasis on the treatment of sensitivity and uncertainty in models and data. Level 3, test case 1 concerns sensitivity analysis of the groundwater flow around a radioactive waste repository situated in a near surface argillaceous formation. Work on this test case has been carried out by Harwell and will be reported in full in the near future. This report presents the results obtained using the computer program NAMMU. (author)
Active earth pressure model tests versus finite element analysis
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
Pile Model Tests Using Strain Gauge Technology
Krasiński, Adam; Kusio, Tomasz
2015-09-01
Ordinary pile bearing capacity tests are usually carried out to determine the relationship between load and displacement of the pile head. The measurement system required in such tests consists of a force transducer and three or four displacement gauges. The whole system is installed at the pile head above the ground level. This approach, however, does not give us complete information about the pile-soil interaction. We can only determine the total bearing capacity of the pile, without the knowledge of its distribution into the shaft and base resistances. Much more information can be obtained by carrying out a test of an instrumented pile equipped with a system for measuring the distribution of axial force along its core. In the case of pile model tests the use of such measurement is difficult due to the small scale of the model. To find a suitable solution for axial force measurement, which could be applied to small-scale model piles, we had to take into account the following requirements: a linear and stable relationship between measured and physical values, a force measurement accuracy of about 0.1 kN, a range of measured forces up to 30 kN, resistance of the measuring gauges against the aggressive action of concrete mortar and against moisture, insensitivity to pile bending, and the economic factor. These requirements can be fulfilled by strain gauge sensors if an appropriate methodology is used for test preparation (Hoffmann [1]). In this paper, we focus on some aspects of the application of strain gauge sensors for model pile tests. The efficiency of the method is proved on the examples of static load tests carried out on SDP model piles acting as single piles and in a group.
Idealized tropical cyclone simulations of intermediate complexity: A test case for AGCMs
Directory of Open Access Journals (Sweden)
Kevin Reed
2012-04-01
Full Text Available The paper introduces a moist, deterministic test case of intermediate complexity for Atmospheric General Circulation Models (AGCMs). We suggest pairing an AGCM dynamical core with simple physical parameterizations to test the evolution of a single, idealized, initially weak vortex into a tropical cyclone. The initial conditions are based on an initial vortex seed that is in gradient-wind and hydrostatic balance. The suggested "simple-physics" package consists of parameterizations of bulk aerodynamic surface fluxes for moisture, sensible heat and momentum, boundary layer diffusion, and large-scale condensation. Such a configuration includes the important driving mechanisms for tropical cyclones, and leads to a rapid intensification of the initial vortex over a forecast period of ten days. The simple-physics test paradigm is not limited to tropical cyclones, and can be universally applied to other flow fields. The physical parameterizations are described in detail to foster model intercomparisons. The characteristics of the intermediate-complexity test case are demonstrated with the help of four hydrostatic dynamical cores that are part of the Community Atmosphere Model version 5 (CAM 5) developed at the National Center for Atmospheric Research (NCAR). In particular, these are the Finite-Volume, Spectral Element, and spectral transform Eulerian and semi-Lagrangian dynamical cores that are coupled to the simple-physics suite. The simulations show that despite the simplicity of the physics forcings the models develop the tropical cyclone at horizontal grid spacings of about 55 km and finer. The simple-physics simulations reveal essential differences in the storm's structure and strength due to the choice of the dynamical core. Similar differences are also seen in complex full-physics aqua-planet experiments with CAM 5 which serve as a motivator for this work. The results suggest that differences in complex full-physics simulations can be, at least
Couplex1 test case nuclear - Waste disposal far field simulation
International Nuclear Information System (INIS)
2001-01-01
This first COUPLEX test case concerns the computation of a simplified Far Field model used in nuclear waste management simulation. From the mathematical point of view the problem is of convection-diffusion type, but the parameters vary strongly from one layer to another. Another particularity is the very concentrated nature of the source, both in space and in time. (author)
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
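The binned-rate format described above lends itself to a Poisson likelihood score. A hedged sketch of the core computation (the function name and data layout are ours; the actual RELM test suite adds consistency and comparison statistics on top of this):

```python
# Illustrative sketch: score a gridded earthquake forecast against an
# observed catalog. `rates` holds the expected number of events lambda_i
# per space-magnitude bin (assumed > 0); `observed` holds the event count
# omega_i actually seen in each bin.
import math

def poisson_log_likelihood(rates, observed):
    """log L = sum_i [ -lambda_i + omega_i * ln(lambda_i) - ln(omega_i!) ]"""
    total = 0.0
    for lam, n in zip(rates, observed):
        total += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return total
```

A forecast whose rates match the observed counts scores a higher joint log-likelihood than one that badly under- or over-predicts the same bins, which is the basis for the pairwise model comparisons the abstract mentions.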
Real-Time Extended Interface Automata for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin
2014-01-01
Testing and verification of the interface between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their shortcomings in describing temporal information and controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add clearer and more detailed temporal information description by the application of time words. We also establish an input interface automaton for every input in order to flexibly solve the problems of input control and interface coverage when applied in the software testing field. Detailed definitions of the RTEIA and the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080
Old star clusters: Bench tests of low mass stellar models
Directory of Open Access Journals (Sweden)
Salaris M.
2013-03-01
Full Text Available Old star clusters in the Milky Way and external galaxies have been (and still are) traditionally used to constrain the age of the universe and the timescales of galaxy formation. A parallel avenue of old star cluster research considers these objects as bench tests of low-mass stellar models. This short review highlights some recent tests of stellar evolution models that make use of photometric and spectroscopic observations of resolved old star clusters. In some cases these tests have pointed to additional physical processes, efficient in low-mass stars, that are not routinely included in model computations. Moreover, recent results from the Kepler mission on the old open cluster NGC 6791 are adding new tight constraints to the models.
Inference and testing on the boundary in extended constant conditional correlation GARCH models
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard
2017-01-01
We consider inference and testing in extended constant conditional correlation GARCH models in the case where the true parameter vector is a boundary point of the parameter space. This is of particular importance when testing for volatility spillovers in the model. The large-sample properties...
Directory of Open Access Journals (Sweden)
Fu Jinzhong
2010-06-01
Full Text Available Abstract Background Species are fundamental units in biology, yet much debate exists surrounding how we should delineate species in nature. Species discovery now requires the use of separate, corroborating datasets to quantify independently evolving lineages and test species criteria. However, the complexity of the speciation process has ushered in a need to infuse studies with new tools capable of aiding in species delineation. We suggest that model-based assignment tests are one such tool. This method circumvents constraints of traditional population genetic analyses and provides a novel means of describing cryptic and complex diversity in natural systems. Using toad-headed agamas of the Phrynocephalus vlangalii complex as a case study, we apply model-based assignment tests to microsatellite DNA data to test whether P. putjatia, a controversial species that closely resembles P. vlangalii morphologically, represents a valid species. Mitochondrial DNA and geographic data are also included to corroborate the assignment test results. Results Assignment tests revealed two distinct nuclear DNA clusters, with 95% (230/243) of the individuals being assigned to one of the clusters with > 90% probability. The nuclear genomes of the two clusters remained distinct in sympatry, particularly at three syntopic sites, suggesting the existence of reproductive isolation between the identified clusters. In addition, a mitochondrial ND2 gene tree revealed two deeply diverged clades, which were largely congruent with the two nuclear DNA clusters, with a few exceptions. Historical mitochondrial introgression events between the two groups might explain the disagreement between the mitochondrial and nuclear DNA data. The nuclear DNA clusters and mitochondrial clades corresponded nicely to the hypothesized distributions of P. vlangalii and P. putjatia. Conclusions These results demonstrate that assignment tests based on microsatellite DNA data can be powerful tools
Test case prioritization using Cuscuta search
Directory of Open Access Journals (Sweden)
Mukesh Mann
2014-12-01
Full Text Available Most companies are under heavy time and resource constraints when it comes to testing a software system. Test prioritization techniques allow the most useful tests to be executed first, exposing faults earlier in the testing process. This makes software testing more efficient and cost-effective by covering the maximum number of faults in minimum time. But test case prioritization is not an easy and straightforward process, and it requires huge effort and time. A number of approaches are available, each with proclaimed advantages and limitations, but the suitability of any one of them is subject-dependent. In this paper, an artificial Cuscuta search algorithm (CSA), inspired by real Cuscuta parasitism, is used to solve the time-constrained prioritization problem. We have applied CSA to prioritizing test cases in an order of maximum fault coverage with minimum test suite execution and compared its effectiveness with different prioritization orderings. Taking into account the experimental results, we conclude that (i) the average percentage of faults detected (APFD) is 82.5% using our proposed CSA ordering, which is equal to the APFD of the optimal and ant colony based orderings, whereas no ordering, random ordering and reverse ordering have 76.25%, 75% and 68.75% APFD, respectively.
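The APFD figures quoted above come from a standard formula over a test-fault coverage matrix. A minimal sketch (our own reconstruction of the standard metric, not the paper's code):

```python
# Illustrative APFD computation for a test-case ordering.
# fault_matrix[t][f] is True when test t detects fault f; every fault is
# assumed to be detected by at least one test in the suite.

def apfd(order, fault_matrix, num_faults):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is
    the 1-based position in `order` of the first test revealing fault i,
    n is the number of tests and m the number of faults."""
    n = len(order)
    first_positions = []
    for f in range(num_faults):
        for pos, t in enumerate(order, start=1):
            if fault_matrix[t][f]:
                first_positions.append(pos)
                break
    return 1.0 - sum(first_positions) / (n * num_faults) + 1.0 / (2 * n)
```

An ordering that exposes every fault early pushes APFD toward 1, which is how the CSA ordering is compared against the optimal, ant colony, random and reverse orderings in the paper.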
Large block migration experiments: INTRAVAL phase 1, Test Case 9
Energy Technology Data Exchange (ETDEWEB)
Gureghian, A.B.; Noronha, C.J. (Battelle, Willowbrook, IL (USA). Office of Waste Technology Development); Vandergraaf, T.T. (Atomic Energy of Canada Ltd., Ottawa, ON (Canada))
1990-08-01
The development of INTRAVAL Test Case 9, as presented in this report, was made possible by a past subsidiary agreement to the bilateral cooperative agreement between the US Department of Energy (DOE) and Atomic Energy of Canada Limited (AECL) encompassing various aspects of nuclear waste disposal research. The experimental aspect of this test case, which included a series of laboratory experiments designed to quantify the migration of tracers in a single, natural fracture, was undertaken by AECL. The numerical simulation of the results of these experiments was performed by the Battelle Office of Waste Technology Development (OWTD) by calibrating an in-house analytical code, FRACFLO, which is capable of predicting radionuclide transport in an idealized fractured rock. Three tracer migration experiments were performed, using nonsorbing uranine dye for two of them and sorbing Cs-137 for the third. In addition, separate batch experiments were performed to determine the fracture surface and rock matrix sorption coefficients for Cs-137. The two uranine tracer migration experiments were used to calculate the average fracture aperture and to calibrate the model for the fracture dispersivity and matrix diffusion coefficient. The predictive capability of the model was then tested by simulating the third, Cs-137, tracer test without changing the parameter values determined from the other experiments. Breakthrough curves of both the experimental and numerical results obtained at the outlet face of the fracture are presented for each experiment. The reported spatial concentration profiles for the rock matrix are based solely on numerical predictions. 22 refs., 12 figs., 8 tabs.
Test Automation Process Improvement A case study of BroadSoft
Gummadi, Jalendar
2016-01-01
This master's thesis research concerns the improvement of the test automation process at BroadSoft Finland as a case study. A test automation project recently started at BroadSoft, but the project is not properly integrated into the existing process. The project is about converting manual test cases to automated test cases. The aim of this thesis is to study the existing BroadSoft test process and different test automation frameworks. In this thesis different test automation processes are studied ...
HYBRID DATA APPROACH FOR SELECTING EFFECTIVE TEST CASES DURING THE REGRESSION TESTING
Mohan, M.; Shrimali, Tarun
2017-01-01
In the software industry, software testing has become more important across the entire software development life cycle. Software testing is one of the fundamental components of software quality assurance. The Software Testing Life Cycle (STLC) is a process involved in testing the complete software, which includes regression testing, unit testing, smoke testing, integration testing, interface testing, system testing, etc. In the STLC, for regression testing, test case selection is one of the most importan...
Design Of Computer Based Test Using The Unified Modeling Language
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint selection admission test for state polytechnics (SB-UMPN) and the independent route (UM-Polbeng) were conducted by using a Paper-Based Test (PBT). The Paper-Based Test model has some weaknesses: it wastes too much paper, the questions can leak to the public, and the test results can be manipulated. This research aimed to create a Computer-Based Test (CBT) model by using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to pay attention to the process of password-protecting the test questions before they are shown, through an encryption and decryption process. The RSA cryptography algorithm was used in this process. Then, the questions drawn from the question bank were randomized by using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application was a client-server network model over a Local Area Network (LAN). The result of the design was the Computer-Based Test application for the admission selection of Politeknik Negeri Bengkalis.
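The Fisher-Yates shuffle named above is a short, standard algorithm. A minimal sketch of how it could randomize question order drawn from a question bank (the function name and usage are illustrative, not the CBT application's actual code):

```python
# Fisher-Yates shuffle: walk the list from the end, swapping each slot with
# a uniformly chosen slot at or before it. Every permutation is equally
# likely, which is why it suits question-order randomization.
import random

def fisher_yates_shuffle(items, rng=random):
    """Shuffle a list in place and return it."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)   # 0 <= j <= i
        items[i], items[j] = items[j], items[i]
    return items
```

Seeding the generator (e.g. `random.Random(seed)`) makes a shuffle reproducible, which is convenient when a candidate's question order must be reconstructed later.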
High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL
International Nuclear Information System (INIS)
SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.
2001-01-01
This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.
International Nuclear Information System (INIS)
Gillespie, S.
2000-01-01
This report describes the tests performed to validate the CRWMS 'Analysis and Logistics Visually Interactive' Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M and O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M and O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: Spent nuclear fuel (SNF) and reactivity calculations; Options for altering reactor life; Adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; Fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); Transportation cask shipping to and storage at an Interim Storage Facility (ISF); Reactor pool allocation options; and Disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results compare with
The International intraval project. Phase 1 test cases
International Nuclear Information System (INIS)
1992-01-01
This report contains a description of the test cases adopted in Phase 1 of the international cooperation project INTRAVAL. Seventeen test cases, based on bench-scale experiments in the laboratory, field tests and natural analogue studies, have been included in the study. The test cases are described in terms of experimental design and types of available data. In addition, some quantitative examples of available data are given as well as references to more extensive documentation of the experiments on which the test cases are based. Fifteen test case examples are given: (1) Mass transfer through clay by diffusion and advection. (2) Uranium migration in crystalline bore cores, small-scale pressure infiltration experiments. (3) Radionuclide migration in single natural fractures in granite. (4) Tracer tests in a deep basalt flow top. (5) Flow and tracer experiment in crystalline rock based on the Stripa 3-D experiment. (6) Tracer experiment in a fracture zone at the Finnsjon research area. (7) Synthetic data base, based on single fracture migration experiments in the Grimsel rock laboratory. (8) Natural analogue studies at Pocos de Caldas, Minais Gerais, Brazil: redox-front and radionuclide movement in an open pit uranium mine. (9) Natural analogue studies at the Koongarra site in the Alligator Rivers area of the Northern Territory, Australia. (10) Large block migration experiments in a block of crystalline rock. (11) Unsaturated flow and transport experiments performed at Las Cruces, New Mexico. (12) Flow and transport experiment in unsaturated fractured rock performed at the Apache Leap Tuff site, Arizona. (13) Experiments in partially saturated tuffaceous rocks performed in the G-tunnel underground facility at the Nevada Test Site, USA. (14) Experimental study of brine transport in porous media. (15) Groundwater flow in the vicinity of the Gorleben salt dome, Federal Republic of Germany.
A general diagnostic model applied to language testing data.
von Davier, Matthias
2008-11-01
Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
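Among the special cases of the GDM named above is the Rasch model, which has a simple closed form. A hedged one-dimensional sketch of that special case (illustrative only; it is not the GDM estimator described in the paper):

```python
# Rasch model: the probability of a correct response depends only on the
# difference between person ability theta and item difficulty b.
import math

def rasch_probability(theta, b):
    """P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))"""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

The two-parameter logistic and generalized partial credit models mentioned in the abstract extend this form with discrimination parameters and multiple response categories.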
Carbon Back Sputter Modeling for Hall Thruster Testing
Gilland, James H.; Williams, George J.; Burt, Jonathan M.; Yim, John T.
2016-01-01
In support of wear testing for the Hall Effect Rocket with Magnetic Shielding (HERMeS) program, the back sputter from a Hall effect thruster plume has been modeled for NASA Glenn Research Center's Vacuum Facility 5. The predicted wear at a near-worst-case condition of 600 V, 12.5 kW was found to be on the order of 3-4 microns/khour in a fully carbon-lined chamber. A more detailed numerical Monte Carlo code was also modified to estimate back sputter for a detailed facility and pumping configuration. This code demonstrated similar back sputter rate distributions, but is not yet accurately modeling the magnitudes. The modeling has been benchmarked to recent HERMeS wear testing, using multiple microbalance measurements. These recent measurements have yielded values on the order of 1.5-2 microns/khour.
Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.
1990-01-01
An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has delivered engineering productivity gains during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program, which allows the operator to analyze on-screen data trends or get hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.
Relational Constraint Driven Test Case Synthesis for Web Applications
Directory of Open Access Journals (Sweden)
Xiang Fu
2010-09-01
Full Text Available This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.
Case studies in ultrasonic testing
International Nuclear Information System (INIS)
Prasad, V.; Satheesh, C.; Varde, P.V.
2015-01-01
Ultrasonic testing is a widely used Non-Destructive Testing (NDT) method and forms an essential part of the in-service inspection programme of nuclear reactors. Its main application is volumetric scanning of weld joints, followed by thickness gauging of pipelines and pressure vessels. Research reactor Dhruva has completed its first in-service inspection programme, in which about 325 weld joints were volumetrically scanned, in addition to thickness gauging of 300 metres of pipelines of various sizes and about 24 pressure vessels. Ultrasonic testing is also used for level measurements, distance measurements, and cleaning and decontamination of tools. Two case studies are presented in this paper in which ultrasonic testing was used successfully to identify the opening status of a butterfly valve and the extent of choking in pipelines in Dhruva reactor systems.
Directory of Open Access Journals (Sweden)
HERBERT POINSTINGL
2009-06-01
Based on the demand for new verbal reasoning tests to enrich the psychological test inventory, a pilot version of a new test was analysed: the 'Family Relation Reasoning Test' (FRRT; Poinstingl, Kubinger, Skoda & Schechtner, forthcoming), in which several basic cognitive operations (logical rules) have been embedded. Given family relationships of varying complexity embedded in short stories, testees had to logically conclude the correct relationship between two individuals within a family. Using empirical data, the linear logistic test model (LLTM; Fischer, 1972), a special case of the Rasch model, was used to test the construct validity of the test: the hypothetically assumed basic cognitive operations had to explain the Rasch model's item difficulty parameters. After being shaped into the LLTM's matrices of weights (q_ij), none of these operations were corroborated by means of Andersen's Likelihood Ratio Test.
Theory Testing Using Case Studies
DEFF Research Database (Denmark)
Sørensen, Pernille Dissing; Løkke, Ann-Kristina
2006-01-01
design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive...
A model based security testing method for protocol implementation.
Fu, Yu Long; Xin, Xiao Long
2014-01-01
The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of a protocol implementation.
Felix, Juan C; Lacey, Michael J; Miller, Jeffrey D; Lenhart, Gregory M; Spitzer, Mark; Kulkarni, Rucha
2016-06-01
Consensus United States cervical cancer screening guidelines recommend use of combination Pap plus human papillomavirus (HPV) testing for women aged 30 to 65 years. An HPV test was approved by the Food and Drug Administration in 2014 for primary cervical cancer screening in women age 25 years and older. Here, we present the results of clinical-economic comparisons of Pap plus HPV mRNA testing including genotyping for HPV 16/18 (co-testing) versus DNA-based primary HPV testing with HPV 16/18 genotyping and reflex cytology (HPV primary) for cervical cancer screening. A health state transition (Markov) model with 1-year cycling was developed using epidemiologic, clinical, and economic data from healthcare databases and published literature. A hypothetical cohort of one million women receiving triennial cervical cancer screening was simulated from ages 30 to 70 years. Screening strategies compared HPV primary to co-testing. Outcomes included total and incremental differences in costs, invasive cervical cancer (ICC) cases, ICC deaths, number of colposcopies, and quality-adjusted life years for cost-effectiveness calculations. Comprehensive sensitivity analyses were performed. In a simulation cohort of one million 30-year-old women modeled up to age 70 years, the model predicted that screening with HPV primary testing instead of co-testing could lead to as many as 2,141 more ICC cases and 2,041 more ICC deaths. In the simulation, co-testing demonstrated a greater number of lifetime quality-adjusted life years (22,334) and yielded $39.0 million in savings compared with HPV primary, thereby conferring greater effectiveness at lower cost. Model results demonstrate that co-testing has the potential to provide improved clinical and economic outcomes when compared with HPV primary. While actual cost and outcome data are evaluated, these findings are relevant to U.S. healthcare payers and women's health policy advocates seeking cost-effective cervical cancer screening
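The screening comparison above rests on a health-state transition (Markov) model cycled yearly over a simulated cohort. A minimal sketch of that machinery is shown below; the three states, transition probabilities, and counts are illustrative placeholders, not the published model's inputs.

```python
import numpy as np

# Illustrative 3-state Markov cohort model (well -> ICC -> dead), 1-year cycle.
# Transition probabilities are hypothetical placeholders, not the values used
# in the published cervical cancer screening model.
P = np.array([
    [0.9990, 0.0008, 0.0002],  # from well
    [0.0000, 0.8500, 0.1500],  # from ICC
    [0.0000, 0.0000, 1.0000],  # dead is absorbing
])

cohort = np.array([1_000_000.0, 0.0, 0.0])  # 1M women entering at age 30
icc_cases = 0.0
for year in range(40):                      # simulate ages 30 -> 70
    icc_cases += cohort[0] * P[0, 1]        # incident ICC cases this cycle
    cohort = cohort @ P

print(f"cumulative ICC cases: {icc_cases:.0f}")
print(f"deaths by age 70: {cohort[2]:.0f}")
```

Comparing two screening strategies then amounts to running the same cohort through two transition matrices and differencing the accumulated cases, deaths, and costs.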
McKim, Stephen A.
2016-01-01
This thesis describes the development and test-data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data, and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion were found to be 2-2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.
Using Glucose Tolerance Tests to Model Insulin Secretion and Clearance
Directory of Open Access Journals (Sweden)
Anthony Shannon
2005-04-01
The purpose of the studies described in this paper is to develop theoretically and to validate experimentally mathematical compartment models which can be used to predict plasma insulin levels in patients with diabetes mellitus (DM). In the case of Type 2 Diabetes Mellitus (T2DM), the C-peptide levels in the plasma were measured as part of routine glucose tolerance tests in order to estimate the prehepatic insulin secretion rates. In the case of Type 1 Diabetes Mellitus (T1DM), a radioactive labelled insulin was used to measure the absorption rate of insulin after a subcutaneous injection of insulin. Both models gave close fits between theoretical estimates and experimental data, and, unlike other models, it is not necessary to seed these models with initial estimates.
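The subcutaneous-absorption case can be illustrated with a minimal two-compartment sketch (depot to plasma); the rate constants, volume, and dose below are assumed values for illustration, not the fitted parameters from the study.

```python
# Minimal two-compartment sketch of subcutaneous insulin absorption
# (depot -> plasma), loosely in the spirit of the T1DM model described
# above. All rate constants and the dose are illustrative assumptions.
ka = 0.02    # absorption rate from subcutaneous depot (1/min), assumed
ke = 0.09    # plasma elimination rate (1/min), assumed
V = 12.0     # distribution volume (L), assumed
dose = 10.0  # injected dose (U), assumed

dt, t_end = 0.1, 300.0          # time step and horizon in minutes
depot, plasma = dose, 0.0
peak = 0.0
for _ in range(int(t_end / dt)):  # explicit Euler integration
    d_depot = -ka * depot
    d_plasma = ka * depot / V - ke * plasma
    depot += dt * d_depot
    plasma += dt * d_plasma
    peak = max(peak, plasma)

print(f"peak plasma concentration: {peak:.3f} U/L")
```

Fitting such a model to measured plasma levels would then reduce to adjusting ka, ke, and V until the simulated curve matches the data.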
Directory of Open Access Journals (Sweden)
Dang Thanh Mai
2017-11-01
A combined hydrological and hydraulic model is presented for flood prediction in Vietnam. This model is applied to the Huong river basin as a test case study. Observed flood flows and water surface levels of the 2002-2005 flood seasons are used for model calibration, and those of the 2006-2007 flood seasons are used for validation of the model. The physically based distributed hydrologic model WetSpa is used for predicting the generation and propagation of flood flows in the mountainous upper sub-basins, and proves to predict flood flows accurately. The Hydrologic Engineering Center River Analysis System (HEC-RAS) hydraulic model is applied to simulate flood flows and inundation levels in the downstream floodplain, and also proves to predict water levels accurately. The predicted water profiles are used for mapping of inundations in the floodplain. The model may be useful in developing flood forecasting and early warning systems to mitigate losses due to flooding in Vietnam.
INTRAVAL test case 1b - modelling results
International Nuclear Information System (INIS)
Jakob, A.; Hadermann, J.
1991-07-01
This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual-porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure of the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual-porosity media models fit the experimental data significantly better than the single-porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
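The fitting step described above, a Marquardt-Levenberg least-squares fit yielding parameter error estimates and a χ² merit value, can be sketched as follows; the two-parameter model and synthetic data stand in for the actual dual-porosity equations and experimental breakthrough curves.

```python
import numpy as np
from scipy.optimize import curve_fit  # uses Levenberg-Marquardt when unbounded

# Fit a simple two-parameter saturating model to synthetic "breakthrough"
# data; report one-sigma parameter errors from the covariance matrix and
# the chi-squared merit function. Model and data are illustrative only.
def model(t, a, tau):
    return a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.5, 20.0, 40)
sigma = 0.02                                    # assumed measurement noise
y = model(t, 1.0, 5.0) + rng.normal(0.0, sigma, t.size)

popt, pcov = curve_fit(model, t, y, p0=[0.5, 1.0], sigma=np.full(t.size, sigma))
perr = np.sqrt(np.diag(pcov))                   # one-sigma parameter errors
chi2 = np.sum(((y - model(t, *popt)) / sigma) ** 2)

print("a   = %.3f +/- %.3f" % (popt[0], perr[0]))
print("tau = %.3f +/- %.3f" % (popt[1], perr[1]))
print("chi2 / dof = %.2f" % (chi2 / (t.size - 2)))
```

A reduced χ² near 1 indicates the model is appropriate for the data, which is the same criterion used in the report to rank the dual- and single-porosity concepts.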
Herbalife hepatotoxicity: Evaluation of cases with positive reexposure tests.
Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Schwarzenboeck, Alexander; Eickhoff, Axel
2013-07-27
To analyze the validity of applied test criteria and causality assessment methods in assumed Herbalife hepatotoxicity with positive reexposure tests, we searched the Medline database for suspected cases of Herbalife hepatotoxicity and retrieved 53 cases, including eight cases with a positive unintentional reexposure and a high causality level for Herbalife. First, analysis of these eight cases focused on the data quality of the positive reexposure cases, requiring a baseline value of alanine aminotransferase (ALT). Causality levels for Herbalife in these eight cases were probable (n = 1), unlikely (n = 4), and excluded (n = 3). Confounding variables included low data quality, alternative diagnoses, poor exclusion of important other causes, and comedication with drugs and herbs in 6/8 cases. More specifically, problems were evident in some cases regarding temporal association, daily doses, exact start and end dates of product use, actual data of laboratory parameters such as ALT, and exact dechallenge characteristics. Shortcomings included scattered exclusion of hepatitis A-C, cytomegalovirus and Epstein-Barr virus infection, with only globally presented or lacking parameters. Hepatitis E virus infection was considered in one single patient and found positive; infections by herpes simplex virus and varicella zoster virus were excluded in none. Only one case fulfilled positive reexposure test criteria in initially assumed Herbalife hepatotoxicity, with lower CIOMS-based causality gradings for the other cases than hitherto proposed.
Analysis of UPTF downcomer tests with the Cathare multi-dimensional model
International Nuclear Information System (INIS)
Dor, I.
1993-01-01
This paper presents the analysis and the modelling, with the system code CATHARE, of UPTF downcomer refill tests simulating the refill phase of a large-break LOCA. The modelling approach in a system code is discussed. First, the reasons why available flooding correlations are difficult to use in a system code in this particular case are developed. Then the use of a 1-D modelling of the downcomer with specific closure relations for the annular geometry is examined. But UPTF 1:1 scale tests and CREARE reduced-scale tests point out some weaknesses of this modelling, due to the particular multi-dimensional nature of the flow in the upper part of the downcomer. Thus a 2-D model is elaborated and implemented into the CATHARE version 1.3e code. The assessment of the model is based on UPTF 1:1 scale tests (saturated and subcooled conditions). Discretization and meshing influence are investigated. On the basis of saturated tests, a new discretization is proposed for different terms of the momentum balance equations (interfacial friction, momentum transport terms), which results in a significant improvement. Sensitivity studies performed on subcooled tests show that the water downflow predictions are improved by increasing the condensation in the downcomer. (author). 8 figs., 5 tabs., 9 refs., 2 appendices
Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model
Directory of Open Access Journals (Sweden)
Salha M. Alzahrani
2015-07-01
Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties for existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
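The core idea, treating word-to-word semantic relatedness as a fuzzy membership degree and applying thresholds to decide plagiarism, can be sketched as follows. The similarity table, aggregation, and thresholds are simplified assumptions standing in for the paper's WordNet measures and learned thresholds.

```python
# Toy sketch: word-to-word relatedness (a hand-made table standing in for
# WordNet similarity) acts as a fuzzy membership degree; degrees are
# aggregated per sentence, and two thresholds decide plagiarism. All
# scores and threshold values are illustrative assumptions.
SIM = {
    ("car", "automobile"): 1.0, ("quick", "fast"): 0.9,
    ("road", "street"): 0.85, ("drove", "driven"): 0.8,
}

def word_sim(a, b):
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def membership(src_words, susp_words):
    # each suspicious word's membership = best match against the source
    degrees = [max(word_sim(w, s) for s in src_words) for w in susp_words]
    return sum(degrees) / len(degrees)

def is_plagiarised(src, susp, permission=0.65, variation=0.15):
    mu = membership(src.split(), susp.split())
    return mu >= permission - variation  # simplified two-threshold rule

src = "the quick car drove down the road"
susp = "the fast automobile driven down the street"
print(is_plagiarised(src, susp))  # -> True
```

In the actual model the membership function is driven by WordNet measures and the two thresholds are learned from annotated data rather than fixed by hand.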
Taher, Fadi; Falkensammer, Juergen; McCarte, Jamie; Strassegger, Johann; Uhlmann, Miriam; Schuch, Philipp; Assadian, Afshin
2017-06-01
The fenestrated Anaconda endograft (Vascutek/Terumo, Inchinnan, UK) is intended for the treatment of abdominal aortic aneurysms with an insufficient infrarenal landing zone. The endografts are custom-made with use of high-resolution, 1-mm-slice computed tomography angiography images. For every case, a nonsterile prototype and a three-dimensional (3D) model of the patient's aorta are constructed to allow the engineers as well as the physician to test-implant the device and to review the fit of the graft. The aim of this investigation was to assess the impact of 3D model construction and prototype testing on the design of the final sterile endograft. A prospectively held database on fenestrated endovascular aortic repair patients treated at a single institution was completed with data from the Vascutek engineers' prototype test results as well as the product request forms. Changes to endograft design based on prototype testing were assessed and are reported for all procedures. Between April 1, 2013, and August 18, 2015, 60 fenestrated Anaconda devices were implanted. Through prototype testing, engineers were able to identify and report potential risks to technical success related to use of the custom device for the respective patient. Theoretical concerns about endograft fit in the rigid model were expressed in 51 cases (85.0%), and the engineers suggested potential changes to the design of 21 grafts (35.0%). Thirteen cases (21.7%) were eventually modified after the surgeon's testing of the prototype. A second prototype was ordered in three cases (5.0%) because of extensive changes to endograft design, such as inclusion of an additional fenestration. Technical success rates were comparable for grafts that showed a perfect fit from the beginning and cases in which prototype testing resulted in a modification of graft design. Planning and construction of fenestrated endografts for complex aortic anatomies where exact fit and positioning of the graft are paramount to
International Nuclear Information System (INIS)
Boergesson, Lennart; Hernelind, Jan
2012-01-01
Document available in extended abstract form only. Three model shear tests of very high quality, simulating a horizontal rock shear through a KBS-3V deposition hole at the centre of a canister, were performed in 1986. The tests simulated a deposition hole at the scale 1:10 with the reference density of the buffer, a very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical, with the exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. by a factor of more than 5000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper, and the movement of the top of the simulated canister were measured continuously during the shear. After the shear was finished, the equipment was dismantled and careful sampling of the bentonite, with measurement of water ratio and density, was made. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus, with the same models and techniques that were used for the full-scale cases in the Swedish safety assessment SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevance of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite, where the stress-strain relations have been derived from laboratory tests. The material model is also described in another article at this conference. The material model is made a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density is not changed during the tests. However, the strain rate varies largely with both the location of the elements and time. This can be taken into account in Abaqus by making the material model a function of the strain rate.
Turbine-missile casing exit tests
International Nuclear Information System (INIS)
Yoshimura, H.R.; Sliter, G.E.
1978-01-01
Nuclear power plant designers are required to provide safety-related components with adequate protection against hypothetical turbine-missile impacts. In plants with a "peninsula" arrangement, protection is provided by installing the turbine axis radially from the reactor building, so that potential missile trajectories are not in line with the plant. In plants with a "non-peninsula" arrangement (turbine axis perpendicular to a radius), designers rely on the low probability of a missile strike and on the protection provided by reinforced concrete walls in order to demonstrate an adequate level of protection (USNRC Regulatory Guide 1.115). One of the critical first steps in demonstrating adequacy is the determination of the energy and spin of the turbine segments as they exit the turbine casing. The spin increases the probability that a subsequent impact with a protective barrier will be off-normal and therefore less severe than the normal impact assumed in plant designs. Two full-scale turbine-missile casing exit tests, which were conducted by Sandia Laboratories at their rocket-sled facility in Albuquerque, New Mexico, are described. Because of wide variations in turbine design details, postulated failure conditions, and missile exit scenarios, the conditions for the two tests were carefully selected to be as prototypical as possible, while still maintaining the well-controlled and well-characterized test conditions needed for generating benchmark data.
2-D Model Test of Dolosse Breakwater
DEFF Research Database (Denmark)
Burcharth, Hans F.; Liu, Zhou
1994-01-01
The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of Dolos breakwaters without superstructures (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with superstructures, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes, sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92. The focus was on the Dolos breakwater with a high superstructure, where there was almost no overtopping; this case is believed to be the most dangerous one. A test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment was to investigate the influence...
National Research Council Canada - National Science Library
Posey, Pamela
2002-01-01
The purpose of this Software Test Description (STD) is to establish formal test cases to be used by personnel tasked with the installation and verification of the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides).
Kelderman, Hendrikus
1984-01-01
Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch
A 'Turing' Test for Landscape Evolution Models
Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.
2008-12-01
Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answer could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.
Testing the normality assumption in the sample selection model with an application to travel demand
van der Klaauw, B.; Koning, R.H.
2003-01-01
In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.
Interactive modelling with stakeholders in two cases in flood management
Leskens, Johannes; Brugnach, Marcela
2013-04-01
New policies on flood management, called Multi-Level Safety (MLS), demand an integral and collaborative approach. The goal of MLS is to minimize flood risks through a coherent package of protection measures, crisis management and flood resilience measures. To achieve this, various stakeholders, such as water boards, municipalities and provinces, have to collaborate in composing these measures. Besides the many advantages this integral and collaborative approach offers, the decision-making environment also becomes more complex. Participants have to consider more criteria than they used to, and have to take a wide network of participants into account, all with specific perspectives, cultures and preferences. In response, sophisticated models have been developed to support decision-makers in grasping this complexity. These models provide predictions of flood events and offer the opportunity to test the effectiveness of various measures under different criteria. Recent model advances in computation speed and model flexibility allow stakeholders to interact directly with a hydrological-hydraulic model during meetings. Besides a better understanding of the decision content, these interactive models are supposed to support the incorporation of stakeholder knowledge in modelling and to support mutual understanding of the different perspectives of stakeholders. To explore the support of interactive modelling in integral and collaborative policies, such as MLS, we tested a prototype of an interactive flood model (3Di) against a conventional model (Sobek) in two cases. The two cases included the design of flood protection measures in Amsterdam and a flood-event exercise in Delft. These case studies yielded two main results. First, we observed that in the exploration phase of a decision-making process, stakeholders participated actively in interactive modelling sessions. This increased the technical understanding of complex problems and the insight into the effectiveness of various
Space engineering modeling and optimization with case studies
Pintér, János
2016-01-01
This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...
GTS-LCS, in-situ experiment 2. Modeling of tracer test 09-03
International Nuclear Information System (INIS)
Manette, M.; Saaltink, M.W.; Soler, J.M.
2015-02-01
Within the framework of the GTS-LCS project (Grimsel Test Site - Long-Term Cement Studies), an in-situ experiment lasting about 5 years was started in 2009 to study water-cement-rock interactions in a fractured granite. Prior to the experiment, a tracer test was performed to characterize the initial flow and transport properties of the rock around the experimental boreholes. This study reports on the model interpretation of tracer test 09-03. The calculations were performed by means of a two-dimensional model (homogeneous fracture plane including 3 boreholes) using the Retraso-CodeBright software package. In the tracer test, Grimsel groundwater containing the tracer (uranine) was circulated in the emplacement borehole during 43 days (zero injection flow rate). Circulation continued without tracer afterwards. Water was extracted at the observation and extraction boreholes. Results from a model sensitivity analysis comparing model results with measured tracer concentrations showed 3 cases where the evolution of tracer concentrations in the 3 different boreholes was satisfactory. In these cases a low-permeability skin affected the emplacement and observation boreholes. No skin appeared to affect the extraction borehole. The background hydraulic gradient seems to have no effect on the results of the tracer test. These results will be applied in the calculation of the initial flow field for the reactive transport phase of in-situ experiment 2 (interaction between pre-hardened cement and fractured granite at Grimsel). (orig.)
Traceability in Model-Based Testing
Directory of Open Access Journals (Sweden)
Mathew George
2012-11-01
The growing complexities of software and the demand for shorter time to market are two important challenges that face today's IT industry. These challenges demand an increase in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.
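Since the abstract does not specify RDML's syntax, the traceability idea can be illustrated with a plain in-memory mapping: each test case links back to a design-model element and a requirement, so a failing test can be traced to its origin. All identifiers below are hypothetical.

```python
# Hand-rolled stand-in for traceability links in model-based testing.
# Each test case records the design-model element it was generated from
# and the requirement that element realizes; RDML itself would express
# these same relations in a markup form. All names are hypothetical.
links = {
    "TC-001": {"model": "StateChart:Login", "requirement": "REQ-12"},
    "TC-002": {"model": "ClassDiagram:Account", "requirement": "REQ-07"},
}

def trace(test_case):
    """Return the model element and requirement behind a failing test."""
    link = links[test_case]
    return link["model"], link["requirement"]

print(trace("TC-001"))  # -> ('StateChart:Login', 'REQ-12')
```

Navigating the other direction (requirement to affected test cases) is a matter of inverting the same mapping, which is the navigation benefit the paper attributes to explicit relationship definitions.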
Generating custom test plans for CASE*Dictionary 5.0
Energy Technology Data Exchange (ETDEWEB)
Atkins, K.D. [Boeing Computer Services, Richland, WA (United States)
1994-04-01
Most database development organizations use a formal software development methodology that requires a certain amount of formal testing. The amount of formal testing performed will vary from methodology to methodology and from site to site. If a very detailed formal test plan is required for each module in a system, the work involved in producing the test plan can be tedious and costly. After a system has been designed and developed using Oracle*CASE, there is much useful information in the CASE*Dictionary repository. If this information could be tied to specific test requirements, a test plan could be generated automatically, saving much time and resources. This paper shows how CASE*Dictionary can be used to store test plan information that can then be used to generate a specific test plan for each module based on its detailed data usage.
Application of the genetic algorithm to the Blume-Emery-Griffiths model: Test Cases
International Nuclear Information System (INIS)
Erdinc, A.
2004-01-01
The equilibrium properties of the Blume-Emery-Griffiths (BEG) model Hamiltonian with arbitrary bilinear (J), biquadratic (K) and crystal field interaction (D) are studied using the genetic algorithm technique. Results are compared with the lowest approximation of the cluster variation method (CVM), which is identical to the mean field approximation. We found the genetic algorithm to be very efficient for a fast search of the average fraction of the spins, especially in the early stages when the system is far from the equilibrium state. A combination of the genetic algorithm followed by one of the well-tested simulation techniques seems to be an optimal approach. The curvature of the inverse magnetic susceptibility is also presented for the stable state of the BEG model.
Modeling of environmentally significant interfaces: Two case studies
International Nuclear Information System (INIS)
Williford, R.E.
2006-01-01
When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of the various physical or chemical phenomena underlying the overall behavior, thereby enabling the scientist to control and manipulate these phenomena, and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems.
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene
2016-04-30
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the HIV infections at different years, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
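The level-two structure described above can be sketched in a few lines: with a constant per-year testing probability, the year of a person's first HIV test follows a geometric waiting time, which yields the cell probabilities of the multinomial over diagnosis years. This is an illustrative simplification (constant rate, fixed window); the paper's actual model uses year-varying rates with temporal-dependence priors.

```python
def diagnosis_probs(test_rate, n_years):
    """Multinomial cell probabilities for the year of first HIV test
    after infection, assuming a constant per-year testing probability.
    P(first test in year k) = (1 - p)^(k-1) * p; the last cell collects
    people still undiagnosed after n_years. Illustrative sketch only."""
    p = test_rate
    probs = [(1 - p) ** (k - 1) * p for k in range(1, n_years + 1)]
    probs.append(1 - sum(probs))  # undiagnosed within the window
    return probs

# Expected diagnosis counts for a cohort are then incidence * probs.
probs = diagnosis_probs(test_rate=0.3, n_years=5)
# probs[0] = 0.3, probs[1] = 0.21, ..., and the cells sum to 1.
```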
Business models & business cases for point-of-care testing
Staring, A.J.; Meertens, L. O.; Sikkel, N.
2016-01-01
Point-Of-Care Testing (POCT) enables clinical tests at or near the patient, with test results that are available instantly or in a very short time frame, to assist caregivers with immediate diagnosis and/or clinical intervention. The goal of POCT is to provide accurate, reliable, fast, and
Economic Crisis and Marital Problems in Turkey: Testing the Family Stress Model
Aytac, Isik A.; Rankin, Bruce H.
2009-01-01
This paper applied the family stress model to the case of Turkey in the wake of the 2001 economic crisis. Using structural equation modeling and a nationally representative urban sample of 711 married women and 490 married men, we tested whether economic hardship and the associated economic strain on families resulted in greater marital…
Analysis and model testing of Super Tiger Type B packaging in accident environments
International Nuclear Information System (INIS)
Yoshimura, H.R.; Romesberg, L.E.; May, R.A.; Joseph, B.J.
1980-01-01
Based on previous scale model test results with more rigid systems and the subsystem tests on drums, it is believed that the scaled models realistically replicate full scale system behavior. Future work will be performed to obtain improved stiffness data on the Type A containers. These data will be incorporated into the finite element model, and improved correlation with the test results is expected. Review of the scale model transport system test results indicated that the method of attachment of the Super Tiger to the trailer was the primary cause for detachment of the outer door during the one-eighth scale grade-crossing test. Although the container seal on the scale model of Super Tiger was not adequately modeled to provide a leak-tight seal, loss of the existing seal in a full scale test can be inferred from the results of the one-quarter scale model grade-crossing test. In each test, approximately two-thirds of the model drums were estimated to have deformed sufficiently to predict loss of drum head closure seal, with several partially losing their contents within the overpack. In no case were drums ejected from the overpack, nor was there evidence of material loss in excess of the amount assumed in the WIPP EIS from any of the Super Tiger models tested. 9 figures
Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services
Directory of Open Access Journals (Sweden)
Konstantinos Sagonas
2012-10-01
Full Text Available With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.
Temperature Buffer Test. Final THM modelling
Energy Technology Data Exchange (ETDEWEB)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)
2012-01-15
The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Temperature Buffer Test. Final THM modelling
International Nuclear Information System (INIS)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel
2012-01-01
The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Helland, Fredrik
2016-01-01
Assessment is an integral part of society and education, and for this reason it is important to know what you measure. This thesis is about explanatory item response modelling of an abstract reasoning assessment, with the objective to create a modern test design framework for automatic generation of valid and precalibrated items of abstract reasoning. Modern test design aims to strengthen the connections between the different components of a test, with a stress on strong theory, systematic it...
Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.
2014-11-01
Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58-1.311, NSE = 0.99-0.97, d = 0.98-0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = -0.10 to -1.30). Validation analyses showed all models performed well; the
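The four accuracy statistics named in the abstract (RMSE, NSE, PBIAS, and Willmott's index of agreement d) have standard textbook definitions, sketched below. This is a generic implementation under those standard definitions, not the authors' code; the sign convention chosen for PBIAS makes negative values indicate overestimation, as in the abstract.

```python
import math

def skill_scores(obs, sim):
    """RMSE, Nash-Sutcliffe efficiency, percent bias, and Willmott's
    index of agreement for paired observed/simulated series."""
    n = len(obs)
    mo = sum(obs) / n
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    rmse = math.sqrt(sq_err / n)
    nse = 1 - sq_err / sum((o - mo) ** 2 for o in obs)
    # Negative PBIAS = model overestimates on average.
    pbias = 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
    d = 1 - sq_err / sum((abs(s - mo) + abs(o - mo)) ** 2
                         for o, s in zip(obs, sim))
    return rmse, nse, pbias, d

# A perfect simulation gives RMSE = 0, NSE = 1, PBIAS = 0, d = 1.
rmse, nse, pbias, d = skill_scores([10.0, 12.0, 14.0], [10.0, 12.0, 14.0])
```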
Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.
2014-01-01
Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58–1.311, NSE = 0.99–0.97, d = 0.98–0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = −0.10 to −1.30). Validation analyses showed all models performed
Modeling of the Case Grammatical Meaning
Directory of Open Access Journals (Sweden)
Алексей Львович Новиков
2014-12-01
Full Text Available The article raises the problem of constructing a semantic model to describe the meaning of the grammatical category of case in languages of different types. The main objective of this publication is to provide an overview of different points of view on the semantic structure of the category of case and to compare different models of case semantics. The initial impetus for work on case semantics came from the grammatical and typological ideas of A.A. Potebnya and R. Jakobson. The basis of these models, which differ from each other in the number and nature of the features they distinguish, is the idea that grammatical meaning can be represented as a structured set of semantic features. The analysis shows that the construction of formal models of grammatical categories is impossible without reference to the dominant semantic features in the structure of grammatical meaning. Despite all the difficulties of modeling grammatical semantics, constructing a semantic model of case is an interesting and promising task for general morphology and typological linguistics.
Ozatac, Nesrin; Gokmenoglu, Korhan K; Taspinar, Nigar
2017-07-01
This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationship among the variables and to avoid omitted-variable bias. The results of the bounds test and the error correction model under the autoregressive distributed lag mechanism suggest long-run relationships among the variables as well as evidence of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings can have policy implications including the imposition of a "polluter pays" mechanism, such as the implementation of a carbon tax for pollution trading, to raise the urban population's awareness about the importance of adopting renewable energy and to support clean, environmentally friendly technology.
Refined Diebold-Mariano Test Methods for the Evaluation of Wind Power Forecasting Models
Directory of Open Access Journals (Sweden)
Hao Chen
2014-07-01
Full Text Available The scientific evaluation methodology for the forecast accuracy of wind power forecasting models is an important issue in the domain of wind power forecasting. However, traditional forecast evaluation criteria, such as Mean Squared Error (MSE) and Mean Absolute Error (MAE), have limitations in application to some degree. In this paper, a modern evaluation criterion, the Diebold-Mariano (DM) test, is introduced. The DM test can discriminate significant differences in forecasting accuracy between different models based on a scheme of quantitative analysis. Furthermore, the augmented DM test with a rolling windows approach is proposed to give a stricter forecasting evaluation. By extending the loss function to an asymmetric structure, the asymmetric DM test is proposed. The case study indicates that the evaluation criteria based on the DM test can relieve the influence of random sample disturbance. Moreover, the proposed augmented DM test can provide more evidence when the cost of changing models is expensive, and the proposed asymmetric DM test can add in the asymmetric factor and provide practical evaluation of wind power forecasting models. It is concluded that the two refined DM tests can provide a reference for the comprehensive evaluation of wind power forecasting models.
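The classical DM test that the abstract's refinements build on reduces to a t-type statistic on the loss differential between two forecast error series. The sketch below uses squared-error loss and omits the long-run (HAC) variance correction for serially correlated differentials, so it is a minimal illustration of the idea rather than the refined rolling-window or asymmetric variants proposed in the paper.

```python
import math

def diebold_mariano(e1, e2):
    """Basic Diebold-Mariano statistic for equal predictive accuracy.
    e1, e2: forecast errors of models 1 and 2 on the same test set.
    Loss = squared error; the statistic is approximately N(0,1) under
    the null of equal accuracy (no autocorrelation correction here)."""
    d = [a * a - b * b for a, b in zip(e1, e2)]  # loss differentials
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# DM > 0 suggests model 2 forecasts better (smaller losses);
# |DM| > 1.96 is significant at the 5% level under the N(0,1) null.
dm = diebold_mariano([2.0, 1.5, 2.5, 1.0], [1.0, 0.5, 1.5, 0.8])
```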
Modification of Concrete Damaged Plasticity model. Part II: Formulation and numerical tests
Directory of Open Access Journals (Sweden)
Kamińska Inez
2017-01-01
Full Text Available A refined model for elastoplastic damaged material is formulated based on the plastic potential introduced in Part I [1]. The considered model is an extension of the Concrete Damaged Plasticity material implemented in Abaqus [2]. In the paper the stiffness tensor for elastoplastic damaged behaviour is derived. In order to validate the model, computations for the uniaxial tests are performed. The response of the model for various parameter choices is shown and compared to the response of the CDP model.
Comparison of model propeller tests with airfoil theory
Durand, William F; Lesley, E P
1925-01-01
The purpose of the investigation covered by this report was the examination of the degree of approach which may be anticipated between laboratory tests on model airplane propellers and results computed by the airfoil theory, based on tests of airfoils representative of successive blade sections. It is known that the corrections of angles of attack and for aspect ratio, speed, and interference rest either on experimental data or on somewhat uncertain theoretical assumptions. The general situation as regards these four sets of corrections is far from satisfactory, and while it is recognized that occasion exists for the consideration of such corrections, their determination in any given case is a matter of considerable uncertainty. There exists at the present time no theory generally accepted and sufficiently comprehensive to indicate the amount of such corrections, and the application to individual cases of the experimental data available is, at best, uncertain. While the results of this first phase of the investigation are less positive than had been hoped might be the case, the establishment of the general degree of approach between the two sets of results which might be anticipated on the basis of this simpler mode of application seems to have been desirable.
Sample test cases using the environmental computer code NECTAR
International Nuclear Information System (INIS)
Ponting, A.C.
1984-06-01
This note demonstrates a few of the many different ways in which the environmental computer code NECTAR may be used. Four sample test cases are presented and described to show how NECTAR input data are structured. Edited output is also presented to illustrate the format of the results. Two test cases demonstrate how NECTAR may be used to study radio-isotopes not explicitly included in the code. (U.K.)
Computer tomography of flows external to test models
Prikryl, I.; Vest, C. M.
1982-01-01
Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
2017-01-01
We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...... the nuisance parameters lie on the boundary of the parameter space, nor on lack of identification. Our general results on GARCH-X type models are applied to Gaussian based GARCH-X models, GARCH-X models with Student's t-distributed innovations as well as the integer-valued GARCH-X (PAR-X) models....
DWPF PCCS version 2.0 test case
International Nuclear Information System (INIS)
Brown, K.G.; Pickett, M.A.
1992-01-01
To verify the operation of the Product Composition Control System (PCCS), a test case specific to DWPF operation was developed. The values and parameters necessary to demonstrate proper DWPF product composition control have been determined and are presented in this paper. If this control information (i.e., for transfers and analyses) is entered into the PCCS as illustrated in this paper, and the results obtained correspond to the independently-generated results, it can safely be said that the PCCS is operating correctly and can thus be used to control the DWPF. The independent results for this test case will be generated and enumerated in a future report. This test case was constructed along the lines of normal DWPF operation. Many essential parameters are internal to the PCCS (e.g., property constraint and variance information) and can only be manipulated by personnel knowledgeable of the Symbolics® hardware and software. The validity of these parameters will rely on induction from observed PCCS results. Key process control values are entered into the PCCS as they would be during normal operation. Examples of the screens used to input specific process control information are provided. These inputs should be entered into the PCCS database, and the results generated should be checked against the independent, computed results to confirm the validity of the PCCS
Mckim, Stephen A.
2016-01-01
This thesis describes the development and correlation of a thermal model that forms the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale spacecraft using ANSYS and the correlation process implemented are presented. The thermal model was correlated to within plus or minus 3 degrees Celsius of the thermal vacuum test data, and was determined sufficient to make future propellant predictions on MMS. The model was also found to be relatively sensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed to improve temperature predictions in the upper hemisphere of the propellant tank where predictions were found to be 2 to 2.5 °C lower than the test data. A road map for applying the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.
International Nuclear Information System (INIS)
Torres-Echeverria, A.C.; Martorell, S.; Thompson, H.A.
2011-01-01
This paper addresses the modeling of probability of dangerous failure on demand and spurious trip rate of safety instrumented systems that include MooN voting redundancies in their architecture. MooN systems are a special case of k-out-of-n systems. The first part of the article is devoted to the development of a time-dependent probability of dangerous failure on demand model with capability of handling MooN systems. The model is able to model explicitly common cause failure and diagnostic coverage, as well as different test frequencies and strategies. It includes quantification of both detected and undetected failures, and puts emphasis on the quantification of common cause failure to the system probability of dangerous failure on demand as an additional component. In order to be able to accommodate changes in testing strategies, special treatment is devoted to the analysis of system reconfiguration (including common cause failure) during test of one of its components, which is then included in the model. Another model for spurious trip rate is also analyzed and extended under the same methodology in order to empower it with similar capabilities. These two models are powerful and at the same time simple enough to be suitable for handling dependability measures in multi-objective optimization of both system design and test strategies for safety instrumented systems. The level of modeling detail considered permits compliance with the requirements of the standard IEC 61508. The two models are applied to brief case studies to demonstrate their effectiveness. The results obtained demonstrated that the first model is adequate to quantify time-dependent PFD of MooN systems during different system states (i.e. full operation, test and repair) and different MooN configurations, whose values are averaged to obtain the average PFD (PFDavg). Also, it was demonstrated that the second model is adequate to quantify STR including spurious trips induced by internal component failure and
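For context, the quantity the paper models in time-dependent form is often approximated, in the low-demand case, by simple closed-form PFDavg expressions per voting architecture plus a beta-factor common cause term. The sketch below uses those common textbook approximations only; it is not the paper's time-dependent model and not the full IEC 61508-6 formulas, and the architecture set and parameter names are illustrative assumptions.

```python
def pfd_avg(arch, lam_du, ti, beta=0.0):
    """Rough average probability of failure on demand for a few MooN
    voting architectures (textbook low-demand approximations).
    lam_du: dangerous undetected failure rate (per hour)
    ti:     proof test interval (hours)
    beta:   common cause failure fraction for redundant channels"""
    x = lam_du * ti
    independent = {
        "1oo1": x / 2,
        "1oo2": x * x / 3,
        "2oo2": x,
        "2oo3": x * x,
    }[arch]
    # Beta-factor CCF term applies to the redundant architectures.
    ccf = 0.0 if arch in ("1oo1", "2oo2") else beta * x / 2
    return independent + ccf

# Redundancy shrinks PFDavg: a 1oo2 pair (even with 10% CCF) beats
# a single 1oo1 channel with the same failure rate and test interval.
p_single = pfd_avg("1oo1", 1e-6, 8760)
p_dual = pfd_avg("1oo2", 1e-6, 8760, beta=0.1)
```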
Wei, Jiawei
2011-07-01
We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.
BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool
Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.
2006-01-01
BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…
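The core idea of fault-based test generation can be shown in miniature: a test case detects a fault if the original expression and the faulty mutant evaluate differently on it. The sketch below simply enumerates truth assignments to find such distinguishing tests; BEAT's actual selection strategies are more sophisticated, and the example expressions are illustrative, not from the paper.

```python
from itertools import product

def distinguishing_tests(expr, mutant, variables):
    """Return all truth assignments on which `expr` and a faulty
    `mutant` of it disagree; each one is a test case guaranteed to
    detect that fault. Expressions are Python Boolean expressions
    over the given variable names (brute-force enumeration)."""
    tests = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if eval(expr, {}, dict(env)) != eval(mutant, {}, dict(env)):
            tests.append(env)
    return tests

# Operator-reference fault: 'and' mutated to 'or'.
tests = distinguishing_tests("a and (b or c)", "a or (b or c)",
                             ["a", "b", "c"])
# 4 of the 8 assignments expose this fault.
```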
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test
Directory of Open Access Journals (Sweden)
Chichun Hu
2017-01-01
In this study, a small-scale accelerated loading test based on similarity theory and an Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized based on the similarity model test developed in the study. Moreover, the round steel dowel was found to perform similarly to the larger FRP dowel, and elliptical dowels can be preferentially considered in practice.
Testing the generalized partial credit model
Glas, Cornelis A.W.
1996-01-01
The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is describe...
Cavitating Propeller Performance in Inclined Shaft Conditions with OpenFOAM: PPTC 2015 Test Case
Gaggero, Stefano; Villa, Diego
2018-05-01
In this paper, we present our analysis of the non-cavitating and cavitating unsteady performances of the Potsdam Propeller Test Case (PPTC) in oblique flow. For our calculations, we used the Reynolds-averaged Navier-Stokes equation (RANSE) solver from the open-source OpenFOAM libraries. We selected the homogeneous mixture approach to solve for multiphase flow with phase change, using the volume of fluid (VoF) approach to solve the multiphase flow and modeling the mass transfer between vapor and water with the Schnerr-Sauer model. Comparing the model results with the experimental measurements collected during the Second Workshop on Cavitation and Propeller Performance - SMP'15 enabled our assessment of the reliability of the open-source calculations. Comparisons with the numerical data collected during the workshop enabled further analysis of the reliability of different flow solvers from which we produced an overview of recommended guidelines (mesh arrangements and solver setups) for accurate numerical prediction even in off-design conditions. Lastly, we propose a number of calculations using the boundary element method developed at the University of Genoa for assessing the reliability of this dated but still widely adopted approach for design and optimization in the preliminary stages of very demanding test cases.
Verification test of the SURF and SURFplus models in xRage
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-18
As a verification test of the SURF and SURFplus models in the xRage code, we use a propagating underdriven detonation wave in 1-D. This is one of the few test cases for which an accurate solution can be determined from the theoretical structure of the solution. The solution consists of a steady ZND reaction zone profile joined with a scale-invariant rarefaction or Taylor wave, followed by a constant state. The end of the reaction profile and the head of the rarefaction coincide with the sonic CJ state of the detonation wave. The constant state is required to match a rigid wall boundary condition. For a test case, we use PBX 9502 with the same EOS and burn rate as previously used to test the shock detector algorithm utilized by the SURF model. The detonation wave is propagated for 10 μs (slightly under 80 mm). As expected, the pointwise errors are largest in the neighborhood of discontinuities: the pressure discontinuity at the lead shock front and the pressure-derivative discontinuities at the head and tail of the rarefaction. As a quantitative measure of the overall accuracy, the L2 norm of the difference between the numerical pressure and the exact solution is used. Results are presented for simulations using both a uniform grid and an adaptive grid that refines the reaction zone.
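The quantitative error measure described above, the discrete L2 norm of the difference between the numerical and exact pressure, can be sketched as follows. The grid and both profiles here are hypothetical stand-ins, not the actual xRage output or the PBX 9502 solution:

```python
import numpy as np

def l2_error(p_numerical, p_exact, dx):
    """Discrete L2 norm of the difference between a numerical solution and
    the exact solution on a uniform grid with spacing dx."""
    diff = p_numerical - p_exact
    return np.sqrt(np.sum(diff**2) * dx)

# Hypothetical 1-D profiles on a uniform grid (illustrative only)
x = np.linspace(0.0, 80.0, 801)          # mm
dx = x[1] - x[0]
p_exact = np.exp(-x / 40.0)              # stand-in for the exact pressure profile
p_num = p_exact + 1e-3 * np.sin(x)       # stand-in numerical solution with small error

err = l2_error(p_num, p_exact, dx)
```

A grid-convergence study would evaluate this norm on successively refined grids and check that it decreases at the expected rate.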
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative, since they generate lower type I errors than nominal levels, and that global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD
Directory of Open Access Journals (Sweden)
A. Jalila
2015-10-01
The adoption of fault-detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major approaches to detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stages, improving software quality at reduced cost.
Should we assess climate model predictions in light of severe tests?
Katzav, Joel
2011-06-01
According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then-unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
Static tilt tests of a full-sized cylindrical liquid storage tank model
International Nuclear Information System (INIS)
Sakai, F.
1988-01-01
This paper describes a static tilt test with a full-scale tank model representing above-ground LNG, LPG, and oil storage tanks. The main points investigated are as follows: stress and deformation at each part of the tank wall, the bottom plate, and the anchor straps in the case that the anchor straps are very effective; behavior in the case that the anchor straps are not very effective; behavior in the case of no anchors; influence of the roof above the shell; and influence of the foundation rigidity under the bottom plate
Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling
Directory of Open Access Journals (Sweden)
Paul-Antoine Santoni
2011-01-01
This work presents the extension of a physical model for the spread of surface fire at landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire, which converts the two-dimensional model of fire spread to three dimensions, taking spatial information into account. Two wildland fire behavior case studies were elaborated and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.
Model test (46 CFR § 154.449)
2010-10-01
Shipping; Coast Guard, Department of Homeland Security (continued); Certain Bulk Dangerous Cargoes; Safety Standards for Self... § 154.449 Model test. The following analyzed data of a model test of structural elements for independent...
Energy Technology Data Exchange (ETDEWEB)
Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David
2013-09-01
The overarching goal of this work is to advance the capabilities of technology evaluators in evaluating the building-level baseline modeling capabilities of Energy Management and Information System (EMIS) software. Through their customer engagement platforms and products, EMIS software products have the potential to produce whole-building energy savings through multiple strategies: building system operation improvements, equipment efficiency upgrades and replacements, and inducement of behavioral change among the occupants and operations personnel. Some offerings may also automate the quantification of whole-building energy savings, relative to a baseline period, using empirical models that relate energy consumption to key influencing parameters, such as ambient weather conditions and building operation schedule. These automated baseline models can be used to streamline the whole-building measurement and verification (M&V) process, and therefore are of critical importance in the context of multi-measure whole-building focused utility efficiency programs. This report documents the findings of a study that was conducted to begin answering critical questions regarding quantification of savings at the whole-building level, and the use of automated and commercial software tools. To evaluate the modeling capabilities of EMIS software particular to the use case of whole-building savings estimation, four research questions were addressed: 1. What is a general methodology that can be used to evaluate baseline model performance, both in terms of a) overall robustness, and b) relative to other models? 2. How can that general methodology be applied to evaluate proprietary models that are embedded in commercial EMIS tools? How might one handle practical issues associated with data security, intellectual property, appropriate testing ‘blinds’, and large data sets? 3. How can buildings be pre-screened to identify those that are the most model-predictable, and therefore those
Mathematical modelling with case studies using Maple and Matlab
Barnes, B
2014-01-01
Introduction to Mathematical Modeling: Mathematical models; An overview of the book; Some modeling approaches; Modeling for decision making. Compartmental Models: Introduction; Exponential decay and radioactivity; Case study: detecting art forgeries; Case study: Pacific rats colonize New Zealand; Lake pollution models; Case study: Lake Burley Griffin; Drug assimilation into the blood; Case study: dull, dizzy, or dead?; Cascades of compartments; First-order linear DEs; Equilibrium points and stability; Case study: money, money, money makes the world go around. Models of Single Populations: Exponential growth; Density-
Hall Thruster Thermal Modeling and Test Data Correlation
Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren
2016-01-01
The life of Hall effect thrusters is primarily limited by plasma erosion and thermal-related failures. NASA Glenn Research Center (GRC), in cooperation with the Jet Propulsion Laboratory (JPL), has recently completed development of a Hall thruster with specific emphasis on mitigating these limitations. Extending the operational life of Hall thrusters makes them more suitable for some of NASA's longer-duration interplanetary missions. This paper documents the thermal model development, refinement, and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high-temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and, to a lesser extent, power/heat deposition. Additionally, a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long-term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.
Convective aggregation in idealised models and realistic equatorial cases
Holloway, Chris
2015-04-01
Idealised explicit convection simulations of the Met Office Unified Model are shown to exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen previously in other models in several recent studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapour (CWV) field. To investigate the relevance of this behaviour to the real world, these idealized simulations are compared with five 15-day cases of real organized convection in the tropics, including multiple simulations of each case testing sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. Despite similar large-scale forcing via lateral boundary conditions, systematic differences in mean CWV, CWV distribution shape, and the length scale of CWV features are found between the different sensitivity runs, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations.
Impacts modeling using the SPH particulate method. Case study
International Nuclear Information System (INIS)
Debord, R.
1999-01-01
The aim of this study is the modeling of the impact of melted metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations. Also, the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial velocity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct. The domain of use of these coefficients was determined for a low-speed impact. (J.S.)
Movable scour protection. Model test report
Energy Technology Data Exchange (ETDEWEB)
Lorenz, R.
2002-07-01
This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves, and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to generate immediately after the structure has been placed. It is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e., no general seabed lowering): 1. The maximum scour depth found in steady current on a sand bed was 1.6 times the cylinder diameter. 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees. 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for
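The flat-bed ratios reported above can be turned into a quick sizing check. The function and the 4 m pile diameter are illustrative, not part of the report:

```python
import math

def scour_geometry(pile_diameter_m):
    """Apply the flat-bed model-test ratios reported above:
    max scour depth = 1.6 D, minimum upstream extension = 2.8 D."""
    return {
        "max_scour_depth_m": 1.6 * pile_diameter_m,
        "upstream_extension_m": 2.8 * pile_diameter_m,
    }

geom = scour_geometry(4.0)  # e.g. a hypothetical 4 m monopile
# Consistency check: depth/extension = 1.6/2.8 gives a slope of about 30 degrees
slope_deg = math.degrees(math.atan(1.6 / 2.8))
```

Note that the two ratios are mutually consistent: arctan(1.6/2.8) is roughly 29.7 degrees, matching the stated 30-degree slope.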
Tree-Based Global Model Tests for Polytomous Rasch Models
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Model test (46 CFR § 154.431)
2010-10-01
Shipping; Coast Guard... § 154.431 Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints... (c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank...
Experimental impact testing and analysis of composite fan cases
Vander Klok, Andrew Joe
For aircraft engine certification, one of the requirements is to demonstrate the ability of the engine to withstand a fan blade-out (FBO) event. A FBO event may be caused by fatigue failure of the fan blade itself or by impact damage from foreign objects such as bird strike. An uncontained blade can damage flight-critical engine components or even the fuselage. The design of a containment structure is related to numerous parameters such as the blade tip speed; blade material, size, and shape; hub/tip diameter; and fan case material, configuration, rigidity, etc. To investigate all parameters by spin experiments with a full-size rotor assembly can be prohibitively expensive. Gas gun experiments can generate useful data for the design of engine containment cases at much lower cost. To replicate damage modes similar to those on a fan case in FBO testing, the gas gun experiment has to be carefully designed. To investigate the experimental procedure and data acquisition techniques for FBO tests, a low-cost, small spin rig was first constructed. FBO tests were carried out with the small rig. The observed blade-to-fan-case interactions were similar to those reported using larger spin rigs. The small rig has potential in a variety of applications, from investigating FBO events and verifying concept designs of rotors to developing spin testing techniques. This rig was used in the development of the notched-blade releasing mechanism, a wire trigger method for synchronized data acquisition, high-speed video imaging, etc. A relationship between the notch depth and the release speed was developed and verified. Next, an original custom-designed spin testing facility was constructed. Driven by a 40 HP, 40,000 rpm air turbine, the spin rig is housed in a vacuum chamber of Ø72 in x 40 in (1829 mm x 1016 mm). The heavily armored chamber is furnished with 9 viewports. This facility enables unprecedented investigations of FBO events. In parallel, a 15.4 ft (4.7 m) long Ø4.1 inch (105 mm
Karabinos, Michael Joseph
2015-01-01
This dissertation tests the universal suitability of the records continuum model by using two cases from the decolonization of Southeast Asia. The continuum model is a new model of records visualization invented in the 1990s that sees records as free to move throughout four ‘dimensions’ rather than
A user interface for the Kansas Geological Survey slug test model.
Esling, Steven P; Keller, John E
2009-01-01
The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
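One common formulation of a Bayesian FDR rule rejects the hypotheses with the smallest posterior null probabilities while the average posterior null probability among the rejections (an estimate of the Bayesian FDR) stays below the target level. The sketch below illustrates that generic formulation with made-up numbers; it is an assumption for illustration, not necessarily the exact procedure of the paper:

```python
import numpy as np

def bayesian_fdr_rejections(post_null_probs, alpha=0.05):
    """Reject the hypotheses with the smallest posterior null probabilities
    such that the average posterior null probability among the rejected set
    stays at or below alpha."""
    order = np.argsort(post_null_probs)
    sorted_p = np.asarray(post_null_probs)[order]
    # Running mean of sorted probabilities is nondecreasing, so the largest
    # admissible rejection set is the count of means at or below alpha.
    running_mean = np.cumsum(sorted_p) / np.arange(1, len(sorted_p) + 1)
    k = int(np.sum(running_mean <= alpha))
    return order[:k]

# Hypothetical posterior null probabilities for 8 tests
p0 = np.array([0.01, 0.90, 0.02, 0.60, 0.03, 0.95, 0.20, 0.04])
rejected = bayesian_fdr_rejections(p0, alpha=0.05)
```

Here the four most convincing hypotheses are rejected; their average posterior null probability (0.025) is the estimated Bayesian FDR of the decision.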
Findings concerning the testis, vas deferens, and epididymis in adult cases with nonpalpable testes
Directory of Open Access Journals (Sweden)
Coskun Sahin
2011-12-01
In this study, we aimed to determine the relationship between the testis, epididymis, and vas deferens in adult cases with nonpalpable testes. Between January 1996 and December 2009, we evaluated 154 adult cases with nonpalpable testes. Mean age was 23 years (20-27 years). Explorations were performed by open inguinal incision, by laparoscopy, and by inguinal incision and laparoscopy together on 22, 131, and 1 patient, respectively. Of all the unilateral cases, 32 were accepted as vanishing testis. In five of these cases, the vas deferens was ending inside the abdomen, and in the others, it was ending inside the scrotum. In the remaining 99 unilateral and 22 bilateral cases, 143 testes were found in total. Testes were found in the inguinal canal as atrophic in one case, at the right renal pedicle level with a dysmorphic testis in one case, and anterior to the internal ring between the bladder and the common iliac vessels at a smaller than normal size in 119 cases. One (0.69%) case did not have an epididymis. While the epididymis was attached to the testis only at the head and tail in 88 (61.53%) cases, it was totally attached to the testis in 54 (37.76%) cases. There is an obviously high incidence of testis and vas deferens anomalies, of which epididymal anomalies are the most frequent. In cases with abdominal testes, this rate is highest for abdominal testes located high.
EVALUATION OF REINFORCING EFFECT ON FACEBOLTS FOR TUNNELING USING X-RAY CT AND CENTRIFUGE MODEL TEST
Takano, Daiki; Otani, Jun; Date, Kensuke; Yokot, Yasuhiro; Nagatani, Hideki
The purpose of this paper is, first, to simulate tunnel face failure in the laboratory with four model tests, in which a tunnel model is pulled out of a sandy ground: one case without any auxiliary method or facebolts, and three cases using facebolts of different lengths. Second, the behavior of the model ground is investigated using an X-ray computed tomography (CT) scanner to visualize the failure zone in three dimensions. In addition, a series of centrifuge model tests is conducted to confirm the X-ray CT results and to discuss the ground behavior under a full-scale stress level. Finally, the effect of the face bolting method is evaluated based on all the test results.
30 CFR 250.522 - When do I have to repeat casing diagnostic testing?
2010-07-01
Mineral Resources; Operations, Casing Pressure Management. § 250.522 When do I have to repeat casing diagnostic testing? Casing diagnostic testing must be repeated according to the following table: When * * * you must repeat diagnostic...
International Nuclear Information System (INIS)
Thorne, D.J.; McDowell-Boyer, L.M.; Kocher, D.C.; Little, C.A.; Roemer, E.K.
1993-01-01
The International Atomic Energy Agency (IAEA) started the Coordinated Research Program entitled 'The Safety Assessment of Near-Surface Radioactive Waste Disposal Facilities.' The program is aimed at improving confidence in the modeling results for safety assessments of waste disposal facilities. The program has been given the acronym NSARS (Near-Surface Radioactive Waste Disposal Safety Assessment Reliability Study) for ease of reference. The purpose of this report is to present the ORNL modeling results for the first test case (i.e., Test Case 1) of the IAEA NSARS program. Test Case 1 is based on near-surface disposal of radionuclides that are subsequently leached to a saturated-sand aquifer. Exposure to radionuclides results from use of a well screened in the aquifer and from intrusion into the repository. Two repository concepts were defined in Test Case 1: a simple earth trench and an engineered vault
Extended shadow test approach for constrained adaptive testing
Veldkamp, Bernard P.; Ariel, A.
2002-01-01
Several methods have been developed for use in constrained adaptive testing. Item pool partitioning, multistage testing, and testlet-based adaptive testing are methods that perform well for specific cases of adaptive testing. The weighted deviation model and the Shadow Test approach can be more
Experimentally testing the standard cosmological model
Energy Technology Data Exchange (ETDEWEB)
Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))
1990-11-01
The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard-model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments, coupled with the see-saw model for ν masses, may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Stoner–Wohlfarth model for the anisotropic case
Energy Technology Data Exchange (ETDEWEB)
Campos, Marcos F. de, E-mail: mcampos@metal.eeimvr.uff.br [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil); Sampaio da Silva, Fernanda A. [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil); Perigo, Elio A. [Laboratory for the Physics of Advanced Materials, University of Luxembourg, L1511 Luxembourg (Luxembourg); Castro, José A. de [Programa de Pós-graduação em Engenharia Metalúrgica-PUVR, Universidade Federal Fluminense, Av dos Trabalhadores 420,27255-125 Volta Redonda, Rio de Janeiro (Brazil)
2013-11-15
The Stoner–Wohlfarth (SW) model was calculated for the anisotropic case, assuming Gaussian, Lorentzian and cosⁿ(α) crystallographic texture distributions. All these distributions were tested, and both the Gaussian and cosⁿ(α) forms give similar results for M_r/M_s above 0.8. However, the use of cosⁿ(α) makes it easier to find analytical expressions representing texture. The Lorentzian distribution is a suitable choice for poorly aligned magnets, or magnets with a high fraction of misaligned grains. It is discussed how to obtain the alignment degree M_r/M_s directly from two measurements of magnetic remanence, in the directions transverse and parallel to the alignment direction of the magnet. It is demonstrated that even well-aligned magnets with M_r/M_s = 0.96 present a coercive field of 60–70% of the anisotropy field, depending on the chosen distribution. The anisotropic SW model was used to discuss hysteresis squareness: as the crystallographic texture improves, the loop squareness also increases. - Highlights: • The Stoner–Wohlfarth model was calculated for the anisotropic case. • Different distribution functions for texture description were compared and discussed. • The Lorentzian distribution is adequate for poorly oriented magnets. • Determination of the alignment ratio M_r/M_s from two remanence measurements. • Prediction of the coercive field in aligned Stoner–Wohlfarth magnets.
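In the Stoner–Wohlfarth picture, the alignment degree M_r/M_s along the alignment axis is the mean of cos α over the easy-axis texture distribution. As an illustrative sketch only (assuming an axially symmetric cosⁿ(α) distribution, weighted by sin α for solid angle; not the authors' exact formulation), the ratio can be evaluated numerically:

```python
import math

def alignment_ratio(n, steps=10000):
    """M_r/M_s = <cos a> for an axially symmetric cos^n(a) texture.
    Midpoint-rule integration over a in [0, pi/2], with the sin(a)
    solid-angle weight included in the distribution."""
    num = den = 0.0
    for i in range(steps):
        a = (i + 0.5) * (math.pi / 2) / steps
        w = math.cos(a) ** n * math.sin(a)  # texture density x solid-angle weight
        num += math.cos(a) * w
        den += w
    return num / den

# n = 0 recovers the classic isotropic SW remanence of 0.5;
# larger n means sharper texture, pushing M_r/M_s toward 1.
for n in (0, 2, 20, 100):
    print(n, round(alignment_ratio(n), 4))
```

The design choice here is the cosⁿ form from the abstract: it makes the sharpness of the texture a single parameter n, which is why the authors find it convenient for analytical work.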
Observation-Based Modeling for Model-Based Testing
Kanstrén, T.; Piel, E.; Gross, H.G.
2009-01-01
One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of building models to the level of detail and quality required for their automated processing. Models unleash their full potential only through
Riles, K
1998-01-01
The Large Electron-Positron (LEP) collider near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.
Community referral for presumptive TB in Nigeria: a comparison of four models of active case finding
Directory of Open Access Journals (Sweden)
A. O. Adejumo
2016-02-01
Background: Engagement of communities and civil society organizations is a critical part of the post-2015 End TB Strategy. Since 2007, many models of community referral have been implemented to boost TB case detection in Nigeria, yet clear insights into the comparative TB yield of particular approaches have been limited. Methods: We compared four models of active case finding in three Nigerian states. Data on presumptive TB case referral by community workers (CWs), TB diagnoses among referred clients, active case finding model characteristics, and CW compensation details for 2012 were obtained from implementers and CWs via interviews and log book review. Self-reported performance data were triangulated against routine surveillance data to assess concordance. Analysis focused on assessing the predictors of presumptive TB referral. Results: CWs referred 4–22% of presumptive TB clients tested, and 4–24% of the total TB cases detected. The annual median referral per CW ranged widely among the models, from 1 to 48 clients, with an overall average of 13.4 referrals per CW. The highest median referrals (48 per CW/yr) and mean TB diagnoses (7.1/yr per CW; H = 70.850, p < 0.001) were obtained by the model with training, supervision, and $80 quarterly payments (the Comprehensive Quotas-Oriented model). The model with irregularly supervised, trained, and compensated CWs contributed the least to TB case detection, with a median of 13 referrals per CW/yr and a mean of 0.53 TB diagnoses per CW/yr. Hours spent weekly on presumptive TB referral made the strongest unique contribution (Beta = 0.514, p < 0.001) to explaining presumptive TB referral after controlling for other variables. Conclusion: All community-based TB case-finding projects studied referred a relatively low number of symptomatic individuals. The study shows that incentivized referral, appropriate selection of CWs, supportive supervision, leveraged treatment support roles, and a
Energy Technology Data Exchange (ETDEWEB)
Aakre, Shaun R. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering; Jentz, Ian W. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering; Anderson, Mark H. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Mechanical Engineering
2018-03-27
The U.S. Department of Energy has agreed to fund a three-year integrated research project to close technical gaps involved with compact heat exchangers to be used in nuclear applications. This paper introduces the goals of the project, the research institutions, and industrial partners working in collaboration to develop a draft Boiler and Pressure Vessel Code Case for this technology. Heat exchanger testing, as well as non-destructive and destructive evaluation, will be performed by researchers across the country to understand the performance of compact heat exchangers. Testing will be performed using coolants and conditions proposed for Gen IV Reactor designs. Preliminary observations of the mechanical failure mechanisms of the heat exchangers using destructive and non-destructive methods is presented. Unit-cell finite element models assembled to help predict the mechanical behavior of these high-temperature components are discussed as well. Performance testing methodology is laid out in this paper along with preliminary modeling results, an introduction to x-ray and neutron inspection techniques, and results from a recent pressurization test of a printed-circuit heat exchanger. The operational and quality assurance knowledge gained from these models and validation tests will be useful to developers of supercritical CO2 systems, which commonly employ printed-circuit heat exchangers.
How to Choose the Suitable Template for Homology Modelling of GPCRs: 5-HT7 Receptor as a Test Case.
Shahaf, Nir; Pappalardo, Matteo; Basile, Livia; Guccione, Salvatore; Rayan, Anwar
2016-09-01
G protein-coupled receptors (GPCRs) are a super-family of membrane proteins that attract great pharmaceutical interest due to their involvement in almost every physiological activity, including extracellular stimuli, neurotransmission, and hormone regulation. Currently, structural information on many GPCRs is mainly obtained by the techniques of computer modelling in general and by homology modelling in particular. Based on a quantitative analysis of eighteen antagonist-bound, resolved structures of rhodopsin family "A" receptors - also used as templates to build 153 homology models - it was concluded that a higher sequence identity between two receptors does not guarantee a lower RMSD between their structures, especially when their pair-wise sequence identity (within trans-membrane domain and/or in binding pocket) lies between 25 % and 40 %. This study suggests that we should consider all template receptors having a sequence identity ≤50 % with the query receptor. In fact, most of the GPCRs, compared to the currently available resolved structures of GPCRs, fall within this range and lack a correlation between structure and sequence. When testing suitability for structure-based drug design, it was found that choosing as a template the most similar resolved protein, based on sequence resemblance only, led to unsound results in many cases. Molecular docking analyses were carried out, and enrichment factors as well as attrition rates were utilized as criteria for assessing suitability for structure-based drug design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Using a fuzzy comprehensive evaluation method to determine product usability: A test case.
Zhou, Ronggang; Chan, Alan H S
2017-01-01
In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitably vague judgments from the multiple stages of the product evaluation process. To illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software package. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and the uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare reliability among the fuzzy approach and two typical conventional methods that combine metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully to combine summative usability testing data into an overall usability quality measure for the network software evaluated. The greater confidence interval widths of the equal-percentage-averaging and weighted evaluation methods, including the method of weighted percentage averages, verified the strength of the fuzzy method.
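The core aggregation step in fuzzy comprehensive evaluation combines an AHP-derived weight vector with a fuzzy membership matrix over rating levels. A minimal sketch of the common weighted-average (M(·,+)) operator follows; the weights, criteria, and rating levels are hypothetical, not those of the Zhou and Chan study:

```python
def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation with the M(*, +) operator:
    b_j = sum_i w_i * r_ij over criteria i, then normalize so the
    result is a membership distribution over rating levels."""
    levels = len(membership[0])
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(levels)]
    total = sum(b)
    return [x / total for x in b]

# Hypothetical example: three usability criteria weighted by AHP,
# membership degrees over ratings (poor, fair, good, excellent).
weights = [0.5, 0.3, 0.2]
R = [[0.1, 0.2, 0.5, 0.2],
     [0.0, 0.3, 0.4, 0.3],
     [0.2, 0.2, 0.4, 0.2]]
print(fuzzy_evaluate(weights, R))  # overall rating distribution
```

The maximum-membership level of the resulting vector (here "good") would then be read off as the overall usability grade.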
A Socioecological Model of Rape Survivors' Decisions to Aid in Case Prosecution
Anders, Mary C.; Christopher, F. Scott
2011-01-01
The purpose of our study was to identify factors underlying rape survivors' post-assault prosecution decisions by testing a decision model that included the complex relations between the multiple social ecological systems within which rape survivors are embedded. We coded 440 police rape cases for characteristics of the assault and characteristics…
Stress-testing the Standard Model at the LHC
2016-01-01
With the high-energy run of the LHC now underway, and clear manifestations of beyond-Standard-Model physics not yet seen in data from the previous run, the search for new physics at the LHC may be a quest for small deviations with big consequences. If clear signals are present, precise predictions and measurements will again be crucial for extracting the maximum information from the data, as in the case of the Higgs boson. Precision will therefore remain a key theme for particle physics research in the coming years. The conference will provide a forum for experimentalists and theorists to identify the challenges and refine the tools for high-precision tests of the Standard Model and searches for signals of new physics at Run II of the LHC. Topics to be discussed include: pinning down Standard Model corrections to key LHC processes; combining fixed-order QCD calculations with all-order resummations and parton showers; new developments in jet physics concerning jet substructure, associated jets and boosted je...
Paternity tests in Mexico: Results obtained in 3005 cases.
García-Aceves, M E; Romero Rentería, O; Díaz-Navarro, X X; Rangel-Villalobos, H
2018-04-01
National and international reports on paternity testing activity scarcely include information from Mexico and other Latin American countries. Therefore, we report results from the analysis of 3005 paternity cases analyzed over a period of five years in a Mexican paternity testing laboratory. Motherless tests were the most frequent (77.27%), followed by trio cases (20.70%); the remaining 2.04% involved different cases of kinship reconstruction. The paternity exclusion rate was 29.58%, higher than but within the range reported by the American Association of Blood Banks (average 24.12%). We detected 65 mutations, most of them one-step (93.8%); the remainder were two-step mutations (6.2%). We were thus able to estimate the paternal mutation rate for 17 different STR loci: 0.0018 (95% CI 0.0005-0.0047). Five triallelic patterns and 12 suspected null alleles were detected during this period; however, re-amplification of these samples with a different Human Identification (HID) kit confirmed the homozygous genotypes, which suggests that most of these exclusions are actually one-step mutations. HID kits with ≥20 STRs detected more exclusions, diminishing the rate of inconclusive results with isolated exclusions. The Powerplex 21 kit (20 STRs) and Powerplex Fusion kit (22 STRs) offered similar paternity indices (PI) (p = 0.379) and average numbers of exclusions (PE) (p = 0.339) when a daughter was involved in motherless tests. In brief, besides reporting forensic parameters from paternity tests in Mexico, the results describe improvements for solving motherless paternity tests using HID kits with ≥20 STRs instead of one including 15 STRs. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
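A per-meiosis mutation rate with a 95% confidence interval of the kind quoted can be computed from the observed mutation and meiosis counts. The sketch below uses a Wilson score interval; the meiosis total is illustrative only (the abstract does not report it), so the numbers are not the study's:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion k/n,
    e.g. observed mutations per scored meiosis."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Illustrative counts: 65 mutations over a hypothetical 36,000 meioses.
k, n = 65, 36000
lo, hi = wilson_ci(k, n)
print(f"rate = {k/n:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```

The Wilson interval is chosen over the naive normal approximation because it behaves well for small proportions such as STR mutation rates.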
Design and Testing of Braided Composite Fan Case Materials and Components
Roberts, Gary D.; Pereira, J. Michael; Braley, Michael S.; Arnold, William A.; Dorer, James D.; Watson, William R.
2009-01-01
Triaxial braid composite materials are beginning to be used in fan cases for commercial gas turbine engines. The primary benefit for the use of composite materials is reduced weight and the associated reduction in fuel consumption. However, there are also cost benefits in some applications. This paper presents a description of the braided composite materials and discusses aspects of the braiding process that can be utilized for efficient fabrication of composite cases. The paper also presents an approach that was developed for evaluating the braided composite materials and composite fan cases in a ballistic impact laboratory. Impact of composite panels with a soft projectile is used for materials evaluation. Impact of composite fan cases with fan blades or blade-like projectiles is used to evaluate containment capability. A post-impact structural load test is used to evaluate the capability of the impacted fan case to survive dynamic loads during engine spool down. Validation of these new test methods is demonstrated by comparison with results of engine blade-out tests.
Test facility TIMO for testing the ITER model cryopump
International Nuclear Information System (INIS)
Haas, H.; Day, C.; Mack, A.; Methe, S.; Boissin, J.C.; Schummer, P.; Murdoch, D.K.
2001-01-01
Within the framework of the European Fusion Technology Programme, FZK is involved in the research and development process for a vacuum pump system for a future fusion reactor. As a result of these activities, the concept and the necessary requirements for the primary vacuum system of the ITER fusion reactor were defined. Continuing that development process, FZK has been preparing the test facility TIMO (Test facility for ITER Model pump) since 1996. This test facility provides all the infrastructure needed for testing a cryopump, for example a process gas supply including a metering system, a test vessel, the cryogenic supply for the different temperature levels, and a gas analysing system. For manufacturing the ITER model pump, an order was placed with the company L'Air Liquide in the form of a NET contract. (author)
A Human Proximity Operations System test case validation approach
Huber, Justin; Straub, Jeremy
A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks, such as avoiding walls and fixed obstacles, to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so that the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.
An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg
2001-01-01
We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius Test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has a much smaller error probability, namely 256/331776^t for t iterations. We give explicit average-case estimates for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in the case where a prime is sought by incremental search from a random starting point.
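EQFT extends the Miller-Rabin test, whose structure is worth recalling: write n - 1 = 2^s · d with d odd, and check the sequence a^d, a^(2d), ... modulo n. For orientation only, a standard Miller-Rabin round (not the EQFT algorithm itself) can be sketched as:

```python
import random

def miller_rabin(n, t=10):
    """Probabilistic primality test. A composite n survives one random
    round with probability at most 1/4, so t rounds give error <= 4^-t
    (EQFT achieves a far smaller per-round error at ~2x the cost)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # trial-divide tiny primes first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:               # factor n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True
```

The quoted "effect of 9 Miller-Rabin tests at the cost of 2" means EQFT reaches a given error bound with far fewer iterations than this routine would need.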
Directory of Open Access Journals (Sweden)
Sebastiaan J van Hal
Background: Influenza causes annual epidemics and often results in extensive outbreaks in closed communities. To minimize transmission, a range of interventions has been suggested. For these to be effective, an accurate and timely diagnosis of influenza is required. This is confirmed by a positive laboratory test result in an individual whose symptoms are consistent with a predefined clinical case definition. However, the utility of these clinical case definitions and of laboratory testing in mass-gathering outbreaks remains unknown. Methods and Results: An influenza outbreak was identified during World Youth Day 2008 in Sydney. From the data collected on pilgrims presenting to a single clinic, a Markov model was developed and validated against the actual epidemic curve. Simulations were performed to examine the utility of different clinical case definitions and laboratory testing strategies for containment of influenza outbreaks. Clinical case definitions were found to have the greatest impact on averting further cases, with no added benefit when combined with any laboratory test. Although nucleic acid testing (NAT) demonstrated higher utility than indirect immunofluorescence antigen testing or on-site point-of-care testing, this effect was lost when laboratory NAT turnaround times were included. The main benefit of laboratory confirmation was limited to identification of true influenza cases amenable to interventions such as antiviral therapy. Conclusions: Continuous re-evaluation of case definitions and laboratory testing strategies is essential for effective management of influenza outbreaks during mass gatherings.
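The abstract describes fitting a Markov model to an observed epidemic curve. As an illustration of the general approach only (the paper's actual model structure and parameter values are not given here), a minimal discrete-time chain-binomial SIR simulation of the kind often used for closed-community outbreaks might look like:

```python
import math
import random

def sir_markov(n=1000, i0=5, beta=0.3, gamma=0.1, days=100, seed=1):
    """Discrete-time chain-binomial (Markov) SIR outbreak simulation.
    Each day, each susceptible becomes infectious with probability
    1 - exp(-beta * I / N), and each infectious recovers with
    probability gamma. All parameters here are hypothetical."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    curve = [i]  # daily prevalence of infectious cases
    for _ in range(days):
        p_inf = 1 - math.exp(-beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        curve.append(i)
    return curve

curve = sir_markov()
print("peak prevalence:", max(curve), "on day", curve.index(max(curve)))
```

Running such a model under different case definitions (which change how quickly cases are detected and isolated, i.e. effectively lower beta) is how containment strategies can be compared against the observed curve.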
International Nuclear Information System (INIS)
Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.
1985-09-01
Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs
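PORFLO solves coupled two-dimensional parabolic equations for flow, heat, and transport. As an illustrative sketch only (a toy one-dimensional explicit scheme for a single diffusion equation u_t = D u_xx, not PORFLO's actual coupled two-dimensional method), the kind of time stepping involved looks like:

```python
def diffuse_1d(u, d, dx, dt, steps):
    """Explicit finite-difference solution of u_t = d * u_xx with fixed
    (Dirichlet) boundary values. The explicit scheme is stable only
    for r = d*dt/dx^2 <= 0.5, a standard constraint for such solvers."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this r"
    u = list(u)
    for _ in range(steps):
        u = ([u[0]]
             + [u[j] + r * (u[j + 1] - 2 * u[j] + u[j - 1])
                for j in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# Heat pulse in the middle of a rod with cold ends (hypothetical values):
u0 = [0.0] * 21
u0[10] = 1.0
u = diffuse_1d(u0, d=1.0, dx=1.0, dt=0.25, steps=200)
print([round(x, 3) for x in u])
```

Verifying such a solver against analytic solutions and other codes, as described for PORFLO, would here amount to comparing the computed profile against the known Gaussian solution of the diffusion equation.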
Modelling sexual transmission of HIV: testing the assumptions, validating the predictions
Baggaley, Rebecca F.; Fraser, Christophe
2010-01-01
Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600
Origin of honeycombs: Testing the hydraulic and case hardening hypotheses
Bruthans, Jiří; Filippi, Michal; Slavík, Martin; Svobodová, Eliška
2018-02-01
Cavernous weathering (cavernous rock decay) is a global phenomenon, which occurs in porous rocks around the world. Although honeycombs and tafoni are considered to be the most common products of this complex process, their origin and evolution are as yet not fully understood. The two commonly assumed formation hypotheses - hydraulic and case hardening - were tested to elucidate the origin of honeycombs on sandstone outcrops in a humid climate. Mechanical and hydraulic properties of the lips (walls between adjacent pits) and backwalls (bottoms of pits) of the honeycombs were determined via a set of established and novel approaches. While the case hardening hypothesis was not supported by the determinations of either tensile strength, drilling resistance or porosity, the hydraulic hypothesis was clearly supported by field measurements and laboratory tests. Fluorescein dye visualization of capillary zone, vapor zone, and evaporation front upon their contact, demonstrated that the evaporation front reaches the honeycomb backwalls under low water flow rate, while the honeycomb lips remain dry. During occasional excessive water flow events, however, the evaporation front may shift to the lips, while the backwalls become moist as a part of the capillary zone. As the zone of evaporation corresponds to the zone of potential salt weathering, it is the spatial distribution of the capillary and vapor zones which dictates whether honeycombs are created or the rock surface is smoothed. A hierarchical model of factors related to the hydraulic field was introduced to obtain better insights into the process of cavernous weathering.
Comparison of ORSAT and SCARAB Reentry Analysis Tools for a Generic Satellite Test Case
Kelley, Robert L.; Hill, Nicole M.; Rochelle, W. C.; Johnson, Nicholas L.; Lips, T.
2010-01-01
Reentry analysis is essential to understanding the consequences of the full life cycle of a spacecraft. Since reentry is a key factor in spacecraft development, NASA and ESA have separately developed tools to assess the survivability of objects during reentry. Criteria such as debris casualty area and impact energy are particularly important to understanding the risks posed to people on Earth. Therefore, NASA and ESA have undertaken a series of comparison studies of their respective reentry codes for verification and improvements in accuracy. The NASA Object Reentry Survival Analysis Tool (ORSAT) and the ESA Spacecraft Atmospheric Reentry and Aerothermal Breakup (SCARAB) reentry analysis tools serve as standard codes for reentry survivability assessment of satellites. These programs predict whether an object will demise during reentry and calculate the debris casualty area of objects determined to survive, establishing the reentry risk posed to the Earth's population by surviving debris. A series of test cases has been studied for comparison; the most recent uses "Testsat," a conceptual satellite composed of generic parts, defined with numerous simple shapes and various materials for a better comparison of the two codes' predictions. This study improves on the others in the series through increased consistency in modeling techniques and variables. The overall comparison demonstrated that the two codes arrive at similar results: most objects modeled showed close agreement between the two codes, and where the difference was significant, the variance could be explained as a matter of semantics in the model definitions. This paper presents the main results of ORSAT and SCARAB for the Testsat case and discusses the sources of any discovered differences. Results of previous comparisons are also discussed to summarize differences between the codes and lessons learned from this series of tests.
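Debris casualty area, one of the comparison criteria mentioned, is conventionally computed by summing (0.6 + √A_i)² over the surviving fragments, where A_i is a fragment's mean cross-sectional area and the 0.6 m term approximates the human cross-section, as in NASA orbital-debris practice. A sketch with hypothetical fragment areas:

```python
import math

def debris_casualty_area(areas, human_term=0.6):
    """Total debris casualty area in m^2: sum over surviving fragments
    of (0.6 + sqrt(A_i))^2, the convention used in NASA reentry risk
    assessment (0.6 m approximates the human cross-section term)."""
    return sum((human_term + math.sqrt(a)) ** 2 for a in areas)

# Hypothetical surviving fragments of 0.1, 0.5 and 1.0 m^2:
da = debris_casualty_area([0.1, 0.5, 1.0])
print(f"total casualty area: {da:.2f} m^2")
```

Note that the casualty area always exceeds the raw fragment area, because each fragment's footprint is inflated by the human-size term before summing.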
Osuský, F.; Bahdanovich, R.; Farkas, G.; Haščík, J.; Tikhomirov, G. V.
2017-01-01
The paper is focused on development of a coupled neutronics-thermal-hydraulics model for the Gas-cooled Fast Reactor. It is necessary to carefully investigate coupled calculations of new concepts to avoid recriticality scenarios, as it is not possible to ensure a sub-critical state for a fast reactor core under core disruptive accident conditions. The above-mentioned calculations are also very suitable for the development of new passive or inherent safety systems that can mitigate the occurrence of recriticality scenarios. In the paper, the most promising fuel material compositions together with a geometry model are described for the Gas-cooled Fast Reactor. A seven-fuel-pin and fuel assembly geometry is proposed as a test case for coupled calculation with three different enrichments of fissile material in the form of Pu-UC. A reflective boundary condition is used in the radial directions of the test case and a vacuum boundary condition in the axial directions. Under these conditions, the nuclear system is in a super-critical state, and to achieve a stable state (which is a numerical representation of operational conditions) it is necessary to decrease the reactivity of the system. An iteration scheme is proposed in which the SCALE code system is used to collapse macroscopic cross-sections into a few-group representation as input for the coupled code NESTLE.
Mathematical modelling: a case studies approach
Illner, Reinhard; McCollum, Samantha; Roode, Thea van
2004-01-01
Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...
Energy Technology Data Exchange (ETDEWEB)
Wendt, Fabian F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-06-03
During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.
30 CFR 250.520 - When do I have to perform a casing diagnostic test?
2010-07-01
Casing Pressure Management, § 250.520: (a) You must perform a casing diagnostic test within 30 days after first observing or imposing casing pressure...
Wind-US Code Physical Modeling Improvements to Complement Hypersonic Testing and Evaluation
Georgiadis, Nicholas J.; Yoder, Dennis A.; Towne, Charles S.; Engblom, William A.; Bhagwandin, Vishal A.; Power, Greg D.; Lankford, Dennis W.; Nelson, Christopher C.
2009-01-01
This report gives an overview of physical modeling enhancements to the Wind-US flow solver which were made to improve its capabilities for simulating hypersonic flows and the reliability of computations that complement hypersonic testing. The improvements include advanced turbulence models, a bypass transition model, a conjugate (i.e., closely coupled to the vehicle structure) conduction-convection heat transfer capability, and an upgraded high-speed combustion solver. A Mach 5 shock-wave boundary layer interaction problem is used to investigate the benefits of k-ε and k-ω based explicit algebraic stress turbulence models relative to linear two-equation models. The bypass transition model is validated using data from experiments on incompressible boundary layers and a Mach 7.9 cone flow. The conjugate heat transfer method is validated for a test case involving reacting H2-O2 rocket exhaust over cooled calorimeter panels. A dual-mode scramjet configuration is investigated using both a simplified 1-step kinetics mechanism and an 8-step mechanism. Additionally, variations in the turbulent Prandtl and Schmidt numbers are considered for this scramjet configuration.
A Method to Select Test Input Cases for Safety-critical Software
International Nuclear Information System (INIS)
Kim, Heeeun; Kang, Hyungook; Son, Hanseong
2013-01-01
This paper proposes a new testing methodology for effective and realistic quantification of RPS software failure probability. Software failure probability quantification is an important factor in digital system safety assessment. In this study, the method for software test case generation is briefly described. The test cases generated by this method reflect the characteristics of safety-critical software and past inputs. Furthermore, the number of test cases can be reduced while still permitting exhaustive testing. Aspects of the software itself can also be reflected as failure data, so the final failure data can include both failures of the software itself and external influences. Software reliability is generally accepted as the key factor in software quality since it quantifies software failures, which can make a powerful system inoperative. In the KNITS (Korea Nuclear Instrumentation and Control Systems) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure including unit testing and coverage measurement. Black box testing is one type of verification and validation (V and V), in which given input values are entered and the resulting output values are compared against the expected output values. Programmable logic controllers (PLCs) were used in implementing critical systems, and the function block diagram (FBD) is a commonly used implementation language for PLCs
Modelling of ultrasonic nondestructive testing of cracks in claddings
International Nuclear Information System (INIS)
Bostroem, Anders; Zagbai, Theo
2006-05-01
Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry. To develop and qualify the methods, extensive experimental work with test blocks is usually required. This can be very time-consuming and costly, and it also requires a good physical intuition of the situation. A reliable mathematical model of the testing situation can, therefore, be very valuable and cost-effective, as it can reduce experimental work significantly. A good mathematical model enhances the physical intuition and is very useful for parametric studies, as a pedagogical tool, and for the qualification of procedures and personnel. The present project has been concerned with the modelling of defects in claddings. A cladding is a layer of material that is applied for corrosion protection; in the nuclear power industry this layer is often an austenitic steel that is welded onto the surface. The cladding is usually anisotropic, and to some degree it is most likely also inhomogeneous, particularly in that the direction of the anisotropy varies. This degree of inhomogeneity is unknown but probably not very pronounced, so for modelling purposes it may be a valid assumption to take the cladding to be homogeneous. However, another important complicating factor with claddings is that the interface between the cladding and the base material is often corrugated. This corrugation can have large effects on the transmission of ultrasound through the interface and can thus greatly affect the detectability of defects in the cladding. In the present project the only type of defect considered is a planar crack situated inside the cladding. The investigations are, furthermore, limited to two dimensions, and the crack is then only a straight line. The crack can be arbitrarily oriented and situated, but it must not intersect the interface to the base material. The crack can be surface-breaking, and this is often the case of most practical interest, but it should then be
Results of steel containment vessel model test
International Nuclear Information System (INIS)
Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.
1998-05-01
A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed
Rupture tests with reactor pressure vessel head models
International Nuclear Information System (INIS)
Talja, H.; Keinaenen, H.; Hosio, E.; Pankakoski, P.H.; Rahka, K.
2003-01-01
In the LISSAC project (LImit Strains in Severe ACcidents), partly funded by the EC Nuclear Fission and Safety Programme within the 5th Framework Programme, an extensive experimental and computational research programme is conducted to study the stress-state and size dependence of ultimate failure strains. The results are intended especially to make the assessment of severe accident cases more realistic. For the experiments in the LISSAC project, a block of material from the German Biblis C reactor pressure vessel was available. As part of the project, eight reactor pressure vessel head models made from this material (22 NiMoCr 3 7) were tested up to rupture at VTT. The specimens were provided by Forschungszentrum Karlsruhe (FzK). The tests were performed under quasistatic pressure load at room temperature. Two specimen sizes were tested, and in half of the tests the specimens contained holes representing the control rod penetrations of an actual reactor pressure vessel head. These specimens were equipped with an aluminium liner. All six tests with the smaller specimen size were conducted successfully. In the test with the large specimen with holes, the behaviour of the aluminium liner material proved to differ from that of the smaller ones; as a consequence the experiment ended at the failure of the liner. The specimen without holes yielded results that were in very good agreement with those from the small specimens. (author)
Review of Bose-Fermi and "Supersymmetry" models; problems in particle transfer tests
International Nuclear Information System (INIS)
Vergnes, M.
1986-01-01
The first case suggested for a supersymmetry in nuclei was that of a j = 3/2 particle coupled to an O(6) core. A more recent and elaborate scheme is the "multi-j" supersymmetry, describing the coupling of a particle in more than just one orbital with the three possible cores of the interacting boson model. A general survey of the particle transfer tests of these different models is presented and the results are summarized. A comparison of IBFM-2 calculations with experimental data is discussed, as well as results of sum rules analysis. Present and future tests concerning extensions of the above mentioned models, particularly to odd-odd nuclei, are briefly indicated. It appears necessary to clearly determine whether the origin of the difficulties outlined for transfer reactions indeed lies, as often suggested, in the simplified form of the transfer operator used in deriving the selection rules, and not in the models themselves
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...
30 CFR 250.523 - How long do I keep records of casing pressure and diagnostic tests?
2010-07-01
Casing Pressure Management, § 250.523: Records of casing pressure and diagnostic tests must be kept at the field office nearest the well for a minimum of 2 years. The last casing diagnostic test for each casing...
Strutzenberg, Louise L.; Putman, Gabriel C.
2011-01-01
The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Building on dry simulations of the ASMAT tests with the vehicle at 5 ft. elevation (100 ft. real vehicle elevation), wet simulations of the ASMAT test setup have been performed using the Loci/CHEM computational fluid dynamics software to explore the effect of rainbird water suppression inclusion on the launch platform deck. Two-phase water simulation has been performed using an energy- and mass-coupled Lagrangian particle system module where liquid-phase emissions are segregated into clouds of virtual particles and gas-phase mass transfer is accomplished through simple Weber number controlled breakup and boiling models. Comparisons have been performed to the dry 5 ft. elevation cases, using configurations with and without launch mounts. These cases have been used to explore the interaction between rainbird spray patterns and launch mount geometry and to evaluate the acoustic sound pressure level knockdown achieved through above-deck rainbird deluge inclusion. This comparison has been anchored with validation from live-fire test data, which showed a reduction in rainbird effectiveness with the presence of a launch mount.
Making System Dynamics Cool III: New Hot Teaching & Testing Cases
Pruyt, E.
2011-01-01
This follow-up paper presents seven actual cases for testing and teaching System Dynamics developed and used between January 2010 and January 2011 for one of the largest System Dynamics courses (250+ students per year) at Delft University of Technology in the Netherlands. The cases presented in this
International Nuclear Information System (INIS)
Acikel, Volkan; Atalar, Ergin; Uslubas, Ali
2015-01-01
Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on the radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, the IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM while changing the inductance, and the results were compared with the specific absorption rate results obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During an MRI scan, the temperature rise at the lead was measured while changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and the MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During an MRI scan, the temperature rise at the electrode was measured while changing the inductance and compared with the dissipated power on the electrode resistance. Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and
Directory of Open Access Journals (Sweden)
Gopal P. Sarma
2016-08-01
The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
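The distinction drawn here between ordinary unit tests and model validation tests can be sketched as follows. This is a hypothetical example, not OpenWorm code: the toy membrane-potential model and the physiological bounds are assumptions made for illustration.

```python
import unittest

def simulate_membrane_potential(steps=100, dt=0.01, v0=-70.0,
                                tau=10.0, v_rest=-65.0):
    """Toy leaky relaxation toward a resting potential (mV).
    Stands in for a real neuron model for illustration."""
    v = v0
    trace = []
    for _ in range(steps):
        v += dt * (v_rest - v) / tau
        trace.append(v)
    return trace

class TestUnit(unittest.TestCase):
    """Ordinary unit test: checks code behaviour (output length)."""
    def test_trace_length(self):
        self.assertEqual(len(simulate_membrane_potential(steps=50)), 50)

class TestModelValidation(unittest.TestCase):
    """Model validation test: checks scientific plausibility, here that
    the simulated potential stays within an assumed physiological range."""
    def test_physiological_range(self):
        for v in simulate_membrane_potential():
            self.assertTrue(-90.0 <= v <= -50.0)
```

Running such a module with `python -m unittest` exercises both kinds of checks: the first fails on coding errors, the second on scientifically implausible output.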
Diagnostic yield of hair and urine toxicology testing in potential child abuse cases.
Stauffer, Stephanie L; Wood, Stephanie M; Krasowski, Matthew D
2015-07-01
Detection of drugs in a child may be the first objective finding that can be reported in cases of suspected child abuse. Hair and urine toxicology testing, when performed as part of the initial clinical evaluation for suspected child abuse or maltreatment, may serve to facilitate the identification of at-risk children. Furthermore, significant environmental exposure to a drug (considered by law to constitute child abuse in some states) may be identified by toxicology testing of unwashed hair specimens. In order to determine the clinical utility of hair and urine toxicology testing in this population we performed a retrospective chart review on all children for whom hair toxicology testing was ordered at our academic medical center between January 2004 and April 2014. The medical records of 616 children aged 0-17.5 years were reviewed for injury history, previous medication and illicit drug use by caregiver(s), urine drug screen result (if performed), hair toxicology result, medication list, and outcome of any child abuse evaluation. Hair toxicology testing was positive for at least one compound in 106 cases (17.2%), with unexplained drugs in 82 cases (13.3%). Of these, there were 48 cases in which multiple compounds (including combination of parent drugs and/or metabolites within the same drug class) were identified in the sample of one patient. The compounds most frequently identified in the hair of our study population included cocaine, benzoylecgonine, native (unmetabolized) tetrahydrocannabinol, and methamphetamine. There were 68 instances in which a parent drug was identified in the hair without any of its potential metabolites, suggesting environmental exposure. Among the 82 cases in which hair toxicology testing was positive for unexplained drugs, a change in clinical outcome was noted in 71 cases (86.5%). Urine drug screens (UDS) were performed in 457 of the 616 reviewed cases. Of these, over 95% of positive UDS results could be explained by iatrogenic drug
Application of the finite element groundwater model FEWA to the engineered test facility
International Nuclear Information System (INIS)
Craig, P.M.; Davis, E.C.
1985-09-01
A finite element model for water transport through porous media (FEWA) has been applied to the unconfined aquifer at the Oak Ridge National Laboratory Solid Waste Storage Area 6 Engineered Test Facility (ETF). The model was developed in 1983 as part of the Shallow Land Burial Technology - Humid Task (ONL-WL14) and was previously verified using several general hydrologic problems for which an analytic solution exists. Model application and calibration, as described in this report, consisted of modeling the ETF water table for three specialized cases: a one-dimensional steady-state simulation, a one-dimensional transient simulation, and a two-dimensional transient simulation. In the one-dimensional steady-state simulation, the FEWA output accurately predicted the water table during a long period in which there were no man-induced or natural perturbations to the system. The input parameters of most importance for this case were hydraulic conductivity and aquifer bottom elevation. In the two transient cases, the FEWA output has matched observed water table responses to a single rainfall event occurring in February 1983, yielding a calibrated finite element model that is useful for further study of additional precipitation events as well as contaminant transport at the experimental site
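A one-dimensional steady-state simulation of the kind described reduces, for a homogeneous aquifer with fixed heads at both ends, to a small tridiagonal linear system. The sketch below is a generic finite-element-style solve, not the FEWA code; the conductivity and head values are illustrative assumptions.

```python
def steady_head_1d(n_nodes, length_m, K, h_left, h_right):
    """Minimal 1-D steady-state groundwater solve (no recharge):
    the stiffness matrix for -d/dx(K dh/dx) = 0 with Dirichlet
    boundary heads is tridiagonal; solved by the Thomas algorithm."""
    n = n_nodes
    dx = length_m / (n - 1)
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    # Dirichlet rows at both boundaries
    b[0] = b[-1] = 1.0
    d[0], d[-1] = h_left, h_right
    # interior rows from element stiffness K/dx
    for i in range(1, n - 1):
        a[i] = -K / dx
        b[i] = 2 * K / dx
        c[i] = -K / dx
    # Thomas algorithm: forward sweep then back substitution
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    h = [0.0] * n
    h[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        h[i] = (d[i] - c[i] * h[i + 1]) / b[i]
    return h

# Illustrative values: 100 m domain, K = 1e-5 m/s, heads 10 m and 8 m.
heads = steady_head_1d(n_nodes=11, length_m=100.0, K=1e-5,
                       h_left=10.0, h_right=8.0)
```

With homogeneous conductivity and no recharge the computed water table is linear between the boundary heads, which is a convenient sanity check before moving to transient cases.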
Modeling Asteroid Dynamics using AMUSE: First Test Cases
Frantseva, Kateryna; Mueller, Michael; van der Tak, Floris; Helmich, Frank P.
2015-01-01
We are creating a dynamic model of the current asteroid population. The goal is to reproduce measured impact rates in the current Solar System, from which we'll derive delivery rates of water and organic material by tracing low-albedo C-class asteroids (using the measured albedo distribution from
Nina Dobrinkova; LaWen Hollingsworth; Faith Ann Heinsch; Greg Dillon; Georgi Dobrinkov
2014-01-01
As a key component of the cross-border project between Bulgaria and Greece known as OUTLAND, a team from the Bulgarian Academy of Sciences and Rocky Mountain Research Station started a collaborative project to identify and describe various fuel types for a test area in Bulgaria in order to model fire behavior for recent wildfires. Although there have been various...
Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+
DEFF Research Database (Denmark)
Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.
The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The different results are compared with the existing experimental data. Similar study has already been performed by Rong...
Dynamic modelling and hardware-in-the-loop testing of PEMFC
Energy Technology Data Exchange (ETDEWEB)
Vath, Andreas; Soehn, Matthias; Nicoloso, Norbert; Hartkopf, Thomas [Technische Universitaet Darmstadt/Institut fuer Elektrische Energiewandlung, Landgraf-Georg-Str. 4, D-64283 Darmstadt (Germany); Lemes, Zijad; Maencher, Hubert [MAGNUM Automatisierungstechnik GmbH, Bunsenstr. 22, D-64293 Darmstadt (Germany)]
2006-07-03
Modelling and hardware-in-the-loop (HIL) testing of fuel cell components and entire systems open new ways for the design and advanced development of FCs. In this work, proton exchange membrane fuel cells (PEMFC) are dynamically modelled within MATLAB-Simulink at various operating conditions in order to establish a comprehensive description of their dynamic behaviour as well as to explore the modelling facility as a diagnostic tool. Set-up of a hardware-in-the-loop (HIL) system enables real-time interaction between the selected hardware and the model. The transport of hydrogen, nitrogen, oxygen, water vapour and liquid water in the gas diffusion and catalyst layers of the stack is incorporated into the model according to its physical and electrochemical characteristics. Other processes investigated include, e.g., the membrane resistance as a function of the water content during fast load changes. Cells are modelled three-dimensionally and dynamically. For system simulations, a one-dimensional model is preferred to reduce computation time. The model has been verified by experiments with a water-cooled stack. (author)
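A dynamic PEMFC model of the kind described couples a polarisation curve to a slower water-content state that sets the membrane resistance. The sketch below is a deliberately crude stand-in for such a Simulink model: every coefficient is invented for illustration, and no real stack data are represented.

```python
import math

def cell_voltage(current_A, water_content, E_oc=1.0, area_cm2=100.0):
    """Toy PEMFC polarisation: open-circuit voltage minus activation
    and ohmic losses; membrane resistance falls as the water content
    rises (all coefficients illustrative, not fitted to a real stack)."""
    j = current_A / area_cm2                  # current density, A/cm^2
    r_mem = 0.5 / max(water_content, 1.0)     # ohm*cm^2, illustrative
    activation = 0.05 * math.log(1.0 + j / 0.001)
    return E_oc - activation - j * r_mem

def load_step(currents, dt=0.1, tau_water=5.0):
    """Dynamic response to a load profile: the membrane water content
    relaxes toward a current-dependent equilibrium (first-order lag),
    so the voltage shows a slow recovery after a fast load change."""
    lam = 7.0
    voltages = []
    for i_A in currents:
        lam_eq = 7.0 + 0.05 * i_A             # membrane wetter at higher load
        lam += dt * (lam_eq - lam) / tau_water
        voltages.append(cell_voltage(i_A, lam))
    return voltages

profile = [10.0] * 50 + [60.0] * 50           # step from 10 A to 60 A
v = load_step(profile)
```

The step response shows the behaviour mentioned in the abstract: an immediate voltage drop at the load change, followed by a slow partial recovery as the membrane resistance falls with increasing water content.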
NET model coil test possibilities
International Nuclear Information System (INIS)
Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.
1987-11-01
A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme, a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built from two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used together with the already tested LCT coils producing a background field; and a solenoid arrangement, where in addition to the two TF model coils another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and time schedules broaden the basis for a decision about the realisation of such a facility. (orig.)
Moser, T.; van Gestel, C.A.M.; Jones, S.E.; Koolhaas, J.E.; Rodrigues, J.M.L.; Römbke, J.
2004-01-01
The effects of the fungicide carbendazim (applied in the formulation Derosal®) on enchytraeids were determined in Terrestrial Model Ecosystem (TME) tests and field-validation studies. TMEs consisted of intact soil columns (diameter 17.5 cm; length 40 cm) taken from a grassland or, in one case, from
Moser, T.; Schallnass, H.-J.; Jones, S.E.; van Gestel, C.A.M.; Koolhaas, J.E.; Rodrigues, J.M.L.; Römbke, J.
2004-01-01
The effects of the fungicide carbendazim (applied in the formulation Derosal®) on nematodes were determined in Terrestrial Model Ecosystem (TME) tests and field-validation studies. TMEs consisted of intact soil columns (diameter 17.5 cm; length 40 cm) taken from a grassland or, in one case, from an
Hydraulic Model Tests on Modified Wave Dragon
DEFF Research Database (Denmark)
Hald, Tue; Lynggaard, Jakob
A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution; when over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
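Over-dispersion, the condition motivating the BPIGR model, can be screened for with the sample variance-to-mean ratio, which is near 1 under the Poisson assumption. A minimal sketch with toy data (not the paired count data of the study, and not the GWBPIGR estimator itself):

```python
def dispersion_index(counts):
    """Sample variance-to-mean ratio: values well above 1 indicate
    over-dispersion relative to the Poisson assumption."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    return var / mean

# Roughly equidispersed counts vs clearly over-dispersed counts (toy data).
poisson_like = [2, 4, 1, 3, 5, 2, 3, 0, 4, 3]
over_dispersed = [0, 0, 1, 0, 12, 0, 1, 15, 0, 1]

d1 = dispersion_index(poisson_like)
d2 = dispersion_index(over_dispersed)
```

A ratio far above 1, as in the second sample, is the symptom that makes plain (bivariate) Poisson regression understate standard errors and motivates a mixed model such as the Poisson inverse Gaussian.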
International Nuclear Information System (INIS)
Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.
1989-01-01
Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results of testing the material resistance to non-ductile fracture are described. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs
Highly Automated Agile Testing Process: An Industrial Case Study
Directory of Open Access Journals (Sweden)
Jarosław Berłowski
2016-09-01
This paper presents a description of an agile testing process in a medium-size software project developed using Scrum. The research method used was the case study; surveys, quantifiable project data sources and qualitative opinions of project members were used for data collection. Challenges related to the testing process regarding a complex project environment and unscheduled releases were identified. Based on the obtained results, we concluded that the described approach addresses the aforementioned issues well. Therefore, recommendations were made with regard to the employed principles of agility, specifically: continuous integration, responding to change, test automation and test-driven development. Furthermore, an efficient testing environment that combines a number of test frameworks (e.g. JUnit, Selenium, Jersey Test) with custom-developed simulators is presented.
Simulation of Model Force-Loading with Changing Its Position in the Wind Tunnel Test Section
Directory of Open Access Journals (Sweden)
V. T. Bui
2015-01-01
Full Text Available When planning and implementing an aerodynamic experiment, the model size and its position in the test section of the wind tunnel (WT) play a very important role. The paper focuses on the variation of the aerodynamic characteristics of a model as its position in the WT test section changes, and on the attenuation of the velocity field disturbance in front of the model. Flow around an aerodynamic model profile in the open test section of the low-speed WT T-500 is simulated at BMSTU Department SM3. The problem is solved in a two-dimensional formulation using the ANSYS Fluent package. The mathematical model of the flow is based on the Reynolds equations closed by the SST turbulence model. The paper also presents the results of the experiment. Experiments conducted in the WT T-500 correlate well with the calculated data and show that the optimal position for the weighing and drainage experiments is in the middle of the test section. Disturbance of the tunnel dynamic pressure (velocity head) and the flow upwash around the model profile and a circular cylinder in the WT test section are analyzed. It was found that the flow upstream of the front stagnation point on the body weakly depends on the Reynolds number, and the obtained results can be used to assess the level of disturbances in the flow around a model in incompressible airflow.
Parameters estimation for reactive transport: A way to test the validity of a reactive model
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte Carlo method. Monte Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three surface complexation models, we show that the proposed adsorption model cannot explain the experimental data.
Geochemical Testing And Model Development - Residual Tank Waste Test Plan
International Nuclear Information System (INIS)
Cantrell, K.J.; Connelly, M.P.
2010-01-01
This Test Plan describes the testing and chemical analyses for release-rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt-cake single-shell tanks. The data are intended for use in long-term performance assessment and conceptual model development.
Kayigamba, Felix R.; van Santen, Daniëla; Bakker, Mirjam I.; Lammers, Judith; Mugisha, Veronicah; Bagiruwigize, Emmanuel; de Naeyer, Ludwig; Asiimwe, Anita; Schim van der Loeff, Maarten F.
2016-01-01
Provider-initiated HIV testing and counselling (PITC) is promoted as a means to increase HIV case finding. We assessed the effectiveness of PITC in increasing the HIV testing rate and HIV case finding among outpatients in Rwandan health facilities (HFs). PITC was introduced in six HFs in 2009-2010. HIV
Business model stress testing : A practical approach to test the robustness of a business model
Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W.; de Reuver, G.A.
Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The
Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin
Directory of Open Access Journals (Sweden)
E. H. Sutanudjaja
2011-09-01
Full Text Available The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, for which groundwater head data are available to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we ran several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. The current sensitivity analysis also ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including in data-poor environments and at the global scale.
Yield surface investigation of alloys during model disk spin tests
Directory of Open Access Journals (Sweden)
E. P. Kuzmin
2014-01-01
Full Text Available Gas-turbine engines operate under heavy, predominantly static loading conditions. Disks of a gas-turbine engine are highly loaded parts of irregular shape with intensive stress concentrators, in which a 3D stress-strain state occurs. Loss of load-carrying capability or burst of a disk can lead to a severe accident or disaster. Therefore, developing methods to assess deformations and to predict burst is one of the most important problems. Strength assessment approaches are used at all levels of engine development. In recent years, owing to actively developing numerical methods, particularly FEA, it has become possible to investigate the load-carrying capability of irregular-shape disks and to use 3D computing schemes including flow theory and different options of force and deformation failure criteria. In spite of wide progress and practical use of strength assessment approaches, there is a lack of detailed research data on the yield surface of disk alloys. The main purpose of this work is to validate the use of the basic hypotheses of flow theory and to investigate the yield surface of disk alloys during disk spin tests. The results of quasi-static numerical simulation of spin tests of a model disk made from a high-temperature forged alloy are presented. Finite element analysis is used to determine the stress-strain state of the disk during loading. Simulation of elastic-plastic strain fields was carried out using the incremental theory of plasticity with isotropic hardening. The hardening function was taken from the results of specimen tensile tests; the specimens were cut from a sinkhead of the model disk. The paper investigates the model sensitivity to the von Mises and Tresca yield criteria as well as the Hosford model. To identify the material model parameters, eddy-current sensors were used in the experiment to measure rim radial displacements during the load-unload cycles of the spin test. The results of calculations made using different material models were compared with the
Full scale turbine-missile casing exit tests
International Nuclear Information System (INIS)
Yoshimura, H.R.; Schamaun, J.T.; Sliter, G.E.
1979-01-01
Two full-scale tests have simulated the impact of a fragment from a failed turbine disk upon the steel casing of a low-pressure steam turbine with the objective of providing data for making more realistic assessments of turbine missile effects for nuclear power plant designers. Data were obtained on both the energy-absorbing mechanisms of the impact process and the post-impact trajectory of the fragment. (orig.)
Numerical simulations of rubber bearing tests and shaking table tests
International Nuclear Information System (INIS)
Hirata, K.; Matsuda, A.; Yabana, S.
2002-01-01
Test data concerning rubber bearing tests and shaking table tests of a base-isolated model conducted by CRIEPI are provided to the participants of the Coordinated Research Program (CRP) on 'Intercomparison of Analysis Methods for predicting the behaviour of Seismically Isolated Nuclear Structure', organized by the International Atomic Energy Agency (IAEA), for a comparison study of numerical simulations of base-isolated structures. In this paper, outlines of the provided test data and the numerical simulations of the bearing tests and shaking table tests are described. Using the computer code ABAQUS, numerical simulations of rubber bearing tests are conducted for NRBs and LRBs (data provided by CRIEPI) and for HDRs (data provided by ENEA/ENEL and KAERI). Several strain energy functions are specified according to the rubber material test corresponding to each rubber bearing. As for the lead plug material in the LRB, its mechanical characteristics are reevaluated and used. Simulation results for these rubber bearings show satisfactory agreement with the test results. The shaking table test conducted by CRIEPI is of a base-isolated rigid mass supported by LRBs. Acceleration time histories and displacement time histories of the isolators, as well as cyclic loading test data of the LRB used for the shaking table test, are provided to the participants of the CRP. Simulations of shaking table tests are conducted for this rigid mass, and also for the steel frame model test conducted by ENEL/ENEA. In the simulation of the rigid-mass model test, where LRBs are used, the isolators are modeled either by a bilinear model or a polylinear model. In both cases of modeling the isolators, the simulation results show good agreement with the test results. In the case of the steel frame model, where HDRs are used as isolators, bilinear and polylinear models are also used for modeling the isolators. The response of the model is simulated comparatively well in the low frequency range of the floor response, however, in
INTRAVAL Working group 2 summary report on Phase 2 analysis of the Finnsjoen test case
International Nuclear Information System (INIS)
Andersson, Peter; Winberg, A.
1994-01-01
A comprehensive series of tracer tests on a relatively large scale was performed by SKB at Finnsjoen, Sweden, to increase understanding of the transport phenomena that govern migration of radionuclides in major fracture zones. The experiments were subsequently selected as a test case in the international INTRAVAL Project, in part because the tests at Finnsjoen lend themselves to a direct validation of geosphere models. This report summarizes the study of the Finnsjoen test case within INTRAVAL Phase 2, which involved nine project teams from seven countries. Porous media approaches in two dimensions dominated, although some project teams utilized one-dimensional transport models, and even three-dimensional approaches on a larger scale. The dimensionality employed did not appear to be decisive for the ability to reproduce the observed field responses. It was also demonstrated that stochastic approaches can be used in a validation process. Only four of the nine project teams studied more than one process. The general conclusion drawn is that flow and transport in the studied zone are governed by advection and that hydrodynamic dispersion is needed to explain the breakthrough curves; matrix diffusion is assumed to have a small or negligible effect. The performed analysis is dominated by numerical approaches applied on scales of the order of 1000 m. Taking scale alone into account, the results of most teams are comparable. A variety of validation aspects have been considered. Five teams utilized a model calibrated on one test to predict another, whereas the two teams utilizing stochastic continuum approaches addressed 1) the validity of extrapolating a model calibrated on one transport scale to a larger scale, and 2) the performance assessment implications of the choice of underlying distribution model for hydraulic conductivity, respectively. 37 refs
Diversity in case management modalities: the Summit model.
Peterson, G A; Drone, I D; Munetz, M R
1997-06-01
Though ubiquitous in community mental health agencies, case management suffers from a lack of consensus regarding its definition, essential components, and appropriate application. Meaningful comparisons of various case management models await such a consensus. Global assessments of case management must be replaced by empirical studies of specific interventions with respect to the needs of specific populations. The authors describe a highly differentiated and prescriptive system of case management involving the application of more than one model of service delivery. Such a diversified and targeted system offers an opportunity to study the technology of case management in a more meaningful manner.
Overload prevention in model supports for wind tunnel model testing
Directory of Open Access Journals (Sweden)
Anton IVANOVICI
2015-09-01
Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that result from unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
Directory of Open Access Journals (Sweden)
Khalil-Moghaddam Shiva
2012-11-01
Full Text Available Abstract Background Inhibitors of pancreatic alpha-amylase are potential drugs to treat diabetes and obesity. In order to find compounds that would be effective amylase inhibitors, in vitro and in vivo models are usually used. The accuracy of these models is limited, but the tools are nonetheless valuable. In vitro models can be used in large screenings involving thousands of chemicals that are tested to find potential lead compounds. In vivo models are still used as a preliminary means of testing compound behavior in the whole organism. In the case of alpha-amylase inhibitors, both rats and rabbits could be chosen as in vivo models. The question was which animal would be the more accurate model with regard to its pancreatic alpha-amylase. Results As there is no crystal structure of these enzymes, a molecular modeling study was done in order to compare the rabbit and rat enzymes with the human one. The overall result is that the rabbit enzyme is probably the better choice in this regard, but in the case of large ligands, which could make putative interactions with the −4 subsite of pancreatic alpha-amylase, interpretation of results should be made cautiously. Conclusion Molecular modeling tools can be used to choose the most suitable model enzyme to help identify new enzyme inhibitors. In the case of alpha-amylase, the three-dimensional structures of the animal enzymes show differences from the human one which should be taken into account when testing potential new drugs.
Mining Product Data Models: A Case Study
Directory of Open Access Journals (Sweden)
Cristina-Claudia DOLEAN
2014-01-01
Full Text Available This paper presents two case studies used to prove the validity of several data-flow mining algorithms. We proposed the data-flow mining algorithms because most mining algorithms focus on the control-flow perspective. The first case study uses event logs generated by an ERP system (Navision) after we set several trackers on the data elements needed in the analyzed process, while the second case study uses event logs generated by the YAWL system. We offer a general solution for data-flow model extraction from different data sources. In order to apply the data-flow mining algorithms, the event logs must comply with a certain format (using the InputOutput extension). To respect this format, a set of conversion tools is needed; we describe the conversion tools used and how the data-flow models were obtained. Moreover, the data-flow model is compared to the control-flow model.
Metamorphic Testing Integer Overflow Faults of Mission Critical Program: A Case Study
Directory of Open Access Journals (Sweden)
Zhanwei Hui
2013-01-01
Full Text Available For mission-critical programs, integer overflow is one of the most dangerous faults. Different testing methods provide several effective ways to detect the defect, but it is hard to validate the testing outputs, because the test oracle is not always available or is too expensive to obtain, unless the program obviously throws an exception. In the present study, the authors conduct a case study in which they apply a metamorphic testing (MT) method to detect integer overflow defects and alleviate the oracle problem in testing the critical program of a Traffic Collision Avoidance System (TCAS). Experimental results show that, in revealing typical integer mutations, MT with a novel symbolic metamorphic relation is more effective than the traditional safety property testing method in some cases.
Experimental Applications of Automatic Test Markup Language (ATML)
Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris
2012-01-01
The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.
Model-based testing for embedded systems
Zander, Justyna; Mosterman, Pieter J
2011-01-01
What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used
Numerical analysis of thermal response tests with a groundwater flow and heat transfer model
Energy Technology Data Exchange (ETDEWEB)
Raymond, J.; Therrien, R. [Departement de Geologie et de Genie Ggeologique, Universite Laval, 1065 avenue de la medecine, Quebec (Qc) G1V 0A6 (Canada); Gosselin, L. [Departement de Genie Mecanique, Universite Laval, 1065 avenue de la medecine, Quebec (Qc) G1V 0A6 (Canada); Lefebvre, R. [Institut National de la Recherche Scientifique, Centre Eau Terre Environnement, 490 de la Couronne, Quebec (Qc) G1K 9A9 (Canada)
2011-01-15
The Kelvin line-source equation, used to analyze thermal response tests, describes conductive heat transfer in a homogeneous medium with a constant temperature at infinite boundaries. The equation is based on assumptions that are valid for most ground-coupled heat pump environments with the exception of geological settings where there is significant groundwater flow, heterogeneous distribution of subsurface properties, a high geothermal gradient or significant atmospheric temperature variations. To address these specific cases, an alternative method to analyze thermal response tests was developed. The method consists in estimating parameters by reproducing the output temperature signal recorded during a test with a numerical groundwater flow and heat transfer model. The input temperature signal is specified at the entrance of the ground heat exchanger, where flow and heat transfer are computed in 2D planes representing piping and whose contributions are added to the 3D porous medium. Results obtained with this method are compared to those of the line-source model for a test performed under standard conditions. A second test conducted in waste rock at the South Dump of the Doyon Mine, where conditions deviate from the line-source assumptions, is analyzed with the numerical model. The numerical model improves the representation of the physical processes involved during a thermal response test compared to the line-source equation, without a significant increase in computational time. (author)
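As a point of reference for the numerical approach in the abstract above, the standard Kelvin line-source evaluation of a thermal response test can be sketched as follows (a minimal illustration of the textbook equations; all parameter values are invented, not taken from the paper):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def line_source_temperature(q, lam, alpha, r, t):
    """Kelvin line-source approximation of the temperature rise (K) at
    radius r (m) after heating time t (s), valid once alpha*t/r**2 > 5:
        dT = q / (4*pi*lam) * (ln(4*alpha*t/r**2) - gamma)
    q: heat injection rate per unit borehole length (W/m)
    lam: ground thermal conductivity (W/m/K)
    alpha: ground thermal diffusivity (m^2/s)"""
    return q / (4 * math.pi * lam) * (math.log(4 * alpha * t / r**2) - EULER_GAMMA)

def conductivity_from_slope(q, slope):
    """Standard TRT evaluation: the slope k of mean fluid temperature
    plotted against ln(t) yields lam = q / (4*pi*k)."""
    return q / (4 * math.pi * slope)
```

Fitting the late-time slope of temperature versus ln(t) and inverting it is exactly the step that breaks down under groundwater flow or heterogeneity, which motivates the numerical model described in the abstract.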
James, Richard; Khim, Keovathanak; Boudarene, Lydia; Yoong, Joanne; Phalla, Chea; Saint, Saly; Koeut, Pichenda; Mao, Tan Eang; Coker, Richard; Khan, Mishal Sameer
2017-08-22
Globally, almost 40% of tuberculosis (TB) patients remain undiagnosed, and those that are diagnosed often experience prolonged delays before initiating correct treatment, leading to ongoing transmission. While there is a push for active case finding (ACF) to improve early detection and treatment of TB, there is extremely limited evidence about the relative cost-effectiveness of different ACF implementation models. Cambodia presents a unique opportunity for addressing this gap in evidence, as ACF has been implemented using different models but no comparisons have been conducted. The objective of our study is to contribute to knowledge and methodology on comparing the cost-effectiveness of alternative ACF implementation models from the health service perspective, using programmatic data, in order to inform national policy and practice. We retrospectively compared three distinct ACF implementation models - door-to-door symptom screening in urban slums, checking contacts of TB patients, and door-to-door symptom screening focusing on rural populations aged above 55 - in terms of the number of new bacteriologically-positive pulmonary TB cases diagnosed and the cost of implementation, assuming activities are conducted by the national TB program of Cambodia. We calculated the cost per additional case detected using the alternative ACF models. Our analysis, which is the first of its kind for TB, revealed that the ACF model based on door-to-door screening in poor urban areas of Phnom Penh was the most cost-effective (249 USD per case detected, 737 cases diagnosed), followed by the model based on testing contacts of TB patients (308 USD per case detected, 807 cases diagnosed), and symptomatic screening of older rural populations (316 USD per case detected, 397 cases diagnosed). Our study provides new evidence on the relative effectiveness and economics of three implementation models for enhanced TB case finding, in line with calls for data from 'routine conditions' to be included
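The comparison in the abstract above reduces to a simple ratio of program cost to cases detected. The sketch below reproduces the reported ranking; note that the total costs are back-calculated here from the published USD-per-case figures (cases multiplied by cost per case), since the abstract itself reports only the ratios.

```python
def cost_per_additional_case(total_cost_usd, cases_detected):
    """Health-service cost-effectiveness metric used to compare ACF models."""
    return total_cost_usd / cases_detected

# (total cost USD, bacteriologically-positive cases diagnosed);
# totals are back-calculated from the reported per-case figures.
acf_models = {
    "urban door-to-door screening": (737 * 249, 737),
    "contacts of TB patients":      (807 * 308, 807),
    "rural door-to-door, over 55":  (397 * 316, 397),
}

# Rank the implementation models from most to least cost-effective.
ranking = sorted(acf_models, key=lambda m: cost_per_additional_case(*acf_models[m]))
```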
Field Testing and Modeling of Supermarket Refrigeration Systems as a Demand Response Resource
Energy Technology Data Exchange (ETDEWEB)
Deru, Michael; Hirsch, Adam; Clark, Jordan; Anthony, Jamie
2016-08-26
Supermarkets offer a substantial demand response (DR) resource because of their high energy intensity and use patterns; however, refrigeration, the largest load, has been challenging to access. Previous work has analyzed supermarket DR using heating, ventilating, and air conditioning; lighting; and anti-sweat heaters. This project evaluated and quantified the DR potential inherent in supermarket refrigeration systems in the Bonneville Power Administration (BPA) service territory. DR events were carried out and results measured in an operational 45,590 ft² supermarket located in Hillsboro, Oregon. Key results from the project include the rate of temperature increase in freezer reach-in cases and walk-ins when refrigeration is suspended, the load shed amount for DR tests, and the development of calibrated models to quantify available DR resources. Simulations showed that demand savings of 15 to 20 kilowatts (kW) are available for 1.5 hours for a typical store without precooling and for about 2.5 hours with precooling, using only the low-temperature, non-ice-cream cases. This represents an aggregated potential of 20 megawatts within BPA's service territory. Inability to shed loads for medium-temperature (MT) products because of the tighter temperature requirements is a significant barrier to realizing larger DR for supermarkets. Store owners are reluctant to allow MT case set point changes, and laboratory tests of MT case DR strategies are needed so that owners become comfortable testing and implementing MT case DR. The next-largest barrier is the lack of proper controls over ancillary equipment, such as anti-sweat heaters, lights, and fans, in most supermarket displays.
The use of scale models in impact testing
International Nuclear Information System (INIS)
Donelan, P.J.; Dowling, A.R.
1985-01-01
Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)
Evaluation and modelling of SWIW tests performed within the SKB site characterisation programme
International Nuclear Information System (INIS)
Nordqvist, Rune
2008-08-01
factors for the sorbing tracers vary over a wide range. In some cases, differences in retardation properties may be attributed to fracture mineralogy and water chemistry conditions. The extended model evaluation including matrix diffusion resulted in consistently improved fits of the tail of the tracer breakthrough curves, in particular for the non-sorbing tracer Uranine. However, the matrix diffusion effect is considerably larger than would be indicated from laboratory data for matrix porosity and diffusivity. It is difficult to assess whether there are significant differences between sites regarding retardation, because the number of tests is limited. However, the largest values for the retardation factor are found for Oskarshamn and the smallest value in Forsmark. Moderate values in between are found at each site
Constrained structural dynamic model verification using free vehicle suspension testing methods
Blair, Mark A.; Vadlamudi, Nagarjuna
1988-01-01
Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.
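The key relationship in the abstract above, that minima (antiresonances) of transfer functions measured by exciting the suspended, free-free payload at its booster interface coincide with the natural frequencies of the payload constrained at that interface, can be checked on a two-mass toy system (all numbers below are illustrative, not from the paper):

```python
import math

def h11(omega, m1, m2, k):
    """Driving-point receptance at the interface mass m1 of a free-free
    two-mass chain (m1 -- spring k -- m2), excited and measured at m1."""
    det = (k - omega**2 * m1) * (k - omega**2 * m2) - k**2
    return (k - omega**2 * m2) / det

# Illustrative values: 100 kg interface mass, 50 kg payload mass,
# 4e6 N/m interface spring stiffness.
m1, m2, k = 100.0, 50.0, 4.0e6

# Scan the transfer function magnitude and locate its minimum (antiresonance).
omegas = [0.1 * i for i in range(1, 5000)]
w_antires = min(omegas, key=lambda w: abs(h11(w, m1, m2, k)))

# Natural frequency of the *constrained* system (m1 held fixed): sqrt(k/m2).
w_constrained = math.sqrt(k / m2)
```

The scanned antiresonance lands on sqrt(k/m2), the fundamental frequency of the payload when grounded at the interface, which is the property the paper exploits to verify launch-constrained models from free-suspension tests.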
FARO base case post-test analysis by COMETA code
Energy Technology Data Exchange (ETDEWEB)
Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)
1995-09-01
The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a 6-equation two-phase flow field and a 3-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the pre-mixing phase of the fuel-coolant interaction.
Industrial-Strength Model-Based Testing - State of the Art and Current Challenges
Directory of Open Access Journals (Sweden)
Jan Peleska
2013-03-01
Full Text Available As of today, model-based testing (MBT) is considered a leading-edge technology in industry. We sketch the different MBT variants that - according to our experience - are currently applied in practice, with special emphasis on the avionic, railway and automotive domains. The key factors for successful industrial-scale application of MBT are described, both from a scientific and a managerial point of view. With respect to the former view, we describe the techniques for automated test case, test data and test procedure generation for concurrent reactive real-time systems, which are considered the most important enablers for MBT in practice. With respect to the latter view, our experience with introducing MBT approaches in testing teams is sketched. Finally, the most challenging open scientific problems whose solutions are bound to improve the acceptance and effectiveness of MBT in industry are discussed.
Process-oriented tests for validation of baroclinic shallow water models: The lock-exchange problem
Kolar, R. L.; Kibbey, T. C. G.; Szpilka, C. M.; Dresback, K. M.; Tromble, E. M.; Toohey, I. P.; Hoggan, J. L.; Atkinson, J. H.
A first step often taken to validate prognostic baroclinic codes is a series of process-oriented tests, such as those suggested by Haidvogel and Beckmann [Haidvogel, D., Beckmann, A., 1999. Numerical Ocean Circulation Modeling. Imperial College Press, London], among others. One of these tests is the so-called "lock-exchange" test or "dam break" problem, wherein water of different densities is separated by a vertical barrier, which is removed at time zero. Validation against these tests has primarily consisted of comparing the propagation speed of the wave front, as predicted by various theoretical and experimental results, to model output. In addition, inter-model comparisons of the lock-exchange test have been used to validate codes. Herein, we present a high-resolution data set, taken from a laboratory-scale model, for direct and quantitative comparison of experimental and numerical results throughout the domain, not just at the wave front. Data are captured every 0.2 s using high-resolution digital photography, with salt concentration extracted by comparing the pixel intensity of the dyed fluid against calibration standards. Two scenarios are discussed in this paper, symmetric and asymmetric mixing, depending on the proportion of dense/light water (17.5 ppt/0.0 ppt) in the experiment; the Boussinesq approximation applies to both. Front speeds, cast in terms of the dimensionless Froude number, show excellent agreement with literature-reported values. Data are also used to quantify the degree of mixing, as measured by the front thickness, which also provides an error band on the front speed. Finally, experimental results are used to validate baroclinic enhancements to the barotropic shallow water ADvanced CIRCulation (ADCIRC) model, including the effect of the vertical mixing scheme on simulation results. Based on salinity data, the model provides an average root-mean-square (rms) error of 3.43 ppt for the symmetric case and 3.74 ppt for the asymmetric case, most of which can
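The front-speed comparison described in this abstract is conventionally cast in dimensionless form through the densimetric Froude number, Fr = U / sqrt(g'H), with g' the reduced gravity. A minimal sketch of that non-dimensionalization; all numeric values (densities, lock depth, the ~0.78 kg/m^3-per-ppt conversion) are illustrative assumptions, not the paper's measurements:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def reduced_gravity(rho_dense, rho_light):
    """Reduced gravity g' = g * (rho_dense - rho_light) / rho_light."""
    return G * (rho_dense - rho_light) / rho_light

def froude_number(front_speed, g_prime, depth):
    """Dimensionless front speed: Fr = U / sqrt(g' * H)."""
    return front_speed / math.sqrt(g_prime * depth)

# Illustrative values only (not the paper's data): fresh water against
# 17.5 ppt salt water, assuming ~0.78 kg/m^3 density increase per ppt.
rho_light = 998.0
rho_dense = rho_light + 0.78 * 17.5
g_prime = reduced_gravity(rho_dense, rho_light)   # ~0.13 m/s^2
H = 0.3                                           # lock depth, m

# Benjamin's energy-conserving theory gives Fr = 0.5 for the
# full-depth lock exchange, i.e. U = 0.5 * sqrt(g' * H).
u_front = 0.5 * math.sqrt(g_prime * H)
print(round(froude_number(u_front, g_prime, H), 3))  # 0.5
```

Comparing a measured `u_front` against this theoretical 0.5 (and against literature values) is the kind of front-speed validation the abstract refers to.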
Modelling of ultrasonic nondestructive testing of cracks in claddings
Energy Technology Data Exchange (ETDEWEB)
Bostroem, Anders; Zagbai, Theo [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Applied Mechanics]
2006-05-15
Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry. To develop and qualify the methods, extensive experimental work with test blocks is usually required. This can be very time-consuming and costly, and it also requires a good physical intuition of the situation. A reliable mathematical model of the testing situation can, therefore, be very valuable and cost-effective as it can reduce experimental work significantly. A good mathematical model enhances the physical intuition and is very useful for parametric studies, as a pedagogical tool, and for the qualification of procedures and personnel. The present project has been concerned with the modelling of defects in claddings. A cladding is a layer of material that is put on for corrosion protection; in the nuclear power industry this layer is often an austenitic steel that is welded onto the surface. The cladding is usually anisotropic and to some degree it is most likely also inhomogeneous, particularly in that the direction of the anisotropy varies. This degree of inhomogeneity is unknown but probably not very pronounced, so for modelling purposes it may be a valid assumption to take the cladding to be homogeneous. However, another important complicating factor with claddings is that the interface between the cladding and the base material is often corrugated. This corrugation can have large effects on the transmission of ultrasound through the interface and can thus greatly affect the detectability of defects in the cladding. In the present project the only type of defect that is considered is a planar crack situated inside the cladding. The investigations are, furthermore, limited to two dimensions, and the crack is then only a straight line. The crack can be arbitrarily oriented and situated, but it must not intersect the interface to the base material. The crack can be surface-breaking, and this is often the case of most practical interest, but it should then be
Effectiveness of test driven development and continuous integration - a case study
Amrit, Chintan; Meijberg, Yoni
2018-01-01
In this article we describe the implementation of hybrid agile practices, namely Test Driven Development (TDD) and Continuous Integration (CI) at a Dutch SME. The quality and productivity outcomes of the case study were compared to a performance baseline set by a reference case, a preceding
Animal models for testing anti-prion drugs.
Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín
2013-01-01
Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models, without promising results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild-type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which were previously tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before an effective therapy against human prion diseases is available. This review is intended to present the more relevant animal models that have been used in the search for new anti-prion therapies, and to describe some possible procedures for handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.
Directory of Open Access Journals (Sweden)
Mei-Yu LEE
2014-11-01
Full Text Available This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that have different variance-covariance matrix assumptions. We show that the expected values and variances of the Durbin-Watson test estimator are slightly different, but the skewness and kurtosis coefficients are considerably different, among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not the same as those of the autoregressive model cut by one lagged period. Second, the large-sample case shows that the three models have the same expected values; however, the autoregressive model cut by one lagged period exhibits different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected value, 2(1 − ρ0), whatever the variance-covariance matrix of the errors is assumed to be. Finally, compared with the two sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to the expected values, inverted-U related to the variances, cubic related to the skewness coefficients, and U related to the kurtosis coefficients.
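For readers unfamiliar with the estimator under discussion: the Durbin-Watson statistic is the sum of squared successive residual differences divided by the residual sum of squares, and for large samples its expectation approaches 2(1 − ρ), where ρ is the lag-1 error autocorrelation. A minimal sketch; the AR(1) simulation is illustrative, not the paper's setup:

```python
import numpy as np

def durbin_watson(residuals):
    """DW statistic: squared successive differences of the residuals
    over their sum of squares; values near 2 mean no autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Illustrative AR(1) errors with rho = 0.5: for large samples the
# statistic approaches 2 * (1 - rho) = 1.0.
rng = np.random.default_rng(0)
rho = 0.5
e = np.empty(100_000)
e[0] = rng.standard_normal()
for t in range(1, e.size):
    e[t] = rho * e[t - 1] + rng.standard_normal()

print(round(float(durbin_watson(e)), 2))  # close to 1.0
```

This large-sample limit is exactly the 2(1 − ρ0) expected value the abstract refers to.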
International Nuclear Information System (INIS)
Dolensky, B.; Messemer, G.; Zehlein, H.; Erb, J.
1981-01-01
Finite element computations for the structural design of the large superconducting toroidal field coil contributed by EURATOM to the Large Coil Test Facility (LCTF) at ORNL, USA were performed at KfK, using the ASKA code. The layout of the coil must consider different types of requirements: firstly, an optimal D-shaped contour minimizing circumferential stress gradients under normal operation in the toroidal arrangement must be defined. Secondly, the three-dimensional real design effects due to the actual support conditions, manufacturing tolerances etc. must be mastered for different basic operational and failure load cases. And, thirdly, the design must withstand a single-coil qualification test in the TOSKA facility at KfK, Karlsruhe, FRG, before it is plugged into the LCTF. The emphasis of the paper is three-pronged according to these requirements: i) the 3D magnetic body forces as well as the underlying magnetic fields as computed by the HEDO code are described. ii) The mechanical interaction between casing and winding, as given elsewhere in terms of high stress regions, gaps, slide movements and contact forces for various load cases representing the LCTF test conditions, is illustrated here by a juxtaposition of the operational deformations and stresses within the LCTF and the TOSKA. iii) Particular effects, like the restraint imposed by a corset-type reinforcement of the coil in the TOSKA test facility to limit the breathing deformation, are parametrically studied. Moreover, the possibility of deriving scaling laws that make essential results transferable to larger coils, by extracting a 1D mechanical response from the 3D finite element model, is also demonstrated. (orig./GG)
Air injection test on a Kaplan turbine: prototype - model comparison
Angulo, M.; Rivetti, A.; Díaz, L.; Liscia, S.
2016-11-01
Air injection is a well-known resource for reducing pressure pulsation magnitude in turbines, especially of the Francis type. In the case of large Kaplan designs, even if less usual, it could be a solution to mitigate vibrations arising when the tip vortex cavitation phenomenon becomes erosive and induces structural vibrations. In order to study this alternative, aeration tests were performed on a Kaplan turbine at model and prototype scales. The research focused on the efficiency of different injected air flow rates in reducing vibrations, especially at the draft tube and the discharge ring, and also on the magnitude of the efficiency drop. It was found that the results at both scales present the same trend, in particular for vibration levels at the discharge ring. The efficiency drop was overestimated in the model tests, while on the prototype it was less than 0.2% for all power outputs. On the prototype, air has a beneficial effect in reducing pressure fluctuations up to an air flow rate of 0.2‰. On the model, high-speed image processing helped to quantify the volume of tip vortex cavitation, which is strongly correlated with the vibration level. The hydrophone measurements did not capture the cavitation intensity when air was injected; on the prototype, however, it was detected by a sonometer installed at the draft tube access gallery.
Model-Based Software Testing for Object-Oriented Software
Biju, Soly Mathew
2008-01-01
Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…
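Model-based testing of this kind typically derives call sequences from a behavioural model of the class under test. A minimal sketch of one common criterion, all-transitions coverage over a toy state machine; the model, states, and method names are invented for illustration and are not taken from the article:

```python
from collections import deque

# Toy behavioural model of a file-like class: states and
# method-labelled transitions (illustrative only).
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "write"): "opened",
    ("opened", "close"): "closed",
}

def transition_tours(model, start):
    """One test sequence per transition: a shortest method sequence
    from the start state that ends by firing that transition
    (a simple all-transitions coverage criterion)."""
    # BFS for the shortest call sequence reaching each state.
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, method), dst in model.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [method]
                queue.append(dst)
    # Extend each state's shortest path by the transition's own method.
    return [paths[src] + [method] for (src, method), _ in model.items()]

for case in transition_tours(TRANSITIONS, "closed"):
    print(case)  # -> ['open'], then ['open', 'write'], then ['open', 'close']
```

Each printed sequence becomes one abstract test case; behavioural aspects such as "write is only legal after open" are covered automatically, which is the coverage advantage the abstract claims over purely structural testing.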
1/3-scale model testing program
International Nuclear Information System (INIS)
Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.
1989-01-01
This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley Demonstration Project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration, where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured decelerations, post-test deformation measurements, and the general structural response of the system.
Experimental tests of transport models using modulated ECH
International Nuclear Information System (INIS)
DeBoo, J.C.; Kinsey, J.E.; Bravenec, R.
1998-12-01
Both the dynamic and equilibrium thermal responses of an L-mode plasma to repetitive ECH heat pulses were measured and compared to predictions from several thermal transport models. While no model consistently agreed with all observations, the GLF23 model was most consistent with the perturbed electron and ion temperature responses for one of the cases studied, which may indicate a key role played by electron modes in the core of these discharges. Generally, the IIF and MM models performed well for the perturbed electron response, while the GLF23 and IFS/PPPL models agreed with the perturbed ion response for all three cases studied. No single model agreed well with the measured equilibrium temperature profiles.
Use of combinatorial pharmacogenomic testing in two cases from community psychiatry
Directory of Open Access Journals (Sweden)
Fields ES
2016-08-01
Full Text Available Eve S Fields,1 Raymond A Lorenz,2 Joel G Winner2 1Northwest Center for Community Mental Health, Reston, VA, USA; 2Assurex Health, Mason, OH, USA Abstract: This report describes two cases in which pharmacogenomic testing was utilized to guide medication selection for difficult-to-treat patients. The first patient is a 29-year-old male with bipolar disorder who had severe akathisia due to his long-acting injectable antipsychotic. The second patient is a 59-year-old female with major depressive disorder who was not responding to her medication. In both cases, a proprietary combinatorial pharmacogenomic test was used to inform medication changes and improve patient outcomes. The first patient was switched to a long-acting injectable that was not affected by his genetic profile, and his adverse effects abated. The second patient had her medications discontinued on the basis of the genetic test results, and more intense psychotherapy was initiated. While pharmacogenomic testing may be helpful in cases such as those presented here, it should never serve as a proxy for a comprehensive biopsychosocial approach. Pharmacogenomic information may be selectively added to this comprehensive approach to support medication treatment. Keywords: pharmacogenomics, adverse effects, risperidone, nortriptyline, paliperidone
Model testing for the remediation assessment of a radium contaminated site in Olen, Belgium
International Nuclear Information System (INIS)
Sweeck, Lieve; Kanyar, Bela; Krajewski, Pawel; Kryshev, Alexander; Lietava, Peter; Nenyei, Arpad; Sazykina, Tatiana; Yu, Charley; Zeevaert, Theo
2005-01-01
Environmental assessment models are used as decision-aiding tools in the selection of remediation options for radioactively contaminated sites. In most cases, the effectiveness of the remedial actions in terms of dose savings cannot be demonstrated directly, but can be established with the help of environmental assessment models through the assessment of future radiological impacts. It should be emphasized that, given the complexity of the processes involved and our current understanding of how they operate, these models are simplified descriptions of the behaviour of radionuclides in the environment and therefore imperfect. One way of testing and improving the reliability of the models is to compare their predictions with real data and/or the predictions of other models. Within the framework of the Remediation Assessment Working Group (RAWG) of the BIOMASS (BIOsphere Modelling and ASSessment) programme coordinated by the IAEA, two scenarios were constructed and applied to test the reliability of environmental assessment models when remedial actions are involved. As a test site, an area of approximately 100 ha contaminated by the discharges of an old radium extraction plant in Olen (Belgium) was considered. In the first scenario, a real situation was evaluated and model predictions were compared with measured data. In the second scenario, the model predictions for specific hypothetical but realistic situations were compared. Most of the biosphere models were not developed to assess the performance of remedial actions and had to be modified for this purpose. It was demonstrated clearly that the modeller's experience and familiarity with the mathematical model, the site and the scenario play a very important role in the outcome of the model calculations. More model testing studies, preferably for real situations, are needed in order to improve the models and modelling methods and to expand the areas in which the models are applicable.
ExEP yield modeling tool and validation test results
Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul
2017-09-01
EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4-m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
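The abstract's pattern of deterministic physics checks inside a unit-test framework can be illustrated generically. The sketch below round-trips a simple Poisson-statistics integration-time formula through Python's `unittest`; the formula and all function names are simplified assumptions for illustration, not the actual EXOSIMS API:

```python
import unittest

def integration_time(count_rate, background_rate, snr_target):
    """Time to reach a target SNR under Poisson statistics:
    SNR = C*t / sqrt((C + B) * t)  =>  t = SNR^2 * (C + B) / C^2.
    Illustrative formula, not the EXOSIMS model."""
    return snr_target ** 2 * (count_rate + background_rate) / count_rate ** 2

class PhotometryValidation(unittest.TestCase):
    def test_snr_round_trip(self):
        # Substituting the computed time back must reproduce the SNR,
        # the same deterministic-validation idea as setting RV planets
        # at quadrature and checking derived quantities.
        c, b, snr = 50.0, 10.0, 5.0  # counts/s, counts/s, target SNR
        t = integration_time(c, b, snr)
        achieved = c * t / ((c + b) * t) ** 0.5
        self.assertAlmostEqual(achieved, snr, places=9)

unittest.main(argv=["validation"], exit=False, verbosity=0)
```

Physics-based tests like this are deterministic and analytic, which is what makes them suitable for automated regression testing alongside the looser end-to-end cross-validation the paper describes.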
Barsi, A.; Jager, T.; Collinet, M.; Lagadic, L.; Ducrot, V.
2014-01-01
Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model, and of the baryonic model plus one of the three dark matter models, with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotation curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), none of the models fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark
DEFF Research Database (Denmark)
Andersen, Thomas Lykke; Brorsen, Michael
This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the previously carried out run-up tests described in Lykke Andersen & Frigaard, 2006. The objective of the tests was to investigate the impact pressures generated on a horizontal platform and a cone platform for selected sea states calibrated by Lykke Andersen & Frigaard, 2006. The measurements should be used for the assessment of slamming coefficients for the design of horizontal and cone-shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests were conducted at Aalborg University from 9 November 2006 to 17 November 2006.
Concept Test of a Smoking Cessation Smart Case.
Comello, Maria Leonora G; Porter, Jeannette H
2018-04-05
Wearable/portable devices that unobtrusively detect smoking and contextual data offer the potential to provide Just-In-Time Adaptive Intervention (JITAI) support for mobile cessation programs. Little has been reported on the development of these technologies. To address this gap, we offer a case report of users' experiences with a prototype "smart" cigarette case that automatically tracks time and location of smoking. Small-scale user-experience studies are typical of iterative product design and are especially helpful when proposing novel ideas. The purpose of the study was to assess concept acceptability and potential for further development. We tested the prototype case with a small sample of potential users (n = 7). Participants used the hardware/software for 2 weeks and reconvened for a 90-min focus group to discuss experiences and provide feedback. Participants liked the smart case in principle but found the prototype too bulky for easy portability. The potential for the case to convey positive messages about self also emerged as a finding. Participants indicated willingness to pay for improved technology (USD $15-$60 on a one-time basis). The smart case is a viable concept, but design detail is critical to user acceptance. Future research should examine designs that maximize convenience and that explore the device's ability to cue intentions and other cognitions that would support cessation. This study is the first to our knowledge to report formative research on the smart case concept. This initial exploration provides insights that may be helpful to other developers of JITAI-support technology.
Safety Case Development as an Information Modelling Problem
Lewis, Robert
This paper considers the benefits from applying information modelling as the basis for creating an electronically-based safety case. It highlights the current difficulties of developing and managing large document-based safety cases for complex systems such as those found in Air Traffic Control systems. After a review of current tools and related literature on this subject, the paper proceeds to examine the many relationships between entities that can exist within a large safety case. The paper considers the benefits to both safety case writers and readers from the future development of an ideal safety case tool that is able to exploit these information models. The paper also introduces the idea that the safety case has formal relationships between entities that directly support the safety case argument using a methodology such as GSN, and informal relationships that provide links to direct and backing evidence and to supporting information.
Directory of Open Access Journals (Sweden)
Kotapati Srinivasa Reddy
2015-12-01
Full Text Available Existing theoretical concepts and empirical literature in the social sciences and management have mostly been developed in the institutional context of western (developed) economies. In the recent past, a number of researchers have argued that western theories are inadequate for studying emerging-market phenomena, and have described problems relating to data collection, data analysis, and theory development. My own experience confirms that the major problems relate to research data collection, especially primary data (interview and survey methods). With this in mind, I develop a new case study research design, the "Test-Tube" typology, to build theory from emerging-market behavior as well as to add new knowledge to a range of disciplines, particularly social sciences, medicine, travel, tourism and hospitality, sports, management and information systems, and engineering. The typology consists of eleven steps: case development, case selection, relatedness and pattern matching, case analysis, cross-case analysis, theoretical constructs, pre-testing and development, adjusting theoretical constructs, theory testing, building theory and testable propositions, and suggesting a strategic swap model. Further, I suggest a set of guidelines on how to measure research quality and how to strengthen research rigor in case study settings.
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Terasvirta, Timo
The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model. ... An application to exchange rate returns is included.
Assuring consumer safety without animal testing: a feasibility case study for skin sensitisation.
Maxwell, Gavin; Aleksic, Maja; Aptula, Aynur; Carmichael, Paul; Fentem, Julia; Gilmour, Nicola; Mackay, Cameron; Pease, Camilla; Pendlington, Ruth; Reynolds, Fiona; Scott, Daniel; Warner, Guy; Westmoreland, Carl
2008-11-01
Allergic Contact Dermatitis (ACD; chemical-induced skin sensitisation) represents a key consumer safety endpoint for the cosmetics industry. At present, animal tests (predominantly the mouse Local Lymph Node Assay) are used to generate skin sensitisation hazard data for use in consumer safety risk assessments. An animal testing ban on chemicals to be used in cosmetics will come into effect in the European Union (EU) from March 2009. This animal testing ban is also linked to an EU marketing ban on products containing any ingredients that have been subsequently tested in animals, from March 2009 or March 2013, depending on the toxicological endpoint of concern. Consequently, the testing of cosmetic ingredients in animals for their potential to induce skin sensitisation will be subject to an EU marketing ban, from March 2013 onwards. Our conceptual framework and strategy to deliver a non-animal approach to consumer safety risk assessment can be summarised as an evaluation of new technologies (e.g. 'omics', informatics), leading to the development of new non-animal (in silico and in vitro) predictive models for the generation and interpretation of new forms of hazard characterisation data, followed by the development of new risk assessment approaches to integrate these new forms of data and information in the context of human exposure. Following the principles of the conceptual framework, we have been investigating existing and developing new technologies, models and approaches, in order to explore the feasibility of delivering consumer safety risk assessment decisions in the absence of new animal data. We present here our progress in implementing this conceptual framework, with the skin sensitisation endpoint used as a case study. 2008 FRAME.
Comparison of vibration test results for Atucha II NPP and large scale concrete block models
International Nuclear Information System (INIS)
Iizuka, S.; Konno, T.; Prato, C.A.
2001-01-01
In order to study the soil-structure interaction of a reactor building that could be constructed on a Quaternary soil, a comparison study of the soil-structure interaction springs was performed between full-scale vibration test results of the Atucha II NPP and vibration test results of large-scale concrete block models constructed on Quaternary soil. This comparison study provides case data on soil-structure interaction springs on Quaternary soil for different foundation sizes and stiffnesses. (author)
Deformation modeling and the strain transient dip test
International Nuclear Information System (INIS)
Jones, W.B.; Rohde, R.W.; Swearengen, J.C.
1980-01-01
Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity in a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (the strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experiments, data from a dip test machine may be used to generate model parameters.
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1993-01-01
This report documents progress to date under a three-year contract for developing ''Methods for Testing Transport Models.'' The work described includes (1) choice of best methods for producing ''code emulators'' for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases
An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg
2003-01-01
We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....
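The abstract above benchmarks EQFT against the Miller-Rabin test. As a reference point, here is a minimal Miller-Rabin sketch in Python (illustrative only: this is the classical test with its 4^-t error bound, not the EQFT itself):

```python
import random

def is_probable_prime(n: int, t: int = 2) -> bool:
    """Miller-Rabin probabilistic primality test with t random rounds.

    For a composite n, the error probability is at most 4**-t,
    far weaker than the EQFT bound discussed in the abstract."""
    if n < 2:
        return False
    # Quick screen by small primes.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)           # modular exponentiation a**d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a witnesses that n is composite
    return True                    # n is prime with high probability
```

A prime is never rejected, so e.g. `is_probable_prime(2_147_483_647)` always returns `True`; composites slip through any single run with probability at most 4^-t.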
An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg
2006-01-01
We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....
Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras
2018-05-01
The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than the previously used 'pure' time series methods. These methods had already been tested with total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time series methods, which helped to increase forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using the prediction horizon.
Energy Technology Data Exchange (ETDEWEB)
Nour, Ali [Hydro Québec, Montréal, Québec H2L4P5 (Canada); École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada); Cherfaoui, Abdelhalim; Gocevski, Vladimir [Hydro Québec, Montréal, Québec H2L4P5 (Canada); Léger, Pierre [École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada)
2016-08-01
Highlights: • In this case study, the seismic PSA methodology adopted for a CANDU 6 is presented. • Ambient vibrations testing to calibrate a 3D FEM and to reduce uncertainties is performed. • A procedure for the development of FRS for the RB considering the wave incoherency effect is proposed. • Seismic fragility analysis for the RB is presented. - Abstract: Following the 2011 Fukushima Daiichi nuclear accident in Japan, there is worldwide interest in reducing uncertainties in the seismic safety assessment of existing nuclear power plants (NPPs). Within the scope of a Canadian refurbishment project of a CANDU 6 NPP put in service in 1983, structures and equipment must sustain a new seismic demand characterised by the uniform hazard spectrum (UHS) obtained from a site-specific study defined for a return period of 10,000 years. This UHS exhibits larger spectral ordinates in the high-frequency range than those used in design. To reduce modeling uncertainties as part of a seismic probabilistic safety assessment (PSA), Hydro-Québec developed a procedure using ambient vibrations testing to calibrate a detailed 3D finite element model (FEM) of the containment and reactor building (RB). This calibrated FE model is then used for generating floor response spectra (FRS) based on ground motion time histories compatible with the UHS. Seismic fragility analyses of the reactor building (RB) and structural components are also performed in the context of a case study. Because the RB is founded on a large circular raft, it is possible to consider the effect of seismic wave incoherency to filter out the high-frequency content, mainly above 10 Hz, using the incoherency transfer function (ITF) method. This significantly reduces unnecessary conservatism in the resulting FRS, an important issue for an existing NPP. The proposed case study, and related methodology using ambient vibration testing, is particularly useful to engineers involved in seismic re-evaluation of
Prekop Ľubomír
2017-01-01
This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to group of piles with reinforced concrete heads. The head is pressed with steel arches of a bridge on motorway D1 Jablonov - Studenec. Pile model was created in ANSYS with several models of foundation having properties found out from geotechnical survey. Finally some crucial results obtained from computer models are presented and compared with these obtained from exper...
Energy Technology Data Exchange (ETDEWEB)
Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P. [and others
1996-12-01
This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.
International Nuclear Information System (INIS)
Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.
1996-01-01
This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising
An Iterative Procedure for Efficient Testing of B2B: A Case in Messaging Service Tests
Energy Technology Data Exchange (ETDEWEB)
Kulvatunyou, Boonserm [ORNL
2007-03-01
Testing is a necessary step in systems integration. Testing in the context of inter-enterprise, business-to-business (B2B) integration is more difficult and expensive than intra-enterprise integration. Traditionally, the difficulty is alleviated by conducting the testing in two stages: conformance testing and then interoperability testing. In conformance testing, systems are tested independently against a reference system. In interoperability testing, they are tested simultaneously against one another. In the traditional approach for testing, these two stages are performed sequentially with little feedback between them. In addition, test results and test traces are left only to human analysis or even discarded if the solution passes the test. This paper proposes an approach where test results and traces from both the conformance and interoperability tests are analyzed for potential interoperability issues; conformance test cases are then derived from the analysis. The result is that more interoperability issues can be resolved in the lower-cost conformance testing mode; consequently, the time and cost required to achieve interoperable solutions are reduced.
International Nuclear Information System (INIS)
Gordon, H.; Marciano, W.; Williams, H.H.
1982-01-01
We summarize here the results of the standard model group, which has studied the ways in which different facilities may be used to test in detail what we now call the standard model, that is, SU_c(3) × SU(2) × U(1). The topics considered are: W±, Z⁰ mass and width; sin²θ_W and neutral current couplings; W⁺W⁻, Wγ; Higgs; QCD; toponium and naked quarks; glueballs; mixing angles; and heavy ions
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using......
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald; Liever, Peter; Nielsen, Tanner
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
Nance, Donald K.; Liever, Peter A.
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, which include data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in
Directory of Open Access Journals (Sweden)
Prekop Ľubomír
2017-01-01
Full Text Available This paper deals with the modelling of the load test of horizontal resistance of reinforced concrete piles. The pile belongs to a group of piles with reinforced concrete heads. The head is pressed by the steel arches of a bridge on motorway D1 Jablonov - Studenec. The pile model was created in ANSYS with several foundation models whose properties were determined from a geotechnical survey. Finally, some crucial results obtained from the computer models are presented and compared with those obtained from the experiment.
Negative Exercise Stress Test: Does it Mean Anything? Case study
Directory of Open Access Journals (Sweden)
Hassan A. Mohamed
2007-01-01
Full Text Available Despite its low sensitivity and specificity (67% and 72%, respectively), exercise testing has remained one of the most widely used noninvasive tests to determine the prognosis in patients with suspected or established coronary disease. As a screening test for coronary artery disease, the exercise stress test is useful in that it is relatively simple and inexpensive. It has been considered particularly helpful in patients with chest pain syndromes who have a moderate probability for coronary artery disease, and in whom the resting electrocardiogram (ECG) is normal. The following case presentation and discussion will question the predictive value of a negative stress test in patients with moderate probability for coronary artery disease.
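The predictive value questioned in this case report follows directly from Bayes' rule applied to the quoted sensitivity (67%) and specificity (72%). A short sketch, assuming a moderate pretest probability of 50% (the 50% figure is an illustrative assumption, not from the report):

```python
def negative_predictive_value(sens: float, spec: float, pretest: float) -> float:
    """P(no disease | negative test) via Bayes' rule."""
    p_neg_given_disease = 1 - sens          # false-negative rate
    p_neg_given_healthy = spec              # true-negative rate
    p_neg = pretest * p_neg_given_disease + (1 - pretest) * p_neg_given_healthy
    return (1 - pretest) * p_neg_given_healthy / p_neg

# With sensitivity 67%, specificity 72%, and an assumed 50% pretest
# probability, a negative test still leaves a sizeable post-test
# probability of disease.
npv = negative_predictive_value(0.67, 0.72, 0.50)
post_test_disease = 1 - npv
```

With these numbers the post-test probability of disease after a negative result is still about 31%, which is the quantitative core of the case's argument that a negative stress test "means" little at moderate pretest probability.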
A business case method for business models
Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2013-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model
Making System Dynamics Cool II : New Hot Teaching and Testing Cases of Increasing Complexity
Pruyt, E.
2010-01-01
This follow-up paper presents several actual cases for testing and teaching System Dynamics. The cases were developed between April 2009 and January 2010 for the Introductory System Dynamics courses at Delft University of Technology in the Netherlands. They can be used for teaching and testing
Directory of Open Access Journals (Sweden)
Farshad Fathian
2017-02-01
are not specified. The Keenan test has also been proposed for assessing the linearity or nonlinearity behavior of a time series in time series analysis. Keenan (1985) derived a test for nonlinearity analogous to Tukey's one degree of freedom for nonadditivity test. Keenan's test is motivated by approximating a nonlinear stationary time series by a second-order Volterra expansion. While Keenan's test is designed for detecting quadratic nonlinearity, it may not be sensitive to threshold nonlinearity. Here, we applied the threshold likelihood ratio test (TLRT) with the threshold model as the specific alternative. The null hypothesis of the TLRT approach for threshold nonlinearity is that the model fitted to the series is an AR(p) model, and the alternative hypothesis is that the fitted model is a threshold autoregressive (TAR) model with autoregressive order p in each regime. Results and Discussion: Because both the ADF and KPSS tests are based on linear regression, which carries a normality assumption, logarithmization can convert an exponential trend possibly present in the data into a linear trend. In the case of stationarity analysis, the results showed that the standardized daily streamflow time series of all stations are significantly stationary. According to the KPSS stationarity test, the daily standardized streamflow time series are stationary around a fixed level, but they are not trend-stationary at low lag values. Based on the BDS test, the results showed that the daily streamflow series have a strong nonlinear structure, but based on the Keenan test, a linear structure can be seen in them after applying the logarithmization and deseasonalization operators, meaning the coefficients of the double-sum part are zero. It should be considered that the Keenan test is used to detect quadratic nonlinearity and may not be adequate for threshold autoregressive models, since they are linear in each regime. Conclusion: Streamflow processes of main rivers at 6 stations
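The TLRT described above compares a linear AR(p) null against a two-regime TAR(p) alternative. For Gaussian errors the statistic reduces to a ratio of residual variances (a sketch; the symbols L_0, L_1 and the variance estimates are the usual likelihood quantities, not notation from the abstract):

```latex
% L_0: maximized likelihood of the AR(p) null model,
% L_1: maximized likelihood of the two-regime TAR(p) alternative,
% n:   number of observations used in the fit.
\lambda \;=\; -2\left(\ln L_0 - \ln L_1\right)
        \;=\; n \,\ln\!\frac{\hat\sigma^2_{\mathrm{AR}}}{\hat\sigma^2_{\mathrm{TAR}}}
```

Note that under the null the threshold parameter is unidentified, so λ does not follow a standard chi-squared distribution; critical values are typically obtained by simulation or from the tabulated asymptotics for the TAR likelihood ratio test.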
A Method to Select Software Test Cases in Consideration of Past Input Sequence
International Nuclear Information System (INIS)
Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook
2015-01-01
In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully-digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software and covers all possible cases. To obtain the profile of paired state variables, the relationships of the variables need to be considered. The effect of input from the human operator also has to be considered. As an example, the test set of the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software
Jaworska, Joanna; Harol, Artsiom; Kern, Petra S; Gerberick, G Frank
2011-01-01
There is an urgent need to develop data integration and testing strategy frameworks allowing interpretation of results from animal alternative test batteries. To this end, we developed a Bayesian Network Integrated Testing Strategy (BN ITS) with the goal to estimate skin sensitization hazard as a test case of previously developed concepts (Jaworska et al., 2010). The BN ITS combines in silico, in chemico, and in vitro data related to skin penetration, peptide reactivity, and dendritic cell activation, and guides testing strategy by Value of Information (VoI). The approach offers novel insights into testing strategies: there is no one best testing strategy, but the optimal sequence of tests depends on information at hand, and is chemical-specific. Thus, a single generic set of tests as a replacement strategy is unlikely to be most effective. BN ITS offers the possibility of evaluating the impact of generating additional data on the target information uncertainty reduction before testing is commenced.
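The BN ITS above guides testing by Value of Information (VoI). A toy sketch of one common VoI formulation, expected entropy reduction about a single binary hazard node from one candidate test (all numbers, and the entropy-based VoI definition itself, are illustrative assumptions, not the paper's exact formulation):

```python
import math

def entropy(p: float) -> float:
    """Shannon entropy (bits) of a binary belief with P(hazard) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def value_of_information(prior: float, sens: float, spec: float) -> float:
    """Expected entropy reduction about a binary hazard from running
    one test with the given sensitivity/specificity."""
    p_pos = prior * sens + (1 - prior) * (1 - spec)
    post_pos = prior * sens / p_pos                  # belief after a positive
    post_neg = prior * (1 - sens) / (1 - p_pos)     # belief after a negative
    expected_post = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - expected_post

# Illustrative numbers only: a chemical believed 50% likely to be a
# sensitizer, and an in vitro assay with 80% sensitivity / 90% specificity.
voi = value_of_information(0.5, 0.8, 0.9)
```

Ranking candidate assays by such a VoI score, given the evidence already in hand, is what makes the optimal test sequence chemical-specific rather than a fixed battery, as the abstract argues.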
Directory of Open Access Journals (Sweden)
Philip C Hill
2008-01-01
Full Text Available Studies of Tuberculosis (TB) case contacts are increasingly being utilised for understanding the relationship between M. tuberculosis and the human host and for assessing new interventions and diagnostic tests. We aimed to identify the incidence rate of new TB cases among TB contacts and to relate this to their initial Mantoux and ELISPOT test results. After initial Mantoux and ELISPOT tests and exclusion of co-prevalent TB cases, we followed 2348 household contacts of sputum smear positive TB cases. We visited them at 3 months, 6 months, 12 months, 18 months and 24 months, and investigated those with symptoms consistent with TB. Those who were diagnosed separately at a government clinic had a chest x-ray. Twenty-six contacts were diagnosed with definite TB over 4312 person-years of follow-up (incidence rate 603/100,000 person-years; 95% confidence interval, 370-830). Nine index and secondary case pairs had cultured isolates available for genotyping. Of these, 6 pairs were concordant and 3 were discordant. 2.5% of non-progressors were HIV positive compared to 12% of progressors (HR 6.2; 95% CI 1.7-22.5; p = 0.010). 25 secondary cases had initial Mantoux results, 14 (56%) were positive; 21 had initial ELISPOT results, 11 (52%) were positive; 15 (71%) of 21 tested were positive by one or the other test. Of the 6 contacts who had concordant isolates with their respective index case, 4 (67%) were Mantoux positive at recruitment and 3 (50%) were ELISPOT positive; 5 (83%) were positive by one or other of the two tests. ELISPOT positive contacts, and those with discordant results, had a similar rate of progression to those who were Mantoux positive. Those negative on either or both tests had the lowest rate of progression. The incidence rate of TB disease in Gambian TB case contacts, after screening for co-prevalent cases, was 603/100,000 person-years. Since the initial ELISPOT and Mantoux tests were each positive in only just over half of cases, but 71% were
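The headline rate above (603/100,000 person-years from 26 cases over 4312 person-years) is easy to reproduce. A sketch, with an approximate Poisson confidence interval (the normal-approximation CI is an assumption; the paper does not state its exact interval method):

```python
import math

def incidence_rate_per_100k(cases: int, person_years: float) -> float:
    """Crude incidence rate per 100,000 person-years."""
    return cases / person_years * 100_000

def approx_ci_per_100k(cases: int, person_years: float, z: float = 1.96):
    """Approximate 95% CI treating the case count as Poisson
    (normal approximation on the rate scale)."""
    rate = incidence_rate_per_100k(cases, person_years)
    half_width = z * rate / math.sqrt(cases)
    return rate - half_width, rate + half_width

rate = incidence_rate_per_100k(26, 4312)   # ~603 per 100,000 person-years
lo, hi = approx_ci_per_100k(26, 4312)      # roughly (371, 835), close to
                                           # the reported 370-830
```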
Equilibrium star formation in a constant Q disc: model optimization and initial tests
Zheng, Zheng; Meurer, Gerhardt R.; Heckman, Timothy M.; Thilker, David A.; Zwaan, Martin A.
2013-10-01
We develop a model for the distribution of the interstellar medium (ISM) and star formation in galaxies based on recent studies that indicate that galactic discs stabilize to a constant stability parameter, which we combine with prescriptions of how the phases of the ISM are determined and for the star formation law (SFL). The model predicts the gas surface mass density and star formation intensity of a galaxy given its rotation curve, stellar surface mass density and the gas velocity dispersion. This model is tested on radial profiles of neutral and molecular ISM surface mass density and star formation intensity of 12 galaxies selected from the H I Nearby Galaxy Survey sample. Our tests focus on intermediate radii (0.3 to 1 times the optical radius) because there are insufficient data to test the outer discs and the fits are less accurate in detail in the centre. Nevertheless, the model produces reasonable agreement with the ISM mass and star formation rate integrated over the central region in all but one case. To optimize the model, we evaluate four recipes for the stability parameter, three recipes for apportioning the ISM into molecular and neutral components, and eight versions of the SFL. We find no clear-cut best prescription for the two-fluid (gas and stars) stability parameter Q2f and therefore for simplicity, we use the Wang and Silk approximation (QWS). We found that an empirical scaling between the molecular-to-neutral ISM ratio (Rmol) and the stellar surface mass density proposed by Leroy et al. works marginally better than the other two prescriptions for this ratio in predicting the ISM profiles, and noticeably better in predicting the star formation intensity from the ISM profiles produced by our model with the SFLs we tested. Thus, in the context of our modelled ISM profiles, the linear molecular SFL and the two-component SFL work better than the other prescriptions we tested. We incorporate these relations into our `constant Q disc' model.
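The "constant Q disc" model above fixes the two-fluid stability parameter, for which the abstract adopts the Wang and Silk approximation. The standard forms are (a sketch; symbol conventions assumed: κ the epicyclic frequency, σ the velocity dispersion, Σ the surface mass density of gas or stars):

```latex
% Single-fluid Toomre stability parameter for the gas disc:
Q_{\mathrm{gas}} = \frac{\sigma_g \,\kappa}{\pi G \,\Sigma_g},
\qquad
% Wang & Silk approximation for the two-fluid (gas + stars) parameter:
\frac{1}{Q_{\mathrm{WS}}} = \frac{1}{Q_{\mathrm{gas}}} + \frac{1}{Q_{\star}}
```

Holding Q_WS at a constant value then lets the model invert these relations to predict the gas surface density from the rotation curve (which sets κ), the stellar surface density and the gas velocity dispersion, as described in the abstract.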
Code cases for implementing risk-based inservice testing in the ASME OM code
International Nuclear Information System (INIS)
Rowley, C.W.
1996-01-01
Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices
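The two-tier MSSC/LSSC scheme above blends a quantitative PRA screen with expert-panel judgment. A toy sketch of that flow (component names, importance measures and thresholds are illustrative assumptions, not values from the OM Code Cases; Fussell-Vesely and risk achievement worth are the measures commonly used in such screens):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Component:
    name: str
    kind: str              # e.g. "centrifugal pump", "MOV", "AOV"
    fussell_vesely: float  # PRA risk-importance measure (illustrative)
    raw: float             # risk achievement worth (illustrative)

def categorize(c: Component, panel_override: Optional[str] = None) -> str:
    """Two-tier categorization sketch: a quantitative PRA screen first,
    which the expert panel may override in either direction.
    Thresholds below are illustrative, not the ASME OM Code values."""
    if panel_override in ("MSSC", "LSSC"):
        return panel_override
    return "MSSC" if (c.fussell_vesely >= 0.005 or c.raw >= 2.0) else "LSSC"

# Hypothetical plant components for illustration:
pump = Component("CS-P-1A", "centrifugal pump", fussell_vesely=0.012, raw=3.1)
valve = Component("SI-MOV-861A", "MOV", fussell_vesely=0.0004, raw=1.2)
```

The resulting category then selects the component-type-specific testing strategy (test method and frequency), which is exactly the split the Code Cases formalize.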
Code cases for implementing risk-based inservice testing in the ASME OM code
Energy Technology Data Exchange (ETDEWEB)
Rowley, C.W.
1996-12-01
Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.
Making System Dynamics Cool? Using Hot Testing & Teaching Cases
Pruyt, E.
2009-01-01
This paper deals with the use of ‘hot’ real-world cases for both testing and teaching purposes such as in the Introductory System Dynamics course at Delft University of Technology in the Netherlands. The paper starts with a brief overview of the System Dynamics curriculum. Then the problem-oriented
Linear Logistic Test Modeling with R
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Model Checking and Model-based Testing in the Railway Domain
DEFF Research Database (Denmark)
Haxthausen, Anne Elisabeth; Peleska, Jan
2015-01-01
This chapter describes some approaches and emerging trends for verification and model-based testing of railway control systems. We describe state-of-the-art methods and associated tools for verifying interlocking systems and their configuration data, using bounded model checking and k… with good test strength are explained. Interlocking systems represent just one class of many others, where concrete system instances are created from generic representations, using configuration data for determining the behaviour of the instances. We explain how the systematic transition from generic… to concrete instances in the development path is complemented by associated transitions in the verification and testing paths…
Case Studies in Modelling, Control in Food Processes.
Glassey, J; Barone, A; Montague, G A; Sabou, V
This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector and latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.
Digital test assembly of truck parts with the IMMA-tool--an illustrative case.
Hanson, L; Högberg, D; Söderholm, M
2012-01-01
Several digital human modelling (DHM) tools have been developed for simulation and visualisation of human postures and motions. In 2010 the DHM tool IMMA (Intelligently Moving Manikins) was introduced as a DHM tool that uses advanced path-planning techniques to generate collision-free and biomechanically acceptable motions for digital human models (as well as parts) in complex assembly situations. The aim of the paper is to illustrate how the IPS/IMMA tool is used at Scania CV AB in a digital test assembly process, and to compare the tool with other DHM tools on the market. The illustrated case of using the IMMA tool, here combined with the path planner tool IPS, indicates that the tool is promising. The major strengths of the tool are its user-friendly interface, the motion generation algorithms, the batch simulation of manikins and the ergonomics assessment methods that consider time.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
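The abstract does not spell out the authors' new statistic, but the general idea of testing for excess zeros without fitting a zero-inflated alternative can be illustrated with the classical score test of van den Broek (1995), which compares the observed zero count against the count expected under the fitted Poisson model. The sketch below is illustrative only and is not the method of the paper:

```python
import numpy as np
from scipy import stats

def zero_inflation_score_test(y):
    """Score test for excess zeros under an intercept-only Poisson model
    (van den Broek, 1995). Under H0 (no zero inflation) the statistic is
    asymptotically chi-square with 1 degree of freedom."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    lam = y.mean()                 # Poisson MLE of the rate
    p0 = np.exp(-lam)              # model-implied probability of a zero
    n0 = np.sum(y == 0)            # observed number of zeros
    # Score statistic: squared gap between observed and expected zeros,
    # scaled by its estimated variance under H0
    denom = n * p0 * (1.0 - p0) - n * lam * p0 ** 2
    stat = (n0 - n * p0) ** 2 / denom
    return stat, stats.chi2.sf(stat, df=1)

# Example usage: data simulated from a plain Poisson should rarely reject H0
rng = np.random.default_rng(1)
stat, p = zero_inflation_score_test(rng.poisson(2.0, size=500))
```

Data with genuinely inflated zeros drive the observed zero count far above `n * p0`, making the statistic large and the p-value small.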
Lührs, Nikolas; Jager, Nicolas W.; Challies, Edward; Newig, Jens
2018-02-01
Public participation is potentially useful to improve public environmental decision-making and management processes. In corporate management, the Vroom-Yetton-Jago normative decision-making model has served as a tool to help managers choose appropriate degrees of subordinate participation for effective decision-making given varying decision-making contexts. But does the model recommend participatory mechanisms that would actually benefit environmental management? This study empirically tests the improved Vroom-Jago version of the model in the public environmental decision-making context. To this end, the key variables of the Vroom-Jago model are operationalized and adapted to a public environmental governance context. The model is tested using data from a meta-analysis of 241 published cases of public environmental decision-making, yielding three main sets of findings: (1) The Vroom-Jago model proves limited in its applicability to public environmental governance due to limited variance in its recommendations. We show that adjustments to key model equations make it more likely to produce meaningful recommendations. (2) We find that in most of the studied cases, public environmental managers (implicitly) employ levels of participation close to those that would have been recommended by the model. (3) An ANOVA revealed that such cases, which conform to model recommendations, generally perform better on stakeholder acceptance and environmental standards of outputs than those that diverge from the model. Public environmental management thus benefits from carefully selected and context-sensitive modes of participation.
Vibration analysis diagnostics by continuous-time models: A case study
International Nuclear Information System (INIS)
Pedregal, Diego J.; Carmen Carnero, Ma.
2009-01-01
In this paper a forecasting system in condition monitoring is developed based on vibration signals in order to improve the diagnosis of a certain critical equipment at an industrial plant. The system is based on statistical models capable of forecasting the state of the equipment combined with a cost model consisting of defining the time of preventive replacement when the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme
The Latent Class Model as a Measurement Model for Situational Judgment Tests
Directory of Open Access Journals (Sweden)
Frank Rijmen
2011-11-01
Full Text Available In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer, Salovey, Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which differentiation occurred between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
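The latent class machinery behind this kind of analysis can be sketched with a standard EM algorithm for binary responses. The function below is a generic illustration (not the authors' second-order model, and all names are hypothetical): it estimates class proportions and per-class item-endorsement probabilities for a 0/1 response matrix.

```python
import numpy as np

def latent_class_em(X, n_classes=3, n_iter=200, seed=0):
    """EM estimation of a plain latent class model for binary responses.

    X : (n_subjects, n_items) 0/1 matrix.
    Returns class proportions pi and item probabilities theta[class, item].
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)           # class proportions
    theta = rng.uniform(0.25, 0.75, (n_classes, m))    # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior class membership for each subject
        log_lik = X @ np.log(theta).T + (1.0 - X) @ np.log(1.0 - theta).T
        log_post = log_lik + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and item-response probabilities
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1.0 - 1e-6)       # guard against log(0)
    return pi, theta
```

With three classes, the fitted `theta` rows play the role of the subgroup profiles described in the abstract: each row shows how strongly a subgroup differentiates between response options.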
A Case-Based Learning Model in Orthodontics.
Engel, Francoise E.; Hendricson, William D.
1994-01-01
A case-based, student-centered instructional model designed to mimic orthodontic problem solving and decision making in dental general practice is described. Small groups of students analyze case data, then record and discuss their diagnoses and treatments. Students and instructors rated the seminars positively, and students reported improved…
Schnall, Rebecca; Bakken, Suzanne
2011-09-01
To assess the applicability of the Technology Acceptance Model (TAM) constructs in explaining HIV case managers' behavioural intention to use a continuity of care record (CCR) with context-specific links designed to meet their information needs. Data were collected from 94 case managers who provide care to persons living with HIV (PLWH) using an online survey comprising three components: (1) demographic information: age, gender, ethnicity, race, Internet usage and computer experience; (2) a mock-up of the CCR with context-specific links; and (3) items related to TAM constructs. Data analysis included principal components factor analysis (PCA), assessment of internal consistency reliability, and univariate and multivariate analysis. PCA extracted three factors (Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use), explained variance = 84.9%, Cronbach's α = 0.69–0.91. In a linear regression model, Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use explained 43.6% (p < 0.05) of the variance in behavioural intention to use the technology assessed.
International Nuclear Information System (INIS)
Bonnet, M.; Delapalme, A.; Becker, P.
1976-01-01
This paper shows that polarized neutron experiments, which do not depend on any scale factor, are very dependent on extinction and provide original tests for extinction models. Moon, Koehler, Cable and Child (1972) have formulated the problem and proposed a first-order solution applicable only when the extinction is small. In the first part, some analytical derivations of secondary extinction corrections are discussed, using the formalism of Becker and Coppens (1974). In the second part, the main principles governing polarized neutron diffraction are briefly reviewed, with a special discussion of extinction problems. The method is then applied to the case of yttrium iron garnet (YIG). This experiment shows the technique of polarized neutrons to be very powerful for testing extinction models and for deciding whether the crystal behaves dynamically or kinematically (following Kato's criterion). (Auth.)
Binedell, J; Soldan, J R; Scourfield, J; Harper, P S
1996-01-01
Adolescents who are actively requesting Huntington's predictive testing of their own accord pose a dilemma to those providing testing. In the absence of empirical evidence as regards the impact of genetic testing on minors, current policy and guidelines, based on the ethical principles of non-maleficence and respect for individual autonomy and confidentiality, generally exclude the testing of minors. It is argued that adherence to an age-based exclusion criterion in Huntington's disease predictive testing protocols is out of step with trends in UK case law concerning minors' consent to medical treatment. Furthermore, contributions from developmental psychology and research into adolescents' decision-making competence suggest that adolescents can make informed choices about their health and personal lives. Criteria for developing an assessment approach to such requests are put forward, and the implications of a case-by-case evaluation of competence to consent in terms of clinicians' tolerance for uncertainty are discussed. PMID:8950670
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keppens, R. [School of Astronomy and Space Science, Nanjing University, Nanjing 210023 (China); Xia, C. [Centre for mathematical Plasma-Astrophysics, Department of Mathematics, KU Leuven, B-3001 Leuven (Belgium); Valori, G., E-mail: guoyang@nju.edu.cn [University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom)
2016-09-10
We report our implementation of the magneto-frictional method in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The method aims at applications where local adaptive mesh refinement (AMR) is essential to make follow-up dynamical modeling affordable. We quantify its performance in both domain-decomposed uniform grids and block-adaptive AMR computations, using all frequently employed force-free, divergence-free, and other vector comparison metrics. As test cases, we revisit the semi-analytic solution of Low and Lou in both Cartesian and spherical geometries, along with the topologically challenging Titov–Démoulin model. We compare different combinations of spatial and temporal discretizations, and find that the fourth-order central difference with a local Lax–Friedrichs dissipation term in a single-step marching scheme is an optimal combination. The initial condition is provided by the potential field, which is the potential field source surface model in spherical geometry. Various boundary conditions are adopted, ranging from fully prescribed cases where all boundaries are assigned with the semi-analytic models, to solar-like cases where only the magnetic field at the bottom is known. Our results demonstrate that all the metrics compare favorably to previous works in both Cartesian and spherical coordinates. Cases with several AMR levels perform in accordance with their effective resolutions. The magneto-frictional method in MPI-AMRVAC allows us to model a region of interest with high spatial resolution and large field of view simultaneously, as required by observation-constrained extrapolations using vector data provided with modern instruments. The applications of the magneto-frictional method to observations are shown in an accompanying paper.
DEFF Research Database (Denmark)
Andersen, Thomas Lykke; Frigaard, Peter
This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU). The objective of the tests was: To investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up of piles. The measurements should be used to design access platforms on piles. The model tests include: Calibration of regular and irregular sea states at the location of the pile (without structure in place). Measurement of wave run-up for the calibrated sea states on the front side of the pile (0 to 90 degrees). These tests have been conducted at Aalborg University from 9 October 2006 to 8 November 2006. Unless otherwise mentioned, all values given in this report are in model scale.
Model tests on dynamic performance of RC shear walls
International Nuclear Information System (INIS)
Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.
1991-01-01
For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)
THE MISHKIN TEST: AN ANALYSIS OF MODEL EXTENSIONS
Directory of Open Access Journals (Sweden)
Diana MURESAN
2015-04-01
Full Text Available This paper reviews empirical research that applies the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test is a macro-econometric test of the rational expectations hypothesis, which tests for market efficiency. Starting with Sloan (1996), the model has been applied to the accruals anomaly literature. Since Sloan (1996), the model has undergone various improvements and has been the subject of many debates in the literature regarding its efficacy. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of the extended studies on the Mishkin test highlights that adding additional variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.
Analysis of the flexural mode response of a novel trimaran by segmented model test
Directory of Open Access Journals (Sweden)
Karim Akbari Vakilabadi
Full Text Available A novel ship concept design is largely an "ad hoc" process. In the preliminary design stage of novel vessels, it is very important to be able to develop an initial estimate of the effects of stiffness and mass distribution on the longitudinal flexural natural frequencies due to different general arrangements in still water at zero speed, to satisfy design specifications. For new emerging designs, this estimate has to be made based on a model test. The experiments should also be planned so that scale effects and other features that are not present in the full-scale case are minimized. A model with a length of 1.5 meters was selected. The model was cut into four segments longitudinally and connected by a backbone beam with three elastic hinges joining the four segments. Wet vibration tests conducted on the model showed significant influences on the flexural natural frequencies through variations in stiffness and different mass distributions. The whipping frequency was calculated with a four-degrees-of-freedom theoretical model to compare with the experimental results. The theoretical model shows good agreement with the experimental results.
Discussions On Worst-Case Test Condition For Single Event Burnout
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs based on analyzing quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB would be using the ion with the highest mass, which would result in the highest transient current due to charge deposition and displacement damage. The analyses also show it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which has been verified by heavy ion test data on SEB-sensitive and SEB-immune devices.
Huang, J.; Bou-Zeid, E.; Golaz, J.
2011-12-01
Parameterization of the stably stratified atmospheric boundary layer is of crucial importance to different aspects of numerical weather prediction at regional scales and climate modeling at global scales, such as land-surface temperature forecasts, fog and frost prediction, and polar climate. It is well known that most operational climate models require excessive turbulence mixing of the stable boundary layer to prevent decoupling of the atmospheric component from the land component under strong stability, but the performance of such a model is unlikely to be satisfactory under weakly and moderately stable conditions. In this study we develop and test a general turbulence mixing model of the stable boundary layer which works under different stabilities and for steady as well as unsteady conditions. A priori large-eddy simulation (LES) tests are presented to motivate and verify the new parameterization. Subsequently, an assessment of this model using the GFDL single-column model (SCM) is performed. Idealized test cases, including continuously varying stability as well as stability discontinuity, are used to test the new SCM against LES results. A good match of mean and flux profiles is found when the new parameterization is used, while other traditional first-order turbulence models using the concept of stability function perform poorly. SCM spatial resolution is also found to have little impact on the performance of the new turbulence closure, but temporal resolution is important, and a numerical stability criterion based on the model time step is presented.
Model Testing - Bringing the Ocean into the Laboratory
DEFF Research Database (Denmark)
Aage, Christian
2000-01-01
Hydrodynamic model testing, the principle of bringing the ocean into the laboratory to study the behaviour of the ocean itself and the response of man-made structures in the ocean in reduced scale, has been known for centuries. Due to an insufficient understanding of the physics involved, however, the early model tests often gave incomplete or directly misleading results. This keynote lecture deals with some of the possibilities and problems within the field of hydrodynamic and hydraulic model testing.
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
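The starting point of such tutorials, query-likelihood scoring with Dirichlet smoothing and an optional document prior, can be sketched in a few lines. This is a generic textbook formulation, not code from the tutorial itself, and the function names are illustrative:

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000.0, doc_prior=1.0):
    """Rank score log P(d) + sum_w log P(w | d), where P(w | d) is the
    Dirichlet-smoothed document language model:
        P(w | d) = (tf(w, d) + mu * P(w | C)) / (|d| + mu)
    query, doc, collection are token lists; doc_prior is P(d)."""
    doc_tf = Counter(doc)
    coll_tf = Counter(collection)
    coll_len = len(collection)
    score = math.log(doc_prior)
    for w in query:
        p_wc = coll_tf[w] / coll_len      # background (collection) model
        if p_wc == 0.0:
            return float("-inf")           # term unseen anywhere: zero mass
        p_wd = (doc_tf[w] + mu * p_wc) / (len(doc) + mu)
        score += math.log(p_wd)
    return score
```

In the expert-search setting of the case study, "documents" would be evidence associated with candidate experts, and the prior can encode non-textual evidence about a candidate.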
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
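The core Monte Carlo procedure the abstract describes, re-generating randomization sequences and comparing the observed statistic against the resulting reference distribution, can be sketched as follows. This minimal version assumes complete randomization and a difference in means; the paper's design-based variants would instead re-randomize under permuted block or biased coin rules and use model residuals as outcomes:

```python
import numpy as np

def monte_carlo_randomization_test(y, assignment, n_rep=2000, seed=0):
    """Two-sided Monte Carlo randomization test for a two-arm trial.

    y          : outcome vector
    assignment : 0/1 treatment labels actually used
    Returns an estimated randomization p-value.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    assignment = np.asarray(assignment)
    observed = y[assignment == 1].mean() - y[assignment == 0].mean()
    count = 0
    for _ in range(n_rep):
        # One Monte Carlo re-randomization under complete randomization
        perm = rng.permutation(assignment)
        diff = y[perm == 1].mean() - y[perm == 0].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # Add-one correction keeps the estimated p-value strictly positive
    return (count + 1) / (n_rep + 1)
```

Because the reference distribution is generated by the randomization procedure itself, the test's validity does not rest on the outcome model being correctly specified, which is the robustness property the simulations in the paper examine.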
Automated Test Case Generation from Highly Reliable System Requirements Models, Phase I
National Aeronautics and Space Administration — Software testing is a complex and expensive phase of the software development cycle. Effective software testing is especially important in mission-critical software,...
The Model Identification Test: A Limited Verbal Science Test
McIntyre, P. J.
1972-01-01
Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)
Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results
Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul
1992-01-01
The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluate structural modifications, or design control systems. Verification of the FEM is generally obtained as the result of correlating test and FEM models. A test analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model, which attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten bay cantilevered truss structure.
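Of the three reduction methods named (Guyan, IRS, Hybrid), Guyan static condensation is the simplest to show. A minimal NumPy sketch, assuming symmetric stiffness and mass matrices and an invertible omitted-DOF stiffness block (matrix and variable names are illustrative):

```python
import numpy as np

def guyan_reduction(K, M, kept):
    """Static (Guyan) condensation of stiffness K and mass M onto the
    retained (test/measurement) degrees of freedom listed in `kept`.

    Returns K_r, M_r and the transformation T with K_r = T.T @ K @ T."""
    n = K.shape[0]
    omitted = [i for i in range(n) if i not in kept]
    # Partition stiffness into omitted-omitted and omitted-kept blocks
    Koo = K[np.ix_(omitted, omitted)]
    Koa = K[np.ix_(omitted, kept)]
    # Static constraint: x_o = -Koo^{-1} Koa x_a (inertia of omitted DOFs neglected)
    G = -np.linalg.solve(Koo, Koa)
    T = np.zeros((n, len(kept)))
    T[kept, :] = np.eye(len(kept))
    T[omitted, :] = G
    return T.T @ K @ T, T.T @ M @ T, T
```

Because the reduction is a Ritz projection, the eigenvalues of the reduced pencil (K_r, M_r) bound the corresponding full-model eigenvalues from above; the IRS and Hybrid methods discussed in the paper refine T to recover some of the omitted inertia effects.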
Empirical test of Capital Asset Pricing Model on Selected Banking Shares from Borsa Istanbul
Directory of Open Access Journals (Sweden)
Fuzuli Aliyev
2018-03-01
Full Text Available In this paper we tested the Capital Asset Pricing Model (CAPM hereafter) on selected banking stocks of Borsa Istanbul. Here we tried to explain how to price financial assets based on their risks in the case of the BIST-100 index. CAPM is an important model in portfolio management theory, used by economic agents for the selection of financial assets. We used monthly return data for 12 randomly selected banking stocks for the 2001–2010 period. To test the validity of the CAPM, we first derived the regression equation for the risk-free interest rate and risk premium relationship using January 2001–December 2009 data. Then, we estimated January–December 2010 returns with the equation. Comparing the forecasted returns with the actual returns, we concluded that the CAPM is valid for the portfolio consisting of the 12 banks traded in the ISE, i.e., the model could predict the overall outcome of the portfolio of selected banking shares.
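The two-step procedure described in such CAPM tests (fit the excess-return regression on one window, then predict returns for the next) can be sketched in outline. The function names and synthetic data below are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_capm_beta(r_asset, r_market, r_free):
    """OLS fit of the CAPM excess-return regression:
        (r_asset - r_free) = alpha + beta * (r_market - r_free) + eps
    Returns (alpha, beta)."""
    y = np.asarray(r_asset) - np.asarray(r_free)
    x = np.asarray(r_market) - np.asarray(r_free)
    X = np.column_stack([np.ones_like(x), x])
    alpha, beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return alpha, beta

def capm_expected_return(beta, r_free, r_market):
    """Expected return implied by the CAPM: E[r] = r_f + beta * (E[r_m] - r_f)."""
    return r_free + beta * (r_market - r_free)
```

In a study like this, `estimate_capm_beta` would be run on the 2001–2009 estimation window for each stock, and `capm_expected_return` would then generate the 2010 forecasts that are compared with realized returns.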
Application of a numerical model in the interpretation of a leaky aquifer test
International Nuclear Information System (INIS)
Schroth, B.; Narasimhan, T.N.
1997-01-01
The potential use of numerical models in aquifer analysis is by no means a new concept; yet relatively few engineers and scientists are taking advantage of this powerful tool that is more convenient to use now than ever before. In this technical note the authors present an example of using a numerical model in an integrated analysis of data from a three-layer leaky aquifer system involving well-bore storage, skin effects, variable discharge, and observation wells in the pumped aquifer and in an unpumped aquifer. The modeling detail may differ for other cases. The intent is to show that interpretation can be achieved with reduced bias by reducing assumptions in regard to system geometry, flow rate, and other details. A multiwell aquifer test was carried out at a site on the western part of the Lawrence Livermore National Laboratory (LLNL), located about 60 kilometers east of San Francisco. The test was conducted to hydraulically characterize one part of the site and thus help develop remediation strategies to alleviate the ground-water contamination
DEFF Research Database (Denmark)
Burcharth, H. F.; Larsen, Brian Juul
The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was primarily to optimize the cross-section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly…
Test-driven verification/validation of model transformations
Institute of Scientific and Technical Information of China (English)
László LENGYEL; Hassan CHARAF
2015-01-01
Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.
Software Testing and Verification in Climate Model Development
Clune, Thomas L.; Rood, RIchard B.
2011-01-01
Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
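As a minimal illustration of the fine-grained "unit" testing advocated here, the sketch below tests a small numerical kernel against a known reference value and a physical invariant, using floating-point tolerances rather than exact equality. The kernel (a simplified Tetens-style saturation vapor pressure formula) and its names are hypothetical, not taken from any particular climate model:

```python
import math
import unittest

def saturation_vapor_pressure(temp_k):
    """Simplified Tetens-style saturation vapor pressure over water, in hPa.
    Illustrative kernel only; real models use more elaborate formulations."""
    t_c = temp_k - 273.15
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

class TestSaturationVaporPressure(unittest.TestCase):
    def test_reference_value(self):
        # At the freezing point t_c = 0, so the formula reduces to 6.112 hPa
        self.assertAlmostEqual(saturation_vapor_pressure(273.15), 6.112, places=6)

    def test_monotonicity(self):
        # Physical invariant: saturation pressure increases with temperature
        temps = [250.0, 270.0, 290.0, 310.0]
        vals = [saturation_vapor_pressure(t) for t in temps]
        self.assertEqual(vals, sorted(vals))
```

Tests of this granularity run in milliseconds and pinpoint the defective routine directly, in contrast to the full-simulation regression tests the authors describe.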
Singularity analysis in nonlinear biomathematical models: Two case studies
International Nuclear Information System (INIS)
Meletlidou, E.; Leach, P.G.L.
2007-01-01
We investigate the possession of the Painleve Property for certain values of the parameters in two biological models. The first is a metapopulation model for two species (prey and predator) and the second is a study of a sexually transmitted disease into which 'education' is introduced. We determine the cases for which the systems possess the Painleve Property, in particular some of the cases for which the equations can be directly integrated. We draw conclusions for these cases.
Case analysis online: a strategic management case model for the health industry.
Walsh, Anne; Bearden, Eithne
2004-01-01
Despite the plethora of methods and tools available to support strategic management, the challenge for health executives in the next century will relate to their ability to access and interpret data from multiple and intricate communication networks. Integrated digital networks and satellite systems will expand the scope and ease of sharing information between business divisions, and networked systems will facilitate the use of virtual case discussions across universities. While the internet is frequently used to support clinical decisions in the healthcare industry, few executives rely upon the internet for strategic analysis. Although electronic technologies can easily synthesize data from multiple information channels, research as well as technical issues may deter their application in strategic analysis. As digital models transform access to information, online models may become increasingly relevant in designing strategic solutions. While there are various pedagogical models available to support the strategic management process, this framework was designed to enhance strategic analysis through the application of technology and electronic research. A strategic analysis framework, which incorporated internet research and case analysis in a strategic management course, is described along with design and application issues that emerged during the case analysis process.
Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten
2017-05-01
Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
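The comparison at the core of the Hausman test described above can be sketched numerically: two estimates of the same parameters are contrasted through a quadratic form in their difference, which is approximately chi-square distributed if the model holds. The sketch below uses the generic textbook form of the statistic; the toy numbers are ours, not from the paper.

```python
import numpy as np

def hausman_statistic(b1, V1, b2, V2):
    """Hausman-type misspecification statistic: compares two estimates of
    the same parameters, which should be similar under a correct model.
    b1/V1: efficient estimate and covariance; b2/V2: robust estimate."""
    d = np.asarray(b1, float) - np.asarray(b2, float)
    Vd = np.asarray(V2, float) - np.asarray(V1, float)  # covariance of the difference
    # quadratic form; approximately chi-square with rank(Vd) df under the model
    return float(d @ np.linalg.pinv(Vd) @ d)

# toy example: identical estimates give a statistic of 0
H = hausman_statistic([0.5, 1.2], np.eye(2) * 0.01,
                      [0.5, 1.2], np.eye(2) * 0.02)
```

A large value of the statistic relative to the chi-square reference indicates that the two estimates disagree more than sampling error allows, i.e., misspecification.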
Kinematic tests of exotic flat cosmological models
International Nuclear Information System (INIS)
Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL
1987-01-01
Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings, is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references
Kinematic tests of exotic flat cosmological models
International Nuclear Information System (INIS)
Charlton, J.C.; Turner, M.S.
1986-05-01
Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.
Kinematic tests of exotic flat cosmological models
Energy Technology Data Exchange (ETDEWEB)
Charlton, J.C.; Turner, M.S.
1986-05-01
Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-de Sitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving, light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations.
International Nuclear Information System (INIS)
Viallet, E.; Bolsee, G.; Ladouceur, B.; Goubin, T.; Rigaudeau, J.
2003-01-01
The fuel assembly mechanical strength must be justified with respect to the lateral loads under accident conditions, in particular seismic loads. This justification is performed by means of time-history analyses with dynamic models of an assembly row in the core, allowing for assembly deformations, impacts at grid locations and reactor coolant effects. Due to necessary simplifications, the models include 'equivalent' parameters adjusted with respect to dynamic characterisation tests of the fuel assemblies. Complementing such tests on isolated assemblies by an overall model validation with shaking table tests on interacting assemblies is obviously desirable. Seismic tests have been performed by the French CEA (Commissariat a l'Energie Atomique) on a row of six full-scale fuel assemblies, including two types of 17 x 17 12-ft design. The row models are built according to the usual procedure, with preliminary characterisation tests performed on a single assembly. The test-calculation comparisons are made for two test configurations: in air and in water. The relatively large number of accelerograms (15, used for each configuration) is also favourable to significant comparisons. The results are presented for the impact forces at row ends, displacements at mid-assembly, and also 'statistical' parameters. Despite a non-negligible scattering in the results obtained with different accelerograms, the calculations prove realistic, and the modelling process is validated with a good confidence level. This satisfactory validation allows precise evaluation of the margins in the seismic design methodology of the fuel assemblies, and thus confirms the safety of the plants in case of a seismic event. (author)
Hocine, Mounia; Guillemot, Didier; Tubert-Bitter, Pascale; Moreau, Thierry
2005-12-30
In case-series or cohort studies, we propose a test of independence between the occurrences of two types of recurrent events (such as two repeated infections) related to an intermittent exposure (such as an antibiotic treatment). The test relies upon an extension of a recent method for analysing case-series data, in the presence of one type of recurrent event. The test statistic is derived from a bivariate Poisson generated-multinomial distribution. Simulations for checking the validity of the test concerning the type I error and the power properties are presented. The test is illustrated using data from a cohort on antibiotics bacterial resistance in schoolchildren. Copyright 2005 John Wiley & Sons, Ltd.
Thermal-Chemical Model Of Subduction: Results And Tests
Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.
2005-12-01
Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin for mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
2001-01-01
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
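The selection principle described in this record, maximizing test information at the current ability estimate subject to constraints on test content, can be illustrated with a greedy toy version. The 2PL item-information formula is standard; the item pool, content areas, and quota scheme below are invented for illustration and are far simpler than the full-test assembly step the abstract describes.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def pick_next_item(theta, pool, used, max_per_content):
    """Greedy constrained selection: the most informative unused item whose
    content-area quota is not yet exhausted. Areas absent from
    max_per_content are treated as having quota 0 (excluded)."""
    counts = {}
    for i in used:
        counts[pool[i][2]] = counts.get(pool[i][2], 0) + 1
    best, best_info = None, -1.0
    for i, (a, b, area) in enumerate(pool):
        if i in used or counts.get(area, 0) >= max_per_content.get(area, 0):
            continue
        info = item_information(theta, a, b)
        if info > best_info:
            best, best_info = i, info
    return best

# illustrative pool: (discrimination a, difficulty b, content area)
pool = [(1.2, 0.0, "algebra"), (1.5, 0.1, "algebra"), (0.8, -0.2, "geometry")]
# the algebra quota is already filled by item 1, so item 2 must be chosen
nxt = pick_next_item(0.0, pool, used={1},
                     max_per_content={"algebra": 1, "geometry": 2})
```

The full approach in the record assembles a complete constrained test at every step rather than picking items one at a time, which avoids the myopia of this greedy sketch.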
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
1997-01-01
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
System Dynamic Modelling for a Balanced Scorecard: A Case Study
DEFF Research Database (Denmark)
Nielsen, Steen; Nielsen, Erland Hejn
Purpose - The purpose of this research is to make an analytical model of the BSC foundation by using a dynamic simulation approach for a 'hypothetical case' model, based on only part of an actual case study of BSC. Design/methodology/approach - The model includes five perspectives and a number...
Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)
Ahmad, Nash'at; Proctor, Fred
2011-01-01
The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
Simulation of VVER MCCI reactor test case with ASTEC V2/MEDICIS computer code
International Nuclear Information System (INIS)
Stefanova, A.; Grudev, P.; Gencheva, R.
2011-01-01
This paper presents an application of ASTEC v2, module MEDICIS, for simulation of a VVER molten core-concrete interaction (MCCI) test case without water injection. The main purpose of the performed calculation is verification and improvement of the MEDICIS module of ASTEC v2 for better simulation of core-concrete interaction processes. The VVER-1000 reference nuclear power plant was chosen as the SARNET2 benchmark MCCI test case. The initial conditions for the MCCI test are taken after an SBO scenario calculated with ASTEC version 1.3R2 by INRNE. (authors)
Model Selection in Continuous Test Norming With GAMLSS.
Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E
2017-06-01
To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape (GAMLSS). Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with the generalized Akaike information criterion was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
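The automatic selection step discussed above, choosing among candidate models by an information criterion, can be sketched in miniature. The example below selects a polynomial degree by AIC for a Gaussian model; it is a deliberately simplified stand-in for stepwise GAMLSS selection, and the data-generating function is invented.

```python
import numpy as np

def aic_gaussian(y, yhat, k):
    # AIC for a Gaussian model with k parameters and ML variance estimate
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def select_degree(x, y, max_degree=5):
    """Pick the polynomial degree minimizing AIC: fit quality is traded
    against model complexity, as in criterion-based model selection."""
    best = None
    for d in range(max_degree + 1):
        coefs = np.polyfit(x, y, d)
        yhat = np.polyval(coefs, x)
        score = aic_gaussian(y, yhat, d + 1)
        if best is None or score < best[1]:
            best = (d, score)
    return best[0]

# synthetic data from a quadratic trend with small noise
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1 + 2 * x - 3 * x ** 2 + rng.normal(0, 0.1, x.size)
degree = select_degree(x, y)
```

Replacing the `2 * k` penalty with `np.log(n) * k` gives BIC, and with `3 * k` the GAIC(3) variant mentioned in the abstract; the selection loop is unchanged.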
Cyber-Physical Energy Systems Modeling, Test Specification, and Co-Simulation Based Testing
DEFF Research Database (Denmark)
van der Meer, A. A.; Palensky, P.; Heussen, Kai
2017-01-01
The gradual deployment of intelligent and coordinated devices in the electrical power system needs careful investigation of the interactions between the various domains involved. Especially due to the coupling between ICT and power systems, a holistic approach for testing and validating is required. Taking existing (quasi-)standardised smart grid system and test specification methods as a starting point, we are developing a holistic testing and validation approach that allows a very flexible way of assessing the system level aspects by various types of experiments (including virtual, real, and mixed lab settings). This paper describes the formal holistic test case specification method and applies it to a particular co-simulation experimental setup. The various building blocks of such a simulation (i.e., FMI, mosaik, domain-specific simulation federates) are covered in more detail...
PedGenie: meta genetic association testing in mixed family and case-control designs
Directory of Open Access Journals (Sweden)
Allen-Brady Kristina
2007-11-01
Full Text Available Abstract Background: PedGenie software, introduced in 2006, includes genetic association testing of cases and controls that may be independent or related (nuclear families or extended pedigrees, or mixtures thereof) using Monte Carlo significance testing. Our aim is to demonstrate that PedGenie, a unique and flexible analysis tool freely available in Genie 2.4 software, is significantly enhanced by incorporating meta statistics for detecting genetic association with disease using data across multiple study groups. Methods: Meta statistics (chi-squared tests, odds ratios, and confidence intervals) were calculated using formal Cochran-Mantel-Haenszel techniques. Simulated data from unrelated individuals and individuals in families were used to illustrate that the meta tests and their empirically derived p-values and confidence intervals are accurate and precise, and for independent designs match those provided by standard statistical software. Results: PedGenie yields accurate Monte Carlo p-values for meta-analysis of data across multiple studies, based on validation testing using pedigree, nuclear family, and case-control data simulated under both the null and alternative hypotheses of a genotype-phenotype association. Conclusion: PedGenie allows valid combined analysis of data from mixtures of pedigree-based and case-control resources. Added meta capabilities provide new avenues for association analysis, including pedigree resources from large consortia and multi-center studies.
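The Cochran-Mantel-Haenszel combination the abstract refers to can be illustrated with the classical Mantel-Haenszel common odds ratio for stratified 2x2 tables. The toy counts below are invented; PedGenie itself additionally handles related individuals via Monte Carlo significance testing, which this sketch does not attempt.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across 2x2 strata.
    Each table is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls for one study group."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# two study groups carrying the same underlying association
or_mh = mantel_haenszel_or([(10, 5, 4, 8), (20, 10, 8, 16)])
```

Because the estimator weights each stratum by its size, it pools evidence across study groups without collapsing them into one table (which would risk Simpson's paradox).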
Rate-control algorithms testing by using video source model
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna
2008-01-01
In this paper a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method allows significant improvement of algorithm testing over a large test set.
High-dose calcium stimulation test in a case of insulinoma masquerading as hysteria.
Nakamura, Yoshio; Doi, Ryuichiro; Kohno, Yasuhiro; Shimono, Dai; Kuwamura, Naomitsu; Inoue, Koichi; Koshiyama, Hiroyuki; Imamura, Masayuki
2002-11-01
It is reported that some cases with insulinoma present with neuropsychiatric symptoms and are often misdiagnosed as psychosis. Here we report a case of insulinoma masquerading as hysteria, whose final diagnosis could be made using high-dose calcium stimulation test. A 28-yr-old woman was referred presenting with substupor, mutism, mannerism, restlessness, and incoherence. Laboratory examinations revealed hypoglycemia (33 mg/dL) and detectable insulin levels (9.7 microU/mL), suggesting the diagnosis of insulinoma. However, neither imaging studies nor selective arterial calcium injection (SACI) test with a conventional dose of calcium (0.025 mEq/kg) indicated the tumor. High-dose calcium injection (0.05 mEq/kg) evoked insulin secretion when injected into superior mesenteric artery. A solitary tumor in the head of the pancreas was resected, and her plasma glucose returned to normal. Postoperatively, iv injection of secretin resulted in a normal response of insulin, which was not found preoperatively. This case suggests the usefulness of the SACI test with high-dose of calcium in the case of insulinoma when the standard dose fails to detect such a tumor.
Superconducting solenoid model magnet test results
International Nuclear Information System (INIS)
Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T.; Fermilab
2006-01-01
Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.
Superconducting solenoid model magnet test results
Energy Technology Data Exchange (ETDEWEB)
Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; /Fermilab
2006-08-01
Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.
Is the standard model really tested?
International Nuclear Information System (INIS)
Takasugi, E.
1989-01-01
It is discussed how the standard model is really tested. Among various tests, I concentrate on CP violation phenomena in the K and B meson systems. In particular, the recent hope of overcoming the theoretical uncertainty in the evaluation of CP violation in the K meson system is discussed. (author)
Directory of Open Access Journals (Sweden)
Aschengrau Ann
2006-06-01
Full Text Available Abstract Background: Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results: Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest-neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion: Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.
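The loess smoother at the heart of this mapping approach fits a weighted local regression around each evaluation point, with tricube weights that fall to zero at the k-th nearest neighbor. The minimal 1-D sketch below conveys the idea (the paper smooths on 2-D location and adds covariates; the span and data here are illustrative):

```python
import numpy as np

def loess_1d(x, y, x0, span=0.5):
    """Locally weighted linear fit at x0 with tricube weights.
    span: fraction of the data treated as the local neighborhood."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(span * n))
    dist = np.abs(x - x0)
    h = np.sort(dist)[k - 1]                        # bandwidth: k-th nearest point
    w = np.clip(1 - (dist / h) ** 3, 0, None) ** 3  # tricube weights
    # weighted least squares for a local line centered at x0
    X = np.column_stack([np.ones(n), x - x0])
    W = np.diag(w)
    beta = np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]
    return beta[0]                                  # fitted value at x0

x = np.linspace(0, 1, 101)
y = 2 * x  # noiseless linear trend: the local fit should recover it exactly
fit = loess_1d(x, y, 0.5)
```

Choosing the span is the model-selection step the abstract resolves by minimizing AIC; larger spans give smoother, more biased maps.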
Dropouts and Budgets: A Test of a Dropout Reduction Model among Students in Israeli Higher Education
Bar-Am, Ran; Arar, Osama
2017-01-01
This article deals with the problem of student dropout during the first year in a higher education institution. To date, no model on a budget has been developed and tested to prevent dropout among Engineering Students. This case study was conducted among first-year students taking evening classes in two practical engineering colleges in Israel.…
Use of Quality Models and Indicators for Evaluating Test Quality in an ESP Course
Directory of Open Access Journals (Sweden)
IEVA RUDZINSKA
2013-12-01
Full Text Available Qualitative methods of assessment play a decisive role in education in general and in language learning in particular. The necessity to perform a qualitative assessment comes from both increased student competition in higher education institutions (HEIs, and hence higher demands for fair assessment, and a growing public awareness of higher education issues, and therefore the need to account for a wider circle of stakeholders, including society as a whole. The aim of the present paper is to study the regulations and laws pertaining to the issue of assessment in Latvian HEIs, as well as to carry out an analysis of the literature on assessment in language testing, seeking to select criteria characterizing the quality of English for Specific Purposes (ESP tests and to apply a model for evaluating the quality of a language test on the example of a test in sport English developed in a Latvian higher education institution. An analysis of the regulations and laws about assessment in higher education and literature sources about tests in language courses has enabled the development of a test quality model, consisting of seven intrinsic quality criteria: clarity, adequacy, deep approach, attractiveness, originality/similarity, orientation towards student learning result/process, and test scoring objectivity/subjectivity. The quality criteria comprise eleven indicators. The reliability of the given model is evaluated by means of the whole model's, its criteria's and its indicators' Cronbach's alphas and point-biserial (item-total) correlations, or discrimination indexes (DI). The test was taken by 63 participants, all of them 2nd year full time students attending a Latvian higher education institution. A statistical data analysis was performed with SPSS 17.0. The results show that, although test adequacy and clarity are sufficiently high, attractiveness and deep approach should be improved. Also, the reliability of one version of the test is higher than that of the other one.
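The two reliability measures used above, Cronbach's alpha and item-total (point-biserial) discrimination, are simple to compute from an examinees-by-items score matrix. A minimal sketch with invented toy scores (the corrected item-total variant, correlating each item with the rest of the test, is used here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def discrimination_index(item, total):
    """Corrected item-total correlation: item vs rest-of-test score."""
    rest = np.asarray(total, float) - np.asarray(item, float)
    return float(np.corrcoef(item, rest)[0, 1])

# toy dichotomous scores: 4 examinees x 3 items (1 = correct)
scores = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]], float)
alpha = cronbach_alpha(scores)
di = discrimination_index(scores[:, 0], scores.sum(axis=1))
```

A low DI flags an item that does not separate strong from weak test takers, which feeds directly into the quality criteria discussed in the abstract.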
Kenyon, Lisa K; Sleeper, Mark D; Tovin, Melissa M
2010-01-01
This case report describes the development, implementation, and outcomes of a fitness-related intervention program that addressed the sport-specific goals of an adolescent with cerebral palsy. The participant in this case was a 16-year-old African American male with spastic diplegia. The participant joined his high school wrestling team and asked to focus his physical therapy on interventions that would improve his wrestling performance. An examination was performed using the muscle power sprint test, the 10 x 5-m sprint test, strength tests, the 10-m shuttle run test, and the Gross Motor Function Measure. The intervention consisted of interval training, which focused on the demands of wrestling. Scores on all tests and measures were higher after the intervention. The outcomes of this case report seem to support the use of a fitness-related intervention program for addressing the sport-specific goals of an adolescent with cerebral palsy.
Methods for testing transport models
International Nuclear Information System (INIS)
Singer, C.; Cox, D.
1991-01-01
Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data
Directory of Open Access Journals (Sweden)
Cihad DELEN
2015-12-01
Full Text Available In this study, several systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been analysed in order to determine their uncertainties. Experiments that are conducted in the framework of mathematical and physical rules for the solution of engineering problems involve uncertainty in their measurements and calculations. To question the reliability of the obtained values, the existing uncertainties should be expressed as quantities. If the uncertainty of a measurement system is not known, its results do not carry a universal value. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. Ship resistance cannot be determined precisely and reliably during the design phase because of the uncertainty sources involved in determining the resistance value. This may make it difficult to meet the required specifications in later design stages. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 regulations on uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
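The core arithmetic of such an uncertainty analysis is combining independent standard uncertainties in quadrature and applying a coverage factor. The sketch below shows only this generic step; the component names and magnitudes are illustrative, not taken from the paper or from the ITTC procedures.

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(components, k=2):
    # coverage factor k = 2 gives roughly 95% confidence for a normal distribution
    return k * combined_uncertainty(components)

# illustrative standard-uncertainty components of a resistance measurement (N):
# calibration, repeatability, data acquisition
components = [0.03, 0.04, 0.0]
u_c = combined_uncertainty(components)
U = expanded_uncertainty(components)
```

Reporting the expanded uncertainty alongside the measured resistance is what lets different facilities compare model-test results on a common footing.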
A tutorial on testing the race model inequality
DEFF Research Database (Denmark)
Gondan, Matthias; Minakata, Katsumi
2016-01-01
, to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and current statistical procedures used to test the race model inequality during the period between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations, and the interpretation of violations of the race model inequality. We make detailed recommendations on the design of redundant signals experiments...
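Miller's race model inequality bounds the redundant-signals response time distribution by the sum of the single-signal distributions: F_AB(t) <= F_A(t) + F_B(t) for all t. A minimal empirical check (the toy reaction times and the test grid are invented; real analyses add the statistical safeguards the tutorial discusses):

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of reaction times evaluated at time t."""
    return float(np.mean(np.asarray(sample, float) <= t))

def race_model_violated(rt_a, rt_b, rt_redundant, t_grid):
    """Miller's race model inequality: F_AB(t) <= F_A(t) + F_B(t).
    Returns True if the redundant-signals CDF exceeds the bound anywhere."""
    for t in t_grid:
        if ecdf(rt_redundant, t) > ecdf(rt_a, t) + ecdf(rt_b, t):
            return True
    return False

# toy data (ms): redundant responses fast enough to violate the bound
rt_a = [300, 320, 340, 360]
rt_b = [310, 330, 350, 370]
rt_ab = [200, 210, 220, 230]   # implausibly fast redundant responses
violated = race_model_violated(rt_a, rt_b, rt_ab, t_grid=range(200, 400, 10))
```

A violation rules out any race model, supporting coactivation; the tutorial's concern is how to test this per participant without inflating Type I error.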
Decision-case mix model for analyzing variation in cesarean rates.
Eldenburg, L; Waller, W S
2001-01-01
This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. (c) 2015 APA, all rights reserved.
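The contrast between imposing a linear trend and letting the data choose the functional form can be illustrated with a basis-expansion smoother. The sketch below uses a plain truncated-power spline basis in NumPy as a stand-in for a full GAM (real GAM software adds penalized, automatic smoothness selection, omitted here); the series and knot locations are illustrative:

```python
import numpy as np

def design(t, knots=()):
    """Linear-trend columns plus truncated-power spline (hinge) columns."""
    cols = [np.ones_like(t), t]
    cols += [np.clip(t - k, 0, None) for k in knots]  # hinge terms
    return np.column_stack(cols)

def fit_rss(y, X):
    """Least-squares fit; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Illustrative single-case series with a slope change at session 10
t = np.arange(20.0)
y = np.where(t < 10, 2.0 + 0.1 * t, 3.0 + 0.6 * (t - 10)) + \
    np.random.default_rng(1).normal(0, 0.2, 20)

rss_linear = fit_rss(y, design(t))
rss_spline = fit_rss(y, design(t, knots=(5.0, 10.0, 15.0)))
print(rss_spline < rss_linear)  # → True: the flexible basis captures the bend
```

Because the spline basis nests the linear one, its residual sum of squares can only be smaller; a GAM's penalty is what keeps this flexibility from degenerating into overfitting on such short series.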
Variable amplitude fatigue, modelling and testing
International Nuclear Information System (INIS)
Svensson, Thomas.
1993-01-01
Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated; experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing; the process is designed to resemble certain fatigue-related features of service life processes. In the fourth paper, fatigue problems in transport vibrations are treated.
International Nuclear Information System (INIS)
Hardy, J. C.; Towner, I. S.
2001-01-01
Superallowed β-decay provides a sensitive means for probing the limitations of the Electroweak Standard Model. To date, the strengths (ft-values) of superallowed 0+ → 0+ β-decay transitions have been determined with high precision from nine different short-lived nuclei, ranging from 10C to 54Co. Each result leads to an independent measure of the vector coupling constant G_V, and collectively the nine values can be used to test the conservation of the weak vector current (CVC). Within current uncertainties, the results support CVC to better than a few parts in 10,000 - a clear success for the Standard Model! However, when the average value of G_V, as determined in this way, is combined with data from decays of the muon and kaon to test another prediction of the Standard Model, the result is much more provocative: a test of the unitarity of the Cabibbo-Kobayashi-Maskawa matrix fails by more than two standard deviations. This result can be made more definitive by experiments that require extremely precise mass measurements, in some cases on very short-lived (≤100 ms) nuclei. This talk presents the current status and future prospects for these Standard-Model tests, emphasizing the role of precise mass, or mass-difference, measurements. There remains a real challenge to mass-measurement technique, with the opportunity for significant new results.
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.
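For the identity-link case G(u) = u, the semiparametric fit underlying such a comparison can be sketched with Robinson's partial-residual idea: smooth out T, estimate b from the partial residuals, then compare a linear and a nonparametric estimate of m. Everything below (simulated data, bandwidth, the RSS comparison itself) is an illustrative stand-in, not the paper's actual test statistic:

```python
import numpy as np

def ksmooth(t_query, t, v, h=0.08):
    """Nadaraya-Watson Gaussian-kernel smoother of v against t."""
    w = np.exp(-0.5 * ((t_query[:, None] - t[None, :]) / h) ** 2)
    return (w * v[None, :]).sum(axis=1) / w.sum(axis=1)

# Simulated identity-link data: Y = X*b + m(T) + noise, with m nonlinear
rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)
t = rng.uniform(0, 1, n)
y = 2.0 * x + np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)

# Robinson-style step: regress Y - E[Y|T] on X - E[X|T] to estimate b
yr = y - ksmooth(t, t, y)
xr = x - ksmooth(t, t, x)
b_hat = np.dot(xr, yr) / np.dot(xr, xr)

m_hat = ksmooth(t, t, y - b_hat * x)                     # semiparametric m
m_lin = np.polyval(np.polyfit(t, y - b_hat * x, 1), t)   # parametric (linear) m
rss_semi = np.sum((y - b_hat * x - m_hat) ** 2)
rss_lin = np.sum((y - b_hat * x - m_lin) ** 2)
print(rss_semi < rss_lin)  # the gap is what a parametric-vs-semiparametric test assesses
```

The paper's contribution is turning this informal gap into a statistic with a known distribution; the sketch only shows why the gap is informative.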
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
A person fit test for IRT models for polytomous items
Glas, Cornelis A.W.; Dagohoy, A.V.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability
Meyer, Sebastian; Warnke, Ingeborg; Rössler, Wulf; Held, Leonhard
2016-05-01
Spatio-temporal interaction is inherent to cases of infectious diseases and occurrences of earthquakes, whereas the spread of other events, such as cancer or crime, is less evident. Statistical significance tests of space-time clustering usually assess the correlation between the spatial and temporal (transformed) distances of the events. Although appealing through simplicity, these classical tests do not adjust for the underlying population nor can they account for a distance decay of interaction. We propose to use the framework of an endemic-epidemic point process model to jointly estimate a background event rate explained by seasonal and areal characteristics, as well as a superposed epidemic component representing the hypothesis of interest. We illustrate this new model-based test for space-time interaction by analysing psychiatric inpatient admissions in Zurich, Switzerland (2007-2012). Several socio-economic factors were found to be associated with the admission rate, but there was no evidence of general clustering of the cases. Copyright © 2016 Elsevier Ltd. All rights reserved.
Muduli, Pradyut; Das, Sarat
2014-06-01
This paper discusses the evaluation of the liquefaction potential of soil based on a standard penetration test (SPT) dataset, using an evolutionary artificial intelligence technique, multi-gene genetic programming (MGGP). The liquefaction classification accuracy (94.19%) of the developed liquefaction index (LI) model is found to be better than that of the available artificial neural network (ANN) model (88.37%) and on par with the available support vector machine (SVM) model (94.19%) on the basis of the testing data. Further, an empirical equation is presented using MGGP to approximate the unknown limit state function representing the cyclic resistance ratio (CRR) of soil based on the developed LI model. Using an independent database of 227 cases, the overall rates of successful prediction of the occurrence of liquefaction and non-liquefaction are found to be 87, 86, and 84% for the developed MGGP-based model, the available ANN model, and the statistical model, respectively, on the basis of the calculated factor of safety (F_s) against liquefaction occurrence.
Creating a Business Case from a Business Model
Meertens, Lucas Onno; Starreveld, Eelco; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris
2014-01-01
Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of the two. Moreover, little has been written on the evaluation of business models at all. This makes it difficult to compare different business model
Mixed Portmanteau Test for Diagnostic Checking of Time Series Models
Directory of Open Access Journals (Sweden)
Sohail Chand
2014-01-01
Model criticism is an important stage of model building, and goodness of fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixture test and study its size and power using Monte Carlo simulations.
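The ingredients of such a statistic, residual autocorrelations, partial autocorrelations via the Durbin-Levinson recursion, and a Ljung-Box-type weighting, can be sketched as follows. The additive combination in the last line is illustrative only; the paper's mixed statistic and its asymptotic distribution are not reproduced here:

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelations at lags 1..nlags."""
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, nlags + 1)])

def pacf(x, nlags):
    """Partial autocorrelations via the Durbin-Levinson recursion on the ACF."""
    r = np.concatenate(([1.0], acf(x, nlags)))
    phi = np.zeros((nlags + 1, nlags + 1))
    p = np.zeros(nlags + 1)
    for k in range(1, nlags + 1):
        num = r[k] - np.dot(phi[k - 1, 1:k], r[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
        phi[k, k] = p[k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
    return p[1:]

def ljung_box(corrs, n):
    """Ljung-Box weighting of squared (partial) autocorrelations."""
    lags = np.arange(1, len(corrs) + 1)
    return n * (n + 2) * np.sum(corrs ** 2 / (n - lags))

rng = np.random.default_rng(2)
resid = rng.normal(size=200)  # white-noise residuals from an adequate fit
m = 10
q_acf = ljung_box(acf(resid, m), len(resid))
q_pacf = ljung_box(pacf(resid, m), len(resid))
q_mixed = q_acf + q_pacf  # illustrative combination of the two pieces
```

With white-noise residuals both components stay near their chi-square expectation, which is the "no remaining dependence" outcome the portmanteau family is designed to detect departures from.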
Testing of materials and scale models for impact limiters
International Nuclear Information System (INIS)
Maji, A.K.; Satpathi, D.; Schryer, H.L.
1991-01-01
Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and the design and testing of scale models made out of these "impact limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.
Horizontal crash testing and analysis of model flatrols
International Nuclear Information System (INIS)
Dowler, H.J.; Soanes, T.P.T.
1985-01-01
To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)
Franceschini, A.; Teatini, P.; Janna, C.; Ferronato, M.; Gambolati, G.; Ye, S.; Carreón-Freyre, D.
2015-11-01
The stress variation induced by aquifer overdraft in sedimentary basins with shallow bedrock may cause rupture in the form of pre-existing fault activation or earth fissure generation. The process is causing major detrimental effects in many areas of China and Mexico. Ruptures yield discontinuities in both the displacement and stress fields that classic continuous finite element (FE) models cannot address. Interface finite elements (IEs), typically used in contact mechanics, may be of great help and are implemented herein to simulate fault geomechanical behaviour. Two main approaches, i.e. Penalty and Lagrangian, are developed to enforce the contact condition on the element interface. The incorporation of IEs into a three-dimensional (3-D) FE geomechanical simulator shows that the Lagrangian approach is numerically more robust and stable than the Penalty approach, thus providing more reliable solutions. Furthermore, the use of a Newton-Raphson scheme to deal with the non-linear elasto-plastic fault behaviour allows for quadratic convergence. The FE-IE model is applied to investigate likely ground ruptures in realistic 3-D geologic settings. The case studies are representative of the city of Wuxi in Jiangsu Province (China) and the city of Queretaro (Mexico), where significant land subsidence has been accompanied by the generation of several earth fissures jeopardizing the stability and integrity of overland structures and infrastructure.
IDC Use Case Model Survey Version 1.0.
Energy Technology Data Exchange (ETDEWEB)
Carr, Dorthe B.; Harris, James M.
2014-12-01
This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey. REVISIONS: V1.0, 12/2014, IDC Re-engineering Project Team, initial delivery, authorized by M. Harris.
Dose Prediction for surface nuclear explosions: case studies for Semipalatinsk and Lop Nur tests
International Nuclear Information System (INIS)
Takada, Jun
2008-01-01
The dose prediction method RAPS for surface nuclear explosions has been developed using the empirical dose function of a USA nuclear test. This method, which provides the external total dose and dose rate at any distance and any time for any yield of nuclear explosion, is useful for radiation protection in case of nuclear events such as terrorism and nuclear war. The validity of RAPS has been confirmed by application to historical surface nuclear test explosions. The first case study, for the first test explosion of the former USSR at the Semipalatinsk Nuclear Test Site on August 29th, 1949, shows good agreement with luminescence dosimetry on a brick. The method was then applied to nuclear tests at Lop Nur. The results indicate dangerous nuclear radiation influences, including fatal risk, in the wide Uygur area. (author)
International Nuclear Information System (INIS)
Perianez, R.
2004-01-01
Three kinetic models for adsorption/release of 137Cs between water and sediments have been tested when included in a previously validated dispersion model of the English Channel. Radionuclides are released to the Channel from the La Hague nuclear fuel reprocessing plant (France). The kinetic models are a 1-step model consisting of a single reversible reaction, a 2-step model consisting of two consecutive reversible reactions, and an irreversible model consisting of three parallel reactions: two reversible and one irreversible. The models have been tested under three typical situations that correspond to the source terms that can generally be found: instantaneous release, continuous release, and redissolution of radionuclides from contaminated sediments. Differences between the models become more evident when contact times between water and sediments are larger (continuous release) and in the case of redissolution from sediments. Time scales for the redissolution process are rather different between the three models. The 1-step model produces a redissolution that is too fast when compared with experimental evidence. The irreversible model requires that saturation effects of the irreversible phase are included. Probably, the 2-step model represents the best compromise between simplicity and level of detail in the description of sorption/release processes.
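The 1-step model is a single reversible water-sediment exchange, Cw ⇌ Cs, with forward rate k1 and backward rate k2. A minimal sketch of its numerical solution for an instantaneous release (the rate constants and step sizes are illustrative, not the paper's calibrated values):

```python
import numpy as np

def one_step_kinetics(cw0, cs0, k1, k2, dt, steps):
    """Explicit-Euler integration of the reversible exchange
    dCw/dt = -k1*Cw + k2*Cs,  dCs/dt = +k1*Cw - k2*Cs."""
    cw, cs = cw0, cs0
    out = [(cw, cs)]
    for _ in range(steps):
        flux = k1 * cw - k2 * cs   # net water -> sediment transfer
        cw -= flux * dt
        cs += flux * dt
        out.append((cw, cs))
    return np.array(out)

# Instantaneous release into clean water over clean sediment
traj = one_step_kinetics(cw0=1.0, cs0=0.0, k1=0.05, k2=0.01, dt=0.1, steps=5000)
cw_end, cs_end = traj[-1]
# At equilibrium the flux vanishes, so k1*Cw = k2*Cs
print(round(cs_end / cw_end, 2))  # → 5.0 (= k1/k2)
```

The single relaxation time 1/(k1 + k2) of this model is exactly why its redissolution is "too fast": the 2-step variant adds a slower second compartment and hence a second, longer time scale.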
DEFF Research Database (Denmark)
Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan
2014-01-01
This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have thus far been investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...
Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.
Directory of Open Access Journals (Sweden)
Umair Khalil
Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases; however, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, though the best performance among them is achieved with a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which size distortion is reduced. The properties of estimates of ESTAR models are investigated when errors are assumed non-normal. We compare the results obtained through nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
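The Kapetanios (KSS) statistic is the t-ratio on δ in the auxiliary regression Δy_t = δ·y_{t-1}³ + ε_t, rejecting the unit-root null for large negative values against nonstandard critical values (omitted here). A minimal sketch on simulated series (the ESTAR recursion and all parameters are illustrative):

```python
import numpy as np

def kss_statistic(y):
    """t-statistic on delta in dy_t = delta * y_{t-1}**3 + e_t."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    x = y[:-1] ** 3
    delta = np.dot(x, dy) / np.dot(x, x)
    resid = dy - delta * x
    s2 = np.dot(resid, resid) / (len(dy) - 1)
    se = np.sqrt(s2 / np.dot(x, x))
    return delta / se

rng = np.random.default_rng(3)
e = rng.normal(size=500)
random_walk = np.cumsum(e)                      # unit-root null
estar = np.zeros(500)                           # stationary ESTAR alternative
for t in range(1, 500):
    g = 1.0 - np.exp(-0.5 * estar[t - 1] ** 2)  # smooth transition weight
    estar[t] = estar[t - 1] * (1.0 - 0.5 * g) + e[t]
print(round(kss_statistic(random_walk), 2), round(kss_statistic(estar), 2))
```

The stationary ESTAR series produces a far more negative statistic than the random walk; the paper's concern is how the null distribution of this t-ratio deforms under heteroscedastic or heavy-tailed errors.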
Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.
2012-01-01
This paper reports the use of an internationally recognized validation and diagnostics procedure to test the fidelity of a simplified calculation method. The case study is the simplified model for calculation of energy performance of building envelopes, introduced by the Brazilian regulation for
International Nuclear Information System (INIS)
Glasbergen, P.
1992-01-01
INTRAVAL is an international coordinated research program for predicting the potential radionuclide migration in the geosphere with the use of mathematical models. Such models are used to help assess the long-term safety of radioactive waste disposal systems. This report describes the findings of the project teams involved in test case 13 of INTRAVAL Phase 1. The test case is based on laboratory experiments dealing with flow and dispersion of brine in a porous medium. The purpose of these experiments was twofold: (i) to investigate some of the relevant processes in brine transport in porous media, and (ii) to provide sets of data to be used for (partial) validation of transport models. The experiments were carried out in a column packed with glass beads of diameter 0.40 to 0.52 mm. Salt water was injected through nine holes at the bottom and withdrawn through nine holes at the top. Initially a low salt concentration was used, which was then displaced with more highly concentrated salt water. The salt mass fraction was detected using an array of electrodes such that breakthrough curves were obtained at five different levels in the column. The report reviews a number of conceptual models and the corresponding numerical codes employed by the different modelling teams. The experiments on one- and two-dimensional flow and transport were simulated by various groups. The question underlying the experiments, namely the applicability of Fick's laws over the whole range of salt concentrations, could be addressed satisfactorily. All models could simulate the low-concentration experiment using a dispersivity value of 0.8 mm to 1.00 mm. However, using the same dispersivity value, it was not possible to simulate the high-concentration experiments. Another question intended to be studied by the experiments was the validity of Darcy's law at high concentrations. Two-dimensional experiments were carried out for this purpose. In practice, calculations were hampered by extremely high demand on
Comparison between the Lactation Model and the Test-Day Model ...
African Journals Online (AJOL)
ARC-IRENE
National Genetic Evaluation, using a Fixed Regression Test-day Model (TDM). This comparison is made for. Ayrshire, Guernsey, Holstein and Jersey cows participating in the South African Dairy Animal Improvement. Scheme. Specific differences between the two models were documented, with differences in statistical.
Verification of CFD model of plane jet used for smoke free zone separation in case of fire
Krajewski, Grzegorz; Suchy, Przemysław
2018-01-01
This paper presents basic information about the use of air curtains in fire safety as a barrier to heat and smoke. The mathematical model of an air curtain presented here allows estimation of the velocity of air at various points in space, including the velocity of air from an angled air curtain. The presented equations show how various parameters influence the performance of an air curtain. The authors have also performed tests on a real-scale model; the test results were used to verify the chosen turbulence model and boundary conditions. Results of the new studies are presented with regard to the performance of an air curtain in case of fire, and final remarks on its design are given.
On the importance of methods in hydrological modelling. Perspectives from a case study
Fenicia, Fabrizio; Kavetski, Dmitri
2017-04-01
The hydrological community generally appreciates that developing any non-trivial hydrological model requires a multitude of modelling choices. These choices may range from a (seemingly) straightforward application of mass conservation, to the (often) guesswork-like selection of constitutive functions, parameter values, etc. The application of a model itself requires a myriad of methodological choices - the selection of numerical solvers, objective functions for model calibration, validation approaches, performance metrics, etc. Not unreasonably, hydrologists embarking on ever more ambitious projects prioritize hydrological insight over the morass of methodological choices. Perhaps to emphasize "ideas" over "methods", some journals have even reduced the font size of the methodology sections of their articles. However, the very nature of modelling is that seemingly routine methodological choices can significantly affect the conclusions of case studies and investigations - making it dangerous to skimp on methodological details in an enthusiastic rush towards the next great hydrological idea. This talk shares modelling insights from a hydrological study of a 300 km2 catchment in Luxembourg, where the diversity of hydrograph dynamics observed at 10 locations begs the question of whether external forcings or internal catchment properties act as dominant controls on streamflow generation. The hydrological insights are fascinating (at least to us), but in this talk we emphasize the impact of modelling methodology on case study conclusions and recommendations. How did we construct our prior set of hydrological model hypotheses? What numerical solver was implemented, and why was an objective function based on Bayesian theory deployed? And what would have happened had we omitted model cross-validation, or not used a systematic hypothesis testing approach?
Munk, Max M
1926-01-01
This report contains the results of a series of tests with three wing models. By changing the section of one of the models and painting the surface of another, the number of models tested was increased to five. The tests were made in order to obtain some general information on the air forces on wing sections at a high Reynolds number and in particular to make sure that the Reynolds number is really the important factor, and not other things like the roughness of the surface and the sharpness of the trailing edge. The few tests described in this report seem to indicate that the air forces at a high Reynolds number are not equivalent to respective air forces at a low Reynolds number (as in an ordinary atmospheric wind tunnel). The drag appears smaller at a high Reynolds number and the maximum lift is increased in some cases. The roughness of the surface and the sharpness of the trailing edge do not materially change the results, so that we feel confident that tests with systematic series of different wing sections will bring consistent results, important and highly useful to the designer.
Directory of Open Access Journals (Sweden)
Abdalla Ahmed Abdel-Ghaly
2016-06-01
This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness of fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull, and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that the method performs well in the case of the exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
A Diagnostic Model for Dementia in Clinical Practice-Case Methodology Assisting Dementia Diagnosis.
Londos, Elisabet
2015-04-02
Dementia diagnosis is important for many different reasons: firstly, to separate dementia (major neurocognitive disorder) from MCI (mild cognitive impairment, or mild neurocognitive disorder); secondly, to define the specific underlying brain disorder to aid treatment, prognosis, and decisions regarding care needs and assistance. Diagnosing dementias is a puzzle of different data pieces to be fitted together in the best possible way to reach a clinical diagnosis. Using a modified case methodology concept, risk factors affecting cognitive reserve and symptoms constituting the basis of the brain damage hypothesis can be visualized, balanced, and reflected against test results as well as structural and biochemical markers. The model's origin is the case method initially described at Harvard Business School, here modified to serve dementia diagnostics.
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
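For the 2-AFC protocol the Thurstonian link is explicit: the probability of a correct response is p_c = Φ(d′/√2), so inverting the link recovers d′ from the observed proportion correct, and the delta method gives its standard error. A stdlib-only sketch (the triangle, duo-trio and 3-AFC protocols each need their own psychometric function, not shown here):

```python
from statistics import NormalDist
from math import sqrt

norm = NormalDist()

def dprime_2afc(n_correct, n_trials):
    """Invert p_c = Phi(d'/sqrt(2)) for the 2-AFC discrimination test."""
    pc = n_correct / n_trials
    return sqrt(2) * norm.inv_cdf(pc)

def se_dprime_2afc(n_correct, n_trials):
    """Delta-method standard error: se(d') = se(p_c) / (d p_c / d d')."""
    pc = n_correct / n_trials
    se_pc = sqrt(pc * (1 - pc) / n_trials)
    slope = norm.pdf(norm.inv_cdf(pc)) / sqrt(2)  # derivative of the link
    return se_pc / slope

# 78 correct answers in 100 trials of a 2-AFC test (illustrative counts)
d = dprime_2afc(78, 100)
print(round(d, 2), round(se_dprime_2afc(78, 100), 2))
```

Framing this as a GLM, a binomial model with the inverse psychometric function as link, is precisely what makes d′ and its standard error drop out of standard software, which is the paper's point.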
Cooperative Testing of Uncontrollable Timed Systems
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao
2008-01-01
This paper deals with targeted testing of timed systems with uncontrollable behavior. The testing activity is viewed as a game between the tester and the system under test (SUT) towards a given test purpose. The SUT is modeled as a Timed Game Automaton and the test purpose is specified as a Timed CTL formula. We can employ the timed game solver UPPAAL-TIGA to check whether the test purpose is true w.r.t. the model and, if so, to generate a winning strategy and use it for black-box conformance testing. Specifically, we show that in case the checking yields a negative result, we can still test the SUT against the test purpose as long as the SUT reacts to our moves in a cooperative style. We present an operational framework of cooperative winning strategy generation, test case derivation and execution. The test method is proved to be sound and complete. Preliminary experimental results indicate...
Freeman, John W.
2000-10-01
Rice University has developed a dynamic model of the Earth's radiation belts based on real-time data driven boundary conditions and full adiabaticity. The Radiation Belt Test Model (RBTM) successfully replicates the major features of storm-time behavior of energetic electrons: sudden commencement induced main phase dropout and recovery phase enhancement. It is the only known model to accomplish the latter. The RBTM shows the extent to which new energetic electrons introduced to the magnetosphere near the geostationary orbit drift inward due to relaxation of the magnetic field. It also shows the effects of substorm related rapid motion of magnetotail field lines for which the 3rd adiabatic invariant is violated. The radial extent of this violation is seen to be sharply delineated to a region outside of 5Re, although this distance is determined by the Hilmer-Voigt magnetic field model used by the RBTM. The RBTM appears to provide an excellent platform on which to build parameterized refinements to compensate for unknown acceleration processes inside 5Re where adiabaticity is seen to hold. Moreover, built within the framework of the MSFM, it offers the prospect of an operational forecast model for MeV electrons.
Improved animal models for testing gene therapy for atherosclerosis.
Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A
2014-04-01
Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long
DISCRETIZATION APPROACH USING RAY-TESTING MODEL IN PARTING LINE AND PARTING SURFACE GENERATION
Institute of Scientific and Technical Information of China (English)
HAN Jianwen; JIAN Bin; YAN Guangrong; LEI Yi
2007-01-01
Surface classification, 3D parting line, parting surface generation and demoldability analysis, which help select the optimal parting direction and optimal parting line, are involved in automatic cavity design based on the ray-testing model. A new ray-testing approach is presented to classify the part surfaces into core/cavity surfaces and undercut surfaces by automatically identifying the visibility of surfaces. A simple, direct and efficient algorithm to identify surface visibility is developed. The algorithm is robust and adapts to rather complicated geometry, so it is valuable in computer-aided mold design systems. To validate the efficiency of the approach, an experimental program is implemented. Case studies show that the approach is practical and valuable in automatic parting line and parting surface generation.
Maasha, Rumaasha; Towner, Robert L.
2012-01-01
High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic tests (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (orthogrid panel mounting fixture, between the reverb and anechoic chambers). Modal survey tests were used to test-validate the dynamic characteristics of FEMs used for acoustic test excitation. Modal survey testing and subsequent model correlation has validated the natural frequencies and mode shapes of the FEMs. The modal survey test results provide a basis for the analysis models used for acoustic loading response test and analysis comparisons
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing
A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
Dao, Ngocanh
2014-04-03
Assessing the goodness-of-fit (GOF) of intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
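The plug-in Monte Carlo GOF procedure criticized above is easy to illustrate in a toy one-dimensional analogue using quadrat counts instead of full spatial point patterns (a sketch under illustrative assumptions; the statistic, counts, and simulation sizes are invented, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_stat(counts):
    # index-of-dispersion GOF statistic: sample variance / mean of
    # quadrat counts (equal to 1 in expectation under a Poisson model)
    m = counts.mean()
    return counts.var(ddof=1) / m if m > 0 else 0.0

def mc_gof_pvalue(counts, n_sim=199):
    # "plug-in" Monte Carlo GOF test: estimate the Poisson intensity
    # from the data, simulate quadrat counts under the fitted model,
    # and compare the observed statistic against the simulated ones.
    # As the abstract notes, this plug-in version is biased; the
    # proposed remedy is a second, nested layer of simulations.
    lam_hat = counts.mean()
    obs = dispersion_stat(counts)
    sims = [dispersion_stat(rng.poisson(lam_hat, counts.size))
            for _ in range(n_sim)]
    # one-sided Monte Carlo p-value against overdispersion (clustering)
    return (1 + sum(s >= obs for s in sims)) / (n_sim + 1)
```

The nested correction the authors propose re-estimates the parameter inside each simulation, which removes the plug-in bias at the cost of a second simulation layer.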
A Model for Random Student Drug Testing
Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle
2011-01-01
The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…
van der Spek, Mijndert; Ramirez, Andrea; Faaij, André
2016-01-01
This article aims to improve uncertainty evaluation of process models by combining a quantitative uncertainty evaluation method (data validation) with a qualitative uncertainty evaluation method (pedigree analysis). The approach is tested on a case study of monoethanolamine based postcombustion CO2
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Bianchi, Marcus [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Neymark, Joel [J. Neymark & Associates, Golden, CO (United States)]
2010-08-01
This report documents the initial Phase 1 test process for testing the reliability of software models that predict retrofit energy savings of existing homes, including their associated calibration methods.
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
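The g-and-k family used in these simulations is convenient because it is defined through its quantile function, so sampling reduces to transforming standard normal draws; the sketch below pairs it with a generic permutation location test (the constant c=0.8 is the usual convention; the permutation test is a stand-in for the adaptive tests compared in the paper, not one of them):

```python
import numpy as np

rng = np.random.default_rng(1)

def gk_sample(n, a=0.0, b=1.0, g=0.0, k=0.0, c=0.8):
    # g-and-k sampling via the quantile function applied to standard
    # normal draws; g controls skewness, k controls tail heaviness
    z = rng.standard_normal(n)
    return a + b * (1 + c * np.tanh(g * z / 2)) * z * (1 + z**2) ** k

def perm_test_mean_diff(x, y, n_perm=999):
    # two-sample permutation test on the absolute difference of means
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = abs(pooled[:x.size].mean() - pooled[x.size:].mean())
        count += d >= obs
    return (1 + count) / (1 + n_perm)
```

With g = k = 0 the quantile function reduces to the identity on normal draws, so that setting recovers the standard normal baseline used in such simulation studies.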
1983-01-01
Water impact tests using a 12.5-inch diameter model representing an 8.56 percent scale of the Space Shuttle Solid Rocket Booster configuration were conducted. The two primary objectives of this SRB scale model water impact test program were: 1. Obtain cavity collapse applied pressure distributions for the 8.56 percent rigid body scale model FWC pressure magnitudes as a function of full-scale initial impact conditions at vertical velocities from 65 to 85 ft/sec, horizontal velocities from 0 to 45 ft/sec, and angles from -10 to +10 degrees. 2. Obtain rigid body applied pressures on the TVC pod and aft skirt internal stiffener rings at initial impact and cavity collapse loading events. In addition, nozzle loads were measured. Full-scale vertical velocities of 65 to 85 ft/sec, horizontal velocities of 0 to 45 ft/sec, and impact angles from -10 to +10 degrees were simulated.
An Approach for Generating Precipitation Input for Worst-Case Flood Modelling
Felder, Guido; Weingartner, Rolf
2015-04-01
There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards making methodically correct estimations of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. Out of these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions on the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
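The Monte Carlo search over spatio-temporal precipitation patterns described above can be sketched with a toy lumped model: fix the total event volume, randomize only its temporal distribution, route each pattern through a simple linear reservoir, and keep the pattern producing the highest peak discharge (the reservoir constant, Dirichlet sampling, and all numbers are illustrative assumptions, not the study's hydrological model):

```python
import numpy as np

rng = np.random.default_rng(2)

def runoff_hydrograph(precip, k=0.3):
    # toy linear-reservoir rainfall-runoff model: storage S receives
    # the precipitation and releases Q = k * S each time step
    s, q = 0.0, []
    for p in precip:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

def worst_case_peak(total_precip=100.0, n_steps=24, n_mc=500):
    # distribute a fixed precipitation volume over time in many random
    # temporal patterns; keep the pattern with the highest peak discharge
    best_peak, best_pattern = -1.0, None
    for _ in range(n_mc):
        w = rng.dirichlet(np.ones(n_steps))   # random temporal split
        pattern = total_precip * w
        peak = runoff_hydrograph(pattern).max()
        if peak > best_peak:
            best_peak, best_pattern = peak, pattern
    return best_peak, best_pattern
```

The physical system boundary the authors emphasize appears here as the fixed total volume: the search varies only how that volume is distributed, never how much falls.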
Terra, Sandra M
2007-01-01
This research seeks to determine whether there is adequate evidence-based justification for selection of one acute care case management model over another. Setting: acute inpatient hospital. This article presents a systematic review of published case management literature, classifying the results by level of evidence, in an effort to select an acute care case management model. Although no single case management model can be identified as preferred, this systematic review demonstrates that adequate evidence-based literature exists to acknowledge key factors driving the acute care model and to form a foundation for the efficacy of hospital case management practice. Distinctive aspects of case management frameworks can be used to guide the development of an acute care case management model. The study illustrates:
* The effectiveness of case management when there is direct patient contact by the case manager regardless of disease condition: not only does the quality of care increase but also length of stay (LOS) decreases, care is defragmented, and both patient and physician satisfaction can increase.
* The preferred case management models result in measurable outcomes that can directly relate to, and demonstrate alignment with, organizational strategy.
* Acute care management programs reduce cost and LOS, and improve outcomes.
* An integrated case management program that includes social workers, as well as nursing, is the most effective acute care management model.
* The successful case management model will recognize physicians, as well as patients, as valued customers with whom partnership can positively affect financial outcomes in terms of
Kluk, Michael Joseph; An, Yu; James, Philip; Coulter, David; Harris, David; Wu, Bai-Lin; Shen, Yiping
2011-05-01
The molecular testing options available for the diagnosis of genetic disorders are numerous and include a variety of different assay platforms. The consultative input of molecular pathologists and cytogeneticists, working closely with the ordering clinicians, is often important for definitive diagnosis. Herein, we describe two patients who had long histories of unexplained signs and symptoms with a high clinical suspicion of an underlying genetic etiology. Initial molecular testing in both cases was negative, but the application of high-resolution array comparative genomic hybridization technology lead to definitive diagnosis in both cases. We summarize the clinical findings and molecular testing in each case, discuss the differential diagnoses, and review the clinical and pathological findings of Mowat-Wilson syndrome. This report highlights the importance for those involved in molecular testing to know the nature of the underlying genetic abnormalities associated with the suspected diagnosis, to recognize the limitations of each testing platform, and to persistently pursue repeat testing using high-resolution technologies when indicated. This concept is applicable to both germline and somatic molecular genetic testing. Copyright © 2011 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Conducting field studies for testing pesticide leaching models
Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.
1990-01-01
A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
Flight Test Maneuvers for Efficient Aerodynamic Modeling
Morelli, Eugene A.
2011-01-01
Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
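The core idea of orthogonal multi-sine inputs can be sketched as follows: assign each control surface a disjoint, interleaved subset of harmonics of the maneuver length so the inputs are mutually orthogonal over the record. The Schroeder-type phase choice for peak reduction and the specific frequencies below are illustrative assumptions, not the flight-test values:

```python
import numpy as np

def orthogonal_multisine(n_inputs=3, T=10.0, fs=50.0, f_min=0.1, f_max=2.0):
    # each input sums cosines at a disjoint subset of harmonics of 1/T,
    # so any two inputs are orthogonal over the record length T
    t = np.arange(0, T, 1 / fs)
    k_all = np.arange(int(np.ceil(f_min * T)), int(f_max * T) + 1)
    signals = []
    for i in range(n_inputs):
        k = k_all[i::n_inputs]                 # interleaved, disjoint harmonics
        phases = np.pi * k**2 / len(k_all)     # Schroeder-like phases (peak reduction)
        u = sum(np.cos(2 * np.pi * kk * t / T + ph)
                for kk, ph in zip(k, phases))
        signals.append(u / np.max(np.abs(u)))  # normalize amplitude
    return t, np.array(signals)
```

Because the harmonic subsets are disjoint, the excitations can be applied to all surfaces simultaneously and later separated in the frequency domain, which is what makes the single combined maneuver efficient.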
Development of dynamic Bayesian models for web application test management
Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.
2018-03-01
The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
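A minimal two-slice sketch of such a model: one hidden "module health" variable evolving across test cycles, observed through pass/fail outcomes, with the belief updated by forward filtering. The states and all probabilities here are made-up illustrations, not the authors' network:

```python
import numpy as np

# two hidden states of a module under test: index 0 = OK, 1 = BUGGY
T = np.array([[0.9, 0.1],    # transition: OK    -> {OK, BUGGY}
              [0.2, 0.8]])   #             BUGGY -> {OK, BUGGY}
# observation model: P(outcome | state), columns: 0 = pass, 1 = fail
E = np.array([[0.95, 0.05],  # OK mostly passes
              [0.30, 0.70]]) # BUGGY mostly fails

def filter_belief(obs, prior=np.array([0.5, 0.5])):
    # forward filtering over time slices of the dynamic Bayesian model:
    # predict with T, then condition on each pass/fail observation
    b = prior
    for o in obs:          # o: 0 = pass, 1 = fail
        b = b @ T          # predict the next slice
        b = b * E[:, o]    # weight by the test-outcome likelihood
        b = b / b.sum()    # normalize to a probability vector
    return b

belief = filter_belief([1, 1, 1])  # three consecutive failing test runs
```

Repeated failures push the posterior mass onto the BUGGY state, which is the kind of analytical conclusion from accumulated test results that the model formalizes.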
Experimental tests for the Babu-Zee two-loop model of Majorana neutrino masses
International Nuclear Information System (INIS)
Sierra, Diego Aristizabal; Hirsch, Martin
2006-01-01
The smallness of the observed neutrino masses might have a radiative origin. Here we revisit a specific two-loop model of neutrino mass, independently proposed by Babu and Zee. We point out that current constraints from neutrino data can be used to derive strict lower limits on the branching ratio of flavour changing charged lepton decays, such as μ→eγ. Non-observation of Br(μ→eγ) at the level of 10^-13 would rule out singly charged scalar masses smaller than 590 GeV (5.04 TeV) in case of normal (inverse) neutrino mass hierarchy. Conversely, decay branching ratios of the non-standard scalars of the model can be fixed by the measured neutrino angles (and mass scale). Thus, if the scalars of the model are light enough to be produced at the LHC or ILC, measuring their decay properties would serve as a direct test of the model as the origin of neutrino masses
Liebsch, Christian; Zimmermann, Julia; Graf, Nicolas; Schilling, Christoph; Wilke, Hans-Joachim; Kienle, Annette
2018-01-01
each case between 140 N and 280 N, while abrupt failures of the specimen were observed only in vitro. In the mechanical testing model, no translational motion was detected at the screw entry point, while in vitro, translational motions of up to 2.5 mm in the inferior direction were found, leading to a slight shift of the centre of rotation towards the screw tip. Translational motions of the screw tip of about 5 mm in the superior direction were observed both in vitro and in the mechanical testing model; they were continuous in the mechanical testing model but increased rapidly after the initiation of screw loosening in vitro. The overall pedicle screw loosening characteristics were qualitatively and quantitatively similar between the mechanical testing model and the human vertebral specimens as long as there was no translation of the screw at the screw entrance point. Therefore, the novel mechanical testing model represents a promising method for the standardized testing of pedicle screws with regard to screw loosening in cases where the screw rotates around a point close to the screw entry point. Copyright © 2017 Elsevier Ltd. All rights reserved.
Testing one model of family role in the development of formal operations
Directory of Open Access Journals (Sweden)
Stepanović Ivana
2008-01-01
Full Text Available Contemporary authors emphasise the importance of viewing the family as a specific educational context and of studying its role in cognitive development. In this paper, we tested a model that postulates the way in which different forms of parental mediation and various elements of the family's cultural-supportive system affect the development of formal operations. We assumed that the education of parents and the financial status of the family form a wider context that influences the general dimensions of family interaction (emotional exchange and democratism), but also the cultural-pedagogical status of the family, and that their connection with formal operations is mediated by the above-mentioned variables. We expected the education of parents and the general dimensions of family interaction to influence the parental mediation characteristic of the development of formal operations, operationalised by the CSS scale, and to mediate, via this variable, the development of that form of thinking. A direct link with formal operations is postulated in the case of the variables of cultural-pedagogical status and the CSS scale. The sample consists of 305 pupils aged 15 to 19. Structural equation modeling was used for testing the postulated model. The results show that there is a direct influence of cultural-pedagogical status and the CSS scale on formal operations, but also of the mother's education. Some relations between other predictors were confirmed and some were not, which suggests that the proposed explanatory model must be revised to some degree.
Model tests for prestressed concrete pressure vessels
International Nuclear Information System (INIS)
Stoever, R.
1975-01-01
Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de
Design, modeling and testing of data converters
Kiaei, Sayfe; Xu, Fang
2014-01-01
This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for the design, modeling and testing of data converters is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.
International Nuclear Information System (INIS)
Zimmerman, D.A.; Gallegos, D.P.
1993-10-01
The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A ''Geostatistics Test Problem'' is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1
Pashayan, N; Powles, J; Brown, C; Duffy, S W
2006-01-01
This study aimed to estimate the extent of 'overdiagnosis' of prostate cancer attributable to prostate-specific antigen (PSA) testing in the Cambridge area between 1996 and 2002. Overdiagnosis was defined conceptually as detection of prostate cancer through PSA testing that otherwise would not have been diagnosed within the patient's lifetime. Records of PSA tests in Addenbrookes Hospital were linked to prostate cancer registrations by NHS number. Differences in prostate cancer registration rates between those receiving and not receiving prediagnosis PSA tests were calculated. The proportion of men aged 40 years or over with a prediagnosis PSA test increased from 1.4 to 5.2% from 1996 to 2002. The rate of diagnosis of prostate cancer was 45% higher (rate ratio (RR)=1.45, 95% confidence interval (CI) 1.02-2.07) in men with a history of prediagnosis PSA testing. Assuming average lead times of 5 to 10 years, 40-64% of the PSA-detected cases were estimated to be overdiagnosed. In East Anglia, from 1996 to 2000, a 1.6% excess of cases was associated with PSA testing (around a quarter of the 5.3% excess incidence observed in East Anglia over that period). Further quantification of the overdiagnosis will result from continued surveillance and from linkage of incidence to testing in other hospitals. PMID:16832417
International Nuclear Information System (INIS)
Khalaquzzaman, M.; Lee, Seung Jun; Cho, Jaehyun; Jung, Wondea
2016-01-01
Recently, the input-profile-based testing for safety critical software has been proposed for determining the number of test cases and quantifying the failure probability of the software. Input-profile of a reactor protection system (RPS) software is the input which causes activation of the system for emergency shutdown of a reactor. This paper presents a method to determine the input-profile of a RPS software which considers concurrent events/transients. A deviation of a process parameter value begins through an event and increases owing to the concurrent multi-events depending on the correlation of process parameters and severity of incidents. A case of reactor trip caused by feedwater loss and main steam line break is simulated and analyzed to determine the RPS software input-profile and estimate the number of test cases. The different sizes of the main steam line breaks (e.g., small, medium, large break) with total loss of feedwater supply are considered in constructing the input-profile. The uncertainties of the simulation related to the input-profile-based software testing are also included. Our study is expected to provide an option to determine test cases and quantification of RPS software failure probability. (author)
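A standard reliability-demonstration bound is often used alongside input-profile-based testing to set the number of test cases (this is the generic textbook formula, not necessarily the one used in this paper): if n tests drawn from the input profile all pass, the per-demand failure probability is below p_target at the stated confidence when n ≥ ln(1 − confidence) / ln(1 − p_target).

```python
import math

def required_tests(p_target, confidence=0.95):
    # number of failure-free tests, drawn randomly from the input
    # profile, needed to claim the failure probability per demand is
    # below p_target at the given confidence level
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_target))
```

For example, demonstrating a failure probability below 1e-4 at 95% confidence requires on the order of 30,000 failure-free profile-based tests, which is why input-profile construction and simulation effort dominate this kind of software quantification.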
Modelling of ultrasonic nondestructive testing in anisotropic materials - Rectangular crack
International Nuclear Information System (INIS)
Bostroem, A.
2001-12-01
Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry when searching for defects, in particular cracks. To develop and qualify testing procedures, extensive experimental work on test blocks is usually required. This can take a lot of time and therefore be quite costly. A good mathematical model of the testing situation is therefore of great value, as it can reduce the experimental work to a great extent. A good model can be very useful for parametric studies and as a pedagogical tool. A further use of a model is as a tool in the qualification of personnel. In anisotropic materials, e.g. austenitic welds, the propagation of ultrasound becomes much more complicated as compared to isotropic materials. Therefore, modelling is even more useful for anisotropic materials, and in particular it has greater pedagogical value. The present project has been concerned with a further development of the anisotropic capabilities of the computer program UTDefect, which has so far contained only a strip-like crack as the single defect type for anisotropic materials. To be more specific, the scattering by a rectangular crack in an anisotropic component has been studied and the result is adapted to include transmitting and receiving ultrasonic probes. The component under study is assumed to be anisotropic with arbitrary anisotropy. On the other hand, it is assumed to be homogeneous, and this in particular excludes most welds, where it is seldom an adequate approximation to assume homogeneity. The anisotropy may be arbitrarily oriented and the same is true of the rectangular crack. The crack may also be located near a backside of the component. To solve the scattering problem for the crack, an integral equation method is used. The probe model has been developed in an earlier project, and to compute the signal response in the receiving probe an electromechanical reciprocity argument is employed. As a rectangle is a truly 3D scatterer, the sizes of the
Test models for improving filtering with model errors through stochastic parameter estimation
International Nuclear Information System (INIS)
Gershgorin, B.; Harlim, J.; Majda, A.J.
2010-01-01
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
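The flavor of stochastic parameter estimation can be illustrated with a scalar damped signal whose damping coefficient is unknown and appended to the state of an extended Kalman filter. This is a generic augmented-state EKF sketch, not the exact SPEKF formulas, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, sigma_u, sigma_obs = 0.1, 0.5, 0.2
gamma_true = 1.5

# simulate the "true" damped signal, observed with noise
n = 400
u = np.zeros(n)
for k in range(1, n):
    u[k] = u[k-1] - gamma_true * u[k-1] * dt \
        + sigma_u * np.sqrt(dt) * rng.standard_normal()
obs = u + sigma_obs * rng.standard_normal(n)

# augmented state x = [u, gamma]; gamma is treated as a slowly
# varying parameter (random walk with small process noise)
x = np.array([0.0, 0.5])              # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([sigma_u**2 * dt, 1e-4])  # process noise (incl. parameter walk)
R = sigma_obs**2
H = np.array([[1.0, 0.0]])            # only u is observed

for z in obs:
    # predict: u' = u - gamma*u*dt, gamma' = gamma (linearized via F)
    F = np.array([[1 - x[1] * dt, -x[0] * dt],
                  [0.0,            1.0]])
    x = np.array([x[0] - x[1] * x[0] * dt, x[1]])
    P = F @ P @ F.T + Q
    # update with the noisy observation of u
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * (z - x[0])).ravel()
    P = P - K @ H @ P

gamma_est = x[1]  # filtered estimate of the damping parameter
```

SPEKF differs in that it propagates the mean and covariance of the augmented system with exact formulas rather than a linearization, which is what makes the stringent test models in the paper analytically tractable.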
Nonparametric tests for censored data
Bagdonavicus, Vilijandas; Nikulin, Mikhail
2013-01-01
This book concerns testing hypotheses in non-parametric models. Generalizations of many non-parametric tests to the case of censored and truncated data are considered. Most of the test results are proved and real applications are illustrated using examples. Theories and exercises are provided. The incorrect use of many tests applying most statistical software is highlighted and discussed.
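A classic example of the censored-data generalizations the book covers is the two-sample log-rank test; a compact sketch follows (the data layout of event time, event indicator, and group label is a common convention, and the example is hypothetical):

```python
import numpy as np

def logrank_statistic(time, event, group):
    # two-sample log-rank test for right-censored data: at each
    # distinct event time, compare the observed number of events in
    # group 1 with its expectation under the null of equal hazards
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                       # total at risk at t
        n1 = (at_risk & (group == 1)).sum()     # at risk in group 1
        d = ((time == t) & (event == 1)).sum()  # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp / np.sqrt(var)  # approx. standard normal under H0
```

Censored subjects contribute to the risk sets up to their censoring time but never to the event counts, which is exactly the mechanism that naive location tests cannot accommodate.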
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
Extending the Reach of Statistical Software Testing
National Research Council Canada - National Science Library
Weber, Robert
2004-01-01
.... In particular, as system complexity increases, the matrices required to generate test cases and perform model analysis can grow dramatically, even exponentially, overwhelming the test generation...
[Application of ARIMA model to predict number of malaria cases in China].
Hui-Yu, H; Hua-Qin, S; Shun-Xian, Z; Lin, A I; Yan, L U; Yu-Chun, C; Shi-Zhu, L I; Xue-Jiao, T; Chun-Li, Y; Wei, H U; Jia-Xu, C
2017-08-15
Objective To study the application of the autoregressive integrated moving average (ARIMA) model to predicting the monthly reported malaria cases in China, so as to provide a reference for prevention and control of malaria. Methods SPSS 24.0 software was used to construct ARIMA models based on the monthly reported malaria cases for the time series 2006-2015 and 2011-2015, respectively. The data on malaria cases from January to December 2016 were used as validation data to compare the accuracy of the two ARIMA models. Results The models of the monthly reported cases of malaria in China were ARIMA(2,1,1)(1,1,0)12 and ARIMA(1,0,0)(1,1,0)12, respectively. The comparison between the predictions of the two models and the actual malaria cases showed that the ARIMA model based on the data of 2011-2015 had a higher forecasting accuracy than the model based on the data of 2006-2015. Conclusion The establishment and prediction of an ARIMA model is a dynamic process, which needs to be adjusted continuously according to the accumulated data; in addition, major changes in the epidemic characteristics of infectious diseases must be considered.
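As a hedged illustration of the seasonal-differencing-plus-autoregression mechanics inside such ARIMA(p,d,q)(P,D,Q)12 models, the sketch below seasonally differences a synthetic monthly series and fits an AR(1) by least squares. It is a stand-in only: the study fitted full ARIMA models in SPSS, and the data here are invented.

```python
import numpy as np

# Stand-in for the ARIMA workflow: seasonally difference a monthly
# series (lag 12), fit an AR(1) to the differenced series by least
# squares, and forecast one step ahead. Data are synthetic.
rng = np.random.default_rng(2)
months = np.arange(120)
cases = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 120)

d = cases[12:] - cases[:-12]           # seasonal differencing: D=1, s=12
y, x = d[1:], d[:-1]                   # AR(1): y_t = phi * y_{t-1} + e_t
phi = float(np.dot(x, y) / np.dot(x, x))
forecast_diff = phi * d[-1]
forecast = cases[-12] + forecast_diff  # undo the seasonal difference
print(phi, forecast)                   # a stationary fit is expected here
```

The seasonal difference removes the annual cycle exactly, so the remaining autoregression only has to model month-to-month persistence; the same decomposition is what the (P,D,Q)12 seasonal part of the fitted models encodes.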
IDC Use Case Model Survey Version 1.1.
Energy Technology Data Exchange (ETDEWEB)
Harris, James Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model. Revisions: V1.0, 12/2014, SNL IDC Reengineering Project Team, initial delivery, authorized by M. Harris; V1.1, 2/2015, SNL IDC Reengineering Project Team, Iteration I2 review comments, authorized by M. Harris.
Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article
Gupta, Anju
2013-01-01
This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To keep the model size manageable, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First, it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed, with the results fed into the RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, the model correlation can be considered successful.
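The Latin hypercube step described above admits a compact sketch: each parameter's range is split into n equally probable strata, one draw is taken per stratum, and the strata are independently permuted per parameter, so n cases cover every decile of every parameter. The parameter names and ranges below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hand-rolled Latin hypercube sampler: for each parameter, one sample
# per stratum, strata randomly permuted. Parameter names/ranges are
# invented stand-ins for the weave material properties in the paper.
def latin_hypercube(n_samples, bounds, rng):
    """bounds: dict of name -> (low, high); returns dict of name -> array."""
    out = {}
    for name, (lo, hi) in bounds.items():
        strata = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
        out[name] = lo + strata * (hi - lo)
    return out

rng = np.random.default_rng(3)
cases = latin_hypercube(10, {
    "E_hoop_GPa": (5.0, 20.0),        # hypothetical weave modulus range
    "E_long_GPa": (5.0, 20.0),
    "shear_modulus_GPa": (0.5, 3.0),
    "poisson_ratio": (0.1, 0.4),
}, rng)

# every decile of each parameter range contains exactly one sample
deciles = ((cases["poisson_ratio"] - 0.1) / 0.3 * 10).astype(int)
print(sorted(deciles.tolist()))
```

Compared with random sampling, the stratification guarantees one-dimensional coverage of every parameter with only n runs, which is why it is a common front end to response surface fits.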
Case Management Reduces Length of Stay, Charges, and Testing in Emergency Department Frequent Users
Directory of Open Access Journals (Sweden)
Jameel Sughair
2018-02-01
Full Text Available Introduction: Case management is an effective, short-term means to reduce emergency department (ED visits in frequent users of the ED. This study sought to determine the effectiveness of case management on frequent ED users, in terms of reducing ED and hospital length of stay (LOS, accrued costs, and utilization of diagnostic tests. Methods: The study consisted of a retrospective chart review of ED and inpatient visits in our hospital’s ED case management program, comparing patient visits made in the one year prior to enrollment in the program, to the visits made in the one year after enrollment in the program. We examined the LOS, use of diagnostic testing, and monetary charges incurred by these patients one year prior and one year after enrollment into case management. Results: The study consisted of 158 patients in case management. Comparing the one year prior to enrollment to the one year after enrollment, ED visits decreased by 49%, inpatient admissions decreased by 39%, the use of computed tomography imaging decreased 41%, the use of ultrasound imaging decreased 52%, and the use of radiographs decreased 38%. LOS in the ED and for inpatient admissions decreased by 39%, reducing total LOS for these patients by 178 days. ED and hospital charges incurred by these patients decreased by 5.8 million dollars, a 41% reduction. All differences were statistically significant. Conclusion: Case management for frequent users of the ED is an effective method to reduce patient visits, the use of diagnostic testing, length of stay, and cost within our institution.
Khoharo, Haji Khan
2011-07-01
Seventy-six blood culture-positive typhoid cases and forty-eight controls were studied. The typhidot test was positive in 74 (97.36%) cases, with a sensitivity, specificity, and positive predictive value of 96%, 89.5%, and 95%, respectively, compared to the Widal test, which was positive in 56 (73.68%) cases, with a sensitivity, specificity, and positive predictive value of 72%, 87%, and 87%, respectively (P = 0.001). In the control group, seven (14.5%) cases tested positive for the Widal test and two (4.16%) for the typhidot (P = 0.001), yielding sensitivities and specificities of 63% and 83% for the Widal test and 85% and 97% for the typhidot test, respectively. We conclude that the Dot-EIA (enzyme immunoassay; typhidot) is a more sensitive and specific test than the Widal test, easier to perform, and more reliable, and that it is useful for guiding early therapy.
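Accuracy figures of this kind come from standard 2x2 confusion-table arithmetic, sketched below using the raw counts that appear in the abstract (74 of 76 cases and 2 of 48 controls positive on typhidot). The percentages computed this way differ slightly from the quoted 96%/89.5%, presumably because the published calculation uses different denominators or adjustments not stated in the abstract.

```python
# Sensitivity, specificity, and positive predictive value from a 2x2
# confusion table. Counts below are the raw typhidot counts given in
# the abstract (74/76 cases positive, 2/48 controls positive).
def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV) for a diagnostic test."""
    sensitivity = tp / (tp + fn)   # true positives among the diseased
    specificity = tn / (tn + fp)   # true negatives among the healthy
    ppv = tp / (tp + fp)           # diseased among the test-positives
    return sensitivity, specificity, ppv

sens, spec, ppv = diagnostic_metrics(tp=74, fp=2, fn=2, tn=46)
print(round(sens, 3), round(spec, 3), round(ppv, 3))  # 0.974 0.958 0.974
```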
International Nuclear Information System (INIS)
Brankov, Jovan G
2013-01-01
The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone
Brankov, Jovan G
2013-10-21
The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone
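The train-test argument in the two records above can be sketched with a toy channelized Hotelling observer: the template is estimated from training channel outputs only, internal noise is added to the scalar channel responses to model human inefficiency, and detectability is measured on held-out data. The channel count, signal profile, and noise levels below are all invented for illustration.

```python
import numpy as np

# Toy CHO with an additive internal-noise model, evaluated train/test.
# Class statistics, channel count, and internal noise level are
# hypothetical; real CHO studies use channelized image data.
rng = np.random.default_rng(4)
n_ch, n_train, n_test = 6, 500, 500
signal = np.linspace(1.0, 0.2, n_ch)          # hypothetical channel signal

def draw(n, with_signal):
    x = rng.normal(size=(n, n_ch))            # channel outputs, unit covariance
    return x + signal if with_signal else x

def cho_snr(template, absent, present, internal_sd=0.0):
    # internal noise: extra variance added to the scalar test statistics
    t0 = absent @ template + internal_sd * rng.normal(size=len(absent))
    t1 = present @ template + internal_sd * rng.normal(size=len(present))
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))

# train: estimate the Hotelling template from training data only
a_tr, p_tr = draw(n_train, False), draw(n_train, True)
S = 0.5 * (np.cov(a_tr.T) + np.cov(p_tr.T))
w = np.linalg.solve(S, p_tr.mean(0) - a_tr.mean(0))

# test: held-out data, with and without internal noise
a_te, p_te = draw(n_test, False), draw(n_test, True)
snr_ideal = cho_snr(w, a_te, p_te)
snr_noisy = cho_snr(w, a_te, p_te, internal_sd=2.0)
print(snr_ideal, snr_noisy)   # internal noise lowers detectability
```

Tuning `internal_sd` on one dataset and reporting performance on the same dataset is exactly the training-on-test pitfall the abstract warns against; the split above keeps the two roles separate.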
Automated model-based testing of hybrid systems
Osch, van M.P.W.J.
2009-01-01
In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to
Unstandardized Measures: A Cross-Case Analysis of Test Prep in Two Urban
Kesler, Ted
2013-01-01
This article presents a cross-case analysis of two fourth-grade teachers' instruction while preparing their students for an English language arts test. Both teachers taught in high-needs urban public schools and were
A test for the parameters of multiple linear regression models ...
African Journals Online (AJOL)
A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...
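For contrast, the classical F-test that the abstract's robust procedure is compared against can be written down directly; it tests the null hypothesis that all slope coefficients are simultaneously zero. The data below are synthetic, and the test's validity rests on exactly the homogeneity-of-variance and no-serial-correlation assumptions the paper relaxes.

```python
import numpy as np
from scipy import stats

# Classical overall F-test for a multiple linear regression:
# H0: all slope coefficients are zero. Synthetic data for illustration.
rng = np.random.default_rng(5)
n, k = 100, 3
X = rng.normal(size=(n, k))
y = 2.0 + X @ np.array([1.0, 0.0, -0.5]) + rng.normal(0, 1.0, n)

Xd = np.column_stack([np.ones(n), X])            # design with intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
ssr = np.sum((Xd @ beta - y.mean()) ** 2)        # regression sum of squares
sse = np.sum(resid ** 2)                         # residual sum of squares
F = (ssr / k) / (sse / (n - k - 1))
p = stats.f.sf(F, k, n - k - 1)                  # upper-tail F probability
print(F, p)   # with two nonzero true slopes, H0 is rejected here
```

Heteroscedastic or serially correlated errors inflate or deflate this F statistic's reference distribution, which is the motivation for the robust alternative developed in the paper.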
Direct cointegration testing in error-correction models
F.R. Kleibergen (Frank); H.K. van Dijk (Herman)
1994-01-01
An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The
Performance testing of the sediment-contaminant transport model, SERATRA, at different rivers
International Nuclear Information System (INIS)
Onishi, Y.; Yabusaki, S.B.; Kincaid, C.T.
1982-04-01
Mathematical models of sediment-contaminant migration in surface water must account for transport, intermedia transfer, decay and degradation, and transformation processes. The unsteady, two-dimensional sediment-contaminant transport code SERATRA (Onishi, Schreiber and Codell 1980) includes these mechanisms. To assess the accuracy of SERATRA in simulating sediment-contaminant transport and fate processes, the code was tested against one-dimensional analytical solutions, checked for its mass balance, and applied to field sites. The field application cases ranged from relatively simple, steady conditions to unsteady, nonuniform conditions for large, intermediate, and small rivers. It was found that SERATRA is capable of simulating sediment-contaminant transport under a wide range of conditions.
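One of the basic checks described above, the mass-balance check, can be illustrated in miniature with a 1-D advection-diffusion solver on a closed (periodic) domain: a conservative discretization must preserve the total contaminant mass to round-off. Grid sizes and coefficients below are illustrative and unrelated to SERATRA's.

```python
import numpy as np

# Explicit upwind advection + central diffusion of a contaminant slug
# on a periodic 1-D grid; mass should be conserved to round-off.
# Parameters are illustrative, chosen to satisfy the CFL conditions
# (u*dt/dx = 0.2 <= 1, D*dt/dx^2 = 0.1 <= 0.5).
nx, dx, dt = 200, 1.0, 0.2
u_vel, D = 1.0, 0.5                      # advection speed, diffusivity
c = np.zeros(nx)
c[40:60] = 1.0                           # initial contaminant slug
mass0 = c.sum() * dx

for _ in range(200):
    adv = -u_vel * (c - np.roll(c, 1)) / dx                      # upwind
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central
    c = c + dt * (adv + dif)

mass = c.sum() * dx
print(mass0, mass)   # conservative scheme on a closed domain
```

Testing against analytical solutions, as the abstract describes, works the same way: the discrete profile is compared with the exact advected-diffused Gaussian at matching times.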
Energy Technology Data Exchange (ETDEWEB)
Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Yang, Zhaoqing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Wang, Taiping [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Coastal Sciences Division; Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies; Dallman, Ann Renee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Water Power Technologies
2016-03-01
A wave model test bed is established to benchmark, test and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101Ed. 1.0 ©2015. Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically what source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
Charge asymmetry in e+e- → γ + hadrons: New tests of the quark-parton model and fractional charge
International Nuclear Information System (INIS)
Brodsky, S.J.; Carlson, C.E.; Suaya, R.
1976-01-01
We consider the process e+e- → γ + h + X, where h is a hadron and γ is a hard photon, and show how it can be used to test the quark-parton model. Detailed formulas are given for the cross sections, which in the quark-parton model are products of cross sections for e+e- → γμμ-bar and quark breakup functions. We focus on the asymmetry between h and h-bar production, and display sum rules and ratio tests which measure the quark charge, the quark Compton amplitude, and the large-x behavior of the quark breakup function. The asymmetry is calculated for the muon case, and is about 100% for the forward direction.
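Schematically, the factorization the abstract refers to can be sketched as follows (a sketch only, suppressing angular structure and QED factors): the hadron-inclusive cross section is a charge-weighted sum over quark flavors of the pointlike cross section times a quark breakup (fragmentation) function,

```latex
\frac{d\sigma}{dz}\,\bigl(e^+e^- \to \gamma\, h\, X\bigr)
  \;\sim\; \sum_q e_q^2\,
  \sigma\!\bigl(e^+e^- \to \gamma\, q\bar{q}\bigr)\,
  D_q^{h}(z).
```

The h versus h-bar asymmetry then isolates contributions that are odd under q ↔ q-bar, which carry odd powers of the quark charge e_q, making it a direct probe of fractional charge.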
Model-Driven Test Generation of Distributed Systems
Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin
2012-01-01
This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, spe cifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distrib uted systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.
Testing and Modeling of Machine Properties in Resistance Welding
DEFF Research Database (Denmark)
Wu, Pei
The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze ... as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach to characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle ...
A test case of the deformation rate analysis (DRA) stress measurement method
Energy Technology Data Exchange (ETDEWEB)
Dight, P.; Hsieh, A. [Australian Centre for Geomechanics, Univ. of WA, Crawley (Australia); Johansson, E. [Saanio and Riekkola Oy, Helsinki (Finland); Hudson, J.A. [Rock Engineering Consultants (United Kingdom); Kemppainen, K.
2012-01-15
As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the first two authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Also, additional information that would help in such DRA testing and associated analysis is explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as experienced by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth and the stress field becomes more stable at the ~350 m at which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely
A test case of the deformation rate analysis (DRA) stress measurement method
International Nuclear Information System (INIS)
Dight, P.; Hsieh, A.; Johansson, E.; Hudson, J.A.; Kemppainen, K.
2012-01-01
As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the two first authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Also, additional information that would help in such DRA testing and associated analysis is explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as experienced by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth and the stress field becomes more stable at the ∼ 350 m at which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely different to the NW-SE regional
Accuracy tests of the tessellated SLBM model
International Nuclear Information System (INIS)
Ramirez, A L; Myers, S C
2007-01-01
We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done layer/depth-by-layer/depth and layer/velocity-by-layer/velocity. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in the file "unified( ) iasp.grid". This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness, and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle-crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities, and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers, and the upper-mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust, and upper mantle) is shown on a separate figure. The upper-mantle P velocity and gradient distributions are shown on Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. The left and center images in the bottom row of the figures are
[Study on the ARIMA model application to predict echinococcosis cases in China].
En-Li, Tan; Zheng-Feng, Wang; Wen-Ce, Zhou; Shi-Zhu, Li; Yan, Lu; Lin, Ai; Yu-Chun, Cai; Xue-Jiao, Teng; Shun-Xian, Zhang; Zhi-Sheng, Dang; Chun-Li, Yang; Jia-Xu, Chen; Wei, Hu; Xiao-Nong, Zhou; Li-Guang, Tian
2018-02-26
To predict the monthly reported echinococcosis cases in China with the autoregressive integrated moving average (ARIMA) model, so as to provide a reference for prevention and control of echinococcosis. SPSS 24.0 software was used to construct ARIMA models based on the monthly reported echinococcosis cases for the time series 2007-2015 and 2007-2014, respectively, and the accuracies of the two ARIMA models were compared. The model based on the monthly reported cases of echinococcosis in China from 2007 to 2015 was ARIMA(1,0,0)(1,1,0)12, the relative error between reported and predicted cases was -13.97%, AR(1) = 0.367 (t = 3.816, P ARIMA(1,0,0)(1,0,1)12, the relative error between reported and predicted cases was 0.56%, AR(1) = 0.413 (t = 4.244, P ARIMA models as for the same infectious diseases. It remains to be further verified that the more data are accumulated, the shorter the prediction horizon and the smaller the average relative error. The establishment and prediction of an ARIMA model is a dynamic process that needs to be adjusted and optimized continuously according to the accumulated data; meanwhile, full consideration should be given to the intensity of infectious disease reporting work (such as disease censuses and special investigations).
Model of ASTM Flammability Test in Microgravity: Iron Rods
Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)
2000-01-01
There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by conducting tests using the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test has been reduced to fundamental kinetic and dynamic system parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod and is detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations which are solved using MATLAB. This analysis confirms that a first-order rate, in oxygen concentration, is consistent for the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent with this model.
Forecasting dengue hemorrhagic fever cases using ARIMA model: a case study in Asahan district
Siregar, Fazidah A.; Makmur, Tri; Saprin, S.
2018-01-01
Time series analysis has been increasingly used to forecast the number of dengue hemorrhagic fever cases in many studies. Since no vaccine exists and public health infrastructure is poor, predicting the occurrence of dengue hemorrhagic fever (DHF) is crucial. This study was conducted to determine the trend of and forecast the occurrence of DHF in Asahan district, North Sumatera Province. Monthly reported dengue cases for the years 2012-2016 were obtained from the district health offices. A time series analysis was conducted by autoregressive integrated moving average (ARIMA) modeling to forecast the occurrence of DHF. The results demonstrated that the reported DHF cases showed a seasonal variation. The SARIMA(1,0,0)(0,1,1)12 model was the best model and adequate for the data. The SARIMA model for DHF is necessary and could be applied to predict the incidence of DHF in Asahan district and assist in designing public health measures to prevent and control the disease.
Sudan Acharya, Madhu
2010-05-01
of deformation and failure and provides benchmarks useful for verification of numerical models. In this case, the test was mainly carried out to verify the stability analysis and deformation characteristics of a bamboo crib wall. Crib wall models of dimensions 37x13x10 cm and 37x13x14 cm were placed inside a Plexiglas box of internal dimensions 42.5x42.5x30 cm, and a slope was formed leaving a space of about 10 cm in the front. The model crib wall tests were all performed at 40-70 times earth's gravity. This means that the 5 mm diameter bamboo rods used in the model represent a prototype diameter of 20-35 cm. The horizontal and vertical displacements were measured with the help of three displacement sensors fixed horizontally and one sensor fixed vertically at the top of the model crib wall. Altogether, nine tests were carried out with varying model parameters. Standard medium sand and coarse sand were used as fill material in the testing. Two wall height variations and three slope variations were used in the testing. The test model was constructed either compacted or uncompacted. Compaction in the model was carried out by hand to about 90% of the Proctor density. Three slope inclinations were used: for the flat slope the slope angle was less than 25°, for the steep slope it was 25°-35°, and for the extremely steep slope it was > 35°. The test results and conclusions are presented in this paper.
Tests of the single-pion exchange model
International Nuclear Information System (INIS)
Treiman, S.B.; Yang, C.N.
1983-01-01
The single-pion exchange model (SPEM) of high-energy particle reactions provides an attractively simple picture of seemingly complex processes and has accordingly been much discussed in recent times. The purpose of this note is to call attention to the possibility of subjecting the model to certain tests precisely in the domain where the model stands the best chance of making sense
Scalable Power-Component Models for Concept Testing
2011-08-17
motor speed can be either positive or negative, depending on the propelling or regenerative braking scenario. The simulation provides three... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... Presented at the Systems Engineering and Technology Symposium, Modeling & Simulation, Testing and Validation (MSTV) Mini-Symposium, August 9-11, Dearborn, Michigan.
Use Case Modelling of Bingham University Library Management ...
African Journals Online (AJOL)
With the advent of object-oriented design, the Unified Modelling Language (UML) has become prominent in the software industry. Software is better modelled with the use of UML diagrams like use cases, which provide a better flow of logic and a comprehensive summary of the whole software system in a single illustration.
DKIST enclosure modeling and verification during factory assembly and testing
Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka
2014-08-01
The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.
TESTING MAGNETIC FIELD MODELS FOR THE CLASS 0 PROTOSTAR L1527
International Nuclear Information System (INIS)
Davidson, J. A.; Li, Z.-Y.; Hull, C. L. H.; Plambeck, R. L.; Kwon, W.; Crutcher, R. M.; Looney, L. W.; Novak, G.; Chapman, N. L.; Matthews, B. C.; Stephens, I. W.; Tobin, J. J.; Jones, T. J.
2014-01-01
For the Class 0 protostar L1527 we compare 131 polarization vectors from SCUPOL/JCMT, SHARP/CSO, and TADPOL/CARMA observations with the corresponding model polarization vectors of four ideal-MHD, nonturbulent, cloud core collapse models. These four models differ by their initial magnetic fields before collapse; two initially have aligned fields (strong and weak) and two initially have orthogonal fields (strong and weak) with respect to the rotation axis of the L1527 core. Only the initial weak orthogonal field model produces the observed circumstellar disk within L1527. This is a characteristic of nearly all ideal-MHD, nonturbulent, core collapse models. In this paper we test whether this weak orthogonal model also has the best agreement between its magnetic field structure and that inferred from the polarimetry observations of L1527. We found that this is not the case; based on the polarimetry observations, the most favored model of the four is the weak aligned model. However, this model does not produce a circumstellar disk, so our result implies that a nonturbulent, ideal-MHD global collapse model probably does not represent the core collapse that has occurred in L1527. Our study also illustrates the importance of using polarization vectors covering a large area of a cloud core to determine the initial magnetic field orientation before collapse; the inner core magnetic field structure can be highly altered by a collapse, and so measurements from this region alone can give unreliable estimates of the initial field configuration before collapse
TESTING MAGNETIC FIELD MODELS FOR THE CLASS 0 PROTOSTAR L1527
Energy Technology Data Exchange (ETDEWEB)
Davidson, J. A. [University of Western Australia, School of Physics, 35 Stirling Highway, Crawley, WA 6009 (Australia); Li, Z.-Y. [Astronomy Department, University of Virginia, Charlottesville, VA 22904 (United States); Hull, C. L. H.; Plambeck, R. L. [Astronomy Department and Radio Astronomy Laboratory, University of California, Berkeley, CA 94720-3411 (United States); Kwon, W. [SRON Netherlands Institute for Space Research, Landleven 12, 9747 AD, Groningen (Netherlands); Crutcher, R. M.; Looney, L. W. [Department of Astronomy, University of Illinois, 1002 West Green Street, Urbana, IL 61801 (United States); Novak, G.; Chapman, N. L. [Northwestern University, Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and the Department of Physics and Astronomy, 2145 Sheridan Road, Evanston, IL 60208 (United States); Matthews, B. C. [Herzberg Astronomy and Astrophysics, National Research Council of Canada, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada); Stephens, I. W. [Boston University, Institute for Astrophysical Research, Boston, MA 02215 (United States); Tobin, J. J. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Jones, T. J., E-mail: jackie.davidson@uwa.edu.au [University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)
2014-12-20
For the Class 0 protostar L1527 we compare 131 polarization vectors from SCUPOL/JCMT, SHARP/CSO, and TADPOL/CARMA observations with the corresponding model polarization vectors of four ideal-MHD, nonturbulent, cloud core collapse models. These four models differ by their initial magnetic fields before collapse; two initially have aligned fields (strong and weak) and two initially have orthogonal fields (strong and weak) with respect to the rotation axis of the L1527 core. Only the initial weak orthogonal field model produces the observed circumstellar disk within L1527. This is a characteristic of nearly all ideal-MHD, nonturbulent, core collapse models. In this paper we test whether this weak orthogonal model also has the best agreement between its magnetic field structure and that inferred from the polarimetry observations of L1527. We found that this is not the case; based on the polarimetry observations, the most favored model of the four is the weak aligned model. However, this model does not produce a circumstellar disk, so our result implies that a nonturbulent, ideal-MHD global collapse model probably does not represent the core collapse that has occurred in L1527. Our study also illustrates the importance of using polarization vectors covering a large area of a cloud core to determine the initial magnetic field orientation before collapse; the inner core magnetic field structure can be highly altered by a collapse, and so measurements from this region alone can give unreliable estimates of the initial field configuration before collapse.
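The vector-by-vector comparison described above can be sketched numerically. A minimal sketch of one plausible goodness-of-fit measure, the mean absolute angular deviation between observed and model polarization position angles, folding in the 180-degree ambiguity of polarization vectors; all angle values below are hypothetical illustrations, not the paper's data or its actual statistic:

```python
def angle_diff(obs_deg, model_deg):
    """Smallest difference between two polarization position angles,
    accounting for the 180-degree ambiguity of polarization vectors."""
    d = abs(obs_deg - model_deg) % 180.0
    return min(d, 180.0 - d)

def mean_absolute_deviation(observed, modeled):
    """Mean absolute angular deviation (degrees) over all vector pairs."""
    return sum(angle_diff(o, m) for o, m in zip(observed, modeled)) / len(observed)

# Hypothetical position angles (degrees) for illustration only
observed = [10.0, 95.0, 170.0, 45.0]
weak_aligned = [12.0, 90.0, 175.0, 50.0]
weak_orthogonal = [40.0, 120.0, 140.0, 80.0]

print(mean_absolute_deviation(observed, weak_aligned))     # smaller deviation -> better fit
print(mean_absolute_deviation(observed, weak_orthogonal))
```

The modulo-180 fold matters: position angles of 5 and 175 degrees differ by only 10 degrees, not 170.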
Modelling and Testing of Friction in Forging
DEFF Research Database (Denmark)
Bay, Niels
2007-01-01
Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...
FUMEX cases 1, 2, and 3 calculated pre-test and post-test results
Energy Technology Data Exchange (ETDEWEB)
Stefanova, S; Vitkova, M; Passage, G; Manolova, M; Simeonova, V [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika; Scheglov, A; Proselkov, V [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation); Kharalampieva, Ts [Kombinat Atomna Energetika, Kozloduj (Bulgaria)
1994-12-31
Two versions (modified pre-test and modified post-test) of the PIN-micro code were used to analyse the fuel rod behaviour of three FUMEX experiments. The experience of applying the PIN-micro code, with its simple structure and old conception of steady-state operation, shows significant difficulties in treating complex processes like those in the FUMEX experiments. These difficulties were partially overcome through different model modifications and corrections based on special engineering estimations, and the results obtained as a whole do not seem unreasonable. The calculations have been performed by a group from two Bulgarian institutions in collaboration with specialists from the Kurchatov Research Center. 1 tab., 14 figs., 8 refs.
Năpăruş, Magdalena; Kuntner, Matjaž
2012-01-01
Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. Our model is a customizable GIS tool intended to predict current and future potential distributions of globally distributed terrestrial lineages. Its predictive
Directory of Open Access Journals (Sweden)
Magdalena Năpăruş
Full Text Available BACKGROUND: Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. METHODOLOGY/PRINCIPAL FINDINGS: We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. CONCLUSIONS: Our model is a customizable GIS tool intended to predict current and future potential
HSPF Modeling for Compliance and Enforcement: An Urban Case Study
Marshalonis, D.
2017-12-01
Stormwater runoff is one of the most significant challenges to water quality facing surface waters globally. In the United States, the Environmental Protection Agency (EPA) regulates stormwater flows through its National Pollutant Discharge Elimination System (NPDES) program permits. When egregious violations occur, EPA may develop its case and prove those violations through the legal dispute process. However, evidence in stormwater-related cases is ephemeral, difficult to collect due to unpredictable weather dynamics, and there are usually no witnesses. The work presented here illustrates an approach EPA takes for certain wet weather cases: introduce results from hydrologic and hydraulic models as evidence to meet legal burden of proof standards. The challenges and opportunities of using models in stormwater discharge modeling are highlighted.
The microelectronics and photonics test bed (MPTB) space, ground test and modeling experiments
International Nuclear Information System (INIS)
Campbell, A.
1999-01-01
This paper is an overview of the MPTB (microelectronics and photonics test bed) experiment, a combination of a space experiment, ground test and modeling programs looking at the response of advanced electronic and photonic technologies to the natural radiation environment of space. (author)
Matrix diffusion model. In situ tests using natural analogues
International Nuclear Information System (INIS)
Rasilainen, K.
1997-11-01
Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories
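The retarding mechanism described above can be illustrated with the textbook one-dimensional solution for diffusion from a water-bearing fracture into the rock matrix, C/C0 = erfc(x / (2*sqrt(Da*t))) with apparent diffusivity Da = De/R. This is a sketch under assumed parameter values, not the thesis's actual models or data:

```python
import math

def matrix_diffusion_profile(x_m, t_s, De, R):
    """Relative concentration C/C0 at depth x (m) into the rock matrix
    after time t (s), for 1-D diffusion with a constant concentration
    held at the fracture wall.

    De : effective diffusion coefficient (m^2/s)
    R  : matrix retardation factor (dimensionless)
    """
    Da = De / R  # apparent diffusivity: sorption slows the front
    return math.erfc(x_m / (2.0 * math.sqrt(Da * t_s)))

# Illustrative (assumed) parameters: De = 1e-13 m^2/s, R = 100,
# elapsed time 10,000 years
t = 1.0e4 * 365.25 * 24 * 3600
for x_mm in (1, 5, 10, 20):
    print(x_mm, "mm:", round(matrix_diffusion_profile(x_mm / 1000.0, t, 1e-13, 100.0), 3))
```

Even over such long time scales the profile decays within centimetres of the fracture wall, which is why natural analogues are valuable for testing the model in situ.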
Ballistic and Cyclic Rig Testing of Braided Composite Fan Case Structures
Watson, William R.; Roberts, Gary D.; Pereira, J. Michael; Braley, Michael S.
2015-01-01
FAA fan blade-out certification testing on turbofan engines occurs very late in an engine's development program and is very costly. It is of utmost importance to approach the FAA Certification engine test with a high degree of confidence that the containment structure will not only contain the high-energy debris, but that it will also withstand the cyclic loads that occur with engine spooldown and continued rotation as the non-running engine maintains a low rotor RPM due to forced airflow as the engine-out aircraft returns to an airport. Accurate rig testing is needed for predicting and understanding material behavior of the fan case structure during all phases of this fan blade-out event.
Kubinger, Klaus D.; Reif, Manuel; Yanagida, Takuya
2011-01-01
Item position effects provoke serious problems within adaptive testing. This is because different testees are necessarily presented with the same item at different presentation positions, as a consequence of which comparing their ability parameter estimations in the case of such effects would not at all be fair. In this article, a specific…
A dual memory theory of the testing effect.
Rickard, Timothy C; Pan, Steven C
2017-06-05
A new theoretical framework for the testing effect-the finding that retrieval practice is usually more effective for learning than are other strategies-is proposed, the empirically supported tenet of which is that separate memories form as a consequence of study and test events. A simplest case quantitative model is derived from that framework for the case of cued recall. With no free parameters, that model predicts both proportion correct in the test condition and the magnitude of the testing effect across 10 experiments conducted in our laboratory, experiments that varied with respect to material type, retention interval, and performance in the restudy condition. The model also provides the first quantitative accounts of (a) the testing effect as a function of performance in the restudy condition, (b) the upper bound magnitude of the testing effect, (c) the effect of correct answer feedback, (d) the testing effect as a function of retention interval for the cases of feedback and no feedback, and (e) the effect of prior learning method on subsequent learning through testing. Candidate accounts of several other core phenomena in the literature, including test-potentiated learning, recognition versus cued recall training effects, cued versus free recall final test effects, and other select transfer effects, are also proposed. Future prospects and relations to other theories are discussed.
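A simplest-case reading of the framework's central tenet, that study and test events form separate memories and recall succeeds if either one is retrieved, can be sketched as follows. The independence assumption and all probability values here are illustrative simplifications, not the paper's fitted zero-parameter model:

```python
def predicted_recall(p_study, p_test_memory):
    """Probability of final-test recall when a study memory and a
    separate test (retrieval) memory can each independently support
    retrieval. Independence is an illustrative assumption, not
    necessarily the paper's exact derivation."""
    return p_study + p_test_memory - p_study * p_test_memory

# Restudy condition: only the study memory is strengthened (hypothetical values)
restudy = predicted_recall(0.55, 0.0)
# Test condition: a weaker study memory plus a newly formed test memory
tested = predicted_recall(0.45, 0.35)

print(restudy, tested, round(tested - restudy, 4))  # positive gap = testing effect
```

The union structure captures one qualitative property the paper reports: the size of the testing effect depends on restudy-condition performance, since a strong study memory leaves less room for the test memory to add.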
Energy Technology Data Exchange (ETDEWEB)
Riederer, P.
2002-01-15
Room models, currently used for controller tests, assume the room air to be perfectly mixed. A new room model is developed, assuming non-homogeneous room conditions and distinguishing between different sensor positions. From measurements in real test rooms and detailed CFD simulations, a list of convective phenomena is obtained that has to be considered in the development of a model for a room equipped with different HVAC systems. The zonal modelling approach, which divides the room air into several sub-volumes, is chosen, since it is able to represent the important convective phenomena imposed on the HVAC system. The convective room model is divided into two parts: a zonal model representing the air at the occupant zone, and a second model providing the conditions at typical sensor positions. Using this approach, the comfort conditions at the occupant zone can be evaluated, as well as the impact of different sensor positions. The model is validated for a test room equipped with different HVAC systems. Sensitivity analysis is carried out on the main parameters of the model. Performance assessment and energy consumption are then compared for different sensor positions in a room equipped with different HVAC systems. The results are also compared with those obtained when a well-mixed model is used. A main conclusion of these tests is that the differences obtained when changing the position of the controller's sensor are a function of the HVAC system and controller type. The differences are generally small in terms of thermal comfort but significant in terms of overall energy consumption. For different HVAC systems, the cases are listed in which the use of a simplified model is not recommended. (author)
Cases as Shared Inquiry: A Dialogical Model of Teacher Preparation.
Harrington, Helen L.; Garrison, James W.
1992-01-01
A dialogical model is proposed for connecting theory to practice in teacher education by conceiving of cases from case-based pedagogy as problems that initiate shared inquiry. Cases with genuine cognitive and axiological content can initiate self-directed, student-centered inquiry while building democratic dialogical communities. (SLD)
Physical modelling and testing in environmental geotechnics
Energy Technology Data Exchange (ETDEWEB)
Garnier, J.; Thorel, L.; Haza, E. [Laboratoire Central des Ponts et Chaussees a Nantes, 44 - Nantes (France)
2000-07-01
The preservation of natural environment has become a major concern, which affects nowadays a wide range of professionals from local communities administrators to natural resources managers (water, wildlife, flora, etc) and, in the end, to the consumers that we all are. Although totally ignored some fifty years ago, environmental geotechnics has become an emergent area of study and research which borders on the traditional domains, with which the geo-technicians are confronted (soil and rock mechanics, engineering geology, natural and anthropogenic risk management). Dedicated to experimental approaches (in-situ investigations and tests, laboratory tests, small-scale model testing), the Symposium fits in with the geotechnical domains of environment and transport of soil pollutants. These proceedings report some progress of developments in measurement techniques and studies of transport of pollutants in saturated and unsaturated soils in order to improve our understanding of such phenomena within multiphase environments. Experimental investigations on decontamination and isolation methods for polluted soils are discussed. The intention is to assess the impact of in-situ and laboratory tests, as well as small-scale model testing, on engineering practice. One paper has been analyzed in INIS data base for its specific interest in nuclear industry.
Directory of Open Access Journals (Sweden)
Ping Yu
Full Text Available A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both models A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by the c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficients (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.
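The two reported statistics can be computed from first principles. A sketch of the c-statistic (discrimination) and Pearson's correlation (calibration of grouped predicted vs. observed rates) using only the standard library; all risks, outcomes, and rates below are hypothetical, not registry data:

```python
from itertools import product

def c_statistic(risks, outcomes):
    """Concordance (c) statistic: probability that a randomly chosen
    patient who died received a higher predicted risk than a randomly
    chosen survivor; ties count as 0.5."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p, n in product(pos, neg))
    return concordant / (len(pos) * len(neg))

def pearson_r(xs, ys):
    """Pearson correlation between predicted and observed group rates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical predicted 30-day mortality risks and observed outcomes (1 = died)
risks = [0.05, 0.10, 0.20, 0.40, 0.70, 0.90]
outcomes = [0, 0, 0, 1, 0, 1]
print(round(c_statistic(risks, outcomes), 3))

# Calibration: predicted vs. observed mortality rates per risk group (hypothetical)
pred_rates = [0.05, 0.15, 0.35, 0.60]
obs_rates = [0.04, 0.17, 0.33, 0.62]
print(round(pearson_r(pred_rates, obs_rates), 3))
```

A c-statistic of 0.5 is chance-level discrimination and 1.0 is perfect rank ordering, which is why values of 0.80-0.82 are considered good.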
Model-Based GUI Testing Using Uppaal at Novo Nordisk
DEFF Research Database (Denmark)
H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand
2009-01-01
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates...
Two-dimensional models as testing ground for principles and concepts of local quantum physics
Energy Technology Data Exchange (ETDEWEB)
Schroer, Bert [FU Berlin (Germany). Institut fuer Theoretische Physik
2005-04-15
In the past, two-dimensional models of QFT have served as theoretical laboratories for testing new concepts under mathematically controllable conditions. In more recent times, low-dimensional models (e.g. chiral models, factoring models) have often been treated by special recipes in a way which sometimes led to a loss of unity of QFT. In the present work I try to counteract this apartheid tendency by reviewing past results within the setting of the general principles of QFT. To this I add two new ideas: (1) a modular interpretation of the chiral model Diff(S)-covariance with a close connection to the recently formulated local covariance principle for QFT in curved spacetime and (2) a derivation of the chiral model temperature duality from a suitable operator formulation of the angular Wick rotation (in analogy to the Nelson-Symanzik duality in the Osterwalder-Schrader setting) for rational chiral theories. The SL(2,Z) modular Verlinde relation is a special case of this thermal duality and (within the family of rational models) the matrix S appearing in the thermal duality relation becomes identified with the statistics character matrix S. The relevant angular 'Euclideanization' is done in the setting of the Tomita-Takesaki modular formalism of operator algebras. I find it appropriate to dedicate this work to the memory of J. A. Swieca, with whom I shared the interest in two-dimensional models as a testing ground for QFT for more than one decade. This is a significantly extended version of an 'Encyclopedia of Mathematical Physics' contribution hep-th/0502125. (author)
Two-dimensional models as testing ground for principles and concepts of local quantum physics
International Nuclear Information System (INIS)
Schroer, Bert
2005-04-01
In the past, two-dimensional models of QFT have served as theoretical laboratories for testing new concepts under mathematically controllable conditions. In more recent times, low-dimensional models (e.g. chiral models, factoring models) have often been treated by special recipes in a way which sometimes led to a loss of unity of QFT. In the present work I try to counteract this apartheid tendency by reviewing past results within the setting of the general principles of QFT. To this I add two new ideas: (1) a modular interpretation of the chiral model Diff(S)-covariance with a close connection to the recently formulated local covariance principle for QFT in curved spacetime and (2) a derivation of the chiral model temperature duality from a suitable operator formulation of the angular Wick rotation (in analogy to the Nelson-Symanzik duality in the Osterwalder-Schrader setting) for rational chiral theories. The SL(2,Z) modular Verlinde relation is a special case of this thermal duality and (within the family of rational models) the matrix S appearing in the thermal duality relation becomes identified with the statistics character matrix S. The relevant angular 'Euclideanization' is done in the setting of the Tomita-Takesaki modular formalism of operator algebras. I find it appropriate to dedicate this work to the memory of J. A. Swieca, with whom I shared the interest in two-dimensional models as a testing ground for QFT for more than one decade. This is a significantly extended version of an 'Encyclopedia of Mathematical Physics' contribution hep-th/0502125. (author)
MODELING OF POWER SYSTEMS AND TESTING OF RELAY PROTECTION DEVICES IN REAL AND MODEL TIME
Directory of Open Access Journals (Sweden)
I. V. Novash
2017-01-01
Full Text Available The methods of modelling power system modes and of testing relay protection devices with the aid of simulation complexes in real time, and with the help of computer software systems that enable simulation on a virtual time scale, are considered. Information input protection signals for simulation in virtual model time are obtained in a computational experiment, whereas tests of protective devices are carried out with hardware and software test systems using estimated input signals. Study of power system stability when the modes of generating and consuming electrical equipment and the conditions of relay protection devices change requires testing with digital simulators in a closed-loop mode. Here the feedbacks between a model of the power system operating in real time and external devices or their models must be determined (modelled). Modelling in real time and an analysis of international experience in the use of digital real-time simulation of power systems (RTDS simulators) have been carried out. Examples are given of the use of RTDS systems by foreign energy companies to test relay protection and control systems, to test equipment and devices of automatic control, to analyse cyber security, and to evaluate the operation of energy systems under different scenarios of emergency situations. Some quantitative data on the distribution of RTDS in different countries and in Russia are presented. It is noted that the leading energy universities of Russia use real-time simulation not only to solve scientific and technical problems, but also to conduct training and laboratory classes on modelling of electric networks and anti-emergency automatic equipment with students. In order to check the serviceability of relay protection devices without taking into account the reaction of the power system, tests can be performed in an open-loop mode with the
Testing mechanistic models of growth in insects.
Maino, James L; Kearney, Michael R
2015-11-22
Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory from that of many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature, whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg(-1)) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).
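The central mechanism, that specific assimilation rising during the growth phase keeps relative growth from decelerating, can be illustrated with a toy Euler integration. All parameters are invented for illustration and are not fitted values from the paper:

```python
def grow(w0, days, assim0, assim_growth, maint):
    """Euler integration (dt = 1 day) of a toy growth model:
    dW/dt = (a(t) - m) * W, where the specific assimilation rate
    a(t) = assim0 + assim_growth * t rises during the growth phase
    and m is a constant specific maintenance cost."""
    w = w0
    trajectory = [w]
    for day in range(days):
        a = assim0 + assim_growth * day  # rising specific assimilation
        w += (a - maint) * w
        trajectory.append(w)
    return trajectory

# Illustrative run: biomass grows at least exponentially because the
# per-unit-mass growth rate never declines, unlike surface-area-limited models
traj = grow(w0=0.1, days=10, assim0=0.10, assim_growth=0.01, maint=0.05)
print([round(w, 3) for w in traj])
```

With `assim_growth = 0`, the same loop reduces to plain exponential growth; a positive value makes the relative growth rate itself increase, mimicking the rising energy reserves per biomass the model predicts.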
TESTING INFORMATIONAL EFFICIENCY: THE CASE OF U.E. AND BRIC EMERGENT MARKETS
OPREAN Camelia
2012-01-01
Empirical finance has brought together a considerable number of studies aimed at determining informational market efficiency in emerging financial markets. These studies have generated conflicting results regarding the efficient market hypothesis (EMH), so efficiency tests in emerging financial markets are rarely definitive in reaching a conclusion about the existence of informational efficiency. This paper tests the weak-form market efficiency of eight emerging markets:...
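One common ingredient of weak-form tests is checking whether returns are predictable from their own past, for example via lag-1 serial correlation. The sketch below uses an invented return series and is only one of many possible tests, not the battery actually applied in the paper:

```python
def lag1_autocorrelation(returns):
    """Lag-1 serial correlation of a return series; values near zero
    are consistent with weak-form informational efficiency (returns
    cannot be forecast from their own history)."""
    n = len(returns)
    mean = sum(returns) / n
    num = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, n))
    den = sum((r - mean) ** 2 for r in returns)
    return num / den

# Hypothetical daily returns for illustration only
returns = [0.012, -0.008, 0.005, -0.011, 0.007, 0.003, -0.009, 0.004]
print(round(lag1_autocorrelation(returns), 3))
```

A strongly positive coefficient would suggest momentum, a strongly negative one mean reversion; either is evidence against weak-form efficiency, subject to a formal significance test.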
Rabieh, Masood; Soukhakian, Mohammad Ali; Mosleh Shirazi, Ali Naghi
2016-06-01
Selecting the best suppliers is crucial for a company's success. Since competition is a determining factor nowadays, reducing cost and increasing quality of products are two key criteria for appropriate supplier selection. In this study, the inventories of the agglomeration plant of Isfahan Steel Company were first categorized through VED and ABC methods. Then models to supply two important kinds of raw materials (inventories) were developed, considering the following items: (1) the optimal consumption composite of the materials, (2) the total cost of logistics, (3) each supplier's terms and conditions, (4) the buyer's limitations and (5) the consumption behavior of the buyers. Among the diverse models developed and tested, using the company's actual data from the three previous years, two new innovative models of mixed-integer non-linear programming type were found to be most suitable. The results of solving the two models with LINGO software (based on the company's data in this particular case) were equal. Comparing the results of the new models to the actual performance of the company revealed 10.9 % and 7.1 % reductions in the total procurement costs of the company in two consecutive years.
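The ABC categorization step mentioned above ranks inventory items by annual consumption value and classifies them by cumulative share. A sketch with the conventional 80 %/95 % cut-offs; the item names and values are invented, not Isfahan Steel Company data:

```python
def abc_classify(annual_values, a_cut=0.8, b_cut=0.95):
    """ABC analysis: rank items by annual consumption value and assign
    class A to items covering the first ~80% of cumulative value,
    B up to ~95%, and C to the remainder. Cut-offs are the
    conventional defaults, not values from the paper."""
    total = sum(annual_values.values())
    ranked = sorted(annual_values.items(), key=lambda kv: kv[1], reverse=True)
    classes, cum = {}, 0.0
    for item, value in ranked:
        cum += value
        share = cum / total
        classes[item] = "A" if share <= a_cut else "B" if share <= b_cut else "C"
    return classes

# Hypothetical annual consumption values (monetary units)
items = {"iron ore": 500_000, "coke": 300_000, "limestone": 80_000,
         "bentonite": 60_000, "spare parts": 40_000, "lubricants": 20_000}
print(abc_classify(items))
```

Class A items (few items, most of the value) are the natural candidates for the detailed supplier-selection optimization models the paper develops.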
Auditing predictive models : a case study in crop growth
Metselaar, K.
1999-01-01
Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize
SVM and ANFIS Models for Precipitation Modeling (Case Study: GonbadKavouse)
Directory of Open Access Journals (Sweden)
N. Zabet Pishkhani
2016-10-01
since it is less computationally exhaustive and more transparent than other models. A consequent membership function (MF) of the Sugeno model could be any arbitrary parameterized function of the crisp inputs, most likely a polynomial. Zero and first order polynomials were used as consequent MFs in constant and linear Sugeno models, respectively. In addition, the defuzzification process in Sugeno fuzzy models is a simple weighted average calculation. The fuzzy space was divided via grid partitioning according to the number of antecedent MFs, and each fuzzy region was covered with a fuzzy rule. Results and Discussion: The statistical results showed that in the first structure the determination coefficient values for both training and testing indicated poor performance in precipitation prediction: ANFIS and SVM had determination coefficients of 0.67 and 0.33 in the training phase and 0.45 and 0.40 in the test phase. The RMSE values likewise showed that both models failed to predict precipitation in the first structure. The results of the second structure in precipitation prediction showed that the determination coefficient of ANFIS at training and testing was 0.93 and 0.87 respectively, and RMSE was 7.06 and 9.28 respectively. MBE values showed that ANFIS underestimated at the training phase and overestimated at the test phase. The determination coefficient of SVM at training and testing was 0.89 and 0.91 respectively, and RMSE was 9.28 and 5.59 respectively. SVM underestimated precipitation at the training phase and overestimated it at the test phase. ANFIS and SVM modeled precipitation using precipitation gauging stations with reasonable accuracy. The determination coefficient in the test phase was almost the same for ANFIS and SVM, but the RMSE error of the SVM model was about 20% lower than that of ANFIS. The coefficient of determination and error values indicated SVM had greater accuracy than ANFIS. ANFIS overestimated precipitation for values below 20 mm, but higher values were distributed uniformly around the 1:1 line. SVM
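The evaluation statistics cited throughout this abstract (RMSE, MBE, determination coefficient) are straightforward to compute. A sketch using invented precipitation values, not the station data from the study:

```python
def rmse(obs, pred):
    """Root mean square error."""
    return (sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def mbe(obs, pred):
    """Mean bias error: negative -> underestimation, positive -> overestimation."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    """Coefficient of determination relative to the mean of observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly precipitation (mm): observed vs. model output
obs = [12.0, 30.0, 55.0, 8.0, 90.0, 40.0]
pred = [15.0, 28.0, 60.0, 5.0, 80.0, 45.0]
print(round(rmse(obs, pred), 2), round(mbe(obs, pred), 2), round(r_squared(obs, pred), 3))
```

Reporting RMSE and MBE together, as the abstract does, separates the typical error magnitude from the systematic over- or under-estimation tendency.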
Directory of Open Access Journals (Sweden)
John G. Oetzel
2018-01-01
Objectives. A key challenge in evaluating the impact of community-based participatory research (CBPR) is identifying what mechanisms and pathways are critical for health equity outcomes. Our purpose is to provide an empirical test of the CBPR conceptual model to address this challenge. Methods. A three-stage quantitative survey was completed: (1) 294 US CBPR projects with US federal funding were identified; (2) 200 principal investigators completed a questionnaire about project-level details; and (3) 450 community or academic partners and principal investigators completed a questionnaire about perceived contextual, process, and outcome variables. Seven in-depth qualitative case studies were conducted to explore elements of the model not captured in the survey; one is presented due to space limitations. Results. We demonstrated support for multiple mechanisms illustrated by the conceptual model using a latent structural equation model. Significant pathways were identified, showing the positive association of context with partnership structures and dynamics. Partnership structures and dynamics showed similar associations with partnership synergy and community involvement in research; both of these had positive associations with intermediate community changes and distal health outcomes. The case study complemented and extended understandings of the mechanisms of how partnerships can improve community conditions. Conclusions. The CBPR conceptual model is well suited to explain key relational and structural pathways for impact on health equity outcomes.
DEFF Research Database (Denmark)
Christensen, Bent Jesper; van der Wel, Michel
of the risk premium is associated with the slope factor, and individual risk prices depend on own past values, factor realizations, and past values of other risk prices, and are significantly related to the output gap, consumption, and the equity risk price. The absence of arbitrage opportunities is strongly...... is tested, but in addition to the standard bilinear term in factor loadings and market prices of risk, the relevant mean restriction in the term structure case involves an additional nonlinear (quadratic) term in factor loadings. We estimate our general model using likelihood-based dynamic factor model...... techniques for a variety of volatility factors, and implement the relevant likelihood ratio tests. Our factor model estimates are similar across a general state space implementation and an alternative robust two-step principal components approach. The evidence favors time-varying market prices of risk. Most...
Large scale injection test (LASGIT) modelling
International Nuclear Information System (INIS)
Arnedo, D.; Olivella, S.; Alonso, E.E.
2010-01-01
Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and inform the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings: once the gas entry pressure is exceeded, penetrating gas may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug
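The hydro-mechanical coupling noted in this abstract (gas-induced deformation raising permeability) is often represented in such simulations with an empirical permeability law. A minimal sketch of one common exponential porosity-permeability form follows; the functional form and parameter values are assumptions for illustration and are not necessarily the constitutive law used in the CODE-BRIGHT model of Lasgit:

```python
import math

def permeability(k0, phi0, phi, b=50.0):
    """Generic exponential porosity-permeability law: k = k0 * exp(b * (phi - phi0)).

    k0   -- reference intrinsic permeability [m^2] at reference porosity phi0
    phi  -- current porosity (increased here by mechanical deformation)
    b    -- empirical sensitivity coefficient (hypothetical value)
    """
    return k0 * math.exp(b * (phi - phi0))

# Illustrative numbers only: a 0.02 porosity increase with b = 50
# multiplies permeability by e (~2.72).
k = permeability(k0=1e-21, phi0=0.44, phi=0.46)
```

Under such a law, even modest deformation-induced porosity changes produce order-of-magnitude permeability increments, which is why the gas injection stages require a coupled mechanical treatment.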
Service-oriented enterprise modelling and analysis: a case study
Iacob, Maria Eugenia; Jonkers, H.; Lankhorst, M.M.; Steen, M.W.A.
2007-01-01
In order to validate the concepts and techniques for service-oriented enterprise architecture modelling, developed in the ArchiMate project (Lankhorst et al., 2005), we have conducted a number of case studies. This paper describes one of these case studies, conducted at the Dutch Tax and Customs
Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis
International Nuclear Information System (INIS)
Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.
1991-01-01
The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs
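The Level 3 test cases described here explore uncertainty by varying parameter values and by statistical sampling. A minimal Monte Carlo sketch of sampling-based uncertainty propagation follows, using a toy advective travel-time model and hypothetical parameter ranges rather than any actual HYDROCOIN test case:

```python
import random
import statistics

def travel_time(permeability, gradient, porosity, path_length):
    """Toy advective travel-time model (illustration only, not a HYDROCOIN case).

    Darcy flux q = K * i; seepage velocity = q / porosity; time = L / velocity.
    """
    darcy_velocity = permeability * gradient
    return path_length * porosity / darcy_velocity

random.seed(1)  # reproducible draws
samples = []
for _ in range(1000):
    k = 10 ** random.uniform(-7, -5)   # log-uniform hydraulic conductivity [m/s]
    phi = random.uniform(0.05, 0.25)   # uniform porosity
    samples.append(travel_time(k, gradient=0.01, porosity=phi, path_length=500.0))

mean_t = statistics.mean(samples)   # central tendency of the output
spread = statistics.stdev(samples)  # uncertainty propagated from the inputs
```

The spread of the output distribution, relative to its mean, quantifies how input-parameter uncertainty translates into uncertainty in the modelling result.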
Testing R&D-Based Endogenous Growth Models
DEFF Research Database (Denmark)
Kruse-Andersen, Peter Kjær
2017-01-01
R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship...... is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue. Particularly, the annual long...
2-D Model Test Study of the Suape Breakwater, Brazil
DEFF Research Database (Denmark)
Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.
This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...
Directory of Open Access Journals (Sweden)
Christopher Naugler
2012-01-01
Background: The use of adjuvant tamoxifen therapy in the treatment of estrogen receptor (ER)-expressing breast carcinomas represents a major advance in personalized cancer treatment. Because there is no benefit (and indeed increased morbidity and mortality) associated with the use of tamoxifen therapy in ER-negative breast cancer, its use is restricted to women with ER-expressing cancers. However, correctly classifying cancers as ER positive or negative has been challenging given the high reported false negative test rates for ER expression in surgical specimens. In this paper I model practice recommendations using published information from clinical trials to address the question of whether there is a false negative test rate above which it is more efficacious to forgo ER testing and instead treat all patients with tamoxifen regardless of ER test results. Methods: I used data from randomized clinical trials to model two different hypothetical treatment strategies: (1) the current strategy of treating only ER positive women with tamoxifen, and (2) an alternative strategy where all women are treated with tamoxifen regardless of ER test results. The variables used in the model are literature-derived survival rates of the different combinations of ER positivity and treatment with tamoxifen, varying true ER positivity rates, and varying false negative ER testing rates. The outcome variable was hypothetical 10-year survival. Results: The model predicted that there is a range of true ER rates and false negative test rates above which it would be more efficacious to treat all women with breast cancer with tamoxifen and forgo ER testing. This situation occurred with high true positive ER rates and false negative ER test rates in the range of 20-30%. Conclusions: It is hoped that this model will provide an example of the potential importance of diagnostic error on clinical outcomes and furthermore will give an example of how the effect of that
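The comparison of the two strategies reduces to an expected-value calculation over the ER-status/treatment combinations. A minimal sketch follows; the survival rates, true ER positivity rate, and false negative rate are hypothetical placeholders rather than the literature-derived values used in the paper, and the sketch simplifies by assuming no false positive ER tests:

```python
def expected_survival(true_pos_rate, false_neg_rate, s, treat_all):
    """Expected 10-year survival under two strategies.

    s: dict of survival rates keyed by (er_positive, treated) - hypothetical.
    treat_all: if False, treatment follows the (imperfect) ER test;
               the test is assumed to produce no false positives.
    """
    p, fn = true_pos_rate, false_neg_rate
    if treat_all:
        # Everyone treated, including ER-negative women (who are harmed).
        return p * s[(True, True)] + (1 - p) * s[(False, True)]
    # Test-guided: ER+ women are missed (left untreated) at rate fn.
    return (p * ((1 - fn) * s[(True, True)] + fn * s[(True, False)])
            + (1 - p) * s[(False, False)])

# Hypothetical 10-year survival rates by (ER-positive, treated):
s = {(True, True): 0.80, (True, False): 0.60,
     (False, True): 0.55, (False, False): 0.60}

guided = expected_survival(0.8, 0.25, s, treat_all=False)  # 0.72
blanket = expected_survival(0.8, 0.25, s, treat_all=True)  # 0.75
```

With these placeholder numbers, a high true ER positivity rate (80%) and a 25% false negative rate make the treat-all strategy outperform test-guided treatment, illustrating the crossover region the abstract describes.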