WorldWideScience

Sample records for model test measurements

  1. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

Full Text Available We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) We find that model color–magnitude diagrams cannot be reliably used to infer masses as they do not accurately reproduce the colors of ultracool dwarfs of known mass. (2) Effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K. (3) For the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  2. Specification test for Markov models with measurement errors.

    Science.gov (United States)

    Kim, Seonjin; Zhao, Zhibiao

    2014-09-01

    Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band.
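    The core of such a procedure can be illustrated with a simplified sketch: estimate the conditional regression function E[Y_t | Y_(t-1)] of the observed process nonparametrically, fit a parametric alternative, and check whether the parametric curve stays inside a band around the nonparametric estimate. The sketch below uses a Nadaraya-Watson estimator and a crude pointwise (not simultaneous) band purely for illustration; the bandwidth, the AR(1) parametric family, and the band construction are assumptions, not the authors' exact procedure.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Gaussian-kernel estimate of E[Y | X = x_grid]."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(1, n):                          # simulate a noisy AR(1)-like observed process
    y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)

x, resp = y[:-1], y[1:]                        # lagged pairs (Y_{t-1}, Y_t)
grid = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), 50)

m_hat = nadaraya_watson(grid, x, resp, h=0.3)  # nonparametric conditional mean
beta = np.sum(x * resp) / np.sum(x ** 2)       # parametric AR(1) fit through the origin
m_par = beta * grid

resid = resp - nadaraya_watson(x, x, resp, h=0.3)
band = 2.0 * resid.std() / np.sqrt(len(x) * 0.3)   # crude illustrative band half-width

print("parametric model rejected:", bool(np.any(np.abs(m_par - m_hat) > band)))
```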

  3. Data Modeling for Measurements in the Metrology and Testing Fields

    CERN Document Server

    Pavese, Franco

    2009-01-01

Offers a comprehensive set of modeling methods for data and uncertainty analysis. This work develops methods and computational tools to address general models that arise in practice, allowing for a more valid treatment of calibration and test data and providing an understanding of complex situations in measurement science.

  4. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.

    1994-01-01

sludge. Phenomena such as free and hindered settling and the Bingham plastic characteristic of activated sludge suspensions are included in the numerical model. Further characterisation and test tank experiments are described. The characterisation experiments were designed to measure calibration parameters...... and for comparing measured and calculated results. The numerical model could, fairly accurately, predict the measured results, and both the measured and the calculated results showed a flow field pattern identical to flow fields in full-scale secondary settling tanks. A specific calibration of the Bingham plastic...

  5. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function.

    Science.gov (United States)

    Resnick, Barbara; Gruber-Baldini, Ann L; Hicks, Gregory; Ostir, Glen; Klinedinst, N Jennifer; Orwig, Denise; Magaziner, Jay

    2016-07-01

    Measurement of physical function post hip fracture has been conceptualized using multiple different measures. This study tested a comprehensive measurement model of physical function. This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living, and performance was tested for fit at 2 and 12 months post hip fracture, and among male and female participants. Validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participants. The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. © 2015 Association of Rehabilitation Nurses.

  6. Can atom-surface potential measurements test atomic structure models?

    Science.gov (United States)

    Lonij, Vincent P A; Klauss, Catherine E; Holmgren, William F; Cronin, Alexander D

    2011-06-30

    van der Waals (vdW) atom-surface potentials can be excellent benchmarks for atomic structure calculations. This is especially true if measurements are made with two different types of atoms interacting with the same surface sample. Here we show theoretically how ratios of vdW potential strengths (e.g., C₃(K)/C₃(Na)) depend sensitively on the properties of each atom, yet these ratios are relatively insensitive to properties of the surface. We discuss how C₃ ratios depend on atomic core electrons by using a two-oscillator model to represent the contribution from atomic valence electrons and core electrons separately. We explain why certain pairs of atoms are preferable to study for future experimental tests of atomic structure calculations. A well chosen pair of atoms (e.g., K and Na) will have a C₃ ratio that is insensitive to the permittivity of the surface, whereas a poorly chosen pair (e.g., K and He) will have a ratio of C₃ values that depends more strongly on the permittivity of the surface.
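    For reference, the non-retarded atom-surface coefficient is commonly written as an integral over imaginary frequency of the atomic dynamic polarizability weighted by a surface response factor; written this way, it is easy to see why the ratio of C₃ values for two atoms above the same surface is only weakly sensitive to the surface permittivity (standard expressions, not taken from this paper):

\[ C_3 = \frac{\hbar}{4\pi}\int_0^\infty \alpha(i\omega)\,\frac{\varepsilon(i\omega)-1}{\varepsilon(i\omega)+1}\,d\omega, \qquad \frac{C_3(\mathrm{K})}{C_3(\mathrm{Na})} = \frac{\int_0^\infty \alpha_\mathrm{K}(i\omega)\,g(i\omega)\,d\omega}{\int_0^\infty \alpha_\mathrm{Na}(i\omega)\,g(i\omega)\,d\omega}, \quad g(i\omega)=\frac{\varepsilon(i\omega)-1}{\varepsilon(i\omega)+1}. \]

    When the two polarizabilities have similar spectral weight, as for K and Na, the surface factor g nearly cancels in the ratio; for dissimilar pairs such as K and He it does not.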

  7. The Latent Class Model as a Measurement Model for Situational Judgment Tests

    Directory of Open Access Journals (Sweden)

    Frank Rijmen

    2011-11-01

    Full Text Available In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer, Salovey, Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which differentiation occurred between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
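    For orientation, a plain latent class model for the J scored reactions of a respondent treats the response pattern as a mixture over C unobserved classes, with conditional independence of the reactions within a class (a standard formulation, not specific to this study):

\[ P(Y_1=y_1,\dots,Y_J=y_J)=\sum_{c=1}^{C}\pi_c\prod_{j=1}^{J}P(Y_j=y_j\mid c),\qquad \sum_{c=1}^{C}\pi_c=1. \]

    The second-order variant described above adds a level that groups reactions belonging to the same hypothetical scenario.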

  8. Measuring damage in physical model tests of rubble mounds

    NARCIS (Netherlands)

    Hofland, B.; Rosa-Santos, Paulo; Taveira-Pinto, Francisco; Lemos, Rute; Mendonça, A.; Juana Fortes, C

    2017-01-01

    This paper studies novel ways to evaluate armour damage in physical models of coastal structures. High-resolution damage data for reference rubble mound breakwaters obtained under the HYDRALAB+ joint-research project are analysed and discussed. These tests are used to analyse the way to describe

  9. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement.

    Science.gov (United States)

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-14

The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The flexible inclinometer probe consists of several gravity acceleration sensing units that are protected and positioned by silicon encapsulation, and all the units are connected to a 485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement for a measurement unit can be calculated; the overall displacement is then accumulated over all units, integrated from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one is to interpolate the displacement after conversion, and the other is to interpolate the angle before conversion; compared with the result read from checkered paper, the latter proved to have a better effect, with the additional condition that the displacement curve be shifted up by half the length of the unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model.
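    A minimal sketch of the displacement reconstruction described above, assuming each sensing unit of known length reports two tilt angles and that horizontal displacement increments are accumulated from the bottom unit upward; the cubic-spline resampling of the angles before conversion mirrors the second interpolation option mentioned in the abstract, but the unit length, sampling positions, and angle conventions here are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def profile_from_tilts(theta_x, theta_y, unit_len, n_resample=100):
    """Reconstruct a horizontal displacement profile from per-unit tilt angles (radians).

    theta_x, theta_y : arrays of two-axis tilt angles, ordered bottom to top.
    unit_len         : length of one sensing unit (same unit as the output displacement).
    """
    depths = np.arange(len(theta_x)) * unit_len           # nominal unit positions along the probe
    fine = np.linspace(depths[0], depths[-1], n_resample)
    # interpolate the angles first (the better-performing option), then convert
    tx = CubicSpline(depths, theta_x)(fine)
    ty = CubicSpline(depths, theta_y)(fine)
    seg = fine[1] - fine[0]                                # resampled segment length
    dx = np.cumsum(seg * np.sin(tx))                       # accumulate bottom -> top
    dy = np.cumsum(seg * np.sin(ty))
    return fine, dx, dy

# toy example: 10 units of 0.5 m with a small tilt growing toward the top
theta = np.linspace(0.0, 0.05, 10)
z, dx, dy = profile_from_tilts(theta, np.zeros_like(theta), unit_len=0.5)
print(f"displacement at top: {dx[-1]:.4f} m")
```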

  10. A blast absorber test: measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Berg, F. van den; Hof, J. van 't; Arkel, E. van

    2006-01-01

    A blast absorber test was conducted at the Aberdeen Test Centre from 13 to 17 June 2005. The test was set up to determine the absorbing and shielding effect of a gravel pile, of 1.5 meters high and 15 by 15 meters wide, on blasts from large weapons: e.g. armor, artillery or demolition. The blast was

  11. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  12. Ares I Scale Model Acoustic Tests Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116. The test article included a 5% scale Ares I vehicle model and tower mounted on the Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments located throughout the test article. There were four primary ASMAT instrument suites: ignition overpressure (IOP), lift-off acoustics (LOA), ground acoustics (GA), and spatial correlation (SC). Each instrumentation suite incorporated different sensor models which were selected based upon measurement requirements. These requirements included the type of measurement, exposure to the environment, instrumentation check-outs and data acquisition. The sensors were attached to the test article using different mounts and brackets dependent upon the location of the sensor. This presentation addresses the observed effect of the sensors and mounts on the acoustic and pressure measurements.

  13. Testing Measurement Invariance of the Students' Affective Characteristics Model across Gender Sub-Groups

    Science.gov (United States)

    Demir, Ergül

    2017-01-01

In this study, the aim was to construct a significant structural measurement model comparing students' affective characteristics with their mathematics achievement. According to this model, the aim was to test the measurement invariances between gender sub-groups hierarchically. This study was conducted as basic and descriptive research. Secondary…

  14. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest (Acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale; Ignition Transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale). Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
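    The frequency scaling quoted above follows from geometric scaling of wavelengths at a fixed sound speed: for a model at geometric scale s, frequencies scale inversely with s, so with s = 0.05,

\[ f_\text{model} = \frac{f_\text{full}}{s}, \qquad 200\ \text{Hz} \rightarrow \frac{200}{0.05} = 4{,}000\ \text{Hz}, \quad 2{,}000\ \text{Hz} \rightarrow 40{,}000\ \text{Hz}. \]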

  15. Toward Intelligent Assessment: An Integration of Constructed Response Testing, Artificial Intelligence, and Model-Based Measurement.

    Science.gov (United States)

    Bennett, Randy Elliot

    A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…

  16. Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain

    Science.gov (United States)

    Wilson, Mark; Moore, Stephen

    2011-01-01

    This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…
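    As a concrete instance of this viewpoint, the Rasch model for a dichotomous response of person p to item i can be written as a generalized linear mixed model with a random person effect and fixed item effects (a standard formulation, given here only for orientation):

\[ \operatorname{logit} P(Y_{pi}=1\mid\theta_p)=\theta_p-b_i, \qquad \theta_p\sim N(0,\sigma^2). \]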

  17. TESTING BRAND VALUE MEASUREMENT METHODS IN A RANDOM COEFFICIENT MODELING FRAMEWORK

    OpenAIRE

    Szõcs Attila

    2014-01-01

    Our objective is to provide a framework for measuring brand equity, that is, the added value to the product endowed by the brand. Based on a demand and supply model, we propose a structural model that enables testing the structural effect of brand equity (demand side effect) on brand value (supply side effect), using Monte Carlo simulation. Our main research question is which of the three brand value measurement methods (price premium, revenue premium and profit premium) is more suitable from...

  18. A more general model for testing measurement invariance and differential item functioning.

    Science.gov (United States)

    Bauer, Daniel J

    2017-09-01

The evaluation of measurement invariance is an important step in establishing the validity and comparability of measurements across individuals. Most commonly, measurement invariance has been examined using one of two primary latent variable modeling approaches: the multiple groups model or the multiple-indicator multiple-cause (MIMIC) model. Both approaches offer opportunities to detect differential item functioning within multi-item scales, and thereby to test measurement invariance, but both approaches also have significant limitations. The multiple groups model allows one to examine the invariance of all model parameters but only across levels of a single categorical individual difference variable (e.g., ethnicity). In contrast, the MIMIC model permits both categorical and continuous individual difference variables (e.g., sex and age) but permits only a subset of the model parameters to vary as a function of these characteristics. The current article argues that moderated nonlinear factor analysis (MNLFA) constitutes an alternative, more flexible model for evaluating measurement invariance and differential item functioning. We show that the MNLFA subsumes and combines the strengths of the multiple group and MIMIC models, allowing for a full and simultaneous assessment of measurement invariance and differential item functioning across multiple categorical and/or continuous individual difference variables. The relationships between the MNLFA model and the multiple groups and MIMIC models are shown mathematically and via an empirical demonstration. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
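    In broad strokes, and using generic notation rather than the article's exact parameterization, MNLFA lets both the measurement and the structural parameters vary smoothly with observed covariates x (e.g., sex and age), so that differential item functioning appears as nonzero moderation terms:

\[ y_{i}=\nu_{0i}+\boldsymbol{\nu}_{1i}'\mathbf{x}+\big(\lambda_{0i}+\boldsymbol{\lambda}_{1i}'\mathbf{x}\big)\,\eta+\varepsilon_i,\qquad \eta\sim N\!\big(\alpha_0+\boldsymbol{\alpha}_1'\mathbf{x},\;\psi_0\exp(\boldsymbol{\beta}'\mathbf{x})\big). \]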

  19. TESTING BRAND VALUE MEASUREMENT METHODS IN A RANDOM COEFFICIENT MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Szõcs Attila

    2014-07-01

Full Text Available Our objective is to provide a framework for measuring brand equity, that is, the added value to the product endowed by the brand. Based on a demand and supply model, we propose a structural model that enables testing the structural effect of brand equity (demand side effect) on brand value (supply side effect), using Monte Carlo simulation. Our main research question is which of the three brand value measurement methods (price premium, revenue premium and profit premium) is more suitable from the perspective of the structural link between brand equity and brand value. Our model is based on recent developments in random coefficients model applications.

  20. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  1. Bayesian analysis of canopy transpiration models: A test of posterior parameter means against measurements

    Science.gov (United States)

    Mackay, D. Scott; Ewers, Brent E.; Loranty, Michael M.; Kruger, Eric L.; Samanta, Sudeep

    2012-04-01

Big-leaf models of transpiration are based on the hypothesis that structural heterogeneity within forest canopies can be ignored at stand or larger scales. However, the adoption of big-leaf models is de facto rather than de jure, as forests are never structurally or functionally homogeneous. We tested big-leaf models both with and without modification to include canopy gaps, in a heterogeneous quaking aspen stand having a range of canopy densities. Leaf area index (L) and canopy closure were obtained from biometric data, stomatal conductance parameters were obtained from sap flux measurements, while leaf gas exchange data provided photosynthetic parameters. We then rigorously tested model-data consistency by incrementally starving the models of these measured parameters and using Bayesian Markov Chain Monte Carlo simulation to retrieve the withheld parameters. Model acceptability was quantified with the Deviance Information Criterion (DIC), which penalized model accuracy by the number of retrieved parameters. Big-leaf models overestimated canopy transpiration with increasing error as canopy density declined, but models that included gaps had minimal error regardless of canopy density. When models used measured L, the other parameters were retrieved with minimal bias. This showed that simple canopy models could predict transpiration in data-scarce regions where only L was measured. Models that had L withheld had the lowest DIC values, suggesting that they were the most acceptable models. However, these models failed to retrieve unbiased parameter estimates, indicating a mismatch between model structure and data. By quantifying model structure and data requirements, this new approach to evaluating model-data fusion has advanced the understanding of canopy transpiration.
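    For reference, the model-acceptability criterion used here penalizes fit by the effective number of estimated parameters; with D(θ) the deviance, the usual definition is

\[ \mathrm{DIC}=\bar{D}+p_D,\qquad p_D=\bar{D}-D(\bar{\theta}), \]

    where D-bar is the posterior mean deviance and D(θ-bar) the deviance at the posterior mean of the parameters; lower DIC indicates a more acceptable model.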

  2. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    Science.gov (United States)

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  3. Measurement of longitudinal impedance for a KAON test pipe model with TSD-calibration method

    International Nuclear Information System (INIS)

    Yin, Y.; Oram, C.; Ilinsky, N.; Reinhardt-Nikulin, P.

    1991-05-01

    We report measurements of longitudinal impedances for a KAON factory beam pipe model by means of the TSD-calibration method. The experimental method and the results are discussed. The frequency band is from 48 MHz up to 900 MHz, within which range the method produces measured impedances accurate enough to be useful in indicating whether a test pipe will have a suitably low impedance. (Author) 9 refs., 7 figs

  4. In Situ Measurements of the NO2/NO Ratio for Testing Atmospheric Photochemical Models

    Science.gov (United States)

    Jaegle, L.; Webster, C. R.; May, R. D.; Fahey, D. W.; Woodbridge, E. L.; Keim, E. R.; Gao, R. S.; Proffitt, M. H.; Stimpfle, R. M.; Salawitch, R. J.

    1994-01-01

    Simultaneous in situ measurements of NO2, NO, O3, ClO, pressure and temperature have been made for the first time, presenting a unique opportunity to test our current understanding of the photochemistry of the lower stratosphere. Data were collected from several flights of the ER-2 aircraft at mid-latitudes in May 1993 during NASA's Stratospheric Photochemistry, Aerosols and Dynamics Expedition (SPADE). The daytime ratio of NO2/NO remains fairly constant at 19 km with a typical value of 0.68 and standard deviation of +/- 0.17. The ratio observations are compared with simple steady-state calculations based on laboratory-measured reaction rates and modeled NO2 photolysis rates. At each measurement point the daytime NO2/NO with its measurement uncertainty overlap the results of steady-state calculations and associated uncertainty. However, over all the ER-2 flights examined, the model systematically overestimates the ratio by 40% on average. Possible sources of error are examined in both model and measurements. It is shown that more accurate laboratory determinations of the NO + O3 reaction rate and of the NO2 cross-sections in the 200-220 K temperature range characteristic of the lower stratosphere would allow for a more robust test of our knowledge of NO(x) photochemistry by reducing significant sources of uncertainties in the interpretation of stratospheric measurements. The present measurements are compared with earlier observations of the ratio at higher altitudes.
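    The simple steady-state calculation referred to is, in essence, the daytime photostationary state for NOx: the modelled ratio is set by the rates of NO-to-NO2 conversion and the NO2 photolysis frequency. Written in generic form as an illustration (neglecting minor conversion terms),

\[ \frac{[\mathrm{NO_2}]}{[\mathrm{NO}]}\;\approx\;\frac{k_{\mathrm{NO+O_3}}[\mathrm{O_3}]+k_{\mathrm{NO+ClO}}[\mathrm{ClO}]+\dots}{j_{\mathrm{NO_2}}}, \]

    which is why the quoted uncertainties in the NO + O3 rate coefficient and in the NO2 cross-sections (which determine j_NO2) propagate directly into the modelled ratio.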

  5. Detailed measurements and modelling of thermo active components using a room size test facility

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

measurements in an office-sized test facility with thermo active ceiling and floor as well as modelling of similar conditions in a computer program designed for analysis of building integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described...... based on measurements of the energy balance of the thermo active deck. A cooling capacity of around 60 W/m² at a temperature difference of 10 K between room and fluid temperature has been found. It is also shown that installing a lowered acoustic ceiling covering around 50% of the ceiling surface area...... only causes a reduction in the cooling capacity of around 10%. At the same time, the simulation model is able to reproduce the results from the measurements. Especially the heat flows are well predicted, with a deviation of only a few percent, while the temperatures are not as well predicted, though...

  6. Extrapolation of model tests measurements of whipping to identify the dimensioning sea states for container ships

    DEFF Research Database (Denmark)

    Storhaug, Gaute; Andersen, Ingrid Marie Vincent

    2015-01-01

    to small storms. Model tests of three container ships have been carried out in different sea states under realistic assumptions. Preliminary extrapolation of the measured data suggested that moderate storms are dimensioning when whipping is included due to higher maximum speed in moderate storms......Whipping can contribute to increased fatigue and extreme loading of container ships, and guidelines have been made available by the leading class societies. Reports concerning the hogging collapse of MSC Napoli and MOL Comfort suggest that whipping contributed. The accidents happened in moderate...

  7. Testing of a measurement model for baccalaureate nursing students' self-evaluation of core competencies.

    Science.gov (United States)

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2009-11-01

    Testing of a measurement model for baccalaureate nursing students' self-evaluation of core competencies. This paper is a report of a study to test the psychometric properties of the Self-Evaluated Core Competencies Scale for baccalaureate nursing students. Baccalaureate nursing students receive basic nursing education and continue to build competency in practice settings after graduation. Nursing students today face great challenges. Society demands analytic, critical, reflective and transformative attitudes from graduates. It also demands that institutions of higher education take the responsibility to encourage students, through academic work, to acquire knowledge and skills that meet the needs of the modern workplace, which favours highly skilled and qualified workers. A survey of 802 senior nursing students in their last semester at college or university was conducted in Taiwan in 2007 using the Self-Evaluated Core Competencies Scale. Half of the participants were randomly assigned either to principal components analysis with varimax rotation or confirmatory factor analysis. Principal components analysis revealed two components of core competencies that were named as humanity/responsibility and cognitive/performance. The initial model of confirmatory factor analysis was then converged to an acceptable solution but did not show a good fit; however, the final model of confirmatory factor analysis was converged to an acceptable solution with acceptable fit. The final model has two components, namely humanity/responsibility and cognitive/performance. Both components have four indicators. In addition, six indicators have their correlated measurement errors. Self-Evaluated Core Competencies Scale could be used to assess the core competencies of undergraduate nursing students. In addition, it should be used as a teaching guide to increase students' competencies to ensure quality patient care in hospitals.

  8. Measurements of evaporation from a mine void lake and testing of modelling approaches

    Science.gov (United States)

    McJannet, David; Hawdon, Aaron; Van Niel, Tom; Boadle, Dave; Baker, Brett; Trefry, Mike; Rea, Iain

    2017-12-01

    Pit lakes often form in the void that remains after open cut mining operations cease. As pit lakes fill, hydrological and geochemical processes interact and these need to be understood for appropriate management actions to be implemented. Evaporation is important in the evolution of pit lakes as it acts to concentrate various constituents, controls water level and changes the thermal characteristics of the water body. Despite its importance, evaporation from pit lakes is poorly understood. To address this, we used an automated floating evaporation pan and undertook measurements at a pit lake over a 12 month period. We also developed a new procedure for correcting floating pan evaporation estimates to lake evaporation estimates based on surface temperature differences. Total annual evaporation was 2690 mm and reflected the strong radiation inputs, high temperatures and low humidity experienced in this region. Measurements were used to test the performance of evaporation estimates derived using both pan coefficient and aerodynamic modelling techniques. Daily and monthly evaporation estimates were poorly reproduced using pan coefficient techniques and their use is not recommended for such environments. Aerodynamic modelling was undertaken using a range of input datasets that may be available to those who manage pit lake systems. Excellent model performance was achieved using over-water or local over-land meteorological observations, particularly when the sheltering effects of the pit were considered. Model performance was reduced when off-site data were utilised and differences between local and off-site vapor pressure and wind speed were found to be the major cause.
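    The aerodynamic (mass-transfer) approach referred to estimates open-water evaporation from the vapor pressure deficit over the water surface and a wind function; a generic form (the specific wind-function coefficients used in the study are not reproduced here) is

\[ E = f(u)\,\big(e_s(T_s)-e_a\big), \qquad f(u)=a+b\,u, \]

    where e_s(T_s) is the saturation vapor pressure at the water surface temperature, e_a the vapor pressure of the air, u the wind speed, and a, b empirical coefficients, which is why over-water temperature, humidity, and wind observations (and the pit's sheltering effect on wind) matter so much for model performance.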

  9. Modeling nitrous oxide emissions from irrigated agriculture: testing DayCent with high-frequency measurements.

    Science.gov (United States)

    Scheer, Clemens; Del Grosso, Stephen J; Parton, William J; Rowlings, David W; Grace, Peter R

    2014-04-01

A unique high temporal frequency data set from an irrigated cotton-wheat rotation was used to test the agroecosystem model DayCent to simulate daily N2O emissions from subtropical vertisols under different irrigation intensities. DayCent was able to simulate the effect of different irrigation intensities on N2O fluxes and yield, although it tended to overestimate seasonal fluxes during the cotton season. DayCent accurately predicted soil moisture dynamics and the timing and magnitude of high fluxes associated with fertilizer additions and irrigation events. At the daily scale we found a good correlation of predicted vs. measured N2O fluxes (r2 = 0.52), confirming that DayCent can be used to test agricultural practices for mitigating N2O emission from irrigated cropping systems. A 25-year scenario analysis indicated that N2O losses from irrigated cotton-wheat rotations on black vertisols in Australia can be substantially reduced by an optimized fertilizer and irrigation management system (i.e., frequent irrigation, avoidance of excessive fertilizer application), while sustaining maximum yield potentials.

  10. Thermal Testing Measurements Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Wagner

    2002-09-26

    The purpose of the Thermal Testing Measurements Report (Scientific Analysis Report) is to document, in one report, the comprehensive set of measurements taken within the Yucca Mountain Project Thermal Testing Program since its inception in 1996. Currently, the testing performed and measurements collected are either scattered in many level 3 and level 4 milestone reports or, in the case of the ongoing Drift Scale Test, mostly documented in eight informal progress reports. Documentation in existing reports is uneven in level of detail and quality. Furthermore, while all the data collected within the Yucca Mountain Site Characterization Project (YMP) Thermal Testing Program have been submitted periodically to the Technical Data Management System (TDMS), the data structure--several incremental submittals, and documentation formats--are such that the data are often not user-friendly except to those who acquired and processed the data. The documentation in this report is intended to make data collected within the YMP Thermal Testing Program readily usable to end users, such as those representing the Performance Assessment Project, Repository Design Project, and Engineered Systems Sub-Project. Since either detailed level 3 and level 4 reports exist or the measurements are straightforward, only brief discussions are provided for each data set. These brief discussions for different data sets are intended to impart a clear sense of applicability of data, so that they will be used properly within the context of measurement uncertainty. This approach also keeps this report to a manageable size, an important consideration because the report encompasses nearly all measurements for three long-term thermal tests. As appropriate, thermal testing data currently residing in the TDMS have been reorganized and reformatted from cumbersome, user-unfriendly Input-Data Tracking Numbers (DTNs) into a new set of Output-DTNs. These Output-DTNs provide a readily usable data structure

  11. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  12. Measuring Fit of Sequence Data to Phylogenetic Model: Gain of Power Using Marginal Tests

    Science.gov (United States)

    Waddell, Peter J.; Ota, Rissa; Penny, David

    2009-10-01

Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (1978) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (1982) to the present. We compare the general log-likelihood ratio statistic (the G or G2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (p~0.5), but the marginalized tests do. Tests on pair-wise frequency (F) matrices strongly (p < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (p < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with p << 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published analyses may really be far larger than the analytical methods (e.g., bootstrap) report.
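    For orientation, the goodness-of-fit statistic referred to compares observed pattern counts O_i with the counts E_i expected under the fitted model:

\[ G = 2\sum_i O_i \ln\!\frac{O_i}{E_i}, \]

    computed either over all site patterns (the most general test) or over marginal summaries such as pair-wise frequency matrices or parsimony-count classes, which is where the gain of power described above arises.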

  13. Measuring Japanese EFL Student Perceptions of Internet-Based Tests with the Technology Acceptance Model

    Science.gov (United States)

    Dizon, Gilbert

    2016-01-01

    The Internet has made it possible for teachers to administer online assessments with affordability and ease. However, little is known about Japanese English as a Foreign Language (EFL) students' attitudes of internet-based tests (IBTs). Therefore, this study aimed to measure the perceptions of IBTs among Japanese English language learners with the…

  14. Wind Loads on Ships and Offshore Structures Determined by Model Tests, CFD and Full-Scale Measurements

    DEFF Research Database (Denmark)

    Aage, Christian

    1998-01-01

Wind loads on ships and offshore structures have until recently been determined only by model tests, or by statistical methods based on model tests. By the development of Computational Fluid Dynamics or CFD there is now a realistic computational alternative. In principle, both methods should...... be validated systematically against full-scale measurements, but due to the great practical difficulties involved, this is almost never done. In this investigation, wind loads on a seagoing ferry and on a semisubmersible platform have been determined by model tests and by CFD. On the ferry, full-scale measurements have been carried out as well. The CFD method also offers the possibility of a computational estimate of scale effects related to wind tunnel model testing. An example of such an estimate on the ferry is discussed. This work has been published in more detail in Proceedings of BOSS'97, Aage et al...

  15. Testing the Standard Model by precision measurement of the weak charges of quarks

    Energy Technology Data Exchange (ETDEWEB)

    Ross Young; Roger Carlini; Anthony Thomas; Julie Roche

    2007-05-01

In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model, setting a model-independent lower bound on the scale of new physics at ~1 TeV.
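    For context, at tree level in the Standard Model the weak charges probed by these parity-violating measurements are built from the lepton-quark couplings C_1q; for the proton the combination reduces to a quantity suppressed by the weak mixing angle, which is what makes it a sensitive probe of new physics (standard tree-level expressions, quoted here for orientation only):

\[ Q_W^p=-2\,(2C_{1u}+C_{1d})=1-4\sin^2\theta_W, \qquad C_{1u}=-\tfrac{1}{2}+\tfrac{4}{3}\sin^2\theta_W,\quad C_{1d}=\tfrac{1}{2}-\tfrac{2}{3}\sin^2\theta_W. \]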

  16. Psychometric Characteristics of a Measure of Emotional Dispositions Developed to Test a Developmental Propensity Model of Conduct Disorder

    Science.gov (United States)

    Lahey, Benjamin B.; Applegate, Brooks; Chronis, Andrea M.; Jones, Heather A.; Williams, Stephanie Hall; Loney, Jan; Waldman, Irwin D.

    2008-01-01

    Lahey and Waldman proposed a developmental propensity model in which three dimensions of children's emotional dispositions are hypothesized to transact with the environment to influence risk for conduct disorder, heterogeneity in conduct disorder, and comorbidity with other disorders. To prepare for future tests of this model, a new measure of…

  17. Test limits using correlated measurements

    NARCIS (Netherlands)

    Albers, Willem/Wim; Arts, G.R.J.; Kallenberg, W.C.M.

    1998-01-01

    In the standard model for inspection of manufactured parts measurements of the characteristic of interest are subject to, typically small, measurement errors. This leads to test limits which are slightly more strict than the corresponding specification limits. Quite often, however, such direct

  18. Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.

    Science.gov (United States)

    Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian

    2015-10-01

    Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 mm) as well as string length congruity (congruent: 1 m_2 km with m 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.

  19. Modeling and Experimental Tests of a Mechatronic Device to Measure Road Profiles Considering Impact Dynamics

    DEFF Research Database (Denmark)

    Souza, A.; Santos, Ilmar

    2002-01-01

of a vehicle and to test its components in the laboratory. In this framework a mechanism to measure road profiles is designed and presented. Such a mechanism is composed of two rolling wheels and two long beams attached to the vehicle by means of four Kardan joints. The wheels are kept in contact with the ground...... to highlight that the aim of this device is to independently measure two road profiles, without the influence of the dynamics of the vehicle to which the mechanism is attached. Before the mechatronic mechanism is attached to a real vehicle, its dynamic behavior must be known. A theoretical analysis of the mechanism...... predicts the mechanism movements well. However, it was also experimentally observed that the contact between the wheels and the road profile is not permanent. To analyze the non-contact between the wheels and the road, the Newton-Euler method is used to calculate forces and moments of reaction between...

  20. Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members

    NARCIS (Netherlands)

    Rusman, Ellen; Van Bruggen, Jan; Valcke, Martin

    2009-01-01

    Rusman, E., Van Bruggen, J., & Valcke, M. (2009). Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members. Paper presented at the Trust Workshop at the Eighth International Conference on Autonomous Agents and Multiagent Systems

  1. Standard-Model Tests with Superallowed β-Decay: An Important Application of Very Precise Mass Measurements

    International Nuclear Information System (INIS)

    Hardy, J. C.; Towner, I. S.

    2001-01-01

Superallowed β-decay provides a sensitive means for probing the limitations of the Electroweak Standard Model. To date, the strengths (ft-values) of superallowed 0+ → 0+ β-decay transitions have been determined with high precision from nine different short-lived nuclei, ranging from 10C to 54Co. Each result leads to an independent measure for the vector coupling constant G_V, and collectively the nine values can be used to test the conservation of the weak vector current (CVC). Within current uncertainties, the results support CVC to better than a few parts in 10,000 - a clear success for the Standard Model! However, when the average value of G_V, as determined in this way, is combined with data from decays of the muon and kaon to test another prediction of the Standard Model, the result is much more provocative. A test of the unitarity of the Cabibbo-Kobayashi-Maskawa matrix fails by more than two standard deviations. This result can be made more definitive by experiments that require extremely precise mass measurements, in some cases on very short-lived (≤100 ms) nuclei. This talk presents the current status and future prospects for these Standard-Model tests, emphasizing the role of precise mass, or mass-difference, measurements. There remains a real challenge to mass-measurement technique with the opportunity for significant new results.
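    In outline, and using the customary notation of this field rather than the talk's own equations, each measured ft-value is converted to a nucleus-independent corrected value, which fixes G_V and, through its ratio to the muon decay constant, the CKM element V_ud entering the unitarity test:

\[ \mathcal{F}t \equiv ft\,(1+\delta_R)(1-\delta_C)=\frac{K}{2\,G_V^{2}\,(1+\Delta_R)}, \qquad |V_{ud}|=\frac{G_V}{G_F}, \qquad |V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}=1, \]

    where δ_R, δ_C and Δ_R are radiative and isospin-symmetry-breaking corrections and K is a combination of fundamental constants; the ft-values themselves depend on Q-values, and hence on the precise mass differences emphasized above.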

  2. Modeling and Experimental Tests of a Mechatronic Device to Measure Road Profiles Considering Impact Dynamics

    DEFF Research Database (Denmark)

    Souza, A.; Santos, Ilmar

    2002-01-01

to highlight that the aim of this device is to independently measure two road profiles, without the influence of the dynamics of the vehicle to which the mechanism is attached. Before the mechatronic mechanism is attached to a real vehicle, its dynamic behavior must be known. A theoretical analysis of the mechanism......Vehicles travel at different speeds and, as a consequence, experience a broad spectrum of vibrations. One of the most important sources of vehicle vibration is the road profile. Hence the knowledge of the characteristics of a road profile enables engineers to predict the dynamic behavior...... dynamics is carried out with the help of a set of non-linear equations of motion obtained using the Newton-Euler-Jourdain method. Such a set of equations is numerically solved and the theoretical results are compared with experimental results obtained with a laboratory prototype. Comparisons show that the theoretical model...

  3. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
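    A minimal sketch of the kind of bin-wise likelihood scoring described above, assuming a forecast is a table of expected earthquake rates per space-magnitude-time bin and the observed catalog supplies the corresponding counts; the Poisson-per-bin assumption follows the general RELM-style formulation, but the data structures and bins here are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def forecast_log_likelihood(expected_rates, observed_counts):
    """Joint log-likelihood of an earthquake-rate forecast.

    expected_rates  : array of forecast rates lambda_i, one per bin.
    observed_counts : array of observed event counts n_i in the same bins.
    Assumes counts in different bins are independent Poisson variables.
    """
    expected_rates = np.asarray(expected_rates, dtype=float)
    observed_counts = np.asarray(observed_counts)
    return poisson.logpmf(observed_counts, expected_rates).sum()

# compare two toy forecasts against the same observed catalog
obs = np.array([0, 1, 0, 3, 0, 0, 2])
model_a = np.array([0.1, 0.8, 0.2, 2.5, 0.1, 0.1, 1.5])
model_b = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
print("log L(A) =", forecast_log_likelihood(model_a, obs))
print("log L(B) =", forecast_log_likelihood(model_b, obs))
```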

  4. Measuring English Language Workplace Proficiency across Subgroups: Using CFA Models to Validate Test Score Interpretation

    Science.gov (United States)

    Yoo, Hanwook; Manna, Venessa F.

    2017-01-01

    This study assessed the factor structure of the Test of English for International Communication (TOEIC®) Listening and Reading test, and its invariance across subgroups of test-takers. The subgroups were defined by (a) gender, (b) age, (c) employment status, (d) time spent studying English, and (e) having lived in a country where English is the…

  5. Testing the Standard Model

    CERN Document Server

    Riles, K

    1998-01-01

The Large Electron-Positron (LEP) accelerator near Geneva, more than any other instrument, has rigorously tested the predictions of the Standard Model of elementary particles. LEP measurements have probed the theory from many different directions and, so far, the Standard Model has prevailed. The rigour of these tests has allowed LEP physicists to determine unequivocally the number of fundamental 'generations' of elementary particles. These tests also allowed physicists to ascertain the mass of the top quark in advance of its discovery. Recent increases in the accelerator's energy allow new measurements to be undertaken, measurements that may uncover directly or indirectly the long-sought Higgs particle, believed to impart mass to all other particles.

  6. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
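    The permutation idea can be illustrated independently of the boosting machinery: compute a statistic that summarizes a between-device difference in measurement spread, then recompute it under random relabelling of which device produced which measurement. The sketch below is a deliberately simplified stand-in, not the GAMLSS-boosting procedure of the paper; the chosen statistic and the paired-measurement layout are assumptions.

```python
import numpy as np

def perm_test_scale(dev_a, dev_b, n_perm=5000, seed=1):
    """Permutation p-value for a difference in measurement spread between two devices.

    dev_a, dev_b : paired measurements of the same subjects by device A and device B.
    Statistic    : absolute difference of the devices' standard deviations.
    """
    rng = np.random.default_rng(seed)
    data = np.stack([dev_a, dev_b], axis=1)
    observed = abs(dev_a.std() - dev_b.std())
    count = 0
    for _ in range(n_perm):
        swap = rng.random(len(data)) < 0.5        # randomly swap device labels within each pair
        perm = data.copy()
        perm[swap] = perm[swap][:, ::-1]
        count += abs(perm[:, 0].std() - perm[:, 1].std()) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
truth = rng.normal(50, 10, size=200)              # toy "true" skin pigmentation values
device_a = truth + rng.normal(0, 1.0, size=200)   # small random error
device_b = truth + rng.normal(0, 2.5, size=200)   # larger random error
print("p =", perm_test_scale(device_a, device_b))
```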

  7. PROPOSAL FOR A MEASUREMENT MODEL FOR SOFTWARE TESTS WITH A FOCUS ON THE MANAGEMENT OF OUTSOURCED SERVICES

    Directory of Open Access Journals (Sweden)

    Angelica Toffano Seidel Calazans

    2012-08-01

Full Text Available The need for outsourcing IT services has shown significant growth over the past few years. This article presents a proposal for a measurement model for software tests with a focus on the management of these outsourced services by governmental organizations. The following specific goals were defined: to identify and analyze the test process; to identify and analyze the existing standards that govern the hiring of IT services; and to propose a measurement model for outsourced services of this type. For the analysis of the data collected (documentary research and semi-structured interviews), content analysis was adopted, and in order to prepare the metrics, the GQM – Goal, Questions, Metrics – approach was used. The result was confirmed by semi-structured interviews. The research identifies the following as possible: to establish objective and measurable criteria for a size measurement as the input to evaluate the efforts and deadlines involved; to follow up the test sub-processes; and to evaluate the service quality. Therefore, the management of this type of service hiring can be done more efficiently.

  8. Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects

    Science.gov (United States)

    Lockwood, J. R.; McCaffrey, Daniel F.

    2014-01-01

    A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…
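    The underlying problem can be stated compactly: when the pretest covariate is an error-prone score X = T + e rather than the true skill T, the ANCOVA coefficient on the pretest is attenuated toward zero, which in turn biases the estimated treatment effect whenever groups differ on T. With reliability λ,

\[ \operatorname{plim}\hat{\beta}_{X}=\lambda\,\beta_{T}, \qquad \lambda=\frac{\sigma_T^2}{\sigma_T^2+\sigma_e^2}, \]

    and corrections of this general kind, such as rescaling by an estimate of the reliability or modeling the pretest as a latent variable, are the typical responses to this bias.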

  9. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  10. Understanding water uptake in bioaerosols using laboratory measurements, field tests, and modeling

    Science.gov (United States)

    Chaudhry, Zahra; Ratnesar-Shumate, Shanna A.; Buckley, Thomas J.; Kalter, Jeffrey M.; Gilberry, Jerome U.; Eshbaugh, Jonathan P.; Corson, Elizabeth C.; Santarpia, Joshua L.; Carter, Christopher C.

    2013-05-01

Uptake of water by biological aerosols can impact their physical and chemical characteristics. The water content in a bioaerosol can affect the backscatter cross-section as measured by LIDAR systems. Better understanding of the water content in controlled-release clouds of bioaerosols can aid in the development of improved standoff detection systems. This study includes three methods to improve understanding of how bioaerosols take up water. The laboratory method measures hygroscopic growth of biological material after it is aerosolized and dried. Hygroscopicity curves are created as the humidity is increased in small increments to observe the deliquescence point, then the humidity is decreased to observe the efflorescence point. The field component of the study measures particle size distributions of biological material disseminated into a large humidified chamber. Measurements are made with a Twin Aerodynamic Particle Sizer (APS, TSI, Inc.)-Relative Humidity apparatus, in which two APS units measure the same aerosol cloud side by side. The first operated under dry conditions by sampling downstream of desiccant dryers, the second operated under ambient conditions. Relative humidity was measured within the sampling systems to determine the difference in the aerosol water content between the two sampling trains. The water content of the bioaerosols was calculated from the twin APS units following Khlystov et al. 2005 [1]. Biological material is measured dried and wet and compared to laboratory curves of the same material. Lastly, theoretical curves are constructed from literature values for components of the bioaerosol material.

  11. Modeling and Experimental Tests of a Mechatronic Device to Measure Road Profiles Considering Impact Dynamics

    DEFF Research Database (Denmark)

    Souza, A.; Santos, Ilmar

    2002-01-01

    dynamics is carried out with the help of a set of non-linear equations of motion obtained using the Newton-Euler-Jourdain method. Such a set of equations is numerically solved and the theoretical results are compared with experiments carried out with a laboratory prototype. Comparisons show that the theoretical model...... the mechanism components. By modeling impacts between a wheel and the road by Newton's law, the complete dynamics of the system can be predicted, and the operational range (velocity limits) of the mechanism can be defined based on the mathematical model. Key words: multibody dynamics, impact dynamics and road...

  12. Testing of models of stomatal ozone fluxes with field measurements in a mixed Mediterranean forest

    Czech Academy of Sciences Publication Activity Database

    Fares, S.; Matteucci, G.; Mugnozza, S.; Morani, A.; Calfapietra, Carlo; Salvatori, E.; Fusaro, L.; Manes, F.; Loreto, F.

    2013-01-01

    Roč. 67, MAR (2013), s. 242-251 ISSN 1352-2310 Institutional support: RVO:67179843 Keywords : Ozone fluxes * Stomatal conductance models * GPP * Mediterranean forest Subject RIV: EH - Ecology, Behaviour Impact factor: 3.062, year: 2013

  13. Improving measurement in health education and health behavior research using item response modeling: comparison with the classical test theory approach.

    Science.gov (United States)

    Wilson, Mark; Allen, Diane D; Li, Jun Corser

    2006-12-01

    This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME 'Standards for Educational and Psychological Tests' to compare the two approaches under three general headings: (i) choosing a model; (ii) evidence for reliability--incorporating reliability coefficients and measurement error--and (iii) evidence for validity--including evidence based on instrument content, response processes, internal structure, other variables and consequences. An example analysis of a self-efficacy (SE) scale for exercise is used to illustrate these comparisons. The investigation found that there were (i) aspects of the techniques and outcomes that were similar between the two approaches, (ii) aspects where the item response modeling approach contributes to instrument construction and evaluation beyond the classical approach and (iii) aspects of the analysis where the measurement models had little to do with the analysis or outcomes. There were no aspects where the classical approach contributed to instrument construction or evaluation beyond what could be done with the IRM approach. Finally, properties of the SE scale are summarized and recommendations made.
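    The contrast between the two approaches can be made concrete with a small sketch: under the Rasch model (one member of the IRM family) the probability of endorsing an item depends on the difference between a person parameter and an item parameter, whereas CTT summarises the same responses with a single reliability coefficient for the sum score. The simulation below is purely illustrative and is not the self-efficacy analysis reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_probability(theta, b):
    """Rasch item response function: P(response = 1 | person theta, item difficulty b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate dichotomous responses: 200 persons, 10 items (illustrative values).
theta = rng.normal(0.0, 1.0, size=(200, 1))   # person locations
b = np.linspace(-1.5, 1.5, 10)                # item difficulties
responses = (rng.random((200, 10)) < rasch_probability(theta, b)).astype(int)

# CTT summary: Cronbach's alpha for the sum score.
k = responses.shape[1]
alpha = k / (k - 1) * (1.0 - responses.var(axis=0, ddof=1).sum()
                       / responses.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

# IRM summary: the item response curve itself, evaluated at one point.
print("P(endorse | theta = 0, b = 0.5) =", round(rasch_probability(0.0, 0.5), 2))
```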

  14. Developing and Testing a Measure for the Ethical Culture of Organizations: The Corporate Ethical Virtues Model

    NARCIS (Netherlands)

    S.P. Kaptein (Muel)

    2007-01-01

    textabstractBased on four interlocking empirical studies, this paper initially validates and refines the Corporate Ethical Virtues Model which formulates normative criteria for the ethical culture of organizations. The findings of an exploratory factor analysis provide support for the existence of

  15. Testing models of children's self-regulation within educational contexts: implications for measurement.

    Science.gov (United States)

    Raver, C Cybele; Carter, Jocelyn Smith; McCoy, Dana Charles; Roy, Amanda; Ursache, Alexandra; Friedman, Allison

    2012-01-01

    Young children's self-regulation has increasingly been identified as an important predictor of their skills versus difficulties when navigating the social and academic worlds of early schooling. Recently, researchers have called for greater precision and more empirical rigor in defining what we mean when we measure, analyze, and interpret data on the role of children's self-regulatory skills for their early learning (Cole, Martin, & Dennis, 2004; Wiebe, Espy, & Charak, 2008). To address that call, this chapter summarizes our efforts to examine self-regulation in the context of early education with a clear emphasis on the need to consider the comprehensiveness and precision of measurement of self-regulation in order to best understand its role in early learning.

  16. Measurements of entanglement over a kilometric distance to test superluminal models of Quantum Mechanics: preliminary results.

    Science.gov (United States)

    Cocciaro, B.; Faetti, S.; Fronzoni, L.

    2017-08-01

    As shown in the EPR paper (Einstein, Podolsky and Rosen, 1935), Quantum Mechanics is a non-local theory. The Bell theorem and the successive experiments ruled out the possibility of explaining quantum correlations using only local hidden variables models. Some authors suggested that quantum correlations could be due to superluminal communications that propagate isotropically with velocity v_t > c in a preferred reference frame. For finite values of v_t and in some special cases, Quantum Mechanics and superluminal models lead to different predictions. So far, no deviations from the predictions of Quantum Mechanics have been detected and only lower bounds for the superluminal velocities v_t have been established. Here we describe a new experiment that increases the maximum detectable superluminal velocities and we give some preliminary results.

  17. DEVELOPMENT AND ADAPTATION OF VORTEX REALIZABLE MEASUREMENT SYSTEM FOR BENCHMARK TEST WITH LARGE SCALE MODEL OF NUCLEAR REACTOR

    Directory of Open Access Journals (Sweden)

    S. M. Dmitriev

    2017-01-01

    Full Text Available The development of applied calculation methods for nuclear reactor thermal and hydraulic processes over the last decades has been marked by the rapid growth of High Performance Computing (HPC), which contributes to the active introduction of Computational Fluid Dynamics (CFD). The use of such programs to justify technical and economic parameters, and especially the safety of nuclear reactors, requires comprehensive verification of mathematical models and CFD programs. The aim of the work was the development and adaptation of a measuring system having the characteristics necessary for its application on a verification test (experimental) facility. Its main objective is to study the mixing of coolant flows with different physical properties (for example, different concentrations of dissolved impurities) inside a large-scale reactor model. The basic method used for registration of the spatial concentration field in the mixing area is spatial conductometry. In the course of the work, a measurement complex, including spatial conductometric sensors, a system of secondary converters and software, was created. Methods for calibration and normalization of the measurement results were developed. Averaged concentration fields and nonstationary realizations of the measured local conductivity were obtained during the first experimental series, and spectral and statistical analyses of the realizations were carried out. The acquired data are compared with pretest CFD calculations performed in the ANSYS CFX program. A joint analysis of the obtained results made it possible to identify the main regularities of the process under study and to demonstrate the capability of the designed measuring system to deliver experimental data of the «CFD quality» required for verification. The adaptation of the spatial sensors makes it possible to conduct a more extensive program of experimental tests, on the basis of which a databank and the necessary generalizations will be created.

  18. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus

    1984-01-01

    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  19. A model and diagnostic measures for response time series on tests of concentration: Historical background, conceptual framework, and some applications

    NARCIS (Netherlands)

    Breukelen, G.J.P. van; Roskam, E.E.C.I.; Eling, P.A.T.M.; Jansen, R.W.T.L.; Souren, D.A.P.B.; Ickenroth, J.G.M.

    1995-01-01

    Based upon classical hypotheses about accumulating mental fatigue and distraction and its effect on response times, put forward in late 19th and early 20th century papers, a mathematical model is proposed for response times on tests of speed and concentration. The model assumes the random occurrence

  20. Conformance Testing: Measurement Decision Rules

    Science.gov (United States)

    Mimbs, Scott M.

    2010-01-01

    The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement
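    As a concrete, hedged illustration of what a documented decision rule can look like (this is a generic guard-band rule, not one taken from the paper), the sketch below accepts a unit only if the measured deviation from nominal fits inside the tolerance reduced by the expanded measurement uncertainty, so that measurement error is unlikely to turn a non-conforming unit into a false accept.

```python
def guard_banded_accept(measured, nominal, tolerance, expanded_uncertainty):
    """Simple guard-band decision rule: shrink the acceptance zone by the
    expanded uncertainty U before comparing the measured deviation to it."""
    acceptance_limit = tolerance - expanded_uncertainty
    if acceptance_limit <= 0:
        raise ValueError("measurement process not capable for this tolerance")
    return abs(measured - nominal) <= acceptance_limit

# Illustrative numbers: 10.00 V nominal, +/-0.10 V tolerance, U = 0.02 V.
print(guard_banded_accept(10.07, nominal=10.0, tolerance=0.10, expanded_uncertainty=0.02))  # True
print(guard_banded_accept(10.09, nominal=10.0, tolerance=0.10, expanded_uncertainty=0.02))  # False
```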

  1. Comparison between traditional laboratory tests, permeability measurements and CT-based fluid flow modelling for cultural heritage applications.

    Science.gov (United States)

    De Boever, Wesley; Bultreys, Tom; Derluyn, Hannelore; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-06-01

    In this paper, we examine the possibility to use on-site permeability measurements for cultural heritage applications as an alternative for traditional laboratory tests such as determination of the capillary absorption coefficient. These on-site measurements, performed with a portable air permeameter, were correlated with the pore network properties of eight sandstones and one granular limestone that are discussed in this paper. The network properties of the 9 materials tested in this study were obtained from micro-computed tomography (μCT) and compared to measurements and calculations of permeability and the capillary absorption rate of the stones under investigation, in order to find the correlation between pore network characteristics and fluid management characteristics of these sandstones. Results show a good correlation between capillary absorption, permeability and network properties, opening the possibility of using on-site permeability measurements as a standard method in cultural heritage applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Comparison between traditional laboratory tests, permeability measurements and CT-based fluid flow modelling for cultural heritage applications

    Energy Technology Data Exchange (ETDEWEB)

    De Boever, Wesley, E-mail: Wesley.deboever@ugent.be [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Bultreys, Tom; Derluyn, Hannelore [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Van Hoorebeke, Luc [UGCT/Radiation Physics, Dept. of Physics & Astronomy, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium); Cnudde, Veerle [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium)

    2016-06-01

    In this paper, we examine the possibility to use on-site permeability measurements for cultural heritage applications as an alternative for traditional laboratory tests such as determination of the capillary absorption coefficient. These on-site measurements, performed with a portable air permeameter, were correlated with the pore network properties of eight sandstones and one granular limestone that are discussed in this paper. The network properties of the 9 materials tested in this study were obtained from micro-computed tomography (μCT) and compared to measurements and calculations of permeability and the capillary absorption rate of the stones under investigation, in order to find the correlation between pore network characteristics and fluid management characteristics of these sandstones. Results show a good correlation between capillary absorption, permeability and network properties, opening the possibility of using on-site permeability measurements as a standard method in cultural heritage applications. - Highlights: • Measurements of capillary absorption are compared to in-situ permeability. • We obtain pore size distribution and connectivity by using micro-CT. • These properties explain correlation between permeability and capillarity. • Correlation between both methods is good to excellent. • Permeability measurements could be a good alternative to capillarity measurement.

  3. A Comparison of the One-and Three-Parameter Logistic Models on Measures of Test Efficiency.

    Science.gov (United States)

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  4. Standardized Testing of Phasor Measurement Units

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Kenneth E.; Faris, Anthony J.; Hauer, John F.

    2006-05-31

    This paper describes a set of tests used to determine Phasor Measurement Unit (PMU) measurement characteristics under steady state and dynamic conditions. The methodology is repeatable, comparable among test facilities, and can be performed at any facility with commonly available relay and standard test equipment. The methodology is based upon using test signals that are mathematically generated from a signal model and played into the PMU with precise GPS synchronization. Timing flags included with the test signal correlate the test signals with the PMU output. This allows the phasor model to be compared precisely with the value estimated by the PMU for performance analysis. The timing flags also facilitate programmed plot and report generation.
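    A minimal sketch of the comparison step described above: a reference phasor is defined by the mathematical signal model at a GPS-flagged instant, and the PMU's reported phasor is scored against it. Total vector error (TVE) is used here as the figure of merit, which is the usual choice in PMU performance standards; the test-signal playback and report generation are not reproduced, and all values are illustrative.

```python
import numpy as np

def total_vector_error(ref_mag, ref_phase_rad, est_mag, est_phase_rad):
    """TVE: magnitude of the complex difference between the estimated and
    reference phasors, normalised by the reference magnitude."""
    ref = ref_mag * np.exp(1j * ref_phase_rad)
    est = est_mag * np.exp(1j * est_phase_rad)
    return np.abs(est - ref) / np.abs(ref)

# Reference phasor from the signal model and a hypothetical PMU estimate
# at the same GPS-flagged instant.
tve = total_vector_error(ref_mag=1.000, ref_phase_rad=np.deg2rad(30.0),
                         est_mag=0.998, est_phase_rad=np.deg2rad(30.4))
print(f"TVE = {100 * tve:.2f} %")
```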

  5. Measurement of nu/sub e/ and anti nu/sub e/ elastic scattering as a test of the standard model

    International Nuclear Information System (INIS)

    Abe, K.; Taylor, F.E.; White, D.H.

    1982-01-01

    Various tests of the standard SU(2) x U(1) model of weak interactions which can be performed by measurements of electron and muon neutrino-electron elastic scattering are reviewed. Electron neutrino-electron elastic scattering has both a neutral current part as well as a charged current part, and therefore offers a unique place to measure the interference of these two amplitudes. A measurement of the y-dependence of neutrino-electron elastic scattering can separately measure g_V and g_A as well as test for the presence of S, P, or T terms. Several measurable quantities involving cross sections and the interference term are derived from the standard model. Various design considerations for an experiment to determine the NC-CC interference term and the y-dependence of muon neutrino-electron elastic scattering are discussed.

  6. Tests of the electroweak standard model and measurement of the weak mixing angle with the ATLAS detector

    International Nuclear Information System (INIS)

    Goebel, M.

    2011-09-01

    In this thesis the global Standard Model (SM) fit to the electroweak precision observables is revisited with respect to the newest experimental results. Various consistency checks are performed showing no significant deviation from the SM. The Higgs boson mass is estimated by the electroweak fit to be M_H = 94^{+30}_{-24} GeV without any information from direct Higgs searches at LEP, Tevatron, and the LHC, and the result is M_H = 125^{+8}_{-10} GeV when including the direct Higgs mass constraints. The strong coupling constant is extracted at fourth perturbative order as α_s(M_Z^2) = 0.1194 ± 0.0028 (exp) ± 0.0001 (theo). From the fit including the direct Higgs constraints the effective weak mixing angle is determined indirectly to be sin^2 θ^l_eff = 0.23147^{+0.00012}_{-0.00010}. For the W mass the value of M_W = 80.360^{+0.012}_{-0.011} GeV is obtained indirectly from the fit including the direct Higgs constraints. The electroweak precision data is also exploited to constrain new physics models by using the concept of oblique parameters. In this thesis the following models are investigated: models with a sequential fourth fermion generation, the inert-Higgs doublet model, the littlest Higgs model with T-parity conservation, and models with large extra dimensions. In contrast to the SM, in these models heavy Higgs bosons are in agreement with the electroweak precision data. The forward-backward asymmetry as a function of the invariant mass is measured for pp → Z/γ* → e+e- events collected with the ATLAS detector at the LHC. The data taken in 2010 at a center-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 37.4 pb^-1, is analyzed. The measured forward-backward asymmetry is in agreement with the SM expectation. From the measured forward-backward asymmetry the effective weak mixing angle is extracted as sin^2 θ^l_eff = 0.2204 ± 0.0071 (stat)^{+0.0039}_{-0.0044} (syst). The impact of unparticles and large extra dimensions on the forward-backward asymmetry at large

  7. Tests of the electroweak standard model and measurement of the weak mixing angle with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, M.

    2011-09-15

    In this thesis the global Standard Model (SM) fit to the electroweak precision observables is revisited with respect to the newest experimental results. Various consistency checks are performed showing no significant deviation from the SM. The Higgs boson mass is estimated by the electroweak fit to be M_H = 94^{+30}_{-24} GeV without any information from direct Higgs searches at LEP, Tevatron, and the LHC, and the result is M_H = 125^{+8}_{-10} GeV when including the direct Higgs mass constraints. The strong coupling constant is extracted at fourth perturbative order as α_s(M_Z^2) = 0.1194 ± 0.0028 (exp) ± 0.0001 (theo). From the fit including the direct Higgs constraints the effective weak mixing angle is determined indirectly to be sin^2 θ^l_eff = 0.23147^{+0.00012}_{-0.00010}. For the W mass the value of M_W = 80.360^{+0.012}_{-0.011} GeV is obtained indirectly from the fit including the direct Higgs constraints. The electroweak precision data is also exploited to constrain new physics models by using the concept of oblique parameters. In this thesis the following models are investigated: models with a sequential fourth fermion generation, the inert-Higgs doublet model, the littlest Higgs model with T-parity conservation, and models with large extra dimensions. In contrast to the SM, in these models heavy Higgs bosons are in agreement with the electroweak precision data. The forward-backward asymmetry as a function of the invariant mass is measured for pp → Z/γ* → e+e- events collected with the ATLAS detector at the LHC. The data taken in 2010 at a center-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 37.4 pb^-1, is analyzed. The measured forward-backward asymmetry is in agreement with the SM expectation. From the measured forward-backward asymmetry the effective weak mixing angle is extracted as sin^2 θ^l

  8. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  9. Developing and testing a measurement tool for assessing predictors of breakfast consumption based on a health promotion model.

    Science.gov (United States)

    Dehdari, Tahereh; Rahimi, Tahereh; Aryaeian, Naheed; Gohari, Mahmood Reza; Esfeh, Jabiz Modaresi

    2014-01-01

    To develop an instrument for measuring Health Promotion Model constructs in terms of breakfast consumption, and to identify the constructs that were predictors of breakfast consumption among Iranian female students. A questionnaire on Health Promotion Model variables was developed and potential predictors of breakfast consumption were assessed using this tool. One hundred female students, mean age 13 years (SD ± 1.2 years). Two middle schools from moderate-income areas in Qom, Iran. Health Promotion Model variables were assessed using a 58-item questionnaire. Breakfast consumption was also measured. Internal consistency (Cronbach alpha), content validity index, content validity ratio, multiple linear regression using stepwise method, and Pearson correlation. Content validity index and content validity ratio scores of the developed scale items were 0.89 and 0.93, respectively. Internal consistencies (range, .74-.91) of subscales were acceptable. Prior related behaviors, perceived barriers, self-efficacy, and competing demand and preferences were 4 constructs that could predict 63% variance of breakfast frequency per week among subjects. The instrument developed in this study may be a useful tool for researchers to explore factors affecting breakfast consumption among students. Students with a high level of self-efficacy, more prior related behavior, fewer perceived barriers, and fewer competing demands were most likely to regularly consume breakfast. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  10. Hybrid choice model to disentangle the effect of awareness from attitudes: Application test of soft measures in medium size city

    DEFF Research Database (Denmark)

    Sottile, Eleonora; Meloni, Italo; Cherchi, Elisabetta

    2017-01-01

    ), carried out with the purpose of promoting the use of the light rail in Park and Ride mode. To account for all these effects in the choice between car and Park and Ride we estimate a Hybrid Choice Model where the discrete choice structure allows us to estimate the effect of awareness of environment......The need to reduce private vehicle use has led to the development of soft measures aimed at re-educating car users through information processes that raise their awareness about the benefits of environmentally friendly modes, encouraging them to voluntarily change their travel choice behaviour...... (level of services characteristics being equal). It has been observed that these measures can produce enduring changes, being the result of mindful decisions. It is important then to try and understand what contributes to shape individuals’ preferences in order to be able to define the best policy...

  11. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was first of all to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  12. Measurement invariance within and between individuals: a distinct problem in testing the equivalence of intra- and inter-individual model structures.

    Science.gov (United States)

    Adolf, Janne; Schuurman, Noémi K; Borkenau, Peter; Borsboom, Denny; Dolan, Conor V

    2014-01-01

    We address the question of equivalence between modeling results obtained on intra-individual and inter-individual levels of psychometric analysis. Our focus is on the concept of measurement invariance and the role it may play in this context. We discuss this in general against the background of the latent variable paradigm, complemented by an operational demonstration in terms of a linear state-space model, i.e., a time series model with latent variables. Implemented in a multiple-occasion and multiple-subject setting, the model simultaneously accounts for intra-individual and inter-individual differences. We consider the conditions, in terms of invariance constraints, under which modeling results are generalizable (a) over time within subjects, (b) over subjects within occasions, and (c) over time and subjects simultaneously, thus implying an equivalence relationship between both dimensions. Since we distinguish the measurement model from the structural model governing relations between the latent variables of interest, we decompose the invariance constraints into those that involve structural parameters and those that involve measurement parameters and relate to measurement invariance. Within the resulting taxonomy of models, we show that, under the condition of measurement invariance over time and subjects, there exists a form of structural equivalence between levels of analysis that is distinct from full structural equivalence, i.e., ergodicity. We demonstrate how measurement invariance between and within subjects can be tested in the context of high-frequency repeated measures in personality research. Finally, we relate problems of measurement variance to problems of non-ergodicity as currently discussed and approached in the literature.
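    A schematic version of the kind of model the abstract describes may help fix ideas (the symbols below are generic and not taken from the paper): a linear state-space model with a measurement part and a structural part, written for person i at occasion t.

```latex
\begin{aligned}
\text{Measurement model:} \quad & y_{i,t} = \tau_i + \Lambda_i\,\eta_{i,t} + \varepsilon_{i,t},
  \qquad \varepsilon_{i,t} \sim N(0,\,\Theta_i) \\
\text{Structural model:}  \quad & \eta_{i,t} = B_i\,\eta_{i,t-1} + \zeta_{i,t},
  \qquad \zeta_{i,t} \sim N(0,\,\Psi_i)
\end{aligned}
```

    In this notation, measurement invariance over time and subjects corresponds to constraining the measurement parameters (τ, Λ, Θ) to be equal across persons and occasions, while full structural equivalence (ergodicity) additionally constrains the structural parameters (B, Ψ).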

  13. Tree-Based Global Model Tests for Polytomous Rasch Models

    Science.gov (United States)

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  14. Do different circadian typology measures modulate their relationship with personality? A test using the Alternative Five Factor Model.

    Science.gov (United States)

    Randler, Christoph; Gomà-i-Freixanet, Montserrat; Muro, Anna; Knauber, Christina; Adan, Ana

    2015-03-01

    The relationship between personality and circadian typology shows some inconsistent results and it has been hypothesized that the model used to measure personality might have a moderating effect on this relationship. However, it has never been explored if this inconsistency was dependent on the questionnaire used to measure differences in circadian rhythms as well. We explored this issue in a sample of 564 university students (32% men; 19-40 years) using the Zuckerman-Kuhlman Personality Questionnaire, which is based on an evolutionary-biological approach, in combination with the Composite Scale of Morningness (CSM) and the reduced Morningness-Eveningness Questionnaire (rMEQ). Both questionnaires detected differences between circadian typologies in Sociability (highest in evening types; ET) and Impulsive Sensation-Seeking scales (highest in ET), while the CSM also detected differences in Activity (lowest in ET) and Aggression-Hostility (highest in ET). Further, both questionnaires detected differences between circadian typologies in the subscales General Activity (morning types [MT] higher than ET), Impulsivity (ET highest) and Sensation-Seeking (highest in ET). Differences between circadian typologies/groups in the subscales Parties (highest in ET) and Isolation Intolerance (lowest in MT) were only detected by the rMEQ. The CSM clearly separated evening types from neither and morning types while the rMEQ showed that neither types are not intermediate but closer to evening types in General Activity and Isolation Intolerance, and closer to morning types in Impulsive Sensation-Seeking, Parties, Impulsivity and Sensation Seeking. The obtained results indicate that the relationship between circadian typology and personality may be dependent on the instrument used to assess circadian typology. This fact may help to explain some of the conflicting data available on the relationship between these two concepts.

  15. Dual-porosity modeling of groundwater recharge: testing a quick calibration using in situ moisture measurements, Areuse River Delta, Switzerland

    Science.gov (United States)

    Alaoui, Abdallah; Eugster, Werner

    A simple method for calibrating the dual-porosity MACRO model via in situ TDR measurements during a brief infiltration run (2.8 h) is proposed with the aim of estimating local groundwater recharge (GR). The recharge was modeled firstly by considering the entire 3 m of unsaturated soil, and secondly by considering only the topsoil to the zero-flux plane (0-0.70 m). The modeled recharge was compared against the GR obtained from field measurements. Measured GR was 313 mm during a 1-year period (15 October 1990-15 October 1991). The best simulation results were obtained when considering the entire unsaturated soil under equilibrium conditions excluding the macropore flow effect (330 mm), whereas under non-equilibrium conditions GR was overestimated (378 mm). Sensitivity analyses showed that the investigation of the topsoil is sufficient in estimating local GR in this case, since the water stored below this depth appears to be below the typical rooting depth of the vegetation and is not available for evapotranspiration. The modeled recharge under equilibrium conditions for the 0.7-m-topsoil layer was found to be 364 mm, which is in acceptable agreement with measurements. A simple method for calibrating the dual-porosity MACRO model via in situ TDR measurements during a brief infiltration run (2.8 h) was proposed for the local estimation of groundwater recharge (GR). GR was first simulated taking into account the entire unsaturated zone (3 m) and then considering only the soil cover between zero and the zero-flux plane (0.70 m). The simulated GR was compared with the observed GR. The GR measured over one year (15 October 1990-15 October 1991) was 313 mm. The best simulations were obtained by taking into account the entire unsaturated zone under equilibrium conditions excluding preferential flow (330 mm). Under non-equilibrium conditions, GR was overestimated (378 mm). The analyses of

  16. Testing a blowing snow model against distributed snow measurements at Upper Sheep Creek, Idaho, United States of America

    Science.gov (United States)

    Rajiv Prasad; David G. Tarboton; Glen E. Liston; Charles H. Luce; Mark S. Seyfried

    2001-01-01

    In this paper a physically based snow transport model (SnowTran-3D) was used to simulate snow drifting over a 30 m grid and was compared to detailed snow water equivalence (SWE) surveys on three dates within a small 0.25 km2 subwatershed, Upper Sheep Creek. Two precipitation scenarios and two vegetation scenarios were used to carry out four snow transport model runs in...

  17. Measuring Collective Efficacy: A Multilevel Measurement Model for Nested Data

    Science.gov (United States)

    Matsueda, Ross L.; Drakulich, Kevin M.

    2016-01-01

    This article specifies a multilevel measurement model for survey response when data are nested. The model includes a test-retest model of reliability, a confirmatory factor model of inter-item reliability with item-specific bias effects, an individual-level model of the biasing effects due to respondent characteristics, and a neighborhood-level…

  18. Home-cage anxiety levels in a transgenic rat model for Spinocerebellar ataxia type 17 measured by an approach-avoidance task: The light spot test.

    Science.gov (United States)

    Kyriakou, Elisavet I; Nguyen, Huu Phuc; Homberg, Judith R; Van der Harst, Johanneke E

    2017-08-18

    Measuring anxiety in a reliable manner is essential for behavioural phenotyping of rodent models such as the rat model for Spinocerebellar ataxia type 17 (SCA17), where anxiety is reported in patients. An automated tool for assessing anxiety within the home cage can minimize human intervention and the stress of handling, transportation and novelty. We applied the anxiety test "light spot" (LS) (a white LED directed at the food hopper) to our transgenic SCA17 rat model in the PhenoTyper 4500® to extend the knowledge of this automated tool for behavioural phenotyping and to verify an anxiety-like phenotype at three different disease stages for use in future therapeutic studies. Locomotor activity was increased in SCA17 rats at 6 and 9 months during the first 15 min of the LS, potentially reflecting increased risk assessment. Both genotypes responded to the test with a lower duration in the LS zone and more time spent inside the shelter compared to baseline. We present the first data on a rat model subjected to the LS. The LS can be considered more biologically relevant than a traditional test as it measures anxiety in a familiar situation. The LS successfully evoked avoidance and shelter-seeking in rats. SCA17 rats showed a stronger approach-avoidance conflict, reflected by increased activity in the area outside the LS. This home cage test, continuously monitoring pre- and post-effects, provides the opportunity for in-depth analysis, making it a potentially useful tool for detecting subtle or complex anxiety-related traits in rodents. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model: Measurement of Earth's dragging of inertial frames.

    Science.gov (United States)

    Ciufolini, Ignazio; Paolozzi, Antonio; Pavlis, Erricos C; Koenig, Rolf; Ries, John; Gurzadyan, Vahe; Matzner, Richard; Penrose, Roger; Sindoni, Giampiero; Paris, Claudio; Khachatryan, Harutyun; Mirzoyan, Sergey

    2016-01-01

    We present a test of general relativity, the measurement of the Earth's dragging of inertial frames. Our result is obtained using about 3.5 years of laser-ranged observations of the LARES, LAGEOS, and LAGEOS 2 laser-ranged satellites together with the Earth gravity field model GGM05S produced by the space geodesy mission GRACE. We measure μ = (0.994 ± 0.002) ± 0.05, where μ is the Earth's dragging of inertial frames normalized to its general relativity value, 0.002 is the 1-sigma formal error and 0.05 is our preliminary estimate of systematic error mainly due to the uncertainties in the Earth gravity model GGM05S. Our result is in agreement with the prediction of general relativity.

  20. A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model. Measurement of Earth's dragging of inertial frames

    Energy Technology Data Exchange (ETDEWEB)

    Ciufolini, Ignazio [Universita del Salento, Dipartimento Ingegneria dell' Innovazione, Lecce (Italy); Sapienza Universita di Roma, Scuola di Ingegneria Aerospaziale, Rome (Italy); Paolozzi, Antonio; Paris, Claudio [Sapienza Universita di Roma, Scuola di Ingegneria Aerospaziale, Rome (Italy); Museo della Fisica e Centro Studi e Ricerche Enrico Fermi, Rome (Italy); Pavlis, Erricos C. [University of Maryland, Joint Center for Earth Systems Technology (JCET), Baltimore County (United States); Koenig, Rolf [GFZ German Research Centre for Geosciences, Helmholtz Centre Potsdam, Potsdam (Germany); Ries, John [University of Texas at Austin, Center for Space Research, Austin (United States); Gurzadyan, Vahe; Khachatryan, Harutyun; Mirzoyan, Sergey [Alikhanian National Laboratory and Yerevan State University, Center for Cosmology and Astrophysics, Yerevan (Armenia); Matzner, Richard [University of Texas at Austin, Theory Center, Austin (United States); Penrose, Roger [University of Oxford, Mathematical Institute, Oxford (United Kingdom); Sindoni, Giampiero [Sapienza Universita di Roma, DIAEE, Rome (Italy)

    2016-03-15

    We present a test of general relativity, the measurement of the Earth's dragging of inertial frames. Our result is obtained using about 3.5 years of laser-ranged observations of the LARES, LAGEOS, and LAGEOS 2 laser-ranged satellites together with the Earth gravity field model GGM05S produced by the space geodesy mission GRACE. We measure μ = (0.994 ± 0.002) ± 0.05, where μ is the Earth's dragging of inertial frames normalized to its general relativity value, 0.002 is the 1-sigma formal error and 0.05 is our preliminary estimate of systematic error mainly due to the uncertainties in the Earth gravity model GGM05S. Our result is in agreement with the prediction of general relativity. (orig.)

  1. Recommendations for analysis of repeated-measures designs: testing and correcting for sphericity and use of manova and mixed model analysis.

    Science.gov (United States)

    Armstrong, Richard A

    2017-09-01

    A common experimental design in ophthalmic research is the repeated-measures design in which at least one variable is a within-subject factor. This design is vulnerable to lack of 'sphericity' which assumes that the variances of the differences among all possible pairs of within-subject means are equal. Traditionally, this design has been analysed using a repeated-measures analysis of variance (RM-anova) but increasingly more complex methods such as multivariate anova (manova) and mixed model analysis (MMA) are being used. This article surveys current practice in the analysis of designs incorporating different factors in research articles published in three optometric journals, namely Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), and Clinical and Experimental Optometry (CXO), and provides advice to authors regarding the analysis of repeated-measures designs. Of the total sample of articles, 66% used a repeated-measures design. Of those articles using a repeated-measures design, 59% and 8% analysed the data using RM-anova or manova respectively and 33% used MMA. The use of MMA relative to RM-anova has increased significantly since 2009/10. A further search using terms to select those papers testing and correcting for sphericity ('Mauchly's test', 'Greenhouse-Geisser', 'Huynh and Feldt') identified 66 articles, 62% of which were published from 2012 to the present. If the design is balanced without missing data then manova should be used rather than RM-anova as it gives better protection against lack of sphericity. If the design is unbalanced or with missing data then MMA is the method of choice. However, MMA is a more complex analysis and can be difficult to set up and run, and care should be taken first, to define appropriate models to be tested and second, to ensure that sample sizes are adequate. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
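    For readers unfamiliar with the corrections surveyed above, the sketch below shows one way the Greenhouse-Geisser epsilon can be estimated from the double-centred covariance matrix of the within-subject conditions; epsilon close to 1 indicates sphericity is plausible, and smaller values are used to deflate the RM-ANOVA degrees of freedom. This is a generic illustration with simulated data, not a recommendation taken from the surveyed journals.

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for an (n subjects x k conditions) array:
    (sum of eigenvalues)^2 / ((k - 1) * sum of squared eigenvalues) of the
    double-centred covariance matrix of the conditions."""
    n, k = data.shape
    S = np.cov(data, rowvar=False)            # k x k covariance of conditions
    H = np.eye(k) - np.ones((k, k)) / k       # centring matrix
    Sc = H @ S @ H                            # double-centred covariance
    eig = np.linalg.eigvalsh(Sc)
    eig = eig[eig > 1e-12]                    # drop the null eigenvalue
    return eig.sum() ** 2 / ((k - 1) * (eig ** 2).sum())

# Simulated repeated-measures data: 12 subjects, 4 within-subject conditions.
rng = np.random.default_rng(1)
data = rng.normal(size=(12, 4)) + np.array([0.0, 0.2, 0.4, 0.6])
eps = greenhouse_geisser_epsilon(data)
print(f"GG epsilon = {eps:.2f} (multiply the ANOVA degrees of freedom by this)")
```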

  2. Strain measurement based battery testing

    Science.gov (United States)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk

    2017-05-23

    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
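    The estimator described above reduces to a relative change in strain read at the same selected charge capacity; a minimal sketch follows (the sign convention and the microstrain values are assumptions for illustration).

```python
def estimated_capacity_degradation(strain_initial, strain_aged):
    """Relative change between the initial-state and aged-state strain readings
    taken at the same selected charge capacity (per the description above)."""
    return (strain_aged - strain_initial) / strain_initial

# Hypothetical microstrain readings at the selected charge capacity.
print(f"{estimated_capacity_degradation(850.0, 912.0):.1%}")
```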

  3. Study of the effects of low-fluence laser irradiation on wall paintings: Test measurements on fresco model samples

    Science.gov (United States)

    Raimondi, Valentina; Cucci, Costanza; Cuzman, Oana; Fornacelli, Cristina; Galeotti, Monica; Gomoiu, Ioana; Lognoli, David; Mohanu, Dan; Palombi, Lorenzo; Picollo, Marcello; Tiano, Piero

    2013-11-01

    Laser-induced fluorescence is widely applied in several fields as a diagnostic tool to characterise organic and inorganic materials and could be also exploited for non-invasive remote investigation of wall paintings using the fluorescence lidar technique. The latter relies on the use of a low-fluence pulsed UV laser and a telescope to carry out remote spectroscopy on a given target. A first step to investigate the applicability of this technique is to assess the effects of low-fluence laser radiation on wall paintings. This paper presents a study devoted to investigate the effects of pulsed UV laser radiation on a set of fresco model samples prepared using different pigments. To irradiate the samples we used a tripled-frequency Q-switched Nd:YAG laser (emission wavelength: 355 nm; pulse width: 5 ns). We varied the laser fluence from 0.1 mJ/cm2 to 1 mJ/cm2 and the number of laser pulses from 1 to 500 shots. We characterised the investigated materials using several diagnostic and analytical techniques (colorimetry, optical microscopy, fibre optical reflectance spectroscopy and ATR-FT-IR microscopy) to compare the surface texture and their composition before and after laser irradiation. Results open good prospects for a non-invasive investigation of wall paintings using the fluorescence lidar technique.

  4. Study of the effects of low-fluence laser irradiation on wall paintings: Test measurements on fresco model samples

    International Nuclear Information System (INIS)

    Raimondi, Valentina; Cucci, Costanza; Cuzman, Oana; Fornacelli, Cristina; Galeotti, Monica; Gomoiu, Ioana; Lognoli, David; Mohanu, Dan; Palombi, Lorenzo; Picollo, Marcello; Tiano, Piero

    2013-01-01

    Laser-induced fluorescence is widely applied in several fields as a diagnostic tool to characterise organic and inorganic materials and could be also exploited for non-invasive remote investigation of wall paintings using the fluorescence lidar technique. The latter relies on the use of a low-fluence pulsed UV laser and a telescope to carry out remote spectroscopy on a given target. A first step to investigate the applicability of this technique is to assess the effects of low-fluence laser radiation on wall paintings. This paper presents a study devoted to investigate the effects of pulsed UV laser radiation on a set of fresco model samples prepared using different pigments. To irradiate the samples we used a tripled-frequency Q-switched Nd:YAG laser (emission wavelength: 355 nm; pulse width: 5 ns). We varied the laser fluence from 0.1 mJ/cm² to 1 mJ/cm² and the number of laser pulses from 1 to 500 shots. We characterised the investigated materials using several diagnostic and analytical techniques (colorimetry, optical microscopy, fibre optical reflectance spectroscopy and ATR-FT-IR microscopy) to compare the surface texture and their composition before and after laser irradiation. Results open good prospects for a non-invasive investigation of wall paintings using the fluorescence lidar technique.

  5. Study of the effects of low-fluence laser irradiation on wall paintings: Test measurements on fresco model samples

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, Valentina, E-mail: v.raimondi@ifac.cnr.it [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cucci, Costanza [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cuzman, Oana [Institute for the Conservation and Promotion of Cultural Heritage-National Research Council (CNR-ICVBC), Firenze (Italy); Fornacelli, Cristina [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Galeotti, Monica [Opificio delle Pietre Dure (OPD), Firenze (Italy); Gomoiu, Ioana [National University of Art, Bucharest (Romania); Lognoli, David [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Mohanu, Dan [National University of Art, Bucharest (Romania); Palombi, Lorenzo; Picollo, Marcello [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Tiano, Piero [Institute for the Conservation and Promotion of Cultural Heritage-National Research Council (CNR-ICVBC), Firenze (Italy)

    2013-11-01

    Laser-induced fluorescence is widely applied in several fields as a diagnostic tool to characterise organic and inorganic materials and could be also exploited for non-invasive remote investigation of wall paintings using the fluorescence lidar technique. The latter relies on the use of a low-fluence pulsed UV laser and a telescope to carry out remote spectroscopy on a given target. A first step to investigate the applicability of this technique is to assess the effects of low-fluence laser radiation on wall paintings. This paper presents a study devoted to investigate the effects of pulsed UV laser radiation on a set of fresco model samples prepared using different pigments. To irradiate the samples we used a tripled-frequency Q-switched Nd:YAG laser (emission wavelength: 355 nm; pulse width: 5 ns). We varied the laser fluence from 0.1 mJ/cm² to 1 mJ/cm² and the number of laser pulses from 1 to 500 shots. We characterised the investigated materials using several diagnostic and analytical techniques (colorimetry, optical microscopy, fibre optical reflectance spectroscopy and ATR-FT-IR microscopy) to compare the surface texture and their composition before and after laser irradiation. Results open good prospects for a non-invasive investigation of wall paintings using the fluorescence lidar technique.

  6. General measure of Enterprising Tendency test

    OpenAIRE

    Caird, Sally

    2013-01-01

    The General measure of Enterprising Tendency test (GET2) is a measure of enterprising tendency developed for educational use and self-assessment. It measures five entrepreneurial attributes, namely Need for Achievement, Need for Autonomy, Creative Tendency, Calculated Risk Taking and Locus of Control, providing interpretation for these enterprising attributes. Since 1998 there has been considerable worldwide interest in the test of General Enterprising Tendency (GET test) developed and tested ...

  7. Measuring and modelling concurrency

    Science.gov (United States)

    Sawers, Larry

    2013-01-01

    This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships): measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction) at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case. PMID:23406964

  8. Are CH2O measurements in the marine boundary layer suitable for testing the current understanding of CH4 photooxidation?: A model study

    Science.gov (United States)

    Wagner, V.; von Glasow, R.; Fischer, H.; Crutzen, P. J.

    2002-02-01

    On the basis of a data set collected during the Indian Ocean Experiment (INDOEX) campaign 1999, we investigated the formaldehyde (CH2O) budget in the southern Indian Ocean (SIO). With a photochemical box model we simulated the contribution of methane and nonmethane volatile organic compounds to the CH2O budget. To identify the reactions and model constraints that introduce the largest uncertainties in the modeled CH2O concentration, we carried out a local sensitivity analysis. Furthermore, a Monte Carlo method was used to assess the global error of the model predictions. According to this analysis the 2σ uncertainty in the modeled CH2O concentration is 49%. The deviation between observed (200 +/- 70 parts per trillion by volume (pptv) (2σ)) and modeled (224 +/- 110 pptv (2σ)) daily mean CH2O concentration is 12%. However, the combined errors of model and measurement are such that deviations as large as 65% are not significant at the 2σ level. Beyond the ``standard'' photochemistry we analyzed the impact of halogen and aerosol chemistry on the CH2O concentration and investigated the vertical distribution of CH2O in the marine boundary layer (MBL). Calculations with the Model of Chemistry Considering Aerosols indicate that, based on the current understanding, halogen chemistry and aerosol chemistry have no significant impact on the CH2O concentration under conditions encountered in the SIO. However, a detailed investigation including meteorological effects such as precipitation scavenging and convection reveals an uncertainty in state-of-the-art model predictions for CH2O in the MBL that is too large for a meaningful test of the current understanding of CH4 photooxidation.
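    As a purely illustrative sketch of the Monte Carlo error propagation mentioned above (not the INDOEX box model: the steady-state expression, rate constants, yields and uncertainties below are placeholder assumptions), one can perturb the inputs of a simplified CH2O budget, with production from CH4 + OH balanced by photolysis and reaction with OH, and read off the spread of the resulting concentration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

def perturb(value, rel_sigma):
    """Log-normal perturbation with an assumed relative 1-sigma uncertainty."""
    return value * rng.lognormal(mean=0.0, sigma=rel_sigma, size=n)

# Placeholder central values (illustrative only).
oh = perturb(1.5e6, 0.30)                # OH, molecules cm^-3
ch4 = perturb(4.3e13, 0.02)              # CH4, molecules cm^-3
k_ch4_oh = perturb(6.4e-15, 0.10)        # CH4 + OH rate constant, cm^3 s^-1
k_ch2o_oh = perturb(8.5e-12, 0.10)       # CH2O + OH rate constant, cm^3 s^-1
j_ch2o = perturb(8.0e-5, 0.20)           # CH2O photolysis frequency, s^-1
yield_ch2o = perturb(1.0, 0.10)          # CH2O yield per CH4 oxidised (assumed)

# Simplified steady state: production by CH4 oxidation = loss by photolysis + OH.
ch2o_ss = (yield_ch2o * k_ch4_oh * oh * ch4) / (j_ch2o + k_ch2o_oh * oh)

print(f"2-sigma relative spread: {2 * np.std(ch2o_ss) / np.median(ch2o_ss):.0%}")
```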

  9. Testing a measure of cyberloafing.

    Science.gov (United States)

    Blau, Gary; Yang, Yang; Ward-Cook, Kory

    2006-01-01

    Using a primary sample of medical technologists (MTs) and a second validation sample, the results of this study showed initial support for a three-factor measure of cyberloafing. The three scales were labeled browsing-related, non-work-related e-mail, and interactive cyberloafing. MTs who perceived unfair treatment in their organization (i.e., lower organizational justice) were more likely to exhibit all three types of cyberloafing. MTs who did not care as much about punctuality and attendance (i.e., higher time abuse) were more likely to display browsing-related and non-work-related e-mail cyberloafing. Finally, MTs who perceived an inability to control their work environment (i.e., powerlessness) were more likely to display interactive cyberloafing. Study limitations and suggestions for future research are discussed.

  10. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, including trusted identity, trusted status and trusted behavior, satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  11. The influence of different measurement structures on NRTA test procedures

    International Nuclear Information System (INIS)

    Beedgen, R.

    1986-01-01

    The development of sequential statistical test procedures in the area of near real time material accountancy (NRTA) has mostly assumed a fixed measurement model of a given model facility. In this paper different measurement models (dispersion matrices) for a sequence of balance periods are studied. They are used to compare the detection probabilities of three different sequential test procedures for losses of material. It is shown how different plant models influence the sensitivity of the specified tests. The optimal loss patterns in each measurement situation are of great importance for that analysis.
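    The specific sequential procedures and dispersion matrices compared in the paper are not reproduced here; as a generic, hedged illustration of a sequential loss test in an NRTA setting, the sketch below runs a one-sided CUSUM (Page's test) over standardized material-balance results and reports the first balance period in which an alarm is raised.

```python
import numpy as np

def cusum_alarm(standardized_muf, k=0.5, h=4.0):
    """One-sided CUSUM on standardized material-balance results; returns the
    first balance period that raises an alarm, or None if no alarm occurs."""
    s = 0.0
    for period, z in enumerate(standardized_muf, start=1):
        s = max(0.0, s + z - k)   # accumulate evidence of a positive shift (loss)
        if s > h:
            return period
    return None

# Illustrative sequence: six loss-free periods, then a sustained 1-sigma loss.
rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0.0, 1.0, 6), rng.normal(1.0, 1.0, 6)])
print("Alarm raised in balance period:", cusum_alarm(z))
```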

  12. Validation of measured friction by process tests

    DEFF Research Database (Denmark)

    Eriksen, Morten; Henningsen, Poul; Tan, Xincai

    The objective of sub-task 3.3 is to evaluate, under actual process conditions, the friction formulations determined by simulative testing. As regards task 3.3 the following tests have been used according to the original project plan: 1. standard ring test and 2. double cup extrusion test. The task...... has, however, been extended to include a number of newly developed process tests: 3. forward rod extrusion test, 4. special ring test at low normal pressure, 5. spike test (especially developed for warm and hot forging). Validation of the measured friction values in cold forming from sub-task 3.1 has...

  13. Ship Model Testing

    Science.gov (United States)

    2016-01-15

    zero degrees angle of attack than the conventional foil at eight degrees angle of attack. This increase in lift is believed to be limited to low... Bureau of Shipping (ABS) supported this effort through the purchase of the 60 specimens used in this thesis. Metal Shark Boats also provided aluminum... strength of welded aluminum panels. Metal Shark Boats, again, provided the necessary test panels for this effort. The optical extensometer was not

  14. Accurate Laser Measurements of the Water Vapor Self-Continuum Absorption in Four Near Infrared Atmospheric Windows. a Test of the MT_CKD Model.

    Science.gov (United States)

    Campargue, Alain; Kassi, Samir; Mondelain, Didier; Romanini, Daniele; Lechevallier, Loïc; Vasilchenko, Semyon

    2017-06-01

    The semi-empirical MT_CKD model of the absorption continuum of water vapor is widely used in atmospheric radiative transfer codes for the atmospheres of the Earth and exoplanets, but lacks experimental validation in the atmospheric windows. Recent laboratory measurements by Fourier transform spectroscopy have led to self-continuum cross-sections much larger than the MT_CKD values in the near infrared transparency windows. In the present work, we report on accurate water vapor absorption continuum measurements by Cavity Ring Down Spectroscopy (CRDS) and Optical-Feedback-Cavity Enhanced Laser Spectroscopy (OF-CEAS) at selected spectral points of the transparency windows centered around 4.0, 2.1 and 1.25 μm. The temperature dependence of the absorption continuum at 4.38 μm and 3.32 μm is measured in the 23-39 °C range. The self-continuum water vapor absorption is derived either from the baseline variation of spectra recorded for a series of pressure values over a small spectral interval or from baseline monitoring at fixed laser frequency during pressure ramps. In order to avoid possible bias approaching the water saturation pressure, the maximum pressure value was limited to about 16 Torr, corresponding to a 75% humidity rate. After subtraction of the local water monomer lines contribution, self-continuum cross-sections, C_S, were determined with a few percent accuracy from the pressure-squared dependence of the spectra baseline level. Together with our previous CRDS and OF-CEAS measurements in the 2.1 and 1.6 μm windows, the derived water vapor self-continuum provides a unique set of water vapor self-continuum cross-sections for a test of the MT_CKD model in four transparency windows. Although showing some important deviations of the absolute values (up to a factor of 4 at the center of the 2.1 μm window), our accurate measurements validate the overall frequency dependence of the MT_CKD2.8 model.
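    The extraction step described above, deriving C_S from the pressure-squared dependence of the baseline, can be sketched as a simple linear fit of the residual baseline absorption against the square of the water number density. The data, the fit and the unit convention below are assumptions for illustration; the published cross-sections use the authors' own normalisation.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
temperature = 296.0  # K

# Hypothetical baseline absorption (monomer lines already subtracted), cm^-1,
# recorded at a series of water vapour pressures (Torr).
pressures_torr = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
alpha_baseline = np.array([0.9, 3.7, 8.2, 14.6, 23.0, 33.0, 44.8, 58.6]) * 1e-9

# Water number density in molecules per cm^3.
n_w = pressures_torr * 133.322 / (k_B * temperature) * 1e-6

# Linear fit of alpha against n_w^2: the slope plays the role of the
# self-continuum cross-section C_S under this number-density convention.
slope, intercept = np.polyfit(n_w**2, alpha_baseline, 1)
print(f"fitted quadratic coefficient (~C_S): {slope:.2e}")
```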

  15. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  16. Quantitative renal perfusion measurements in a rat model of acute kidney injury at 3T: testing inter- and intramethodical significance of ASL and DCE-MRI.

    Directory of Open Access Journals (Sweden)

    Fabian Zimmer

    Full Text Available OBJECTIVES: To establish arterial spin labelling (ASL) for quantitative renal perfusion measurements in a rat model at 3 Tesla and to test the diagnostic significance of ASL and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a model of acute kidney injury (AKI). MATERIAL AND METHODS: ASL and DCE-MRI were consecutively employed on six Lewis rats, five of which had a unilateral ischaemic AKI. All measurements in this study were performed on a 3 Tesla MR scanner using a FAIR True-FISP approach and a TWIST sequence for ASL and DCE-MRI, respectively. Perfusion maps were calculated for both methods and the cortical perfusion of healthy and diseased kidneys was inter- and intramethodically compared using a region-of-interest based analysis. RESULTS/SIGNIFICANCE: Both methods produce significantly different values for the healthy and the diseased kidneys (P<0.01). The mean difference was 147±47 ml/100 g/min and 141±46 ml/100 g/min for ASL and DCE-MRI, respectively. ASL measurements yielded a mean cortical perfusion of 416±124 ml/100 g/min for the healthy and 316±102 ml/100 g/min for the diseased kidneys. The DCE-MRI values were systematically higher and the mean cortical renal blood flow (RBF) was found to be 542±85 ml/100 g/min (healthy) and 407±119 ml/100 g/min (AKI). CONCLUSION: Both methods are equally able to detect abnormal perfusion in diseased (AKI) kidneys. This shows that ASL is a capable alternative to DCE-MRI regarding the detection of abnormal renal blood flow. Regarding absolute perfusion values, nontrivial differences and variations remain when comparing the two methods.
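
    A region-of-interest comparison of cortical perfusion between healthy and diseased kidneys, of the kind reported above, might be sketched as follows; the perfusion values and the choice of a paired t-test are illustrative assumptions, not the authors' analysis.

      import numpy as np
      from scipy import stats

      # Hypothetical cortical perfusion (ml/100 g/min), one value pair per animal,
      # loosely matching the magnitudes reported for ASL in the abstract.
      healthy = np.array([430.0, 395.0, 520.0, 310.0, 425.0])
      diseased = np.array([300.0, 260.0, 390.0, 240.0, 330.0])

      t_stat, p_value = stats.ttest_rel(healthy, diseased)   # paired within-animal comparison
      print(f"mean difference = {np.mean(healthy - diseased):.0f} ml/100 g/min, p = {p_value:.4f}")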

  17. Measurement of children's creativity by tests

    Directory of Open Access Journals (Sweden)

    Maksić Slavica B.

    2003-01-01

    Full Text Available After more than 50 years of continuous development of tests designed to measure creativity, and in view of the results they have produced, the question is raised whether creativity can be measured by tests at all. Procedures for measuring creative potential in younger children are a particular problem, because children, unlike adults, do not possess creative products, which are the single reliable evidence of creativity in the real world. The paper considers test reliability and validity in measuring creativity, as well as the dilemma of how justifiable it is to measure children's creativity by tests if it is not clear what they measure and if there is no significant relationship between creativity scores and creativity in life. The majority of researchers agree that unsatisfactory creativity test reliability and validity does not mean those tests should be given up. Of the tests of creativity administered in work with the young, the Urban-Jellen Test of Creative Thinking - Drawing Production (TCT-DP) is given prominence due to the fact that over the past ten years or so it has been used in a larger number of studies, including some studies carried out in this country. In the TCT-DP, scoring is not based on the statistical uncommonness of the figures produced but on a number of criteria derived from Gestalt psychology. Factor analyses of the defined criteria of creativity, applied to samples in various settings, showed that the test contains an essential factor of creativity, "novelty".

  18. Environmental Measurements and Modeling

    Science.gov (United States)

    Environmental measurement is any data collection activity involving the assessment of chemical, physical, or biological factors in the environment which affect human health. Learn more about these programs and tools that aid in environmental decisions.

  19. The C-Test: An Integrative Measure of Crystallized Intelligence

    Directory of Open Access Journals (Sweden)

    Purya Baghaei

    2015-05-01

    Full Text Available Crystallized intelligence is a pivotal broad ability factor in the major theories of intelligence including the Cattell-Horn-Carroll (CHC) model, the three-stratum model, and the extended Gf-Gc (fluid intelligence-crystallized intelligence) model and is usually measured by means of vocabulary tests and other verbal tasks. In this paper the C-Test, a text completion test originally proposed as a test of general proficiency in a foreign language, is introduced as an integrative measure of crystallized intelligence. Based on the existing evidence in the literature, it is argued that the construct underlying the C-Test closely matches the abilities underlying the language component of crystallized intelligence, as defined in the well-established theories of intelligence. It is also suggested that by carefully selecting texts from pertinent knowledge domains, the factual knowledge component of crystallized intelligence could also be measured by the C-Test.

  20. Use of the Oslo-Potsdam Solution to test the effect of an environmental education model on tangible measures of environmental protection

    Science.gov (United States)

    Short, Philip Craig

    The fundamental goals of environmental education include the creation of an environmentally literate citizenry possessing the knowledge, skills, and motivation to objectively analyze environmental issues and engage in responsible behaviors leading to issue resolution and improved or maintained environmental quality. No existing research, however, has linked educational practices and environmental protection. In an original attempt to quantify the pedagogy - environmental protection relationship, both qualitative and quantitative methods were used to investigate local environmental records and environmental quality indices that reflected the results of student actions. The data were analyzed using an educational adaptation of the "Oslo-Potsdam Solution for International Environmental Regime Effectiveness." The new model, termed the Environmental Education Performance Indicator (EEPI), was developed and evaluated as a quantitative tool for testing and fairly comparing the efficacy of student-initiated environmental projects in terms of environmental quality measures. Five case studies were developed from descriptions of student actions and environmental impacts as revealed by surveys and interviews with environmental education teachers using the IEEIA (Investigating and Evaluating Environmental Issues and Actions) curriculum, former students, community members, and agency officials. Archival information was also used to triangulate the data. In addition to evaluating case study data on the basis of the EEPI model, an expert panel of evaluators consisting of professionals from environmental education, natural sciences, environmental policy, and environmental advocacy provided subjective assessments on the effectiveness of each case study. The results from this study suggest that environmental education interventions can equip and empower students to act on their own conclusions in a manner that leads to improved or maintained environmental conditions. The EEPI model

  1. Phishing IQ Tests Measure Fear, Not Ability

    Science.gov (United States)

    Anandpara, Vivek; Dingman, Andrew; Jakobsson, Markus; Liu, Debin; Roinestad, Heather

    We argue that phishing IQ tests fail to measure susceptibility to phishing attacks. We conducted a study where 40 subjects were asked to answer a selection of questions from existing phishing IQ tests in which we varied the portion (from 25% to 100%) of the questions that corresponded to phishing emails. We did not find any correlation between the actual number of phishing emails and the number of emails that the subjects indicated were phishing. Therefore, the tests did not measure the ability of the subjects. To further confirm this, we exposed all the subjects to existing phishing education after they had taken the test, after which each subject was asked to take a second phishing test, with the same design as the first one, but with different questions. The number of stimuli that were indicated as being phishing in the second test was, again, independent of the actual number of phishing stimuli in the test. However, a substantially larger portion of stimuli was indicated as being phishing in the second test, suggesting that the only measurable effect of the phishing education (from the point of view of the phishing IQ test) was an increased concern—not an increased ability.

  2. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelate...

  3. Direct friction measurement in draw bead testing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan Lasson

    2005-01-01

    have been reported in literature. A major drawback in all these studies is that friction is not directly measured, but requires repeated measurements of the drawing force with and without relative sliding between the draw beads and the sheet material. This implies two tests with a fixed draw bead tool......-in piezoelectric torque transducer. This technique results in a very sensitive measurement of friction, which furthermore enables recording of lubricant film breakdown as function of drawing distance. The proposed test is validated in an experimental investigation of the influence of lubricant viscosity...

  4. High-voltage test and measuring techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hauschild, Wolfgang; Lemke, Eberhard

    2014-04-01

    Reflects the unit of both HV testing and measuring technique. Intended as an "application guide" for the relevant IEC standards. Refers also to future trends in HV testing and measuring technique. With numerous illustrations. It is the intent of this book to combine high-voltage (HV) engineering with HV testing technique and HV measuring technique. Based on long-term experience gained by the authors as lecturer and researcher as well as member in international organizations, such as IEC and CIGRE, the book will reflect the state of the art as well as the future trends in testing and diagnostics of HV equipment to ensure a reliable generation, transmission and distribution of electrical energy. The book is intended not only for experts but also for students in electrical engineering and high-voltage engineering.

  5. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  6. Tests of the Standard Model and Constraints on New Physics from Measurements of Fermion-Pair Production at 189-209 GeV at LEP

    CERN Document Server

    Abbiendi, G.; Akesson, P.F.; Alexander, G.; Allison, John; Amaral, P.; Anagnostou, G.; Anderson, K.J.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barillari, T.; Barlow, R.J.; Batley, R.J.; Bechtle, P.; Behnke, T.; Bell, Kenneth Watson; Bell, P.J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Buesser, K.; Burckhart, H.J.; Campana, S.; Carnegie, R.K.; Caron, B.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Ciocca, C.; Csilling, A.; Cuffiani, M.; Dado, S.; De Roeck, A.; De Wolf, E.A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Gagnon, P.; Gary, John William; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, Marina; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G.G.; Harel, A.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hoffman, Kara Dion; Horvath, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Kormos, Laura L.; Kramer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Layter, J.G.; Lellouch, D.; Lettso, J.; Levinson, L.; Lillich, J.; Lloyd, S.L.; Loebinger, F.K.; Lu, J.; Ludwig, A.; Ludwig, J.; Macpherson, A.; Mader, W.; Marcellini, S.; Martin, A.J.; Masetti, G.; Mashimo, T.; Mattig, Peter; McDonald, W.J.; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pahl, C.; Pasztor, G.; Pater, J.R.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Roney, J.M.; Rosati, S.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E.K.G.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schoerner-Sadenius, Thomas; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Sherwood, P.; Skuja, A.; Smith, A.M.; Sobie, R.; Soldner-Rembold, S.; Spano, F.; Stahl, A.; Stephens, K.; Strom, David M.; Strohmer, R.; Tarem, S.; Tasevsky, M.; Teuscher, R.; Thomson, M.A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Ujvari, B.; Vollmer, C.F.; Vannerem, P.; Vertesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; Wilson, G.W.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, Lidija

    2004-01-01

    Cross-sections and angular distributions for hadronic and lepton-pair final states in e+e- collisions at centre-of-mass energies between 189 GeV and 209 GeV, measured with the OPAL detector at LEP, are presented and compared with the predictions of the Standard Model. The measurements are used to determine the electromagnetic coupling constant alpha_em at LEP2 energies. In addition, the results are used together with OPAL measurements at 91-183 GeV within the S-matrix formalism to determine the gamma-Z interference term and to make an almost model-independent measurement of the Z mass. Limits on extensions to the Standard Model described by effective four-fermion contact interactions or the addition of a heavy Z boson are also presented.

  7. Superheater hydraulic model test plan

    Energy Technology Data Exchange (ETDEWEB)

    Gabler, M.; Oliva, R.M.

    1973-10-01

    The plan for conducting a hydraulic test on a full scale model of the AI Steam Generator Module design is presented. The model will incorporate all items necessary to simulate the hydraulic performance characteristics of the superheater but will utilize materials other than the 2-1/4 Cr - 1 Mo in its construction in order to minimize costs and expedite schedule. Testing will be performed in the Rockwell International Rocketdyne High Flow Test Facility which is capable of flowing up to 32,00 gpm of water at ambient temperatures. All necessary support instrumentation is also available at this facility.

  8. A test for ordinal measurement invariance

    NARCIS (Netherlands)

    Ligtvoet, R.; Millsap, R.E.; Bolt, D.M.; van der Ark, L.A.; Wang, W.-C.

    2015-01-01

    One problem with the analysis of measurement invariance is the reliance of the analysis on having a parametric model that accurately describes the data. In this paper an ordinal version of the property of measurement invariance is proposed, which relies only on nonparametric restrictions. This

  9. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    ...$X$ given $W$ and $Z$ and use it to extend the method of regression calibration to this class of measurement error models. We apply the model to dietary data and test whether self-reported dietary intake includes an interaction between true intake and body mass index. We also perform simulations to compare the model to simpler approximate calibration models. PMID:26530858
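
    Regression calibration, which the paper extends to models with interactions, replaces the unobserved true exposure X by an estimate of E[X | W, Z] before fitting the outcome model. A minimal sketch of the classical (no-interaction) version on simulated data, not the authors' method, is:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      Z = rng.normal(size=n)                      # error-free covariate (e.g. body mass index)
      X = 0.5 * Z + rng.normal(size=n)            # true exposure (unobserved in practice)
      W = X + rng.normal(size=n)                  # error-prone measurement of X
      Y = 1.0 + 2.0 * X + 0.5 * Z + rng.normal(size=n)

      def ols(design, y):
          return np.linalg.lstsq(design, y, rcond=None)[0]

      # Step 1: calibration model E[X | W, Z].  In practice this is fitted on a validation
      # or replicate subsample; here X is available because the data are simulated.
      D = np.column_stack([np.ones(n), W, Z])
      xhat = D @ ols(D, X)

      # Step 2: outcome model with X replaced by its calibrated prediction.
      naive = ols(np.column_stack([np.ones(n), W, Z]), Y)
      calib = ols(np.column_stack([np.ones(n), xhat, Z]), Y)
      print("naive coefficient on W:", round(naive[1], 3))        # attenuated
      print("calibrated coefficient:", round(calib[1], 3))        # close to the true 2.0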

  10. A test of the factor structure equivalence of the 50-item IPIP Five-factor model measure across gender and ethnic groups.

    Science.gov (United States)

    Ehrhart, Karen Holcombe; Roesch, Scott C; Ehrhart, Mark G; Kilian, Britta

    2008-09-01

    Personality is frequently assessed in research and applied settings, in part due to evidence that scores on measures of the Five-factor model (FFM) of personality show predictive validity for a variety of outcomes. Although researchers are increasingly using the International Personality Item Pool (IPIP; Goldberg, 1999; International Personality Item Pool, 2007b) FFM measures, investigations of the psychometric properties of these measures are unfortunately sparse. The purpose of this study was to examine the factor structure equivalence of the 50-item IPIP FFM measure across gender and ethnic groups (i.e., Whites, Latinos, Asian Americans) using multigroup confirmatory factor analysis. Results from a sample of 1,727 college students generally support the invariance of the factor structure across groups, although there was some evidence of differences across gender and ethnic groups for model parameters. We discuss these findings and their implications.

  11. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (2008) to detect whether the widely used class of Residual Income based performance measures, such as Economic Value Added (EVA), is distorted, leading to unintended agent behavior. The paper uses a difference-in-differences approach to account for changes in economic circumstances and the self-selection of firms using EVA. Our findings indicate that EVA is a distorted performance measure that elicits the gaming response.
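
    The difference-in-differences logic described above can be illustrated with a simple two-period panel regression; the simulated data, the outcome variable standing in for "gaming", and the use of statsmodels are assumptions of this sketch, not the authors' specification.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_firms = 200
      adopter = rng.integers(0, 2, n_firms)          # 1 = firm adopts an EVA-style measure
      firm = np.repeat(np.arange(n_firms), 2)        # two periods per firm
      post = np.tile([0, 1], n_firms)                # 0 = before adoption, 1 = after
      treat = adopter[firm]

      # Hypothetical gaming indicator: rises after adoption only for adopting firms.
      y = 0.2 * treat + 0.1 * post + 0.3 * treat * post + rng.normal(0, 0.5, 2 * n_firms)

      df = pd.DataFrame({"y": y, "treat": treat, "post": post})
      did = smf.ols("y ~ treat * post", data=df).fit()
      print(did.params["treat:post"])                # difference-in-differences estimate (~0.3)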

  12. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  13. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pbex measurements

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.

    2012-01-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 (137Cs) and excess lead-210 (210Pbex) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from 137Cs and 210Pbex measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Highlights: Soil erosion is an important threat to the long-term sustainability of agriculture.
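
    For orientation, the simplest of the conversion models, the proportional model, relates the percentage reduction in the 137Cs inventory at a sampling point to a mean annual soil loss; the sketch below implements that simple model with assumed parameter values, not the full mass balance models tested in the paper.

      def proportional_model(X_percent, d=0.25, B=1300.0, T=50.0):
          """Mean annual soil loss (t ha^-1 yr^-1) from the proportional 137Cs model.

          X_percent : percentage reduction in 137Cs inventory relative to the reference site
          d         : plough-layer depth in m (assumed value)
          B         : soil bulk density in kg m^-3 (assumed value)
          T         : years elapsed since the onset of fallout (assumed value)
          """
          return 10.0 * B * d * X_percent / (100.0 * T)

      print(proportional_model(30.0))   # about 19.5 t/ha/yr for a 30 % inventory reduction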

  14. Using plot experiments to test the validity of mass balance models employed to estimate soil redistribution rates from 137Cs and 210Pb(ex) measurements.

    Science.gov (United States)

    Porto, Paolo; Walling, Des E

    2012-10-01

    Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed.

  15. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    2015-01-01

    Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (Review of Economics and Statistics, 90, 428-441) to detect whether the widely used class of residual income-based performance measures, such as economic value added (EVA), is distorted, leading to unintended agent behavior. The paper uses a difference-in-differences approach to account for changes in economic circumstances and the self-selection of firms using EVA. Our findings indicate that EVA is a distorted performance measure that elicits the gaming response.

  16. Work zone performance measures pilot test.

    Science.gov (United States)

    2011-04-01

    Currently, a well-defined and validated set of metrics to use in monitoring work zone performance does not exist. This pilot test was conducted to assist state DOTs in identifying what work zone performance measures can and should be targeted, what...

  17. Model plant key measurement points

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The key measurement points for the model low enriched fuel fabrication plant are described as well as the measurement methods. These are the measurement points and methods that are used to complete the plant's formal material balance. The purpose of the session is to enable participants to: (1) understand the basis for each key measurement; and (2) understand the importance of each measurement to the overall plant material balance. The feed to the model low enriched uranium fuel fabrication plant is UF6 and the product is finished light water reactor fuel assemblies. The waste discards are solid and liquid wastes. The plant inventory consists of unopened UF6 cylinders, UF6 heels, fuel assemblies, fuel rods, fuel pellets, UO2 powder, U3O8 powder, and various scrap materials. At the key measurement points the total plant material balance (flow and inventory) is measured. The two types of key measurement points - flow and inventory - are described.
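
    The material balance that these key measurement points support is, in essence, a comparison of book and physical inventory over a balance period; a minimal sketch of the material unaccounted for (MUF) calculation, with hypothetical figures, is given below.

      def material_unaccounted_for(beginning_inventory, receipts, shipments, ending_inventory):
          """MUF = (beginning inventory + receipts) - (shipments + ending physical inventory).
          All quantities in kg of uranium; the figures in the example are hypothetical."""
          return (beginning_inventory + receipts) - (shipments + ending_inventory)

      print(material_unaccounted_for(
          beginning_inventory=1200.0,   # unopened UF6 cylinders, heels, powders, scrap, ...
          receipts=800.0,               # UF6 received during the balance period
          shipments=750.0,              # finished fuel assemblies shipped
          ending_inventory=1248.5,      # physical inventory measured at the KMPs
      ))                                # -> 1.5 kg MUF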

  18. Transition Models with Measurement Errors

    OpenAIRE

    Magnac, Thierry; Visser, Michael

    1999-01-01

    In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...

  19. NET model coil test possibilities

    International Nuclear Information System (INIS)

    Erb, J.; Gruenhagen, A.; Herz, W.; Jentzsch, K.; Komarek, P.; Lotz, E.; Malang, S.; Maurer, W.; Noether, G.; Ulbricht, A.; Vogt, A.; Zahn, G.; Horvath, I.; Kwasnitza, K.; Marinucci, C.; Pasztor, G.; Sborchia, C.; Weymuth, P.; Peters, A.; Roeterdink, A.

    1987-11-01

    A single full size coil for NET/INTOR represents an investment of the order of 40 MUC (Million Unit Costs). Before such an amount of money, or even more for the 16 TF coils, is invested, as many risks as possible must be eliminated by a comprehensive development programme. In the course of such a programme a coil technology verification test should finally prove the feasibility of NET/INTOR TF coils. This study report deals almost exclusively with such a verification test by model coil testing. These coils will be built out of two Nb3Sn conductors based on two concepts already under development and investigation. Two possible coil arrangements are discussed: a cluster facility, where two model coils made from the two Nb3Sn TF conductors are used together with the already tested LCT coils producing a background field; and a solenoid arrangement, where in addition to the two TF model coils another model coil made from a PF conductor for the central PF coils of NET/INTOR is used instead of LCT background coils. Technical advantages and disadvantages are worked out in order to compare and judge both facilities. Cost estimates and time schedules broaden the base for a decision about the realisation of such a facility. (orig.)

  20. Tests of the Standard Model and Constraints on New Physics from Measurements of Fermion-pair Production at 130-172 GeV at LEP

    CERN Document Server

    Ackerstaff, K.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Bartoldus, R.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Beeston, C.; Behnke, T.; Bell, A.N.; Bell, Kenneth Watson; Bella, G.; Bentvelsen, S.; Bethke, S.; Biebel, O.; Biguzzi, A.; Bird, S.D.; Blobel, V.; Bloodworth, I.J.; Bloomer, J.E.; Bobinski, M.; Bock, P.; Bonacorsi, D.; Boutemeur, M.; Bouwens, B.T.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Burgard, C.; Burgin, R.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Clarke, P.E.L.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Cuffiani, M.; Dado, S.; Dallapiccola, C.; Dallavalle, G.Marco; Davies, R.; De Jong, S.; del Pozo, L.A.; Desch, K.; Dienes, B.; Dixit, M.S.; do Couto e Silva, E.; Doucet, M.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Eatough, D.; Edwards, J.E.G.; Estabrooks, P.G.; Evans, H.G.; Evans, M.; Fabbri, F.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fischer, H.M.; Fleck, I.; Folman, R.; Fong, D.G.; Foucher, M.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Geddes, N.I.; Geich-Gimbel, C.; Geralis, T.; Giacomelli, G.; Giacomelli, P.; Giacomelli, R.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Goodrick, M.J.; Gorn, W.; Grandi, C.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hajdu, C.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Hargrove, C.K.; Hart, P.A.; Hartmann, C.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hillier, S.J.; Hobson, P.R.; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Hutchcroft, D.E.; Igo-Kemenes, P.; Imrie, D.C.; Ingram, M.R.; Ishii, K.; Jawahery, A.; Jeffreys, P.W.; Jeremie, H.; Jimack, M.; Joly, A.; Jones, C.R.; Jones, G.; Jones, M.; Jost, U.; Jovanovic, P.; Junk, T.R.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kirk, J.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Koetke, D.S.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kress, T.; Krieger, P.; von Krogh, J.; Kyberd, P.; Lafferty, G.D.; Lahmann, R.; Lai, W.P.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Layter, J.G.; Lazic, D.; Lee, A.M.; Lefebvre, E.; Lellouch, D.; Letts, J.; Levinson, L.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Ludwig, J.; Macchiolo, A.; Macpherson, A.; Mannelli, M.; Marcellini, S.; Markus, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mikenberg, G.; Miller, D.J.; Mincer, A.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Morii, M.; Muller, U.; Mihara, S.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nellen, B.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oh, A.; Oldershaw, N.J.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Pearce, M.J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Poli, B.; Posthaus, A.; Rees, D.L.; Rigby, D.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Rooke, A.; Ros, E.; Rossi, A.M.; Routenburg, P.; Rozen, Y.; Runge, K.; Runolfsson, O.; Ruppel, U.; Rust, D.R.; Rylko, R.; Sachs, K.; Saeki, T.; 
Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharf, F.; Scharff-Hansen, P.; Schenk, P.; Schieck, J.; Schleper, P.; Schmitt, B.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schultz-Coulon, H.C.; Schumacher, M.; Schwick, C.; Scott, W.G.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skillman, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Springer, Robert Wayne; Sproston, M.; Stephens, K.; Steuerer, J.; Stockhausen, B.; Stoll, K.; Strom, David M.; Szymanski, P.; Tafirout, R.; Talbot, S.D.; Tanaka, S.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomson, M.A.; von Torne, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Utzat, P.; Van Kooten, Rick J.; Verzocchi, M.; Vikas, P.; Vokurka, E.H.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilkens, B.; Wilson, G.W.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1998-01-01

    Production of events with hadronic and leptonic final states has been measured in e^+e^- collisions at centre-of-mass energies of 130-172 GeV, using the OPAL detector at LEP. Cross-sections and leptonic forward-backward asymmetries are presented, both including and excluding the dominant production of radiative Z \\gamma events, and compared to Standard Model expectations. The ratio R_b of the cross-section for bb(bar) production to the hadronic cross-section has been measured. In a model-independent fit to the Z lineshape, the data have been used to obtain an improved precision on the measurement of \\gamma-Z interference. The energy dependence of \\alpha_em has been investigated. The measurements have also been used to obtain limits on extensions of the Standard Model described by effective four-fermion contact interactions, to search for t-channel contributions from new massive particles and to place limits on chargino pair production with subsequent decay of the chargino into a light gluino and a quark pair.

  1. Measuring Cardiac Output during Cardiopulmonary Exercise Testing.

    Science.gov (United States)

    Vignati, Carlo; Cattadori, Gaia

    2017-07-01

    Cardiac output is a key parameter in the assessment of cardiac function, and its measurement is fundamental to the diagnosis, treatment, and prognostic evaluation of all heart diseases. Until recently, cardiac output determination during exercise had been possible only through invasive methods, which were not practical in the clinical setting. Because oxygen uptake (VO2) is cardiac output times the arteriovenous oxygen content difference, evaluation of cardiac output is usually included in its measurement. Because of the difficulty of directly measuring peak exercise cardiac output, indirect surrogate parameters have been proposed, but with only modest clinical usefulness. Direct measurement of cardiac output can now be made by several noninvasive techniques, such as rebreathing of inert gases, impedance cardiography, thoracic bioreactance, estimated continuous cardiac output technology, and transthoracic echocardiography coupled to cardiopulmonary exercise testing, which allow more definitive results and better understanding of the underlying physiopathology.
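
    The relation quoted above is the Fick principle; a minimal numerical illustration with hypothetical peak-exercise values follows.

      def cardiac_output_fick(vo2_ml_min, ca_o2_ml_dl, cv_o2_ml_dl):
          """Fick principle: cardiac output (L/min) = VO2 / arteriovenous O2 content difference.

          vo2_ml_min               : oxygen uptake in ml O2/min
          ca_o2_ml_dl, cv_o2_ml_dl : arterial / mixed venous O2 content in ml O2 per dl blood
          """
          avdo2_ml_per_l = (ca_o2_ml_dl - cv_o2_ml_dl) * 10.0   # content difference per litre of blood
          return vo2_ml_min / avdo2_ml_per_l

      # Hypothetical peak-exercise values: VO2 = 2000 ml/min, a-v difference = 10 ml/dl.
      print(cardiac_output_fick(2000.0, 20.0, 10.0))            # -> 20 L/min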

  2. High-voltage test and measuring techniques

    CERN Document Server

    Hauschild, Wolfgang

    2014-01-01

    It is the intent of this book to combine high-voltage (HV) engineering with HV testing technique and HV measuring technique. Based on long-term experience gained by the authors as lecturer and researcher as well as member in international organizations, such as IEC and CIGRE, the book will reflect the state of the art as well as the future trends in testing and diagnostics of HV equipment to ensure a reliable generation, transmission and distribution of electrical energy. The book is intended not only for experts but also for students in electrical engineering and high-voltage engineering.

  3. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    Full Text Available The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (the Pareto/NBD model) and one parametric model (the RFM model), specific to the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the second test set.
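
    The clustering step described in the second part, re-using the training-set centroids as initial centroids for the test set, might be sketched as follows; the synthetic RFM data, the number of clusters, and the use of scikit-learn are assumptions of this sketch.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)

      def rfm_table(n):
          """Hypothetical RFM table: recency (days), frequency (orders), monetary value."""
          return np.column_stack([rng.integers(1, 365, n),
                                  rng.poisson(5, n) + 1,
                                  rng.gamma(2.0, 150.0, n)])

      scaler = StandardScaler()
      train = scaler.fit_transform(rfm_table(500))
      test = scaler.transform(rfm_table(500))

      km_train = KMeans(n_clusters=4, n_init=10, random_state=0).fit(train)
      # Re-use the training centroids as the (single) initialisation for the test set.
      km_test = KMeans(n_clusters=4, init=km_train.cluster_centers_, n_init=1).fit(test)
      print(km_train.cluster_centers_.round(2))
      print(km_test.cluster_centers_.round(2))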

  4. Educational Testing as an Accountability Measure

    DEFF Research Database (Denmark)

    Ydesen, Christian

    2013-01-01

    This article reveals perspectives based on experiences from twentieth-century Danish educational history by outlining contemporary, test-based accountability regime characteristics and their implications for education policy. The article introduces one such characteristic, followed by an empirical analysis of the origins and impacts of test-based accountability measures applying both top-down and bottom-up perspectives. These historical perspectives offer the opportunity to gain a fuller understanding of this contemporary accountability concept and its potential, appeal, and implications for continued use in contemporary educational settings. Accountability measures and practices serve as a way to govern schools; by analysing the history of accountability as the concept has been practised in the education sphere, the article will discuss both pros and cons of such a methodology, particularly...

  5. Test Beam Measurements on Picosec Gaseous Detector.

    CERN Document Server

    Sohl, Lukas

    2017-01-01

    In the Picosec project, micro-pattern gaseous detectors with a time resolution of the order of ten picoseconds are being developed. The detectors are based on Micromegas detectors. With a Cherenkov window and a photocathode, the time jitter from the different positions of the primary ionization clusters can be suppressed. This report describes the beam setup and measurements of different Picosec prototypes. A time resolution of under 30 ps has been measured during the test beam. This report gives an overview of my work as a Summer Student. I set up and operated a triple-GEM tracker and a trigger system for the beam. During the beam I measured different prototypes of Picosec detectors and analysed the data.

  6. Field testing of bioenergetic models

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1985-01-01

    Doubly labeled water provides a direct measure of the rate of carbon dioxide production by free-living animals. With appropriate conversion factors, based on chemical composition of the diet and assimilation efficiency, field metabolic rate (FMR), in units of energy expenditure, and field feeding rate can be estimated. Validation studies indicate that doubly labeled water measurements of energy metabolism are accurate to within 7% in reptiles, birds, and mammals. This paper discusses the use of doubly labeled water to generate empirical models for FMR and food requirements for a variety of animals
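
    The conversion from measured CO2 production to field metabolic rate and feeding rate is a pair of multiplications by diet-dependent factors; the sketch below uses assumed factor values, since the appropriate energy equivalent and assimilation efficiency depend on the diet's chemical composition.

      def field_metabolic_rate(co2_l_per_day, kj_per_l_co2=22.0):
          """FMR in kJ/day from doubly labeled water CO2 production.
          kj_per_l_co2 is a diet-dependent energy equivalent (value assumed here)."""
          return co2_l_per_day * kj_per_l_co2

      def feeding_rate(fmr_kj_per_day, diet_energy_kj_per_g, assimilation_efficiency=0.8):
          """Dry-matter feeding rate in g/day needed to meet the FMR (factors assumed)."""
          return fmr_kj_per_day / (diet_energy_kj_per_g * assimilation_efficiency)

      fmr = field_metabolic_rate(10.0)          # a hypothetical animal producing 10 L CO2/day
      print(fmr, feeding_rate(fmr, 18.0))       # assuming a diet of about 18 kJ/g dry mass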

  7. Tests of the Standard Model and Constraints on New Physics from Measurements of Fermion-pair Production at 189 GeV at LEP

    CERN Document Server

    Abbiendi, G.; Alexander, G.; Allison, John; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Bloodworth, I.J.; Bock, P.; Bohme, J.; Boeriu, O.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couchman, J.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Dallison, S.; Davis, R.; De Jong, S.; de Roeck, A.; Dervan, P.; Desch, K.; Dienes, B.; Dixit, M.S.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Estabrooks, P.G.; Etzion, E.; Fabbri, F.; Fanfani, A.; Fanti, M.; Faust, A.A.; Feld, L.; Ferrari, P.; Fiedler, F.; Fierro, M.; Fleck, I.; Frey, A.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Graham, K.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hajdu, C.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Harel, A.; Hargrove, C.K.; Harin-Dirac, M.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hobson, P.R.; Hocker, James Andrew; Hoffman, Kara Dion; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klier, A.; Kobayashi, T.; Kobel, M.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Lauber, J.; Lawson, I.; Layter, J.G.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; Lillich, J.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Lu, J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Marchant, T.E.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Mendez-Lorenzo, P.; Merritt, F.S.; Mes, H.; Meyer, I.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Poli, B.; Polok, J.; Przybycien, M.; Quadt, A.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Rosati, S.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, 
W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Spagnolo, S.; Sproston, M.; Stahl, A.; Stephens, K.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomas, J.; Thomson, M.A.; Torrence, E.; Towers, S.; Trefzger, T.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Waller, D.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; Wetterling, D.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Zacek, V.; Zer-Zion, D.

    2000-01-01

    Cross-sections and angular distributions for hadronic and lepton pair final states in e+e- collisions at a centre-of-mass energy near 189 GeV, measured with the OPAL detector at LEP, are presented and compared with the predictions of the Standard Model. The results are used to measure the energy dependence of the electromagnetic coupling constant alpha_em, and to place limits on new physics as described by four-fermion contact interactions or by the exchange of a new heavy particle such as a sneutrino in supersymmetric theories with R-parity violation. A search for the indirect effects of the gravitational interaction in extra dimensions on the mu+mu- and tau+tau- final states is also presented.

  8. The Housing First Model (HFM) fidelity index: designing and testing a tool for measuring integrity of housing programs that serve active substance users.

    Science.gov (United States)

    Watson, Dennis P; Orwat, John; Wagner, Dana E; Shuman, Valery; Tolliver, Randi

    2013-05-03

    The Housing First Model (HFM) is an approach to serving formerly homeless individuals with dually diagnosed mental health and substance use disorders regardless of their choice to use substances or engage in other risky behaviors. The model has been widely diffused across the United States since 2000 as a result of positive findings related to consumer outcomes. However, a lack of clear fidelity guidelines has resulted in inconsistent implementation. The research team and their community partner collaborated to develop a HFM Fidelity Index. We describe the instrument development process and present results from its initial testing. The HFM Fidelity Index was developed in two stages: (1) a qualitative case study of four HFM organizations and (2) interviews with 14 HFM "users". Reliability and validity of the index were then tested through phone interviews with staff members of permanent housing programs. The final sample consisted of 51 programs (39 Housing First and 12 abstinence-based) across 35 states. The results provided evidence for the overall reliability and validity of the index. The results demonstrate the index's ability to discriminate between housing programs that employ different service approaches. Regarding practice, the index offers a guide for organizations seeking to implement the HFM.

  9. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of the counterparty. According to statistics, credit risk is much more important than market risk for financial institutions; reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value-at-risk (VaR) models. Once measured, credit risk can be diversif...

  10. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Science.gov (United States)

    Yin, Peili; Wang, Jianhua; Lu, Chunxia

    2017-08-01

    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.
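
    The virtual workpiece in such a setup is generated from analytic geometry before being triangulated into the patch model; for an involute profile the parametric equations are standard, and a sketch of generating nominal profile points is given below, with the base-circle radius and roll-angle range chosen arbitrarily.

      import numpy as np

      def involute_profile(r_base, roll_angle):
          """Points on an involute of a circle with base radius r_base (same length unit).
          x = r_b (cos t + t sin t),  y = r_b (sin t - t cos t),  t = roll angle in radians."""
          t = np.asarray(roll_angle)
          x = r_base * (np.cos(t) + t * np.sin(t))
          y = r_base * (np.sin(t) - t * np.cos(t))
          return np.column_stack([x, y])

      t = np.linspace(0.0, 0.8, 50)         # roll-angle range (assumed)
      points = involute_profile(40.0, t)    # 40 mm base radius (assumed)
      print(points[:3].round(3))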

  11. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yin Peili

    2017-08-01

    Full Text Available Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.

  12. Test model of WWER core

    International Nuclear Information System (INIS)

    Tikhomirov, A. V.; Gorokhov, A. K.

    2007-01-01

    The objective of this paper is the creation of a precision test model for WWER RP neutron-physics calculations. The model is considered as a tool for verification of deterministic computer codes that makes it possible to reduce the conservatism of design calculations and enhance WWER RP competitiveness. Precision calculations were performed using the code MCNP5 /1/ (Monte Carlo method). The engineering computer package Sapfir_95&RC_VVER /2/, certified for design calculations of WWER RP neutron-physics characteristics, is used in the comparative analysis of the results. The object of simulation is the first fuel loading of the Volgodon NPP RP. Peculiarities of the transition from 2D to 3D geometry in calculations using MCNP5 are shown on the full-scale model. All core components, as well as the radial and face reflectors and the automatic regulation control rod of the control and protection system, are represented in detailed description according to the design. The first stage of application of the model is the assessment of the accuracy of the core power calculation. At the second stage, the control and protection system control rod worth was assessed. Full-scale RP representation in calculations using the code MCNP5 is time consuming, which calls for parallelization of the computational problem on a multiprocessor computer. (Authors)

  13. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  14. Precise models deserve precise measures

    Directory of Open Access Journals (Sweden)

    Benjamin E. Hilbig

    2010-07-01

    Full Text Available The recognition heuristic (RH) --- which predicts non-compensatory reliance on recognition in comparative judgments --- has attracted much research and some disagreement, at times. Most studies have dealt with whether or under which conditions the RH is truly used in paired-comparisons. However, even though the RH is a precise descriptive model, there has been less attention concerning the precision of the methods applied to measure RH-use. In the current work, I provide an overview of different measures of RH-use tailored to the paradigm of natural recognition which has emerged as a preferred way of studying the RH. The measures are compared with respect to different criteria --- with particular emphasis on how well they uncover true use of the RH. To this end, both simulations and a re-analysis of empirical data are presented. The results indicate that the adherence rate --- which has been pervasively applied to measure RH-use --- is a severely biased measure. As an alternative, a recently developed formal measurement model emerges as the recommended candidate for assessment of RH-use.

  15. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answer could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  16. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
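
    As a rough illustration of the kind of statistical model described above (a sketch with invented control-standard data, not the ICPP/RAL software), systematic bias can be fitted as a linear function of the certified level, and precision summarized as the relative standard deviation of replicates:

```python
import numpy as np

# Hypothetical control-standard data: certified levels and replicate measurements
known = np.repeat([1.0, 5.0, 10.0, 20.0], 5)
measured = known * 1.02 + 0.05 + np.random.default_rng(1).normal(0, 0.1, known.size)

# Systematic bias model: measured = a + b * known, fitted by least squares
b, a = np.polyfit(known, measured, 1)
print(f"bias model: measured = {a:.3f} + {b:.3f} * known")

# Precision: relative standard deviation of the replicates at each level
for level in np.unique(known):
    results = measured[known == level]
    rsd = results.std(ddof=1) / results.mean()
    print(f"level {level:5.1f}: RSD = {100 * rsd:.2f} %")
```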

  17. Acoustic emission measurement during instrumented impact tests

    International Nuclear Information System (INIS)

    Crostack, H.A.; Engelhardt, A.H.

    1983-01-01

    Results of instrumented impact tests are discussed. On the one hand, the development of the loading process at the hammer tup was recorded by means of a piezoelectric transducer. This instrumentation supplied a better representation of the load versus time than the conventional strain gauges. On the other hand, the different types of acoustic emission occurring during a test could be separated. The acoustic emission released at the impact of the hammer onto the specimen is of lower frequency and its spectrum decreases strongly with increasing frequency. Plastic deformation also emits signals of lower frequency that are of quasi-continuous character. Both signal types can be discriminated by filtering. As a consequence, typical burst signals were received afterwards that can be correlated with crack propagation. Their spectra exhibit considerable portions up to about 1.9 MHz. The development in time of the burst signals points to the kind of crack propagation and its sequence of appearance. However, definitive comparison between load and acoustic emission should become possible only when the disadvantages of the common load measurement can be reduced, e.g. by determining the load directly at the specimen instead of at the hammer tup.

  18. Blast Testing and Modelling of Composite Structures

    DEFF Research Database (Denmark)

    Giversen, Søren

    The motivation for this work is based on a desire for finding light weight alternatives to high strength steel as the material to use for armouring in military vehicles. With the use of high strength steel, an increase in the level of armouring has a significant impact on the vehicle weight......-up proved functional and provided consistent data of the panel response. The tests revealed that the sandwich panels did not provide a decrease in panel deflection compared with the monolithic laminates, which was expected due to their higher flexural rigidity. This was found to be because membrane effects...... a pressure distribution on selected surfaces and has been based on experimental pressure measurement data, and (ii) with a designed 3-step numerical load model, where the blast pressure and FSI (Fluid Structure Interaction) between the pressure wave and the modelled panel is modelled numerically. The tested...

  19. Comparison of Angle of Attack Measurements for Wind Tunnel Testing

    Science.gov (United States)

    Jones, Thomas, W.; Hoppe, John C.

    2001-01-01

    Two optical systems capable of measuring model attitude and deformation were compared to inertial devices employed to acquire wind tunnel model angle of attack measurements during testing of the sting-mounted, full-span, 30% geometric scale flexible configuration of the Northrop Grumman Unmanned Combat Air Vehicle (UCAV) installed in the NASA Langley Transonic Dynamics Tunnel (TDT). The overall purpose of the test at TDT was to evaluate smart materials and structures adaptive wing technology. The optical techniques that were compared to the inertial devices employed to measure angle of attack for this test were: (1) an Optotrak (registered) system, an optical system consisting of two sensors, each containing a pair of orthogonally oriented linear arrays to compute spatial positions of a set of active markers; and (2) a Video Model Deformation (VMD) system, providing a single view of passive targets using a constrained photogrammetric solution, whose primary function was to measure wing and control surface deformations. The Optotrak system was installed for this test for the first time at TDT in order to assess the usefulness of the system for future static and dynamic deformation measurements.

  20. Model test of boson mappings

    International Nuclear Information System (INIS)

    Navratil, P.; Dobes, J.

    1992-01-01

    Methods of boson mapping are tested in calculations for a simple model system of four protons and four neutrons in single-j distinguishable orbits. Two-body terms in the boson images of the fermion operators are considered. Effects of the seniority v=4 states are thus included. The treatment of unphysical states and the influence of boson space truncation are particularly studied. Both the Dyson boson mapping and the seniority boson mapping as dictated by the similarity transformed Dyson mapping do not seem to be simply amenable to truncation. This situation improves when the one-body form of the seniority image of the quadrupole operator is employed. Truncation of the boson space is addressed by using the effective operator theory with a notable improvement of results

  1. Data Reconciliation and Gross Error Detection: A Filtered Measurement Test

    International Nuclear Information System (INIS)

    Himour, Y.

    2008-01-01

    Measured process data commonly contain inaccuracies because the measurements are obtained using imperfect instruments. As well as random errors, one can expect systematic bias caused by miscalibrated instruments, or outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data based on a model of the process so that the derived estimates conform to natural laws. In this paper, we will explore a predictor-corrector filter based on data reconciliation, and then a modified version of the measurement test is combined with the studied filter to detect probable outliers that can affect process measurements. The strategy presented is tested using dynamic simulation of an inverted pendulum.
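
    A minimal sketch of the underlying idea, shown here for the simpler linear steady-state case rather than the paper's dynamic predictor-corrector filter: measurements are reconciled by weighted least squares so that they satisfy a hypothetical balance constraint, and the standardized adjustments serve as the measurement test for gross errors.

```python
import numpy as np

# Hypothetical flow network: stream 1 splits into streams 2 and 3  =>  x1 - x2 - x3 = 0
A = np.array([[1.0, -1.0, -1.0]])          # linear process-model constraints A x = 0
x = np.array([101.0, 45.0, 58.0])          # raw measurements (contain errors)
V = np.diag([1.0, 0.5, 0.5]) ** 2          # measurement error covariance

# Weighted least-squares reconciliation: x_hat = x - V A' (A V A')^-1 A x
S = A @ V @ A.T
correction = V @ A.T @ np.linalg.solve(S, A @ x)
x_hat = x - correction

# Measurement test: standardized adjustments flag probable gross errors / outliers
cov_d = V @ A.T @ np.linalg.solve(S, A @ V)      # covariance of the adjustments
z = correction / np.sqrt(np.diag(cov_d))
print("reconciled estimates:", np.round(x_hat, 2))
print("test statistics:", np.round(z, 2), "(|z| > 1.96 suggests a gross error)")
```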

  2. Fundamental limits of measurement in telecommunications: Experimental and modeling studies in a test optical network on proposal for the reform of telecommunication quantitations

    International Nuclear Information System (INIS)

    Egan, James; McMillan, Norman; Denieffe, David

    2011-01-01

    Proposals for a review of the limits of measurement for telecommunications are made. The measures are based on adapting work from the area of chemical metrology for the field of telecommunications. Currie has introduced recommendations for defining the limits of measurement in chemical metrology and has identified three key fundamental limits of measurement. These are the critical level, the detection limit and the determination limit. Measurements on an optical system are used to illustrate the utility of these measures and discussion is given of the advantages of using these fundamental quantitations over existing methods.
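
    For orientation, the three Currie limits named above can be written down directly for the simple case of a known blank standard deviation and 5% error rates (the multipliers below are the commonly quoted defaults; the exact values depend on the assumptions adopted):

```python
# Currie's limits for a measurement with blank standard deviation sigma_0,
# assuming normal errors and 5% false-positive / false-negative rates.
sigma_0 = 0.8           # hypothetical blank standard deviation (signal units)

L_C = 1.645 * sigma_0   # critical level: decision threshold for "detected"
L_D = 3.29 * sigma_0    # detection limit: true level that is reliably detected
L_Q = 10.0 * sigma_0    # determination (quantification) limit: ~10% relative precision

print(f"critical level      L_C = {L_C:.2f}")
print(f"detection limit     L_D = {L_D:.2f}")
print(f"determination limit L_Q = {L_Q:.2f}")
```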

  3. Fundamental limits of measurement in telecommunications: Experimental and modeling studies in a test optical network on proposal for the reform of telecommunication quantitations

    Energy Technology Data Exchange (ETDEWEB)

    Egan, James; McMillan, Norman; Denieffe, David, E-mail: eganj@itcarlow.ie [IT Carlow (Ireland)

    2011-08-17

    Proposals for a review of the limits of measurement for telecommunications are made. The measures are based on adapting work from the area of chemical metrology for the field of telecommunications. Currie has introduced recommendations for defining the limits of measurement in chemical metrology and has identified three key fundamental limits of measurement. These are the critical level, the detection limit and the determination limit. Measurements on an optical system are used to illustrate the utility of these measures and discussion is given of the advantages of using these fundamental quantitations over existing methods.

  4. Radiation budget measurement/model interface

    Science.gov (United States)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  5. Measuring Test Case Similarity to Support Test Suite Understanding

    NARCIS (Netherlands)

    Greiler, M.S.; Van Deursen, A.; Zaidman, A.E.

    2012-01-01

    Preprint of paper published in: TOOLS 2012 - Proceedings of the 50th International Conference, Prague, Czech Republic, May 29-31, 2012; doi:10.1007/978-3-642-30561-0_8 In order to support test suite understanding, we investigate whether we can automatically derive relations between test cases. In

  6. 46 CFR 154.431 - Model test.

    Science.gov (United States)

    2010-10-01

    ...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...

  7. Constructing three emotion knowledge tests from the invariant measurement approach

    Directory of Open Access Journals (Sweden)

    Ana R. Delgado

    2017-09-01

    Full Text Available Background Psychological constructionist models like the Conceptual Act Theory (CAT) postulate that complex states such as emotions are composed of basic psychological ingredients that are more clearly respected by the brain than basic emotions. The objective of this study was the construction and initial validation of Emotion Knowledge measures from the CAT frame by means of an invariant measurement approach, the Rasch Model (RM). Psychological distance theory was used to inform item generation. Methods Three EK tests—emotion vocabulary (EV), close emotional situations (CES) and far emotional situations (FES)—were constructed and tested with the RM in a community sample of 100 females and 100 males (age range: 18–65), both separately and conjointly. Results It was corroborated that data-RM fit was sufficient. Then, the effect of type of test and emotion on Rasch-modelled item difficulty was tested. Significant effects of emotion on EK item difficulty were found, but the only statistically significant difference was that between “happiness” and the remaining emotions; neither type of test nor interaction effects on EK item difficulty were statistically significant. The testing of gender differences was carried out after corroborating that differential item functioning (DIF) would not be a plausible alternative hypothesis for the results. No statistically significant sex-related differences were found in EV, CES, FES, or total EK. However, the sign of d indicates that female participants were consistently better than male ones, a result that will be of interest for future meta-analyses. Discussion The three EK tests are ready to be used as components of a higher-level measurement process.

  8. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
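
    For context, the 'goodness of fit' criteria referred to are usually expressed as the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE between model predictions and utility-bill data; the following is a small sketch with invented monthly data (thresholds and the exact normalizations should be taken from BPI-2400 and ASHRAE Guideline 14 themselves):

```python
import numpy as np

def calibration_fit_metrics(measured, modeled):
    """NMBE and CV(RMSE) between utility-bill data and model predictions, in percent."""
    measured, modeled = np.asarray(measured, float), np.asarray(modeled, float)
    resid = measured - modeled
    n, mean = measured.size, measured.mean()
    nmbe = 100.0 * resid.sum() / (n * mean)
    cv_rmse = 100.0 * np.sqrt((resid ** 2).mean()) / mean
    return nmbe, cv_rmse

# Hypothetical monthly energy use (kWh): utility bills vs. calibrated model output
bills = [900, 850, 700, 600, 550, 620, 700, 720, 640, 600, 750, 880]
model = [880, 830, 720, 590, 570, 600, 690, 740, 630, 610, 730, 900]
nmbe, cv_rmse = calibration_fit_metrics(bills, model)
print(f"NMBE = {nmbe:.1f} %, CV(RMSE) = {cv_rmse:.1f} %")
```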

  9. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  10. A Functional Test Platform for the Community Land Model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yang [ORNL; Thornton, Peter E [ORNL; King, Anthony Wayne [ORNL; Steed, Chad A [ORNL; Gu, Lianhong [ORNL; Schuchart, Joseph [ORNL

    2014-01-01

    A functional test platform is presented to create direct linkages between site measurements and the process-based ecosystem model within the Community Earth System Models (CESM). The platform consists of three major parts: 1) interactive user interfaces, 2) functional test model and 3) observational datasets. It provides much needed integration interfaces for both field experimentalists and ecosystem modelers to improve the model's representation of ecosystem processes within the CESM framework without large software overhead.

  11. Measuring test coverage of SoA services

    NARCIS (Netherlands)

    Sneed, Harry M.; Verhoef, Chris

    2015-01-01

    One of the challenges of testing in a SoA environment is that testers do not have access to the source code of the services they are testing. Therefore they are not able to measure test coverage at the code level, as is done in conventional white-box testing. They are compelled to measure test

  12. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  13. Measurement Invariance Testing of a Three-Factor Model of Parental Warmth, Psychological Control, and Knowledge across European and Asian/Pacific Islander American Youth.

    Science.gov (United States)

    Luk, Jeremy W; King, Kevin M; McCarty, Carolyn A; Stoep, Ann Vander; McCauley, Elizabeth

    2016-06-01

    While the interpretation and effects of parenting on developmental outcomes may be different across European and Asian/Pacific Islander (API) American youth, measurement invariance of parenting constructs has rarely been examined. Utilizing multiple-group confirmatory factor analysis, we examined whether the latent structure of parenting measures are equivalent or different across European and API American youth. Perceived parental warmth, psychological control, and knowledge were reported by a community sample of 325 adolescents (242 Europeans and 83 APIs). Results indicated that one item did not load on mother psychological control for API American youth. After removing this item, we found metric invariance for all parenting dimensions, providing support for cross-cultural consistency in the interpretation of parenting items. Scalar invariance was found for father parenting, whereas three mother parenting items were non-invariant across groups at the scalar level. After taking into account several minor forms of measurement non-invariance, non-invariant factor means suggested that API Americans perceived lower parental warmth and knowledge but higher parental psychological control than European Americans. Overall, the degree of measurement non-invariance was not extensive and was primarily driven by a few parenting items. All but one parenting item included in this study may be used for future studies across European and API American youth.

  14. Experiments towards model-based testing using Plan 9: Labelled transition file systems, stacking file systems, on-the-fly coverage measuring

    NARCIS (Netherlands)

    Belinfante, Axel; Guardiola, G.; Soriano, E.; Ballesteros, F.J.

    2006-01-01

    We report on experiments that we did on Plan 9/Inferno to gain more experience with the file-system-as-tool-interface approach. We reimplemented functionality that we earlier worked on in Unix, trying to use Plan 9 file system interfaces. The application domain for those experiments was model-based

  15. Testing of constitutive models in LAME.

    Energy Technology Data Exchange (ETDEWEB)

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    Constitutive models for computational solid mechanics codes are in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models can serve three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where the problem can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst that uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not LAME. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems that they can look at to understand the behavior of a specific model. Since the choice of a constitutive model, and the properties that are used in that model, have an enormous effect on the results of an

  16. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  17. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  18. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses, and more refined models have to be based on experimental testing....... The paper presents an overview of tests reported in the literature and gives examples of the authors' own test results....

  19. Axial force measurement for esophageal function testing

    DEFF Research Database (Denmark)

    Gravesen, Flemming Holbæk; Funch-Jensen, Peter; Gregersen, Hans

    2009-01-01

    force (force in radial direction) whereas the bolus moves along the length of the esophagus in a distal direction. Force measurements in the longitudinal (axial) direction provide a more direct measure of esophageal transport function. The technique used to record axial force has developed from external...... documented using imaging modalities such as radiography and scintigraphy. This inconsistency using manometry has also been documented by axial force recordings. This underlines the lack of information when diagnostics are based on manometry alone. Increasing the volume of a bag mounted on a probe...

  20. Mercury flow tests (first report). Wall friction factor measurement tests and future tests plan

    International Nuclear Information System (INIS)

    Kaminaga, Masanori; Kinoshita, Hidetaka; Haga, Katsuhiro; Hino, Ryutaro; Sudo, Yukio

    1999-07-01

    In the neutron science project at JAERI, we plan to inject a pulsed proton beam with a maximum power of 5 MW from a high-intensity proton accelerator into a mercury target in order to produce high-energy neutrons with an intensity ten times or more that of existing facilities. The neutrons produced by the facility will be utilized for advanced fields of science such as the life sciences. An urgent issue in accomplishing this project is the establishment of mercury target technology. With this in mind, a mercury experimental loop with the capacity to circulate mercury at up to 15 L/min was constructed to perform thermal hydraulic tests, component tests and erosion characteristic tests. A measurement of the wall friction factor was carried out as a first step of the mercury flow tests, while testing the characteristics of components installed in the mercury loop. This report presents an outline of the mercury loop and experimental results of the wall friction factor measurement. From the wall friction factor measurement, it was made clear that the wettability of the mercury improved with increasing loop operation time and that the wall friction factors increased at the same time. The measured wall friction factors were much lower than the values calculated by the Blasius equation at the beginning of the loop operation because of wall slip caused by a non-wetted condition. They agreed well with the values calculated by the Blasius equation, within a deviation of 10%, when the cumulative operation time exceeded 11 hours. This report also introduces technical problems with mercury circulation and a future test plan indispensable for the development of the mercury target. (author)
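
    The comparison against the Blasius correlation can be reproduced in a few lines (a sketch with invented data points; the measured values in the report itself should be used for any real comparison):

```python
import numpy as np

def blasius_friction_factor(re):
    """Darcy friction factor from the Blasius equation (smooth pipe, turbulent flow)."""
    return 0.3164 * re ** -0.25

# Hypothetical measured points: Reynolds number and measured Darcy friction factor
re_meas = np.array([3.0e4, 5.0e4, 8.0e4, 1.2e5])
f_meas = np.array([0.0230, 0.0215, 0.0185, 0.0172])

f_blasius = blasius_friction_factor(re_meas)
deviation = 100.0 * (f_meas - f_blasius) / f_blasius
for re, fm, fb, d in zip(re_meas, f_meas, f_blasius, deviation):
    print(f"Re = {re:9.0f}: measured {fm:.4f}, Blasius {fb:.4f}, deviation {d:+.1f} %")
```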

  1. Geochemical Testing And Model Development - Residual Tank Waste Test Plan

    International Nuclear Information System (INIS)

    Cantrell, K.J.; Connelly, M.P.

    2010-01-01

    This Test Plan describes the testing and chemical analyses release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  2. GEOCHEMICAL TESTING AND MODEL DEVELOPMENT - RESIDUAL TANK WASTE TEST PLAN

    Energy Technology Data Exchange (ETDEWEB)

    CANTRELL KJ; CONNELLY MP

    2010-03-09

    This Test Plan describes the testing and chemical analyses release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  3. Atomic Action Refinement in Model Based Testing

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.

    2007-01-01

    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation

  4. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  5. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  6. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  7. Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods

    Science.gov (United States)

    Merkle, Edgar C.; Zeileis, Achim

    2013-01-01

    The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…

  8. Interwell tracer testing for residual oil saturation measurement

    International Nuclear Information System (INIS)

    Tang, Joseph

    2004-01-01

    This research focuses mainly on the interpretation of partitioning tracer data for residual oil saturation measurement. As a secondary objective, depending on the progress of the project, it may also look into some commonly encountered phenomena related to tracer interaction with the rock matrix, such as adsorption and mass transfer into secondary pores. With advancement of interpretation techniques, interwell partitioning tracer tests have become popular in the industry for determining residual oil saturation to water flood or gas flood. With reported successes in both the petroleum and environmental industries, it has gained wide recognition as a reliable method for measuring residual oil saturation, along with other standard techniques such as single well tracer testing, sponge coring and log-inject-log. Several levels of interpretation, depending on the degree of sophistication, are available to interpret the tracer data for residual oil saturation determination. These methods range from the simplest analytical methods, namely chromatographic transformation and moment analysis, to the most intricate finite difference or streamline simulation, with the semi-quantitative Brigham's Model being in between. The residual oil saturations measured by these methods are not necessarily identical. There arises a legitimate question as to what the residual oil saturation values from different methods mean. Brigham's Model has the advantage that it is semi-analytical and requires minimal effort to match the tracer data. Brigham's five spot model will be extended to model the propagation of partitioning tracer for residual oil saturation measurement. The limitation of using the model for irregular patterns will also be addressed. We will also try to construct a 7 spot, 9 spot and line drive based on Brigham's correlation. This model will also be used to study the effect of different Sor in different layers on the chromatographic and moment analysis methods. Other retention mechanisms such as
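
    The relation that underlies both the chromatographic-transformation and moment-analysis interpretations is the retardation of the partitioning tracer relative to a conservative water tracer; a minimal sketch with hypothetical numbers (the partition coefficient and residence times below are invented):

```python
def sor_from_retardation(t_partitioning, t_water, k_partition):
    """Residual oil saturation from partitioning-tracer retardation (method of moments).

    t_partitioning, t_water : mean residence times of the partitioning and
                              conservative (water) tracers
    k_partition             : oil/water partition coefficient of the tracer
    Uses R = 1 + K * Sor / (1 - Sor)  =>  Sor = (R - 1) / (R - 1 + K).
    """
    r = t_partitioning / t_water
    return (r - 1.0) / (r - 1.0 + k_partition)

# Hypothetical interwell test: partitioning tracer arrives 20% later, K = 4
print(f"Sor = {sor_from_retardation(60.0, 50.0, 4.0):.3f}")
```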

  9. Tin Whisker Testing and Modeling

    Science.gov (United States)

    2015-11-01

    Abbreviation list (report front matter): Center for Advanced Life Cycle Engineering, University of Maryland; CTE - Coefficient of Thermal Expansion; DAU - Defense Acquisition University; PCB - Printed Circuit Board (synonymous with PWB); PWB - Printed Wiring Board (synonymous with PCB); PCTC - Simulated power cycling thermal cycling. Abstract fragment: ...DoD-focused tin whisker risk assessments and whisker growth mechanisms (long-term testing, corrosion/oxidation in humidity, and thermal cycling

  10. Test measurements on a secco white-lead containing model samples to assess the effects of exposure to low-fluence UV laser radiation

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, Valentina, E-mail: v.raimondi@ifac.cnr.it [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Andreotti, Alessia; Colombini, Maria Perla [Chemistry and Industrial Chemistry Department (DCCI) - University of Pisa, Pisa (Italy); Cucci, Costanza [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cuzman, Oana [Institute for the Conservation and Promotion of Cultural Heritage - National Research Council (CNR-ICVBC), Firenze (Italy); Galeotti, Monica [Opificio delle Pietre Dure (OPD), Firenze (Italy); Lognoli, David; Palombi, Lorenzo; Picollo, Marcello [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Tiano, Piero [Institute for the Conservation and Promotion of Cultural Heritage - National Research Council (CNR-ICVBC), Firenze (Italy)

    2015-05-15

    Highlights: • A set of a secco model samples was prepared using white lead and four different organic binders (animal glue and whole egg, whole egg, skimmed milk, egg-oil tempera). • The samples were irradiated with low-fluence UV laser pulses (0.1–1 mJ/cm²). • The effects of laser irradiation were analysed by using different techniques. • The analysis did not point out changes due to low-fluence laser irradiation. • High fluence (88 mJ/cm²) laser radiation instead yielded a chromatic change ascribed to the inorganic component. - Abstract: The laser-induced fluorescence technique is widely used for diagnostic purposes in several applications and its use could be of advantage for non-invasive on-site characterisation of pigments or other compounds in wall paintings. However, it is well known that long-term exposure to UV and VIS radiation can cause damage to wall paintings. Several studies have investigated the effects of lighting, e.g., in museums; however, the effects of low-fluence laser radiation have not been studied much so far. This paper investigates the effects of UV laser radiation using fluences in the range of 0.1–1 mJ/cm² on a set of a secco model samples prepared with lead white and different types of binders (animal glue and whole egg, whole egg, skimmed milk, egg-oil tempera). The samples were irradiated using a Nd:YAG laser (emission wavelength at 355 nm; pulse width: 5 ns) by applying laser fluences between 0.1 mJ/cm² and 1 mJ/cm² and a number of laser pulses between 1 and 500. The samples were characterised before and after laser irradiation by using several techniques (colorimetry, optical microscopy, fibre optical reflectance spectroscopy, FT-IR spectroscopy, Attenuated Total Reflectance microscopy, and gas chromatography/mass spectrometry), to detect variations in the morphological and physico-chemical properties. The results did not point out significant changes in the sample properties after

  11. Measured and calculated isotopes for a gadolinia lead test assembly

    International Nuclear Information System (INIS)

    Hove, C.M.

    1990-01-01

    The US Department of Energy, Duke Power Company, and the B and W Fuel Company participated in an extended burnup project to develop, irradiate, and examine an advanced fuel assembly design for pressurized water reactors. The assembly uses a urania-gadolinia (UO2-Gd2O3) burnable absorber fuel mixture along with other fuel performance and design features that enhance uranium utilization. Previous milestones in the gadolinia development of the extended burnup project include development and verification of a neutronics model, measurement of materials properties of gadolinia fuel, and a successful gadolinia lead test assembly (LTA) program. One LTA was discharged as planned after one cycle, four LTAs continued for two more cycles, and one LTA of these four underwent a fourth cycle and reached 58,310 MWd/ton U assembly-average burnup, a world record at the time. Hot-cell destructive examination of gadolinia and non-gadolinia fuel rods from the single-cycle LTA (406.2 effective full-power days irradiation) has been completed. The comparison of measured and calculated isotopics for this LTA is the subject of this paper. A comparison of measured and calculated power distributions is also given, because accurate prediction of core performance during power production is ultimately the most important test of a calculational model.

  12. Testing the consistency between cosmological measurements of distance and age

    Directory of Open Access Journals (Sweden)

    Remya Nair

    2015-05-01

    Full Text Available We present a model independent method to test the consistency between cosmological measurements of distance and age, assuming the distance duality relation. We use type Ia supernovae, baryon acoustic oscillations, and observational Hubble data to reconstruct the luminosity distance DL(z), the angle-averaged distance DV(z), and the Hubble rate H(z), using the Gaussian process regression technique. We obtain an estimate of the distance duality relation in the redshift range 0.1
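
    A minimal sketch of the Gaussian-process step, using invented H(z) points rather than the actual observational Hubble data and assuming scikit-learn as the regression library:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Mock observational Hubble data: redshift, H(z) in km/s/Mpc, 1-sigma errors
z = np.array([0.07, 0.12, 0.20, 0.28, 0.40, 0.48, 0.60, 0.73, 0.88, 1.04])
Hz = np.array([69.0, 68.6, 72.9, 88.8, 95.0, 97.0, 87.9, 97.3, 90.4, 154.0])
err = np.array([19.6, 26.2, 29.6, 36.6, 17.0, 62.0, 6.1, 7.0, 40.0, 20.0])

# Squared-exponential kernel; measurement errors enter through the alpha term
kernel = ConstantKernel(100.0, (1e-2, 1e5)) * RBF(1.0, (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=err**2, normalize_y=True)
gp.fit(z.reshape(-1, 1), Hz)

# Reconstruct H(z) with its uncertainty on a fine redshift grid
z_grid = np.linspace(0.05, 1.0, 20).reshape(-1, 1)
H_rec, H_std = gp.predict(z_grid, return_std=True)
for zi, h, s in zip(z_grid.ravel(), H_rec, H_std):
    print(f"z = {zi:.2f}: H = {h:6.1f} +/- {s:4.1f} km/s/Mpc")
```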

  13. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
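
    The central idea, scoring within-task actions by how likely they are under an MDP-derived policy indexed by a latent trait, can be sketched as follows (a toy example, not the paper's estimation procedure): Q-values come from value iteration, and the ability parameter acts as the inverse temperature of a softmax action rule, so higher-ability players choose near-optimal actions more consistently.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions; P[a, s, s'] transition probabilities, R[a, s] rewards
P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]],
              [[0.2, 0.8, 0.0], [0.2, 0.0, 0.8], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 0.0, 0.0],
              [-0.1, 1.0, 0.0]])
gamma = 0.9

def q_values(P, R, gamma, iters=200):
    """Value iteration returning the optimal Q[a, s]."""
    V = np.zeros(R.shape[1])
    for _ in range(iters):
        Q = R + gamma * P @ V
        V = Q.max(axis=0)
    return R + gamma * P @ V

def action_log_likelihood(actions, states, theta, Q):
    """Log-likelihood of observed actions under a softmax policy with ability theta."""
    ll = 0.0
    for s, a in zip(states, actions):
        logits = theta * Q[:, s]
        ll += logits[a] - np.log(np.exp(logits).sum())
    return ll

Q = q_values(P, R, gamma)
states, actions = [0, 1, 1, 0], [1, 1, 0, 1]      # hypothetical within-task action log
# Crude ability estimate: maximize the log-likelihood over a grid of theta values
grid = np.linspace(0.0, 5.0, 101)
theta_hat = grid[np.argmax([action_log_likelihood(actions, states, t, Q) for t in grid])]
print(f"estimated ability (softmax inverse temperature): {theta_hat:.2f}")
```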

  14. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  15. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  16. Parametric model measurement: reframing traditional measurement ideas in neuropsychological practice and research.

    Science.gov (United States)

    Brown, Gregory G; Thomas, Michael L; Patt, Virginie

    Neuropsychology is an applied measurement field with its psychometric work primarily built upon classical test theory (CTT). We describe a series of psychometric models to supplement the use of CTT in neuropsychological research and test development. We introduce increasingly complex psychometric models as measurement algebras, which include model parameters that represent abilities and item properties. Within this framework of parametric model measurement (PMM), neuropsychological assessment involves the estimation of model parameters with ability parameter values assuming the role of test 'scores'. Moreover, the traditional notion of measurement error is replaced by the notion of parameter estimation error, and the definition of reliability becomes linked to notions of item and test information. The more complex PMM approaches incorporate into the assessment of neuropsychological performance formal parametric models of behavior validated in the experimental psychology literature, along with item parameters. These PMM approaches endorse the use of experimental manipulations of model parameters to assess a test's construct representation. Strengths and weaknesses of these models are evaluated by their implications for measurement error conditional upon ability level, sensitivity to sample characteristics, computational challenges to parameter estimation, and construct validity. A family of parametric psychometric models can be used to assess latent processes of interest to neuropsychologists. By modeling latent abilities at the item level, psychometric studies in neuropsychology can investigate construct validity and measurement precision within a single framework and contribute to a unification of statistical methods within the framework of generalized latent variable modeling.

  17. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  18. Compensation of Unavailable Test Frequencies During Immunity Measurements

    Science.gov (United States)

    Gronwald, F.; Kebel, R.; Stadtler, T.

    2012-05-01

    Radiated immunity tests usually are performed in shielded test environments, such as anechoic chambers, GTEM-cells, and mode stirred chambers, for example. Then, if testing is performed in the frequency domain, the corresponding EMC-standards often specify test frequencies that have to be used. These requirements may become incompatible in case of large test objects, such as passenger aircraft, that cannot be placed in shielded test environments but only can be tested in open environments where, for regulatory reasons, not all required test frequencies can be applied. In this contribution it is investigated whether incomplete test procedures due to unavailable test frequencies can be compensated by alternative measurement setups.

  19. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  20. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  1. Experimental tests of the standard model

    International Nuclear Information System (INIS)

    Nodulman, L.

    1998-01-01

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the ''Universal Fermi Interaction'' description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered.

  2. Experimental tests of the standard model.

    Energy Technology Data Exchange (ETDEWEB)

    Nodulman, L.

    1998-11-11

    The title implies an impossibly broad field, as the Standard Model includes the fermion matter states, as well as the forces and fields of SU(3) x SU(2) x U(1). For practical purposes, I will confine myself to electroweak unification, as discussed in the lectures of M. Herrero. Quarks and mixing were discussed in the lectures of R. Aleksan, and leptons and mixing were discussed in the lectures of K. Nakamura. I will essentially assume universality, that is flavor independence, rather than discussing tests of it. I will not pursue tests of QED beyond noting the consistency and precision of measurements of α_EM in various processes including the Lamb shift, the anomalous magnetic moment (g-2) of the electron, and the quantum Hall effect. The fantastic precision and agreement of these predictions and measurements is something that convinces people that there may be something to this science enterprise. Also impressive is the success of the ''Universal Fermi Interaction'' description of beta decay processes, or in more modern parlance, weak charged current interactions. With one coupling constant G_F, most precisely determined in muon decay, a huge number of nuclear instabilities are described. The slightly slow rate for neutron beta decay was one of the initial pieces of evidence for Cabibbo mixing, now generalized so that all charged current decays of any flavor are covered.

  3. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but the present methods of photometric measurement impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
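
    A minimal sketch of the training/testing split described above (hypothetical star data; a simple zero-point plus colour-term model is assumed, which may differ from the authors' model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical known stars: instrumental magnitudes, catalogue magnitudes, colours
n = 40
colour = rng.uniform(0.0, 1.5, n)
catalogue_mag = rng.uniform(8.0, 14.0, n)
true_zero_point, true_colour_term = 21.3, 0.12
instrumental_mag = (catalogue_mag - true_zero_point - true_colour_term * colour
                    + rng.normal(0, 0.05, n))

train = np.arange(n) < 30                      # training stars
test = ~train                                  # testing stars

# Least-squares fit of catalogue - instrumental = zp + c * colour on the training stars
A = np.column_stack([np.ones(train.sum()), colour[train]])
y = catalogue_mag[train] - instrumental_mag[train]
(zp, c), *_ = np.linalg.lstsq(A, y, rcond=None)

# Measurement accuracy evaluated on the testing stars
pred = instrumental_mag[test] + zp + c * colour[test]
rms = np.sqrt(np.mean((pred - catalogue_mag[test]) ** 2))
print(f"fitted zero point = {zp:.2f}, colour term = {c:.3f}, test RMS = {rms:.3f} mag")
```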

  4. A gentle introduction to Rasch measurement models for metrologists

    International Nuclear Information System (INIS)

    Mari, Luca; Wilson, Mark

    2013-01-01

    The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, as they can be understood as measuring instruments of Rasch models and psychometrics in general. From the talk natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of the current physical measurement models
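
    For readers new to the model, the dichotomous Rasch item response function and the role of the raw score as a sufficient statistic for ability can be illustrated in a few lines (a simulation sketch; real applications rely on dedicated estimation software):

```python
import numpy as np

rng = np.random.default_rng(3)

def rasch_prob(theta, b):
    """P(correct response) for person ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate a 5-item test administered to 1000 persons of varying ability
item_difficulty = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])
abilities = rng.normal(0.0, 1.0, 1000)
p = rasch_prob(abilities[:, None], item_difficulty[None, :])
responses = rng.random(p.shape) < p

# In the Rasch model the raw score orders persons on the latent ability scale
for score in range(item_difficulty.size + 1):
    mask = responses.sum(axis=1) == score
    if mask.any():
        print(f"raw score {score}: mean simulated ability {abilities[mask].mean():+.2f}")
```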

  5. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies according to three dimensions: adherence to common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  6. TESTING GARCH-X TYPE MODELS

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2017-01-01

    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and as demonstrated the structure of the inverse information implies that the proposed test neither depends on whether...

  7. A Test for Cluster Bias: Detecting Violations of Measurement Invariance across Clusters in Multilevel Data

    Science.gov (United States)

    Jak, Suzanne; Oort, Frans J.; Dolan, Conor V.

    2013-01-01

    We present a test for cluster bias, which can be used to detect violations of measurement invariance across clusters in 2-level data. We show how measurement invariance assumptions across clusters imply measurement invariance across levels in a 2-level factor model. Cluster bias is investigated by testing whether the within-level factor loadings…

  8. Experimental Tests of the Algebraic Cluster Model

    Science.gov (United States)

    Gai, Moshe

    2018-02-01

    The Algebraic Cluster Model (ACM) of Bijker and Iachello, proposed in 2000, has recently been applied to 12C and 16O with much success. We review the current status in 12C, with the outstanding observation of the ground-state rotational band composed of the spin-parity states 0+, 2+, 3-, 4± and 5-. The observation of the 4± parity doublet is a characteristic of a (tri-atomic) molecular configuration in which the three alpha particles are arranged in an equilateral triangular configuration of a symmetric spinning top. We discuss future measurements with electron scattering, 12C(e,e'), to test the predicted B(Eλ) of the ACM.

  9. Testing Expected Shortfall Models for Derivative Positions

    NARCIS (Netherlands)

    Kerkhof, F.L.J.; Melenberg, B.; Schumacher, J.M.

    2003-01-01

    In this paper we test several risk management models for computing expected shortfall for one-period hedge errors of hedged derivatives positions. Contrary to value-at-risk, expected shortfall cannot be tested using the standard binomial test, since we need information of the distribution in the
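
    The point that expected shortfall summarises the magnitude of losses beyond value-at-risk, which a binomial exception count ignores, can be made concrete with a small sketch. The hedge-error sample and confidence level below are synthetic assumptions, not the paper's data or test procedure.

```python
# Illustrative computation of value-at-risk and expected shortfall for a
# sample of one-period hedge errors; synthetic data, not the paper's test.
import numpy as np

rng = np.random.default_rng(1)
hedge_errors = rng.standard_t(df=5, size=10_000)  # heavy-tailed hedge-error sample

alpha = 0.99
losses = -hedge_errors                      # convert errors to losses
var = np.quantile(losses, alpha)            # value-at-risk at level alpha
es = losses[losses >= var].mean()           # expected shortfall: mean loss beyond VaR

# A binomial test only checks the VaR exception frequency; it says nothing about
# the magnitude of the exceedances that expected shortfall summarises.
exception_rate = np.mean(losses > var)
print(f"VaR = {var:.3f}, ES = {es:.3f}, exception rate = {exception_rate:.3%}")
```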

  10. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2003-07-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  11. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  12. Test device for measuring permeability of a barrier material

    Science.gov (United States)

    Reese, Matthew; Dameron, Arrelaine; Kempe, Michael

    2014-03-04

    A test device for measuring permeability of a barrier material. An exemplary device comprises a test card having a thin-film conductor-pattern formed thereon and an edge seal which seals the test card to the barrier material. Another exemplary embodiment is an electrical calcium test device comprising: a test card, an impermeable spacer, an edge seal which seals the test card to the spacer, and an edge seal which seals the spacer to the barrier material.

  13. CARS temperature measurements in a hypersonic propulsion test facility

    Science.gov (United States)

    Jarrett, O., Jr.; Smith, M. W.; Antcliff, R. R.; Northam, G. B.; Cutler, A. D.

    1990-01-01

    Static-temperature measurements performed in a reacting vitiated air-hydrogen Mach-2 flow in a duct in Test Cell 2 at NASA LaRC by using a coherent anti-Stokes Raman spectroscopy (CARS) system are discussed. The hypersonic propulsion Test Cell 2 hardware is outlined with emphasis on optical access ports and safety features in the design of the Test Cell. Such design considerations as vibration, noise, contamination from flow field or atmospheric-borne dust, unwanted laser- and electrically-induced combustion, and movement of the sampling volume in the flow are presented. The CARS system is described, and focus is placed on the principle and components of system-to-monochromator signal coupling. Contour plots of scramjet combustor static temperature in a reacting-flow region are presented for three stations, and it is noted that the measurements reveal such features in the flow as maximum temperature near the model wall in the region of the injector footprint.

  14. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases......, scenarios, behavior, architecture, etc. In this paper we present a method that utilizes the formalism of timed automatons with formal and statistical model checking techniques to apply TD-MBSE to the modeling of system architecture and behavior. The results obtained from applying it to an industrial case...

  15. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, {Omega}{sub b}, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that {Omega}{sub b} {approximately} 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming {Omega}{sub total} = 1) and the need for dark baryonic matter, since {Omega}{sub visible} < {Omega}{sub b}. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M{sub x} {approx gt} 20 GeV and an interaction weaker than the Z{sup 0} coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for {nu}-masses may imply that the {nu}{sub {tau}} is a good hot dark matter candidate. 73 refs., 5 figs.

  16. Heart Rate Measures of Flight Test and Evaluation

    National Research Council Canada - National Science Library

    Bonner, Malcolm A; Wilson, Glenn F

    2001-01-01

    .... Because flying is a complex task, several measures are required to derive the best evaluation. This article describes the use of heart rate to augment the typical performance and subjective measures used in test and evaluation...

  17. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following this mo...

  18. Model tests for prestressed concrete pressure vessels

    International Nuclear Information System (INIS)

    Stoever, R.

    1975-01-01

    Investigations with models of reactor pressure vessels are used to check results of three dimensional calculation methods and to predict the behaviour of the prototype. Model tests with 1:50 elastic pressure vessel models and with a 1:5 prestressed concrete pressure vessel are described and experimental results are presented. (orig.) [de

  19. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques......, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance...

  20. Reducing measurement errors during functional capacity tests in elders.

    Science.gov (United States)

    da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha

    2017-08-23

    Accuracy is essential to the validity of functional capacity measurements. The aim was to evaluate the error of measurement of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators by means of a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests were different from the reference (p < 0.05), and there was technical error of measurement between tests and evaluators. The Bland-Altman test showed differences in the concordance of the results between methods. Short-duration tests showed higher technical error of measurement than longer tests. In summary, tests timed by a chronometer underestimate the real results of the functional capacity. Differences between the evaluators' reaction time and their perception of the start and end of the tests would explain the errors of measurement. Calculation of the technical error of measurement or the use of the camera can increase data validity.
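
    The technical error of measurement referred to above is commonly computed from paired measurements as TEM = sqrt(sum(d^2) / 2n), where d is the difference between two evaluators. A minimal sketch using that standard formula (the times are invented, not the study's data):

```python
# Technical error of measurement (TEM) between two evaluators timing the same
# functional capacity test; the formula is the standard one, the times are made up.
import numpy as np

evaluator_a = np.array([8.2, 7.9, 9.1, 8.5, 7.7, 8.8])   # seconds
evaluator_b = np.array([8.0, 8.1, 8.9, 8.7, 7.5, 8.6])

d = evaluator_a - evaluator_b
tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
relative_tem = 100 * tem / np.mean(np.concatenate([evaluator_a, evaluator_b]))
print(f"TEM = {tem:.3f} s ({relative_tem:.1f}% of the mean)")
```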

  1. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  2. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one or neither is allowed to be correctly specified or misspecified, they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to...

  3. Aerosol behaviour modeling and measurements

    International Nuclear Information System (INIS)

    Gieseke, J.A.; Reed, L.D.

    1977-01-01

    Aerosol behavior within Liquid Metal Fast Breeder Reactor (LMFBR) containments is of critical importance since most of the radioactive species are expected to be associated with particulate forms and the mass of radiologically significant material leaked to the ambient atmosphere is directly related to the aerosol concentration airborne within the containment. Mathematical models describing the behavior of aerosols in closed environments, besides providing a direct means of assessing the importance of specific assumptions regarding accident sequences, will also serve as the basic tool with which to predict the consequences of various postulated accident situations. Consequently, considerable efforts have been recently directed toward the development of accurate and physically realistic theoretical aerosol behavior models. These models have accounted for various mechanisms affecting agglomeration rates of airborne particulate matter as well as particle removal rates from closed systems. In all cases, spatial variations within containments have been neglected and a well-mixed control volume has been assumed. Examples of existing computer codes formulated from the mathematical aerosol behavior models are the Brookhaven National Laboratory TRAP code, the PARDISEKO-II and PARDISEKO-III codes developed at Karlsruhe Nuclear Research Center, and the HAA-2, HAA-3, and HAA-3B codes developed by Atomics International. Because of their attractive short computation times, the HAA-3 and HAA-3B codes have been used extensively for safety analyses and are attractive candidates with which to demonstrate order of magnitude estimates of the effects of various physical assumptions. Therefore, the HAA-3B code was used as the nucleus upon which changes have been made to account for various physical mechanisms which are expected to be present in postulated accident situations and the latest of the resulting codes has been termed the HAARM-2 code. It is the primary purpose of the HAARM

  4. Results from laboratory and field testing of nitrate measuring spectrophotometers

    Science.gov (United States)

    Snazelle, Teri T.

    2015-01-01

    Five ultraviolet (UV) spectrophotometer nitrate analyzers were evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) during a two-phase evaluation. In Phase I, the TriOS ProPs (10-millimeter (mm) path length), Hach NITRATAX plus sc (5-mm path length), Satlantic Submersible UV Nitrate Analyzer (SUNA, 10-mm path length), and S::CAN Spectro::lyser (5-mm path length) were evaluated in the HIF Water-Quality Servicing Laboratory to determine the validity of the manufacturer's technical specifications for accuracy, limit of linearity (LOL), drift, and range of operating temperature. Accuracy specifications were met in the TriOS, Hach, and SUNA. The stock calibration of the S::CAN required two offset adjustments before the analyzer met the manufacturer's accuracy specification. Instrument drift was observed only in the S::CAN and was the result of leaching from the optical path insert seals. All tested models, except for the Hach, met their specified LOL in the laboratory testing. The Hach's range was found to be approximately 18 milligrams nitrogen per liter (mg-N/L) and not the manufacturer-specified 25 mg-N/L. Measurements by all of the tested analyzers showed signs of hysteresis in the operating temperature tests. Only the SUNA measurements demonstrated excessive noise and instability in temperatures above 20 degrees Celsius (°C). The SUNA analyzer was returned to the manufacturer at the completion of the Phase II field deployment evaluation for repair and recalibration, and the performance of the sensor improved significantly.

  5. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

    Science.gov (United States)

    Estabrook, Ryne

    2015-03-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

  6. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    The track of a laser beam on the photosensitive surface of a receiver is a closed curve when the driving shaft and the driven shaft rotate with the same angular velocity and rotation direction. The coordinates of an arbitrary point on the curve are determined by the relative position of the two shafts. Based on this viewpoint, a mathematical model of laser alignment is set up. By using a data acquisition system and a data processing model of a laser alignment meter with a single laser beam and a detector, and based on installation parameters stored in the computer, the state parameters between the two shafts can be obtained by further calculation and correction. The correction values for the four supports of the apparatus being adjusted, moved in the horizontal and vertical planes, can then be calculated. This guides moving the apparatus to align the shafts.

  7. Measurement properties of continuous text reading performance tests

    NARCIS (Netherlands)

    Verkerk-Brussee, T.; van Nispen, R.M.A.; van Rens, G.H.M.B.

    2014-01-01

    Purpose: Measurement properties of tests to assess reading acuity or reading performance have not been extensively evaluated. This study aims to provide an overview of the literature on available continuous text reading tests and their measurement properties. Methods: A literature search was

  8. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  9. A test of the International Personality Item Pool representation of the Revised NEO Personality Inventory and development of a 120-item IPIP-based measure of the five-factor model.

    Science.gov (United States)

    Maples, Jessica L; Guan, Li; Carter, Nathan T; Miller, Joshua D

    2014-12-01

    There has been a substantial increase in the use of personality assessment measures constructed using items from the International Personality Item Pool (IPIP) such as the 300-item IPIP-NEO (Goldberg, 1999), a representation of the Revised NEO Personality Inventory (NEO PI-R; Costa & McCrae, 1992). The IPIP-NEO is free to use and can be modified to accommodate its users' needs. Despite the substantial interest in this measure, there is still a dearth of data demonstrating its convergence with the NEO PI-R. The present study represents an investigation of the reliability and validity of scores on the IPIP-NEO. Additionally, we used item response theory (IRT) methodology to create a 120-item version of the IPIP-NEO. Using an undergraduate sample (n = 359), we examined the reliability, as well as the convergent and criterion validity, of scores from the 300-item IPIP-NEO, a previously constructed 120-item version of the IPIP-NEO (Johnson, 2011), and the newly created IRT-based IPIP-120 in comparison to the NEO PI-R across a range of outcomes. Scores from all 3 IPIP measures demonstrated strong reliability and convergence with the NEO PI-R and a high degree of similarity with regard to their correlational profiles across the criterion variables (rICC = .983, .972, and .976, respectively). The replicability of these findings was then tested in a community sample (n = 757), and the results closely mirrored the findings from Sample 1. These results provide support for the use of the IPIP-NEO and both 120-item IPIP-NEO measures as assessment tools for measurement of the five-factor model. (c) 2014 APA, all rights reserved.

  10. A Preliminary Field Test of an Employee Work Passion Model

    Science.gov (United States)

    Zigarmi, Drea; Nimon, Kim; Houson, Dobie; Witt, David; Diehl, Jim

    2011-01-01

    Four dimensions of a process model for the formulation of employee work passion, derived from Zigarmi, Nimon, Houson, Witt, and Diehl (2009), were tested in a field setting. A total of 447 employees completed questionnaires that assessed the internal elements of the model in a corporate work environment. Data from the measurements of work affect,…

  11. Testing Affine Term Structure Models in Case of Transaction Costs

    NARCIS (Netherlands)

    Driessen, J.J.A.G.; Melenberg, B.; Nijman, T.E.

    1999-01-01

    In this paper we empirically analyze the impact of transaction costs on the performance of affine interest rate models. We test the implied (no arbitrage) Euler restrictions, and we calculate the specification error bound of Hansen and Jagannathan to measure the extent to which a model is

  12. Fast recovery strain measurements in a nuclear test environment

    International Nuclear Information System (INIS)

    Kitchen, W.R.; Nauman, W.J.; Vollmer, D.W.

    1979-01-01

    The recovery of early-time (50 μs or less) strain gage data on structural response experiments in underground nuclear tests has been a continuing problem for experimenters at the Nevada Test Site. Strain measurement is one of the primary techniques used to obtain experimental data for model verification and correlation with predicted effects. Peak strains generally occur within 50 to 100 μs of the radiation exposure. Associated with the exposure is an intense electromagnetic impulse that produces potentials of kilovolts and currents of kiloamperes on the experimental structures. For successful operation, the transducer and associated recording system must recover from the initial noise overload and accurately track the strain response within about 50 μs of the nuclear detonation. A gaging and fielding technique and a recording system design that together accomplish these objectives are described. Areas discussed include: (1) noise source model; (2) experimental cassette design, gage application, grounding, and shielding; (3) cable design and shielding between gage and recorder; (4) recorder design including signal conditioner/amplifier, digital encoder, buffer memory, and uphole data transmission; and (5) samples of experimental data

  13. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, ""Measurement Error: Models, Methods, and Applications"" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres
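
    One impact of ignoring measurement error that such treatments typically describe, attenuation of regression slopes when a covariate is observed with error, is easy to demonstrate by simulation. This is a generic sketch, not an example taken from the book.

```python
# Simulated illustration of attenuation bias: regressing on an error-prone
# covariate shrinks the estimated slope toward zero relative to the true slope.
import numpy as np

rng = np.random.default_rng(2)
n, true_slope = 5_000, 2.0

x = rng.normal(0.0, 1.0, n)                 # true covariate
y = true_slope * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.8, n)             # observed covariate with measurement error

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
reliability = np.var(x) / (np.var(x) + 0.8 ** 2)   # attenuation factor
print(f"naive slope ~ {naive_slope:.2f}, expected ~ {true_slope * reliability:.2f}")
```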

  14. Measurement error in pressure-decay leak testing

    International Nuclear Information System (INIS)

    Robinson, J.N.

    1979-04-01

    The effect of measurement error in pressure-decay leak testing is considered, and examples are presented to demonstrate how it can be properly accommodated in analyzing data from such tests. Suggestions for more effective specification and conduct of leak tests are presented.
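
    In pressure-decay testing the leak rate is usually estimated from the pressure drop over the hold time, roughly Q = V·ΔP/Δt, so uncertainty in the pressure readings propagates directly into the result. A hedged sketch of that arithmetic (generic formula and invented numbers, not the report's procedure):

```python
# Generic pressure-decay leak-rate estimate with simple error propagation;
# the numbers and the uncertainty model are illustrative assumptions.
import math

V = 0.05                          # test volume, m^3
p0, p1 = 201_300.0, 201_250.0     # absolute pressures at start and end of hold, Pa
dt = 600.0                        # hold time, s
sigma_p = 10.0                    # standard uncertainty of each pressure reading, Pa

Q = V * (p0 - p1) / dt                        # leak rate, Pa*m^3/s
sigma_Q = V * math.sqrt(2) * sigma_p / dt     # uncertainty propagated from the pressure difference
print(f"Q = {Q:.3e} Pa*m^3/s ± {sigma_Q:.1e}")
```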

  15. Designing and Testing a Database for the Qweak Measurement

    Science.gov (United States)

    Holcomb, Edward; Spayde, Damon; Pote, Tim

    2009-05-01

    The aim of the Qweak experiment is to make the most precise determination to date, aside from measurements at the Z-pole, of the Weinberg angle via a measurement of the proton's weak charge. The weak charge determines a particle's interaction with Z-type bosons. According to the Standard Model the value of the angle depends on the momentum of the exchanged Z boson and is well-determined. Deviations from the Standard Model would indicate new physics. During Qweak, bundles of longitudinally polarized electrons will be scattered from a proton target. Elastically scattered electrons will be detected in one of eight quartz bars via the emitted Cerenkov radiation. Periodically the helicity of these electrons will be reversed. The difference in the scattering rates of these two helicity states creates an asymmetry; the Weinberg angle can be calculated from this. Our role in the collaboration was the design, creation, and implementation of a database for the Qweak experiment. The purpose of this database is to store pertinent information, such as detector asymmetries and monitor calibrations, for later access. In my talk I plan to discuss the database design and the results of various tests.

  16. Intelligence is what the intelligence test measures. Seriously

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Kan, K.-J.; Borsboom, D.

    2014-01-01

    The mutualism model, an alternative for the g-factor model of intelligence, implies a formative measurement model in which "g" is an index variable without a causal role. If this model is accurate, the search for a genetic or brain instantiation of "g" is deemed useless. This also implies that the

  17. Modeling a Small Punch Testing Device

    Directory of Open Access Journals (Sweden)

    S. Habibi

    2014-04-01

    A small punch test on a miniature sample is implemented in order to estimate the ultimate load of CrMoV ductile steel. The objective of this study is to model the ultimate tensile strength and the ultimate indentation load according to the geometrical parameters of the SPT using experimental data. A comparison of the obtained model with two existing models (the European code of practice and the method of Norris and Parker) allows the design and dimensioning of an indentation device that meets the practical constraints. Implemented as a Matlab program, the model allows the investigation of new combinations of test variables.

  18. NedWind 25 Blade Testing at NREL for the European Standards Measurement and Testing Program

    Energy Technology Data Exchange (ETDEWEB)

    Larwood, S.; Musial, W.; Freebury, G.; Beattie, A.G.

    2001-04-19

    In the mid-90s the European community initiated the Standards, Measurements, and Testing (SMT) program to harmonize testing and measurement procedures in several industries. Within the program, a project was carried out called the European Wind Turbine Testing Procedure Development. The second part of that project, called Blade Test Methods and Techniques, included the United States and was devised to help blade-testing laboratories harmonize their testing methods. This report provides the results of those tests conducted by the National Renewable Energy Laboratory.

  19. Measurement control program at model facility

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    A measurement control program for the model plant is described. The discussion includes the technical basis for such a program, the application of measurement control principles to each measurement, and the use of special experiments to estimate measurement error parameters for difficult-to-measure materials. The discussion also describes the statistical aspects of the program, and the documentation procedures used to record, maintain, and process the basic data

  20. Applying the Implicit Association Test to Measure Intolerance of Uncertainty.

    Science.gov (United States)

    Mosca, Oriana; Dentale, Francesco; Lauriola, Marco; Leone, Luigi

    2016-08-01

    Intolerance of Uncertainty (IU) is a key trans-diagnostic personality construct strongly associated with anxiety symptoms. Traditionally, IU is measured through self-report measures that are prone to bias effects due to impression management concerns and introspective difficulties. Moreover, self-report scales are not able to intercept the automatic associations that are assumed to be main determinants of several spontaneous responses (e.g., emotional reactions). In order to overcome these limitations, the Implicit Association Test (IAT) was applied to measure IU, with a particular focus on reliability and criterion validity issues. The IU-IAT and the Intolerance of Uncertainty Inventory (IUI) were administered to an undergraduate student sample (54 females and 10 males) with a mean age of 23 years (SD = 1.7). Successively, participants were asked to provide an individually chosen uncertain event from their own lives that may occur in the future and were requested to identify a number of potential negative consequences of it. Participants' responses in terms of cognitive thoughts (i.e., cognitive appraisal) and worry reactions toward these events were assessed using the two subscales of the Worry and Intolerance of Uncertainty Beliefs Questionnaire. The IU-IAT showed an adequate level of internal consistency and a not significant correlation with the IUI. A path analysis model, accounting for 35% of event-related worry, revealed that IUI had a significant indirect effect on the dependent variable through event-related IU thoughts. By contrast, as expected, IU-IAT predicted event-related worry independently from IU thoughts. In accordance with dual models of social cognition, these findings suggest that IU can influence event-related worry through two different processing pathways (automatic vs. deliberative), supporting the criterion and construct validity of the IU-IAT. The potential role of the IU-IAT for clinical applications was discussed. © The Author

  1. The emperor’s new measurement model

    NARCIS (Netherlands)

    Zand Scholten, A.; Maris, G.; Borsboom, D.

    2011-01-01

    In this article the author discusses professor Stephen M. Humphry's critical attitude with respect to psychometric modeling. The author criticizes Humphry's model stating that the model is theoretically interesting but cannot be tested as it is not identified. The author also states that Humphry's

  2. Surface moisture measurement system hardware acceptance test procedure

    International Nuclear Information System (INIS)

    Ritter, G.A.

    1996-01-01

    The purpose of this acceptance test procedure is to verify that the mechanical and electrical features of the Surface Moisture Measurement System are operating as designed and that the unit is ready for field service. This procedure will be used in conjunction with a software acceptance test procedure, which addresses testing of software and electrical features not addressed in this document. Hardware testing will be performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. These systems were developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks

  3. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  4. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.; NASA/Fermilab Astrophysics Center, Batavia, IL)

    1987-01-01

    Theoretical prejudice and inflationary models of the very early universe strongly favor the flat, Einstein-de Sitter model of the universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the universe which possess a smooth component of energy density. The kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving light strings, is studied in detail. The observational tests which can be used to discriminate between these models are also discussed. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations. 58 references
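
    All of the kinematic tests listed above reduce to comparing distance (or volume) as a function of redshift for different smooth components. The sketch below illustrates this for flat models; the scalings n = 0, 2 and 4 for a cosmological term, a string network and relativistic particles are standard textbook values, and H0 and Omega_m are illustrative choices rather than values from the paper.

```python
# Comoving distance vs. redshift in flat models with matter plus one smooth
# component scaling as (1+z)^n; n = 0 (cosmological term), 2 (string network),
# 4 (relativistic particles) are the standard scalings, assumed here.
import numpy as np

c, H0 = 299_792.458, 70.0        # km/s and km/s/Mpc (H0 is an illustrative choice)
omega_m = 0.3

def comoving_distance(z, n, steps=2_000):
    zs = np.linspace(0.0, z, steps)
    hz = H0 * np.sqrt(omega_m * (1 + zs) ** 3 + (1 - omega_m) * (1 + zs) ** n)
    integrand = c / hz
    # trapezoidal integration of c/H(z) in Mpc
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs)))

for n, label in [(0, "cosmological term"), (2, "string network"), (4, "relativistic particles")]:
    print(f"{label:22s}: D_C(z=1) = {comoving_distance(1.0, n):7.1f} Mpc")
```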

  5. Kinematic tests of exotic flat cosmological models

    International Nuclear Information System (INIS)

    Charlton, J.C.; Turner, M.S.

    1986-05-01

    Theoretical prejudice and inflationary models of the very early Universe strongly favor the flat, Einstein-deSitter model of the Universe. At present the observational data conflict with this prejudice. This conflict can be resolved by considering flat models of the Universe which possess a smooth component of energy density. We study in detail the kinematics of such models, where the smooth component is relativistic particles, a cosmological term, a network of light strings, or fast-moving light strings. We also discuss the observational tests which can be used to discriminate between these models. These tests include the magnitude-redshift, lookback time-redshift, angular size-redshift, and comoving volume-redshift diagrams and the growth of density fluctuations

  6. Quality testing as a reserve for rationalization measures

    International Nuclear Information System (INIS)

    Babic, H.G.

    1979-01-01

    The increase in large-scale planning activities and their realization in all fields of quality testing, set against the requirement to reduce costs, emphasizes the necessity of rationalization measures. Especially during the planning phases there are a number of possibilities for addressing this problem: the determination of reference dates for production devices based on the preparation of machine- and test-related data, the assignment of the testing frequency as a function of the quotient of production deviations and characteristic tolerances, and the application of operation planning methods following different priority rules. Apart from the already existing systems for conventional and computer-aided planning of test equipment use, with whose help the most suitable test equipment is selected for certain testing jobs, systematic representations and programs for planning the testing procedure and the testing precision will enrich the overall organization of testing procedures. (orig./RW) [de

  7. Shear punch and microhardness tests for strength and ductility measurements

    International Nuclear Information System (INIS)

    Lucas, G.E.; Odette, G.R.; Sheckherd, J.W.

    1983-01-01

    In response to the requirements of the fusion reactor materials development program for small-scale mechanical property tests, two techniques have been developed, namely ball microhardness and shear punch tests. The ball microhardness test is based on the repeated measurement at increasing loads of the chordal diameter of an impression made by a spherical penetrator. A correlation has been developed to predict the constitutive relation of the test material from these data. In addition, the indentation pile-up geometry can be analyzed to provide information on the homogeneity of plastic flow in the test material. The shear punch test complements the microhardness test. It is based on blanking a circular disk from a fixed sheet metal specimen. The test is instrumented to provide punch load-displacement data, and these data can be used to determine flow properties of the test material such as yield stress, ultimate tensile strength, work-hardening exponent, and reduction of area

  8. Validation of theoretical models through measured pavement response

    DEFF Research Database (Denmark)

    Ullidtz, Per

    1999-01-01

    mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value. On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layers that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile

  9. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at a centre-of-mass energy of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  10. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  11. Molecular Sieve Bench Testing and Computer Modeling

    Science.gov (United States)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to the change of the parameters which influences the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel are then applied to test data. A more complex model, a non-darcian model (two dimensional), has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
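
    The one-dimensional plug-flow model with a linear driving force (LDF) uptake mentioned above can be sketched with an explicit finite-difference scheme. The following isothermal sketch uses assumed, illustrative parameters and is not one of the FORTRAN programs described in the record.

```python
# Isothermal 1-D plug-flow adsorption with a linear-driving-force uptake term,
# solved by explicit finite differences; all parameters are illustrative.
import numpy as np

nz, nt = 100, 20_000
L, u, dt = 0.25, 0.05, 0.005        # bed length (m), interstitial velocity (m/s), time step (s)
dz = L / nz
k_ldf, K, eps = 0.05, 10.0, 0.4     # LDF coefficient (1/s), linear isotherm slope, bed voidage
c_in = 1.0                          # inlet concentration (arbitrary units)

c = np.zeros(nz)                    # gas-phase concentration
q = np.zeros(nz)                    # adsorbed-phase loading

for _ in range(nt):
    dqdt = k_ldf * (K * c - q)                        # LDF uptake rate
    dcdz = np.diff(np.concatenate(([c_in], c))) / dz  # first-order upwind convection
    c = c + dt * (-u * dcdz - (1 - eps) / eps * dqdt)
    q = q + dt * dqdt

print(f"outlet breakthrough fraction after {nt * dt:.0f} s: {c[-1] / c_in:.2f}")
```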

  12. 76 FR 1136 - Electroshock Weapons Test and Measurement Workshop

    Science.gov (United States)

    2011-01-07

    From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF COMMERCE, National Institute of Standards and Technology. Electroshock Weapons Test and Measurement Workshop. AGENCY: National Institute of Standards and Technology (NIST), United States Department of Commerce. ACTION: Notice...

  13. Instructions for 104-SX liquid level measurement field tests

    International Nuclear Information System (INIS)

    Webb, R.H.

    1994-01-01

    This document provides detailed instructions for field testing a suggested solution of inserting a liner inside the 104-SX failed Liquid Observation Well to gain access for making temporary Liquid Level Measurement until a permanent solution has been provided

  14. Simulation of thermohydraulic phenomena and model test for FBR

    International Nuclear Information System (INIS)

    Satoh, Kazuziro

    1994-01-01

    This paper summarizes the major thermohydraulic phenomena of FBRs and the conventional ways of testing them with models, and introduces recent findings regarding measurement technology and computational science. In the future commercial stage of FBRs, design optimization will become more and more important to improve economy and safety. It is indispensable to apply computational science to plant design and safety evaluation. Most of the model tests will be replaced by simulation analyses based on computational science. Measurement technology using ultrasonics and numerical simulation with massively parallel computing are considered to be the key technologies for realizing the design-by-analysis method. (author)

  15. Direct tests of measurement uncertainty relations: what it takes.

    Science.gov (United States)

    Busch, Paul; Stevens, Neil

    2015-02-20

    The uncertainty principle being a cornerstone of quantum mechanics, it is surprising that, in nearly 90 years, there have been no direct tests of measurement uncertainty relations. This lacuna was due to the absence of two essential ingredients: appropriate measures of measurement error (and disturbance) and precise formulations of such relations that are universally valid and directly testable. We formulate two distinct forms of direct tests, based on different measures of error. We present a prototype protocol for a direct test of measurement uncertainty relations in terms of value deviation errors (hitherto considered nonfeasible), highlighting the lack of universality of these relations. This shows that the formulation of universal, directly testable measurement uncertainty relations for state-dependent error measures remains an important open problem. Recent experiments that were claimed to constitute invalidations of Heisenberg's error-disturbance relation, are shown to conform with the spirit of Heisenberg's principle if interpreted as direct tests of measurement uncertainty relations for error measures that quantify distances between observables.

  16. Accuracy tests of the tessellated SLBM model

    International Nuclear Information System (INIS)

    Ramirez, A L; Myers, S C

    2007-01-01

    We have compared the Seismic Location Base Model (SLBM) tessellated model (version 2.0 Beta, posted July 3, 2007) with the GNEMRE Unified Model. The comparison is done on a layer/depth-by-layer/depth and layer/velocity-by-layer/velocity basis. The SLBM earth model is defined on a tessellation that spans the globe at a constant resolution of about 1 degree (Ballard, 2007). For the tests, we used the earth model in file ''unified( ) iasp.grid''. This model contains the top 8 layers of the Unified Model (UM) embedded in a global IASP91 grid. Our test queried the same set of nodes included in the UM model file. To query the model stored in memory, we used some of the functionality built into the SLBMInterface object. We used the method getInterpolatedPoint() to return desired values for each layer at user-specified points. The values returned include: depth to the top of each layer, layer velocity, layer thickness and (for the upper-mantle layer) velocity gradient. The SLBM earth model has an extra middle crust layer whose values are used when Pg/Lg phases are being calculated. This extra layer was not accessed by our tests. Figures 1 to 8 compare the layer depths, P velocities and P gradients in the UM and SLBM models. The figures show results for the three sediment layers, three crustal layers and the upper mantle layer defined in the UM model. Each layer in the models (sediment1, sediment2, sediment3, upper crust, middle crust, lower crust and upper mantle) is shown on a separate figure. The upper mantle P velocity and gradient distributions are shown on Figures 7 and 8. The left and center images in the top row of each figure are the renderings of depth to the top of the specified layer for the UM and SLBM models. When a layer has zero thickness, its depth is the same as that of the layer above. The right image in the top row is the difference in layer depth between the UM and SLBM renderings. The left and center images in the bottom row of the figures are

  17. A Human Capital Model of Educational Test Scores

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelated...... with observable parental attributes and, thus, are environmental rather than genetic in origin. We show that the test scores measure manifest or measured ability as it has evolved over the life of the respondent and is, thus, more a product of the human capital formation process than some latent or fundamental...... measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes...

  18. Measuring Teacher Quality with Value-Added Modeling

    Science.gov (United States)

    Marder, Michael

    2012-01-01

    Using computers to evaluate teachers based on student test scores is more difficult than it seems. Value-added modeling is a genuinely serious attempt to grapple with the difficulties. Value-added modeling carries the promise of measuring teacher quality automatically and objectively, and improving school systems at minimal cost. The essence of…

  19. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
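
    The distinction the article draws between ordinary unit tests and model validation tests can be illustrated with a toy pytest file. The model, tolerance and "experimental" reference value below are invented for the sketch and are not OpenWorm code.

```python
# test_membrane_model.py -- illustrative only; the model, the tolerance and the
# "experimental" reference value are invented for this sketch, not OpenWorm code.
import math
import pytest

def membrane_voltage(t, v_rest=-70.0, v_peak=-50.0, tau=5.0):
    """Toy exponential relaxation of membrane potential toward v_peak (mV)."""
    return v_peak + (v_rest - v_peak) * math.exp(-t / tau)

# Unit test: checks an internal software property of the implementation.
def test_voltage_starts_at_rest():
    assert membrane_voltage(0.0) == pytest.approx(-70.0)

# Model validation test: checks agreement with an (assumed) experimental observation.
def test_steady_state_matches_experiment():
    observed_steady_state_mv = -50.5          # hypothetical measurement
    assert membrane_voltage(100.0) == pytest.approx(observed_steady_state_mv, abs=1.0)
```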

  20. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

    Problems related to metal fatigue modelling and testing are here treated in four different papers. In the first paper different views of the subject are summarised in a literature survey. In the second paper a new model for fatigue life is investigated. Experimental results are established which are promising for further development of the model. In the third paper a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed in order to resemble certain fatigue-related features in service life processes. In the fourth paper fatigue problems in transport vibrations are treated

  1. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in depth analysis of measured data. The book would begin with an explanation of the fundamentals of radio wave propagation and progress through a series of topics, including the measurement of radio channel characteristics, radio channel sounders, measurement strategies

  2. The "Test of Financial Literacy": Development and Measurement Characteristics

    Science.gov (United States)

    Walstad, William B.; Rebeck, Ken

    2017-01-01

    The "Test of Financial Literacy" (TFL) was created to measure the financial knowledge of high school students. Its content is based on the standards and benchmarks stated in the "National Standards for Financial Literacy" (Council for Economic Education 2013). The test development process involved extensive item writing and…

  3. Measuring Intelligence with the Goodenough-Harris Drawing Test.

    Science.gov (United States)

    Scott, Linda Howard

    1981-01-01

    Critically evaluates the literature through 1977 on the Goodenough-Harris Drawing Test. Areas reviewed are administration and standardization of the man and woman scales, test ceiling, sex differences, the Quality scale, reliability, criterion validity, validity with measures of academic achievement, cultural variables, and use with the learning…

  4. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
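
    Orthogonal multisine inputs are typically built by assigning each control surface a disjoint set of harmonics of a common base period, which makes the excitations mutually orthogonal over that period. The sketch below shows that construction with illustrative frequencies and random phases; the phase optimization for low peak factor used in the actual designs is omitted.

```python
# Orthogonal multisine excitation sketch: each surface gets disjoint harmonics of a
# common base period T, which makes the signals mutually orthogonal over one period.
# Phases are random here; the flight-test designs additionally optimize them.
import numpy as np

T, fs = 20.0, 50.0                       # base period (s) and sample rate (Hz), assumed values
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(3)

harmonics = {"elevator": [2, 5, 8, 11], "aileron": [3, 6, 9, 12], "rudder": [4, 7, 10, 13]}

signals = {}
for surface, ks in harmonics.items():
    phases = rng.uniform(0.0, 2 * np.pi, len(ks))
    u = sum(np.sin(2 * np.pi * k * t / T + p) for k, p in zip(ks, phases))
    signals[surface] = u / np.max(np.abs(u))        # normalise to unit amplitude

# Orthogonality check: inner products between different surfaces are ~0 over one period
dot = np.dot(signals["elevator"], signals["aileron"]) / len(t)
print(f"elevator.aileron inner product ~ {dot:.2e}")
```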

  5. Modelling of wetting tests for a natural pyroclastic soil

    Directory of Open Access Journals (Sweden)

    Moscariello Mariagiovanna

    2016-01-01

    The so-called wetting-induced collapse is one of the most common problems associated with unsaturated soils. This paper applies the Modified Pastor-Zienkiewicz model (MPZ) to analyse the wetting behaviour of undisturbed specimens of an unsaturated air-fall volcanic (pyroclastic) soil originating from the explosive activity of the Somma-Vesuvius volcano (Southern Italy). Standard oedometric tests, suction-controlled oedometric tests and suction-controlled isotropic tests are considered. The results of the constitutive modelling show a satisfactory capability of the MPZ to simulate the variations of soil void ratio upon wetting, with negligible differences between the measured and the computed values.

  6. Indoor MIMO Channel Measurement and Modeling

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Andersen, Jørgen Bach

    2005-01-01

    Forming accurate models of the multiple input multiple output (MIMO) channel is essential both for simulation as well as understanding of the basic properties of the channel. This paper investigates different known models using measurements obtained with a 16x32 MIMO channel sounder for the 5.8GHz...... band. The measurements were carried out in various indoor scenarios including both temporal and spatial aspects of channel changes. The models considered include the so-called Kronecker model, a model proposed by Weichselberger et. al., and a model involving the full covariance matrix, the most...... accurate model for Gaussian channels. For each of the environments different sizes of both the transmitter and receiver antenna arrays are investigated, 2x2 up to 16x32. Generally it was found that in terms of capacity cumulative distribution functions (CDFs) all models fit well for small array sizes...
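
    The Kronecker model mentioned above assumes the spatial correlation factorises into separate transmit and receive correlation matrices, so a channel realisation can be drawn as H = Rr^(1/2) G Rt^(1/2) with i.i.d. complex Gaussian G. In the sketch below, simple exponential correlation matrices stand in for correlations that would normally be estimated from the measurements.

```python
# Kronecker MIMO channel sketch: H = Rr^{1/2} G Rt^{1/2}, with exponential
# correlation matrices standing in for correlations estimated from measurements.
import numpy as np

def exp_corr(n, rho):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def kronecker_channel(n_rx, n_tx, rho_rx, rho_tx, rng):
    Rr, Rt = exp_corr(n_rx, rho_rx), exp_corr(n_tx, rho_tx)
    G = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
    Rr_sqrt = np.linalg.cholesky(Rr)     # matrix square roots via Cholesky factors
    Rt_sqrt = np.linalg.cholesky(Rt)
    return Rr_sqrt @ G @ Rt_sqrt.conj().T

rng = np.random.default_rng(4)
H = kronecker_channel(4, 4, rho_rx=0.7, rho_tx=0.5, rng=rng)

# Capacity of one realisation at SNR = 10 dB with equal power allocation
snr = 10 ** (10 / 10)
det = np.linalg.det(np.eye(4) + snr / 4 * H @ H.conj().T)
print(f"capacity of one realisation ~ {np.log2(det.real):.2f} bit/s/Hz")
```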

  7. Test Station for Measuring Aluminum Tube Geometrical Parameters

    CERN Document Server

    Oansea, D; Gongadze, A L; Gostkin, M I; Dedovich, D V; Evtoukhovitch, P G; Comanescu, B; Kotov, S A; Necsoiu, T; Potrap, I N; Rogalev, E V; Tskhadadze, E G; Chelkov, G A

    2001-01-01

    A test station for quality control of aluminum tube outer diameter and wall thickness is presented. The tested tubes are used for the drift detector assembly of the ATLAS (LHC, CERN) muon system. The outer diameter and wall thickness of the aluminium tubes are measured by means of noncontact optical and ultrasonic methods, respectively, with an accuracy of 3 µm. The testing process is automatic and interacts with the production database.

  8. Tritium transfer in pigs - A model test

    Energy Technology Data Exchange (ETDEWEB)

    Melintescu, A.; Galeriu, D. [Horia Hulubei National Inst. for Physics and Nuclear Engineering, Dept. of Life and Environmental Physics, 407 Atomistilor St., Bucharest-Magurele, RO-077125 (Romania)

    2008-07-15

    In the frame of the IAEA EMRAS (Environmental Modelling for Radiation Safety) programme, a scenario was developed for model testing, starting from unpublished data for a sow fed with OBT for 84 days. The scenario includes model predictions for the dynamics of tritium in urine and faeces and of HTO and OBT in organs at sacrifice. Two inter-comparison exercises have been carried out, and most of the models gave predictions within a factor of 3 to 5, except for faeces. An analysis of the models' structure, performance and limits was carried out in order to build a model of moderate complexity with reliable predictive power, applicable also to human dosimetry when OBT data are missing. (authors)

  9. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed....... Furthermore, Gurson-Tvergaard-Needleman model predictions from a top-down approach are employed to gain insight into the mechanisms governing crack initiation and subsequent propagation in small punch experiments. An accurate assessment of micromechanical toughness parameters from the SPT...

  10. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST – 1 show details for our two models (Model A and Model B). Performance of Model A after adding of 32 negative dataset of MiRTif on our testing set (MiRecords) ...

  11. A hinged-pad test structure for sliding friction measurement in micromachining

    Energy Technology Data Exchange (ETDEWEB)

    Boer, M.P. de; Redmond, J.M.; Michalske, T.A.

    1998-08-01

    The authors describe the design, modeling, fabrication and initial testing of a new test structure for friction measurement in MEMS. The device consists of a cantilevered forked beam and a friction pad attached via a hinge. Compared to previous test structures, the proposed structure can measure friction over much larger pressure ranges, yet occupies one hundred times less area. The placement of the hinge is crucial to obtaining a well-known and constant pressure distribution in the device. Static deflections of the device were measured and modeled numerically. Preliminary results indicate that friction pad slip is sensitive to friction pad normal force.

  12. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components (ability, capacity, and availability), where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed as are future directions and limitations.
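
    To make the role of the Michaelis-Menten growth function concrete, the sketch below evaluates a single person's growth curve and the three components described above. It is a minimal illustration under assumed parameter values and parameter names (capacity, half_time), not the authors' estimation code, which fits such curves within a second-order measurement model.

```python
# Minimal sketch (assumptions, not the authors' code): a Michaelis-Menten growth
# curve with "capacity" as the asymptote, "ability" as the model-implied score at
# time t, and "availability" as the gap between the two.
import numpy as np

def michaelis_menten(t, capacity, half_time):
    """Ability at time t; half_time is the time at which half of capacity is reached."""
    return capacity * t / (half_time + t)

t = np.linspace(0, 10, 6)                  # assessment occasions (arbitrary units)
capacity, half_time = 100.0, 3.0           # illustrative person-level parameters
ability = michaelis_menten(t, capacity, half_time)
availability = capacity - ability          # remaining room to grow at each occasion
for ti, ab, av in zip(t, ability, availability):
    print(f"t={ti:4.1f}  ability={ab:6.1f}  availability={av:6.1f}")
```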

  13. Validating Grammaticality Judgment Tests: Evidence from Two New Psycholinguistic Measures

    Science.gov (United States)

    Vafaee, Payman; Suzuki, Yuichi; Kachisnke, Ilina

    2017-01-01

    Several previous factor-analytic studies on the construct validity of grammaticality judgment tests (GJTs) concluded that untimed GJTs measure explicit knowledge (EK) and timed GJTs measure implicit knowledge (IK) (Bowles, 2011; R. Ellis, 2005; R. Ellis & Loewen, 2007). It has also been shown that, irrespective of the time condition chosen,…

  14. Radiometric instrumentation and measurements guide for photovoltaic performance testing

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D.

    1997-04-01

    The Photovoltaic Module and Systems Performance and Engineering Project at the National Renewable Energy Laboratory performs indoor and outdoor standardization, testing, and monitoring of the performance of a wide range of photovoltaic (PV) energy conversion devices and systems. The PV Radiometric Measurements and Evaluation Team (PVSRME) within that project is responsible for measurement and characterization of natural and artificial optical radiation which stimulates the PV effect. The PV manufacturing and research and development community often approaches project members for technical information and guidance. A great area of interest is radiometric instrumentation, measurement techniques, and data analysis applied to understanding and improving PV cell, module, and system performance. At the Photovoltaic Radiometric Measurements Workshop conducted by the PVSRME team in July 1995, the need to communicate knowledge of solar and optical radiometric measurements and instrumentation, gained as a result of NREL's long-term experiences, was identified as an activity that would promote improved measurement processes and measurement quality in the PV research and manufacturing community. The purpose of this document is to address the practical and engineering need to understand optical and solar radiometric instrument performance, selection, calibration, installation, and maintenance applicable to indoor and outdoor radiometric measurements for PV calibration, performance, and testing applications. An introductory section addresses radiometric concepts and definitions. Next, concepts essential to spectral radiometric measurements are discussed. Broadband radiometric instrumentation and measurement concepts are then discussed. Each type of measurement serves as an important component of the PV cell, module, and system performance measurement and characterization process.

  15. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties is also addressed.

  16. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  17. INFORMATION-MEASURING TEST SYSTEM OF DIESEL LOCOMOTIVE HYDRAULIC TRANSMISSIONS

    Directory of Open Access Journals (Sweden)

    I. V. Zhukovytskyy

    2015-08-01

    Full Text Available Purpose. The article describes the development of an information-measuring test system for diesel locomotive hydraulic transmissions, which makes it possible to obtain baseline data for further studies to determine the technical condition of the hydraulic transmission. The improvement of the factory technology for post-repair tests of hydraulic transmissions, by automating the existing test stands according to the specifications of diesel locomotive repair enterprises, was analyzed. The analysis builds on a detailed review of existing foreign information-measuring test systems for the hydraulic transmissions of diesel locomotives, BelAZ earthmovers, aircraft tugs, slag cars, trucks, BelAZ wheel dozers, some brands of tractors, etc. The problem of creating an information-measuring test system for diesel locomotive hydraulic transmissions is addressed, starting from the possibility of automating the existing hydraulic transmission test stand at the Dnipropetrovsk Diesel Locomotive Repair Plant "Promteplovoz". Methodology. The authors propose a method for creating a microprocessor-based automated system for stand testing of diesel locomotive hydraulic transmissions under locomotive plant conditions, by justifying the selection of the necessary sensors as well as the required hardware and software for the information-measuring system. Findings. Based on the conducted analysis, the necessity of improving the plant's hydraulic transmission stand testing by creating a microprocessor testing system was substantiated, supported by the experience of developing such systems abroad. Further research should aim to improve the accuracy and frequency of data collection by adopting more modern and reliable sensors together with filtering software for electromagnetic and other interference. Originality. The

  18. Robust Design of Reliability Test Plans Using Degradation Measures.

    Energy Technology Data Exchange (ETDEWEB)

    Lane, Jonathan Wesley; Lane, Jonathan Wesley; Crowder, Stephen V.; Crowder, Stephen V.

    2014-10-01

    With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.
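
    As a concrete illustration of the idea of demonstrating reliability from degradation data rather than observed failures, the sketch below extrapolates each unit's degradation path to the mission time and bootstraps over units for a distribution-free lower bound. It is an assumed workflow with simulated data, not the report's method; the threshold, mission time and inspection schedule are made-up parameters.

```python
# Minimal sketch (not the report's method): a nonparametric, degradation-based
# reliability demonstration. Each unit's degradation path is extrapolated to the
# mission time, and a bootstrap over units gives a lower confidence bound on the
# fraction that stays below the failure threshold. Data here are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_units, t_obs = 12, np.linspace(0, 500, 6)      # test plan: 12 units, 6 inspections
true_slope = rng.normal(2e-3, 4e-4, n_units)     # unit-to-unit variability
paths = true_slope[:, None] * t_obs + rng.normal(0, 0.05, (n_units, t_obs.size))

threshold, mission_time = 2.0, 800.0             # failure level and demonstration target

# Per-unit linear extrapolation of degradation to the mission time.
slopes = np.array([np.polyfit(t_obs, y, 1)[0] for y in paths])
deg_at_mission = slopes * mission_time
survives = deg_at_mission < threshold            # True if unit would still meet spec

# Bootstrap over units for a one-sided lower bound on reliability.
boot = np.array([rng.choice(survives, n_units, replace=True).mean()
                 for _ in range(5000)])
print(f"point estimate R = {survives.mean():.2f}, "
      f"95% lower bound = {np.quantile(boot, 0.05):.2f}")
```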

  19. Performance tests for instruments measuring radon activity concentration

    International Nuclear Information System (INIS)

    Beck, T.R.; Buchroeder, H.; Schmidt, V.

    2009-01-01

    Performance tests of electronic instruments measuring the activity concentration of 222 Rn have been carried out with respect to the standard IEC 61577-2. In total, 9 types of instrument operating with ionization chambers or electrostatic collection have been tested for the influence of different climatic and radiological factors on the measurement characteristics. It is concluded that all types of instrument, which are commercially available, are suitable for indoor radon measurements. Because of the dependence on climatic conditions, the outdoor use is partly limited.

  20. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  1. Testing of a steel containment vessel model

    International Nuclear Information System (INIS)

    Luk, V.K.; Hessheimer, M.F.; Matsumoto, T.; Komine, K.; Costello, J.F.

    1997-01-01

    A mixed-scale containment vessel model, with 1:10 in containment geometry and 1:4 in shell thickness, was fabricated to represent an improved, boiling water reactor (BWR) Mark II containment vessel. A contact structure, installed over the model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. This paper describes the pretest preparations and the conduct of the high pressure test of the model performed on December 11-12, 1996. 4 refs., 2 figs

  2. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet...... and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing...

  3. Binomial test models and item difficulty

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1979-01-01

    In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally

  4. Shallow foundation model tests in Europe

    Czech Academy of Sciences Publication Activity Database

    Feda, Jaroslav; Simonini, P.; Arslan, U.; Georgiodis, M.; Laue, J.; Pinto, I.

    1999-01-01

    Roč. 2, č. 4 (1999), s. 447-475 ISSN 1436-6517. [Int. Conf. on Soil - Structure Interaction in Urban Civ. Engineering. Darmstadt, 08.10.1999-09.10.1999] R&D Projects: GA MŠk OC C7.10 Keywords : shallow foundations * model tests * sandy subsoil * bearing capacity * settlement Subject RIV: JM - Building Engineering

  5. Testing spatial heterogeneity with stock assessment models

    DEFF Research Database (Denmark)

    Jardim, Ernesto; Eero, Margit; Silva, Alexandra

    2018-01-01

    This paper describes a methodology that combines meta-population theory and stock assessment models to gain insights about spatial heterogeneity of the meta-population in an operational time frame. The methodology was tested with stochastic simulations for different degrees of connectivity betwee...

  6. Turbulence Modeling Validation, Testing, and Development

    Science.gov (United States)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  7. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented

  8. Testing measurement invariance of composites using partial least squares

    NARCIS (Netherlands)

    Henseler, Jörg; Ringle, Christian M.; Sarstedt, Marko

    2016-01-01

    Purpose Research on international marketing usually involves comparing different groups of respondents. When using structural equation modeling (SEM), group comparisons can be misleading unless researchers establish the invariance of their measures. While methods have been proposed to analyze

  9. Model SH intelligent instrument for thickness measuring

    International Nuclear Information System (INIS)

    Liu Juntao; Jia Weizhuang; Zhao Yunlong

    1995-01-01

    The authors introduce the Model SH intelligent instrument for thickness measuring, based on the principle of beta back-scattering, and describe its application range, features, principle of operation, system design, calibration and specifications.

  10. Deformation modeling and the strain transient dip test

    International Nuclear Information System (INIS)

    Jones, W.B.; Rohde, R.W.; Swearengen, J.C.

    1980-01-01

    Recent efforts in material deformation modeling reveal a trend toward unifying creep and plasticity with a single rate-dependent formulation. While such models can describe actual material deformation, most require a number of different experiments to generate model parameter information. Recently, however, a new model has been proposed in which most of the requisite constants may be found by examining creep transients brought about through abrupt changes in creep stress (strain transient dip test). The critical measurement in this test is the absence of a resolvable creep rate after a stress drop. As a consequence, the result is extraordinarily sensitive to strain resolution as well as machine mechanical response. This paper presents the design of a machine in which these spurious effects have been minimized and discusses the nature of the strain transient dip test using the example of aluminum. It is concluded that the strain transient dip test is not useful as the primary test for verifying any micromechanical model of deformation. Nevertheless, if a model can be developed which is verifiable by other experiments, data from a dip test machine may be used to generate model parameters.

  11. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on modelling the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  12. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on modelling the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
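
    To illustrate the general idea of composing per-axis length errors into a feature measurement and carrying the unexplained residual into the uncertainty budget, the sketch below corrects probed points with assumed per-axis error functions and forms a crude length uncertainty. The error coefficients, residual standard deviation and composition rule are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal sketch (an illustration, not the paper's model): correcting probed CMM
# points with per-axis length-error functions and composing the residual,
# non-explained variability into an uncertainty term. Coefficients are assumed.
import numpy as np

# Assumed per-axis length error models e_i(x) = a_i + b_i * x  (mm in, mm out).
axis_error = {
    "x": lambda v: 1e-4 + 2e-6 * v,
    "y": lambda v: -5e-5 + 1e-6 * v,
    "z": lambda v: 2e-5 + 3e-6 * v,
}

def compensate(points):
    """Subtract the modeled per-axis error from each probed coordinate."""
    pts = np.asarray(points, dtype=float)
    err = np.stack([axis_error[a](pts[:, i]) for i, a in enumerate("xyz")], axis=1)
    return pts - err

def length_between(p, q):
    """Point-to-point length from the vectorial composition of the three axes."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

probed = [[0.0, 0.0, 0.0], [100.02, 0.01, 0.0]]       # raw machine coordinates (mm)
corrected = compensate(probed)
u_residual = 0.8e-3                                    # assumed non-explained std dev per axis (mm)
u_length = np.sqrt(3) * u_residual                     # crude composition into the length budget
print(f"raw length       = {length_between(*probed):.4f} mm")
print(f"corrected length = {length_between(*corrected):.4f} mm  (u ~ {u_length * 1e3:.1f} um)")
```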

  13. Modal test - Measurement and analysis requirements. [for Viking Orbiter

    Science.gov (United States)

    Wada, B. K.

    1975-01-01

    Data from the Viking Orbiter Modal Test Program are used to illustrate modal test measurement and analysis requirements. The test was performed using a multiple shake dwell technique where data were acquired one channel at a time and recorded on paper tape. Up to ten shakers were used simultaneously, with a complete set of data consisting of 290 strain-gage readings and 125 accelerometer readings. The data analysis provided information sufficient to minimize errors in the data. The list of analyses in order of value is orthogonality, residual mass, frequency sweep, data checks to assure good test data, multilevel trends, global kinetic energy, and global strain energy.

  14. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  15. Smart Kinesthetic Measurement Model in Dance Composision

    OpenAIRE

    Triana, Dinny Devi

    2017-01-01

    This research aimed to discover an assessment model that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method used was a qualitative method using path analysis to determine the direct and indirect variables; therefore, the dominant variable that supported the measurement model of kinesthetic intelligence in arranging dance could be discovered. The population used was the students of the art ...

  16. Test-retest reliability for aerodynamic measures of voice.

    Science.gov (United States)

    Awan, Shaheen N; Novaleski, Carolyn K; Yingling, Julie R

    2013-11-01

    The purpose of this study was to investigate the intrasubject reliability of aerodynamic characteristics of the voice within typical/normal speakers across testing sessions using the Phonatory Aerodynamic System (PAS 6600; KayPENTAX, Montvale, NJ). Participants were 60 healthy young adults (30 males and 30 females) between the ages 18 and 31 years with perceptually typical voice. Participants were tested using the PAS 6600 (Phonatory Aerodynamic System) on two separate days with approximately 1 week between each session at approximately the same time of day. Four PAS protocols were conducted (vital capacity, maximum sustained phonation, comfortable sustained phonation, and voicing efficiency) and measures of expiratory volume, maximum phonation time, mean expiratory airflow (during vowel production) and target airflow (obtained via syllable repetition), peak air pressure, aerodynamic power, aerodynamic resistance, and aerodynamic efficiency were obtained during each testing session. Associated acoustic measures of vocal intensity and frequency were also collected. All phonations were elicited at comfortable pitch and loudness. All aerodynamic and associated variables evaluated in this study showed useable test-retest reliability (ie, intraclass correlation coefficients [ICCs] ≥ 0.60). A high degree of mean test-retest reliability was found across all subjects for aerodynamic and associated acoustic measurements of vital capacity, maximum sustained phonation, glottal resistance, and vocal intensity (all with ICCs > 0.75). Although strong ICCs were observed for measures of glottal power and mean expiratory airflow in males, weaker overall results for these measures (ICC range: 0.60-0.67) were observed in females subjects and sizable coefficients of variation were observed for measures of power, resistance, and efficiency in both men and women. Differences in degree of reliability from measure to measure were revealed in greater detail using methods such as ICCs and
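
    For readers who want to see how the intraclass correlation coefficients reported above are typically obtained, the sketch below computes a two-way random-effects, absolute-agreement ICC(2,1) for one measure recorded in two sessions. The formula follows the standard Shrout-Fleiss definition; the data are simulated placeholders, not the study's measurements, and the choice of ICC form is an assumption.

```python
# Minimal sketch (assumed analysis, not the paper's): test-retest ICC(2,1)
# computed from the two-way ANOVA mean squares, with simulated data.
import numpy as np

def icc_2_1(data):
    """data: (n_subjects, k_sessions) matrix of one measure."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # sessions
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0, keepdims=True) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(3)
true_level = rng.normal(100, 15, 60)                      # 60 subjects, stable trait
sessions = np.column_stack([true_level + rng.normal(0, 8, 60) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")              # >= 0.60 counts as usable here
```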

  17. Prospective Tests on Biological Models of Acupuncture

    Directory of Open Access Journals (Sweden)

    Charles Shang

    2009-01-01

    Full Text Available The biological effects of acupuncture include the regulation of a variety of neurohumoral factors and growth control factors. In science, models or hypotheses with confirmed predictions are considered more convincing than models solely based on retrospective explanations. Literature review showed that two biological models of acupuncture have been prospectively tested with independently confirmed predictions: The neurophysiology model on the long-term effects of acupuncture emphasizes the trophic and anti-inflammatory effects of acupuncture. Its prediction on the peripheral effect of endorphin in acupuncture has been confirmed. The growth control model encompasses the neurophysiology model and suggests that a macroscopic growth control system originates from a network of organizers in embryogenesis. The activity of the growth control system is important in the formation, maintenance and regulation of all the physiological systems. Several phenomena of acupuncture such as the distribution of auricular acupuncture points, the long-term effects of acupuncture and the effect of multimodal non-specific stimulation at acupuncture points are consistent with the growth control model. The following predictions of the growth control model have been independently confirmed by research results in both acupuncture and conventional biomedical sciences: (i) Acupuncture has extensive growth control effects. (ii) Singular point and separatrix exist in morphogenesis. (iii) Organizers have high electric conductance, high current density and high density of gap junctions. (iv) A high density of gap junctions is distributed as separatrices or boundaries at body surface after early embryogenesis. (v) Many acupuncture points are located at transition points or boundaries between different body domains or muscles, coinciding with the connective tissue planes. (vi) Some morphogens and organizers continue to function after embryogenesis. Current acupuncture research suggests a

  18. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…
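
    As a small illustration of what is done with a logged thrust curve once it has been captured, the sketch below integrates a thrust-time trace to obtain total impulse, burn time and average thrust. The sample curve and the burn-detection threshold are made-up assumptions, not data from the paper or from a specific engine.

```python
# Minimal sketch (illustrative, not the authors' setup): integrating a logged
# thrust-time curve from a force probe to get total impulse and average thrust.
import numpy as np

t = np.linspace(0.0, 1.6, 161)                       # s, data-logger time base (assumed)
thrust = np.clip(12.0 * np.exp(-((t - 0.2) / 0.15) ** 2) + 4.0 * (t < 1.5), 0, None)  # N

total_impulse = np.trapz(thrust, t)                  # N*s, area under the curve
burn = thrust > 0.5                                  # simple threshold for burn time
burn_time = t[burn][-1] - t[burn][0]
avg_thrust = total_impulse / burn_time
print(f"total impulse ~ {total_impulse:.1f} N*s, burn time ~ {burn_time:.2f} s, "
      f"average thrust ~ {avg_thrust:.1f} N")
```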

  19. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson...

  20. Parametric Testing of Launch Vehicle FDDR Models

    Science.gov (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
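
    To make the parametric-testing idea concrete, the sketch below combines exhaustive low-order combinatorial coverage of discrete fault-injection factors with Monte Carlo draws of continuous dispersions to build a compact test matrix. It is an assumed workflow with invented factor names and values, not the ERIS tooling or its actual parameters.

```python
# Minimal sketch (assumed workflow, not the ERIS tooling): combining exhaustive
# combinations of discrete fault parameters with Monte Carlo sampling of
# continuous dispersions to generate a compact set of test cases.
import itertools
import random

random.seed(42)

# Discrete fault-injection factors: cover every combination exhaustively.
fault_type = ["none", "sensor_bias", "actuator_stuck"]
fault_time = [10.0, 60.0, 120.0]          # s after liftoff (illustrative)
discrete_cases = list(itertools.product(fault_type, fault_time))

# Continuous dispersions: draw a Monte Carlo sample per discrete case.
def monte_carlo_dispersions():
    return {
        "thrust_scale": random.gauss(1.0, 0.02),
        "wind_speed":   abs(random.gauss(5.0, 3.0)),   # m/s
        "cg_offset":    random.uniform(-0.01, 0.01),   # m
    }

test_matrix = [
    {"fault": f, "time": t, **monte_carlo_dispersions()}
    for (f, t) in discrete_cases
    for _ in range(5)                      # 5 random dispersions per fault combination
]
print(f"{len(test_matrix)} test cases generated; first: {test_matrix[0]}")
```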

  1. Measured data from the Avery Island Site C heater test

    International Nuclear Information System (INIS)

    Waldman, H.; Stickney, R.G.

    1984-11-01

    Over the past six years, a comprehensive field testing program was conducted in the Avery Island salt mine. Three single canister heater tests were included in the testing program. Specifically, electric heaters, which simulate canisters of heat-generating nuclear waste, were placed in the floor of the Avery Island salt mine, and measurements were made of the response of the salt to heating. These tests were in operation by June 1978. One of the three heater tests, Site C, operated for a period of 1858 days and was decommissioned during July and August 1983. This data report presents the temperature and displacement data gathered during the operation and decommissioning of the Site C heater test. The purpose of this data report is to transmit the data to the scientific community. Rigorous analysis and interpretation of the data are considered beyond the scope of a data report. 6 references, 21 figures, 1 table

  2. Overview of the Standard Model Measurements with the ATLAS Detector

    CERN Document Server

    Liu, Yanwen; The ATLAS collaboration

    2017-01-01

    The ATLAS Collaboration is engaged in precision measurements of fundamental Standard Model parameters, such as the W boson mass, the weak mixing angle or the strong coupling constant. In addition, the production cross-sections of a large variety of final states involving highly energetic jets, photons as well as single and multiple vector bosons are measured multi-differentially at several centre-of-mass energies. This allows perturbative QCD calculations to be tested to the highest precision. These measurements also allow models beyond the SM to be tested, e.g. those leading to anomalous gauge couplings. In this talk, we give a broad overview of the Standard Model measurement campaign of the ATLAS collaboration, with selected topics discussed in more detail.

  3. Promoting target models by potential measures

    OpenAIRE

    Dubiel, Joerg

    2010-01-01

    Direct marketers use target models in order to minimize the spreading loss of sales efforts. The application of target models has become more widespread with the increasing range of sales efforts. Target models are relevant for offline marketers sending printed mailings as well as for online marketers who have to avoid excessive contact intensity. However, business practice has retained the same evaluation approach since the late 1960s. Marketing decision-makers still prefer managerial performance measures of the economic benefit of a t...

  4. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine the risk and return. We use the original mean-variance model as a benchmark and compare it with fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
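
    For reference, the sketch below shows only the classical benchmark mentioned above: mean-variance weights for a target return, solved in closed form with equality constraints and shorting allowed. The expected returns, covariance matrix and target are made-up numbers; the fuzzy extension of the paper is not reproduced here.

```python
# Minimal sketch (the classical benchmark only, with made-up data): Markowitz
# mean-variance weights for a target return via the two equality constraints
# (weights sum to 1, expected return hits the target).
import numpy as np

mu = np.array([0.08, 0.12, 0.10])                    # expected annual returns (assumed)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])                 # covariance matrix (assumed)
target = 0.10

ones = np.ones_like(mu)
inv = np.linalg.inv(cov)
# Minimize w' C w  s.t.  w'1 = 1 and w'mu = target, via the 2x2 system in the multipliers.
A = np.array([[ones @ inv @ ones, ones @ inv @ mu],
              [mu  @ inv @ ones,  mu  @ inv @ mu]])
b = np.array([1.0, target])
lam = np.linalg.solve(A, b)
w = inv @ (lam[0] * ones + lam[1] * mu)

print("weights:", np.round(w, 3))
print(f"portfolio return = {w @ mu:.3f}, risk (std) = {np.sqrt(w @ cov @ w):.3f}")
```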

  5. Testing of Tools for Measurement Vibration in Car

    Directory of Open Access Journals (Sweden)

    Martin JURÁNEK

    2009-06-01

    Full Text Available This work focuses on testing several vibration sensors that are applicable for measurements on vehicles, including measurements while driving. The sensors are connected to a PC and to the universal mobile measuring system cRIO (National Instruments) with an analog I/O module for vibration measurement, which is described in the diploma thesis [JURÁNEK 2008]. This system has increased mechanical and thermal immunity and small dimensions, and is therefore also suitable for measurements during vehicle rides. It consists of two main parts. The first is the measuring part, built around the cRIO instrument; it is controlled and monitored by a PDA, which is connected wirelessly and forms the second part of the system. Sensors can be connected to the cRIO system through four BNC connectors, or, after a small software change, additional sensors can be added to another cRIO analog module. Several different types of accelerometers are tested here (a USB sensor from Phidgets, a MEMS sensor from Freescale, and piezoresistive and DeltaTron accelerometers from Brüel&Kjær). The sensors are attached to a stiff board, the board is attached to a vibration exciter and excited by a suitable signal. The testing is carried out with reference to the intended use for measurements in cars. The results are compared with the professional signal analyser LabShop pulse from Brüel&Kjær.

  6. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  7. Temperature Buffer Test. Final THM modelling

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel

    2012-01-01

    The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code_Bright and Abaqus, have been used. The modelling performed by UPC-Cimne using Code_Bright, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  8. Multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J

    2014-11-10

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.
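
    To show the data structure the MIMIC ME model addresses, the toy simulation below generates a linear MIMIC layout in which the cause of the latent variable is observed with classical error, and then demonstrates the attenuation that motivates an error-aware model. All coefficients, variances and variable names are illustrative assumptions; this is not the authors' estimator.

```python
# Minimal sketch (a toy simulation, not the authors' estimator): a linear MIMIC
# structure with a classically error-prone cause, showing attenuation of the
# naive regression slope relative to the true effect.
import numpy as np

rng = np.random.default_rng(7)
n = 2000

x_true = rng.normal(0.0, 1.0, n)            # true causal variable (e.g., radiation dose)
w = x_true + rng.normal(0.0, 0.5, n)        # observed cause, classical error W = X + U

gamma = 0.8                                  # effect of the cause on the latent variable
eta = gamma * x_true + rng.normal(0.0, 1.0, n)   # latent variable

lambdas = np.array([1.0, 0.7, 0.5])          # loadings of three observed indicators
Y = eta[:, None] * lambdas + rng.normal(0.0, 0.6, (n, 3))

# Naive regression of the first indicator on the error-prone cause W is attenuated
# relative to gamma * lambda_1, illustrating why an ME-aware MIMIC model is needed.
naive_slope = np.polyfit(w, Y[:, 0], 1)[0]
print(f"true gamma*lambda_1 = {gamma * lambdas[0]:.2f}, naive slope on W = {naive_slope:.2f}")
```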

  9. 36Cl bomb peak: comparison of modeled and measured data

    Directory of Open Access Journals (Sweden)

    A. Eichler

    2009-06-01

    Full Text Available The extensive nuclear bomb testing of the fifties and sixties and the final tests in the seventies caused a strong 36Cl peak that has been observed in ice cores world-wide. The measured 36Cl deposition fluxes in eight ice cores (Dye3, Fiescherhorn, Grenzgletscher, Guliya, Huascarán, North GRIP, Inylchek (Tien Shan) and Berkner Island) were compared with an ECHAM5-HAM general circulation model simulation (1952–1972). We find a good agreement between the measured and the modeled 36Cl fluxes assuming that the bomb-test-produced global 36Cl input was ~80 kg. The model simulation indicates that the fallout of bomb-test-produced 36Cl is largest in the subtropics and mid-latitudes due to the strong stratosphere-troposphere exchange. In Greenland the 36Cl bomb signal is quite large due to the relatively high precipitation rate. In Antarctica the 36Cl bomb peak is small but is visible even in the driest areas. The model suggests that the large bomb tests in the Northern Hemisphere are visible around the globe but the later (end of the sixties and early seventies) smaller tests in the Southern Hemisphere are much less visible in the Northern Hemisphere. The question of how rapidly and to what extent the bomb-produced 36Cl is mixed between the hemispheres depends on the season of the bomb test. The model results give an estimate of the amplitude of the bomb peak around the globe.

  10. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for nonmating applications such as sensors and instruments evaluations requiring proximity or short range motion operations. The motion base is a hydraulic powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for the NASA Docking System testing and has been also used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  11. Gas temperature measurements in short duration turbomachinery test facilities

    Science.gov (United States)

    Cattafesta, L. N.; Epstein, A. H.

    1988-07-01

    Thermocouple rakes for use in short-duration turbomachinery test facilities have been developed using very fine thermocouples. Geometry variations were parametrically tested and showed that bare quartz junction supports (76 microns in diameter) yielded superior performance, and were rugged enough to survive considerable impact damage. Using very low cost signal conditioning electronics, temperature accuracies of 0.3 percent were realized yielding turbine efficiency measurements at the 1-percent level. Ongoing work to improve this accuracy is described.

  12. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  13. Preliminary Test for Constitutive Models of CAP

    International Nuclear Information System (INIS)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2010-01-01

    The development project for a domestic design code was launched to provide safety and performance analysis of pressurized light water reactors. As part of this project, the CAP (Containment Analysis Package) code is being developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities in multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, are reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans to improve the level of execution are discussed in this paper.

  14. Preliminary Test for Constitutive Models of CAP

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Joon; Hong, Soon Joon; Hwang, Su Hyun; Lee, Keo Hyung; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Ha, Sang Jun; Choi, Hoon [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The development project for a domestic design code was launched to provide safety and performance analysis of pressurized light water reactors. As part of this project, the CAP (Containment Analysis Package) code is being developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (vapor, continuous liquid and dispersed drop) for the assessment of containment-specific phenomena, and features assessment capabilities in multi-dimensional and lumped-parameter thermal hydraulic cells. The thermal hydraulics solver has been developed and has made significant progress. Implementation of well-proven constitutive models and correlations is essential in order for a containment code to be used for generalized or optimized purposes. Generally, constitutive equations are composed of interfacial and wall transport models and correlations. These equations are included in the source terms of the governing field equations. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes, such as GOTHIC, CONTAIN2.0 and CONTEMPT-LT, are reviewed. Several models and correlations were incorporated for the preliminary test of CAP's performance; the test results and future plans to improve the level of execution are discussed in this paper.

  15. Divergence-based tests for model diagnostic

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Esteban, M. D.; Morales, D.; Marhuenda, Y.

    2008-01-01

    Roč. 78, č. 13 (2008), s. 1702-1710 ISSN 0167-7152 R&D Projects: GA MŠk 1M0572 Grant - others: Instituto Nacional de Estadistica (ES) MTM2006-05693 Institutional research plan: CEZ:AV0Z10750506 Keywords: goodness of fit * divergence statistics * GLM * model checking * bootstrap Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.445, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/hobza-divergence-based%20tests%20for%20model%20diagnostic.pdf

  16. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore conservatively taken, preserving data and model integrity while allowing normal startup loads and transients to manifest.

  17. A comparison of measured and predicted test flow in an expansion tube with air and oxygen test gases

    Science.gov (United States)

    Aaggard, K. V.; Goad, W. K.

    1975-01-01

    Simultaneous time-resolved measurements of temperature, density, pitot pressure, and wall pressure in both air and O2 test gases were obtained in the Langley pilot model expansion tube. These tests show that nonequilibrium chemical and vibrational relaxation significantly affects the test-flow condition. The use of an electromagnetic device to preopen the secondary diaphragm before the arrival of the primary shock wave resulted in an improvement in the agreement between the measured pitot pressure and the value inferred from measured density and interface velocity. Boundary-layer splitter plates used to reduce the wall boundary layer show that this disagreement in the measured and inferred pitot pressures is not a result of boundary-layer effects.

  18. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  19. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    Science.gov (United States)

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  20. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst’s work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input for deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be described mathematically, and that it will therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area had already been the subject of research in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not reflect all important attributes of business process models: clarity, simplicity and completeness. It would therefore be appropriate to add new measures of quality.

  1. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with the movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to develop immediately after the structure has been placed; it is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with a flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on a sand bed was 1.6 times the cylinder diameter; 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees; 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present tests virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  2. A stochastic model for quantum measurement

    International Nuclear Information System (INIS)

    Budiyono, Agung

    2013-01-01

    We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)

  3. Alternative test models for skin aging research.

    Science.gov (United States)

    Nakamura, Motoki; Haarmann-Stemmann, Thomas; Krutmann, Jean; Morita, Akimichi

    2018-02-25

    Increasing ethical concerns regarding animal experimentation have led to the development of various alternative methods based on the 3Rs (Refinement, Reduction, and Replacement), first described by Russell and Burch in 1959. Cosmetic and skin aging research are particularly susceptible to concerns related to animal testing. In addition to animal welfare reasons, there are scientific and economic reasons to reduce and avoid animal experiments. Importantly, animal experiments may not reflect findings in humans, mainly because of the differences in architecture and immune responses between animal skin and human skin. Here we review the shift from animal testing to the development and application of alternative non-animal-based methods and the necessity and benefits of this shift. Some specific alternatives to animal models are discussed, including biochemical approaches, two-dimensional and three-dimensional cell cultures, and volunteer studies, as well as future directions, including genome-based research and the development of in silico computer simulations of skin models. Among the in vitro methods, three-dimensional reconstructed skin models are highly popular and useful alternatives to animal models; however, they still have many limitations. With careful selection and skillful handling, these alternative methods will become indispensable for modern dermatology and skin aging research. This article is protected by copyright. All rights reserved.

  4. BIOMOVS test scenario model comparison using BIOPATH

    International Nuclear Information System (INIS)

    Grogan, H.A.; Van Dorp, F.

    1986-07-01

    This report presents the results of the irrigation test scenario of the BIOMOVS intercomparison study, calculated by the computer code BIOPATH. This scenario defines a constant release of Tc-99 and Np-237 into groundwater that is used for irrigation. The system of compartments used to model the biosphere is based upon an area in northern Switzerland and is essentially the same as that used in Projekt Gewaehr to assess the radiological impact of a high level waste repository. Two separate irrigation methods are considered, namely ditch and overhead irrigation. Their influence on the resultant activities calculated in the groundwater, soil and different food products, as a function of time, is evaluated. The sensitivity of the model to parameter variations is analysed, which allows a deeper understanding of the model chain. These results are assessed subjectively in a first effort to realistically quantify the uncertainty associated with each calculated activity. (author)

  5. Thermal modelling of Advanced LIGO test masses

    International Nuclear Information System (INIS)

    Wang, H; Dovale Álvarez, M; Mow-Lowry, C M; Freise, A; Blair, C; Brooks, A; Kasprzack, M F; Ramette, J; Meyers, P M; Kaufer, S; O’Reilly, B

    2017-01-01

    High-reflectivity fused silica mirrors are at the epicentre of today’s advanced gravitational wave detectors. In these detectors, the mirrors interact with high power laser beams. As a result of finite absorption in the high reflectivity coatings the mirrors suffer from a variety of thermal effects that impact on the detectors’ performance. We propose a model of the Advanced LIGO mirrors that introduces an empirical term to account for the radiative heat transfer between the mirror and its surroundings. The mechanical mode frequency is used as a probe for the overall temperature of the mirror. The thermal transient after power build-up in the optical cavities is used to refine and test the model. The model provides a coating absorption estimate of 1.5–2.0 ppm and estimates that 0.3 to 1.3 ppm of the circulating light is scattered onto the ring heater. (paper)

  6. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities η_M ~ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity η_K. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
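
    As a generic illustration of the Kolmogorov-Smirnov comparison used above, a two-sample test between observed and model-predicted slip rates could be run as follows; the rates are placeholder values, not the data sets used in the study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        observed_rates = rng.normal(10.0, 2.0, size=15)    # hypothetical geologic slip rates (mm/yr)
        predicted_rates = rng.normal(11.5, 2.0, size=15)   # hypothetical model predictions (mm/yr)

        # Two-sample KS test: the candidate earthquake-cycle model is rejected
        # if the two distributions differ at the alpha = 0.05 level.
        result = stats.ks_2samp(observed_rates, predicted_rates)
        print(f"KS statistic = {result.statistic:.3f}, p = {result.pvalue:.3f}")
        print("reject model" if result.pvalue < 0.05 else "cannot reject model")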

  7. Model of ASTM Flammability Test in Microgravity: Iron Rods

    Science.gov (United States)

    Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)

    2000-01-01

    There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by conducting tests using the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test have been reduced to fundamental kinetic and dynamic systems parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod and is detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations which are solved using MATLAB. This analysis confirms that a first-order rate, in oxygen concentration, is consistent for the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent for this model.
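
    The full model involves 17 coupled equations solved in MATLAB; as a rough, much-reduced sketch of the same structure (a semi-batch reactor with a first-order rate in oxygen concentration), a two-equation analogue with hypothetical rate constants could be integrated as follows.

        import numpy as np
        from scipy.integrate import solve_ivp

        K_OX = 0.8         # 1/s, hypothetical first-order rate constant in oxygen concentration
        MELT_RATE = 0.005  # kg/s, hypothetical melt feed from the regressing interface

        def rhs(t, y):
            c_o2, m_melt = y
            dc_o2 = -K_OX * c_o2   # first-order consumption of chamber oxygen
            dm = MELT_RATE         # molten mass accumulates (products retained)
            return [dc_o2, dm]

        # Integrate over the roughly 2.0 s duration of the drop test.
        sol = solve_ivp(rhs, (0.0, 2.0), [9.0, 0.01], max_step=0.01)
        print(f"O2 at 2 s: {sol.y[0, -1]:.2f} mol/m^3, melt mass: {sol.y[1, -1]:.3f} kg")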

  8. On the Behaviour of Information Measures for Test Selection

    NARCIS (Netherlands)

    Sent, D.; van der Gaag, L.C.; Bellazzi, R; Abu-Hanna, A; Hunter, J

    2007-01-01

    tests that are expected to yield the largest decrease in the uncertainty about a patient’s diagnosis. For capturing diagnostic uncertainty, often an information measure is used. In this paper, we study the Shannon entropy, the Gini index, and the misclassification error for this purpose. We argue

  9. Modernization of electricity meter test laboratories

    International Nuclear Information System (INIS)

    Cuervo, Luis Felipe

    1999-01-01

    The paper presents, to companies that operate test and calibration laboratories for electricity meters, an economical alternative for their modernization, using repowering as a cost-effective solution that frees up resources to be used in other areas where they are wanted

  10. Measuring Literary Reading Motivation: Questionnaires Design and Pilot Testing

    Science.gov (United States)

    Chrysos, Michail

    2017-01-01

    This study aims to present the design and pilot-testing procedures of two self-report questionnaires used to measure two key aspects of reading motivation, self-efficacy and intrinsic motivation, in the field of literary (narrative) reading, and the partial factors that jointly shape them. These instruments were outlined in…

  11. Reliability and Validity Testing of the Physical Resilience Measure

    Science.gov (United States)

    Resnick, Barbara; Galik, Elizabeth; Dorsey, Susan; Scheve, Ann; Gutkin, Susan

    2011-01-01

    Objective: The purpose of this study was to test reliability and validity of the Physical Resilience Scale. Methods: A single-group repeated measure design was used and 130 older adults from three different housing sites participated. Participants completed the Physical Resilience Scale, Hardy-Gill Resilience Scale, 14-item Resilience Scale,…

  12. Validation Testing for Automated Solubility Measurement Equipment Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Lachut, J. S. [Washington River Protection Solutions LLC, Richland, WA (United States)

    2016-01-11

    Laboratory tests have been completed to test the validity of automated solubility measurement equipment using sodium nitrate and sodium chloride solutions (see test plan WRPS-1404441, “Validation Testing for Automated Solubility Measurement Equipment”). The sodium nitrate solution results were within 2-3% of the reference values, so the experiment is considered successful using the turbidity meter. The sodium chloride test was done by sight, as the turbidity meter did not work well using sodium chloride. For example, the “clear” turbidity reading was 53 FNU at 80 °C, 107 FNU at 55 °C, and 151 FNU at 20 °C. The sodium chloride did not work because it is granular and large; as the solution was stirred, the granules stayed to the outside of the reactor and just above the stir bar level, having little impact on the turbidity meter readings as the meter was aimed at the center of the solution. Also, the turbidity meter depth has an impact. The salt tends to remain near the stir bar level. If the meter is deeper in the slurry, it will read higher turbidity, and if the meter is raised higher in the slurry, it will read lower turbidity (possibly near zero) because it reads the “clear” part of the slurry. The sodium chloride solution results, as measured by sight rather than by turbidity instrument readings, were within 5-6% of the reference values.

  13. Detection measures in real-life criminal guilty knowledge tests.

    Science.gov (United States)

    Elaad, E; Ginton, A; Jungman, N

    1992-10-01

    The present study provides a first attempt to compare the validity of the respiration line length (RLL) and skin resistance response (SRR) amplitude in real-life criminal guilty knowledge tests (GKTs). GKT records of 40 innocent and 40 guilty Ss, for whom actual truth was established by confession, were assessed for their accuracy. When a predefined decision rule was used and inconclusive decisions were excluded, 97.4% of the innocent Ss and 53.3% of the guilty Ss were correctly classified with the SRR measure. For the RLL measure, the respective results were 97.2% and 53.1%. The combination of both measures improved detection of guilty Ss to 75.8% and decreased detection of innocent Ss to 94.1%. The combined measure seems to be a more useful means of identifying guilty suspects than each physiological measure alone. The results elaborate and extend those obtained in a previous field study conducted by Elaad (1990).

  14. Measurements of rope elongation or deflection in impact destructive testing

    Directory of Open Access Journals (Sweden)

    Adam Szade

    2015-01-01

    Full Text Available The computation of energy dissipation in mechanical protective systems, and the corresponding determination of their safe use in mine shafts, require a precise description of their bending and elongation, for instance in conditions of dynamic, transverse loading induced by falling mass. The task was to apply a fast parallactic rangefinder and mount it on a test stand, an original development of the Central Mining Institute's Laboratory of Rope Testing in Katowice. In the solution presented in this paper, a parallactic laser rangefinder, equipped with a fast converter and recording system, ensures non-contact measurement of the elongation, deflection or deformation of the sample (structure) during impact loading. The structure of the unit and its metrological parameters are also presented, together with the method of calibration and examples of application in impact tests of steel wire ropes. The measurement data obtained will provide a basis for analysis, for predicting the energy of events, and for applying the necessary means to maintain explosion-proofness in the case of destructive damage to mechanical elements in the mine atmosphere. What makes these measurements novel is the application of a fast and accurate laser rangefinder to the non-contact measurement of crucial impact parameters of dynamic events that result in the destruction of the sample. In addition, the method introduces a laser scanning vibrometer with the aim of evaluating the parameters of the samples before and after destruction.

  15. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Abbreviations: C = Cost, G = Gamma, CV = Cross Validation, MCC = Matthews Correlation Coefficient. Test 1 reports C, G, CV, accuracy, TP, TN, FP and FN values ... Conclusion: Without considering the MirTif negative dataset for training the Model A and B classifiers, our Model A and B ...

  16. Standard test method for conducting potentiodynamic polarization resistance measurements

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1997-01-01

    1.1 This test method covers an experimental procedure for polarization resistance measurements which can be used for the calibration of equipment and verification of experimental technique. The test method can provide reproducible corrosion potentials and potentiodynamic polarization resistance measurements. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  17. Testing Overall and Subpopulation Treatment Effects with Measurement Errors.

    Science.gov (United States)

    Ma, Yanyuan; Yin, Guosheng

    2013-07-01

    There is a growing interest in the discovery of important predictors from many potential biomarkers for therapeutic use. In particular, a biomarker has predictive value for treatment if the treatment is only effective for patients whose biomarker values exceed a certain threshold. However, biomarker expressions are often subject to measurement errors, which may blur the biomarker's predictive capability in patient classification and, as a consequence, may lead to inappropriate treatment decisions. By taking into account the measurement errors, we propose a new testing procedure for the overall and subpopulation treatment effects in the multiple testing framework. The proposed method bypasses the permutation or other resampling procedures that become computationally infeasible in the presence of measurement errors. We conduct simulation studies to examine the performance of the proposed method, and illustrate it with a data example.

  18. Failing Tests: Commentary on "Adapting Educational Measurement to the Demands of Test-Based Accountability"

    Science.gov (United States)

    Thissen, David

    2015-01-01

    In "Adapting Educational Measurement to the Demands of Test-Based Accountability" Koretz takes the time-honored engineering approach to educational measurement, identifying specific problems with current practice and proposing minimal modifications of the system to alleviate those problems. In response to that article, David Thissen…

  19. THE BUSINESS MODEL AND FINANCIAL ASSETS MEASUREMENT

    OpenAIRE

    NICULA Ileana

    2012-01-01

    The paper analyses some aspects regarding the implementation of IFRS 9, namely the relationship between the business model approach and asset classification and measurement. It does not discuss cash flow characteristics, another important aspect of asset classification, or reclassifications. The business model is related to some characteristics of banks (opaqueness, leverage ratio, compliance with capital and sound liquidity requirements, and risk management) and to Special Purpose...

  20. A measurement model of multiple intelligence profiles of management graduates

    Science.gov (United States)

    Krishnan, Heamalatha; Awang, Siti Rahmah

    2017-05-01

    The main aim of this study is to develop a well-fitting measurement model and to identify the best-fitting items representing Howard Gardner's nine intelligences, namely musical, bodily-kinaesthetic, mathematical/logical, visual/spatial, verbal/linguistic, interpersonal, intrapersonal, naturalist and spiritual intelligence, in order to enhance the employability of management graduates. Structural Equation Modeling (SEM) was applied to develop a fit measurement model. A psychometric test, the Ability Test in Employment (ATIEm), was used as the instrument to measure the presence of the nine types of intelligence in 137 University Teknikal Malaysia Melaka (UTeM) management graduates for job placement purposes. The initial measurement model contains nine unobserved variables, each measured by ten observed variables. The modified measurement model improved the fit indices to a normed chi-square (NC) of 1.331, an Incremental Fit Index (IFI) of 0.940 and a Root Mean Square Error of Approximation (RMSEA) of 0.049. The findings showed that the UTeM management graduates possessed all nine intelligences, at either high or low levels. Musical, mathematical/logical, naturalist and spiritual intelligence contributed the highest loadings on certain items. However, most of the remaining intelligences, bodily-kinaesthetic, visual/spatial, verbal/linguistic, interpersonal and intrapersonal, were possessed by UTeM management graduates only at borderline levels.

  1. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...... linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
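
    The methods themselves are implemented in the authors' R package sensR; purely as an illustration of the underlying idea, fitting a probit-link binomial GLM to hypothetical monadic A-Not A counts (invented here, not taken from the paper) recovers d' as the coefficient of the sample indicator.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical A-Not A data: "yes, this is A" answers out of n trials,
        # first row for the Not-A sample, second row for the A sample.
        yes = np.array([18, 34])
        n = np.array([50, 50])
        is_A = np.array([0.0, 1.0])

        X = sm.add_constant(is_A)
        endog = np.column_stack([yes, n - yes])
        model = sm.GLM(endog, X, family=sm.families.Binomial(link=sm.families.links.Probit()))
        fit = model.fit()

        # With the probit link, the slope on the A indicator is the Thurstonian d'.
        print(f"d' = {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")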

  2. Magnetic measurement of creep damage: modeling and measurement

    Science.gov (United States)

    Sablik, Martin J.; Jiles, David C.

    1996-11-01

    Results of inspection of creep damage by magnetic hysteresis measurements on Cr-Mo steel are presented. It is shown that structure-sensitive parameters such as coercivity, remanence and hysteresis loss are sensitive to creep damage. Previous metallurgical studies have shown that creep changes the microstructure of the material by introducing voids, dislocations, and grain boundary cavities. As cavities develop, dislocations and voids move out to grain boundaries; therefore, the total pinning sources for domain wall motion are reduced. This, together with the introduction of a demagnetizing field due to the cavities, results in a decrease of coercivity and remanence and hence, concomitantly, of hysteresis loss. Incorporating these structural effects into a magnetomechanical hysteresis model developed previously by us produces numerical variations of coercivity, remanence and hysteresis loss consistent with what is measured. The magnetic model has therefore been used to obtain appropriately modified magnetization curves for each element of creep-damaged material in a finite element (FE) calculation. The FE calculation has been used to simulate magnetic detection of non-uniform creep damage around a seam weld in a 2.25 Cr 1Mo steam pipe. In particular, in the simulation, a magnetic C-core with primary and secondary coils was placed with its pole pieces flush against the specimen in the vicinity of the weld. The secondary emf was shown to be reduced when creep damage was present inside the pipe wall at the cusp of the weld and in the vicinity of the cusp. The calculation showed that the C-core detected creep damage best if it spanned the weld seam width and if the current in the primary was such that the C-core was not magnetically saturated. Experimental measurements also exhibited the dip predicted in emf, but the measurements are not yet conclusive because the effects of magnetic property changes of weld materials, heat-affected material, and base material have

  3. Testing limits to airflow perturbation device (APD measurements

    Directory of Open Access Journals (Sweden)

    Jamshidi Shaya

    2008-10-01

    Full Text Available Abstract Background The Airflow Perturbation Device (APD is a lightweight, portable device that can be used to measure total respiratory resistance as well as inhalation and exhalation resistances. There is a need to determine limits to the accuracy of APD measurements for different conditions likely to occur: leaks around the mouthpiece, use of an oronasal mask, and the addition of resistance in the respiratory system. Also, there is a need for resistance measurements in patients who are ventilated. Method Ten subjects between the ages of 18 and 35 were tested for each station in the experiment. The first station involved testing the effects of leaks of known sizes on APD measurements. The second station tested the use of an oronasal mask used in conjunction with the APD during nose and mouth breathing. The third station tested the effects of two different resistances added in series with the APD mouthpiece. The fourth station tested the usage of a flexible ventilator tube in conjunction with the APD. Results All leaks reduced APD resistance measurement values. Leaks represented by two 3.2 mm diameter tubes reduced measured resistance by about 10% (4.2 cmH2O·sec/L for control and 3.9 cm H2O·sec/L for the leak. This was not statistically significant. Larger leaks given by 4.8 and 6.4 mm tubes reduced measurements significantly (3.4 and 3.0 cm cmH2O·sec/L, respectively. Mouth resistance measured with a cardboard mouthpiece gave an APD measurement of 4.2 cm H2O·sec/L and mouth resistance measured with an oronasal mask was 4.5 cm H2O·sec/L; the two were not significantly different. Nose resistance measured with the oronasal mask was 7.6 cm H2O·sec/L. Adding airflow resistances of 1.12 and 2.10 cm H2O·sec/L to the breathing circuit between the mouth and APD yielded respiratory resistance values higher than the control by 0.7 and 2.0 cm H2O·sec/L. Although breathing through a 52 cm length of flexible ventilator tubing reduced the APD

  4. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ~10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions

  5. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992......). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall are included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92......-0042. Furthermore, Task IA will give the design diagram for Tetrapod breakwaters without a superstructure. The more complete research results on Dolosse can certainly give some insight into the behaviour of the Tetrapod armour layer of breakwaters with a superstructure. The main part of the experiment

  6. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single thruster and the 4-thruster system are discussed and compared.

  7. Discrete Spectral Local Measurement Method for Testing Solar Concentrators

    Directory of Open Access Journals (Sweden)

    Huifu Zhao

    2012-01-01

    Full Text Available In order to compensate for the inconvenience and instability of outdoor photovoltaic concentration test systems, which are caused by weather changes, we design an indoor concentration test system with a large aperture and high beam parallelism, and then verify its feasibility and scientific validity. Furthermore, we propose a new concentration test method: the discrete spectral local measurement method. A two-stage Fresnel concentration system is selected as the test object, and the indoor and outdoor concentration experiments are compared. The results show that the outdoor concentration efficiency of the two-stage Fresnel concentration system is 85.56%, while the indoor efficiency is 85.45%. The two experimental results are so close that they verify the scientific validity and feasibility of the indoor concentration test system. The light divergence angle of the indoor concentration test system is 0.267°, which also matches the divergence angle of sunlight. The indoor concentration test system, with its large diameter (145 mm), simple structure and low cost, will have broad applications in the field of solar concentration.

  8. Standard test method for measurement of fatigue crack growth rates

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 This test method covers the determination of fatigue crack growth rates from near-threshold to Kmax controlled instability. Results are expressed in terms of the crack-tip stress-intensity factor range (ΔK), defined by the theory of linear elasticity. 1.2 Several different test procedures are provided, the optimum test procedure being primarily dependent on the magnitude of the fatigue crack growth rate to be measured. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength so long as specimens are of sufficient thickness to preclude buckling and of sufficient planar size to remain predominantly elastic during testing. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size is variable to be adjusted for yield strength and applied force. Specimen thickness may be varied independent of planar size. 1.5 The details of the various specimens and test configurations are shown in Annex A1-Annex A3. Specimen configurations other than t...
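
    Although the fitting of growth-rate data lies outside the scope of the standard itself, results of this kind are conventionally summarized in the Paris regime by a power law in the stress-intensity factor range,

        \[
        \frac{da}{dN} = C\,(\Delta K)^{m}, \qquad \Delta K = K_{\max} - K_{\min},
        \]

    where C and m are empirically fitted material constants.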

  9. Summary of Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    International Nuclear Information System (INIS)

    Taylor, S.R.; Kamm, J.R.

    1993-01-01

    This document contains the Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium held in Durango, Colorado on March 23-25, 1993. The symposium was sponsored by the Office of Arms Control and Nonproliferation of the United States Department of Energy and hosted by the Source Region Program of Los Alamos National Laboratory. The purpose of the meeting was to discuss state-of-the-art advances in numerical simulations of nuclear explosion phenomenology for the purpose of test ban monitoring. Another goal of the symposium was to promote discussion between seismologists and explosion source-code calculators. Presentation topics include the following: numerical model fits to data, measurement and characterization of material response models, applications of modeling to monitoring problems, explosion source phenomenology, numerical simulations and seismic sources

  10. Causal Measurement Models: Can Criticism Stimulate Clarification?

    Science.gov (United States)

    Markus, Keith A.

    2016-01-01

    In their 2016 work, Aguirre-Urreta et al. provided a contribution to the literature on causal measurement models that enhances clarity and stimulates further thinking. Aguirre-Urreta et al. presented a form of statistical identity involving mapping onto the portion of the parameter space involving the nomological net, relationships between the…

  11. Experimental Measurement, Analysis and Modelling of Dependency ...

    African Journals Online (AJOL)

    We propose a direct method for measuring the total emissivity of opaque samples over a range of temperatures around ambient. The method rests on modulating the temperature of the sample and processing the infrared signal emitted from its surface; we model the total emissivity obtained ...

  12. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  13. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    The effect of receiver coil alignment errors δ on the response of electromagnetic measurements in a layered earth model is studied. The statistics of the generalized least-squares inverse were employed to analyse the errors in three different geophysical applications. The following results were obtained: (i) The FEM ellipticity is ...

  14. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations leads to s...... method involving differences between orthogonal projections onto subspaces generated by within-subject models....

  15. Standards for measurements and testing of wind turbine power quality

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, P. [Risoe National Lab., Roskilde (Denmark); Gerdes, G.; Klosse, R.; Santjer, F. [DEWI, Wilhelmshaven (Germany); Robertson, N.; Davy, W. [NEL, Glasgow (United Kingdom); Koulouvari, M.; Morfiadakis, E. [CRES, Pikermi (Greece); Larsson, Aa. [Chalmers Univ. of Technology, Goeteborg (Sweden)

    1999-03-01

    The present paper describes the work done in the power quality sub-task of the project 'European Wind Turbine Testing Procedure Developments', funded by the EU SMT program. The objective of the power quality sub-task has been to make analyses and new recommendations for the standardisation of measurement and verification of wind turbine power quality. The work has been organised in three major activities. The first activity has been to propose measurement procedures and to verify existing and new measurement procedures. This activity has also involved a comparison of the measurements and data processing of the participating partners. The second activity has been to investigate the influence of terrain, grid properties and wind farm summation on the power quality of wind turbines with constant rotor speed. The third activity has been to investigate the influence of terrain, grid properties and wind farm summation on the power quality of wind turbines with variable rotor speed. (au)

  16. submitter Experimental temperature measurements for the energy amplifier test

    CERN Document Server

    Calero, J; Gallego, E; Gálvez, J; García Tabares, L; González, E; Jaren, J; López, C; Lorente, A; Martínez Val, J M; Oropesa, J; Rubbia, C; Rubio, J A; Saldana, F; Tamarit, J; Vieira, S

    1996-01-01

    A uranium thermometer has been designed and built in order to make local power measurements in the First Energy Amplifier Test (FEAT). Due to the experimental conditions power measurements of tens to hundreds of nW were required, implying a sensitivity in the temperature change measurements of the order of 1 mK. A uranium thermometer accurate enough to match that sensitivity has been built. The thermometer is able to determine the absolute energetic gain obtained in a tiny subcritical uranium assembly exposed to a proton beam of kinetic energies between 600 MeV and 2.75 GeV. In addition, the thermometer measurements have provided information about the spatial power distribution and the shape of the neutron spallation cascade.

  17. Model-independent tests of cosmic gravity.

    Science.gov (United States)

    Linder, Eric V

    2011-12-28

    Gravitation governs the expansion and fate of the universe, and the growth of large-scale structure within it, but has not been tested in detail on these cosmic scales. The observed acceleration of the expansion may provide signs of gravitational laws beyond general relativity (GR). Since the form of any such extension is not clear, from either theory or data, we adopt a model-independent approach to parametrizing deviations to the Einstein framework. We explore the phase space dynamics of two key post-GR functions and derive a classification scheme, and an absolute criterion on accuracy necessary for distinguishing classes of gravity models. Future surveys will be able to constrain the post-GR functions' amplitudes and forms to the required precision, and hence reveal new aspects of gravitation.

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  19. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  20. Modeling Displacement Measurement using Vibration Transducers

    Directory of Open Access Journals (Sweden)

    AGOSTON Katalin

    2014-05-01

    Full Text Available This paper presents some aspects of small displacement measurement using vibration transducers. Mechanical faults, wear and slackness cause noises and vibrations whose amplitude and frequency differ from the normal sound and movement of the equipment. Vibration transducers, accelerometers and microphones are used for noise, sound and vibration detection for fault-detection purposes. The output signal of a vibration transducer or accelerometer is an acceleration signal and can be converted to either velocity or displacement, depending on the preferred measurement parameter. Displacement characteristics are used to indicate when the machine condition has changed. There are many problems in using accelerometers to measure position or displacement, and it is important to determine displacement over time. To obtain the movement from acceleration, a double integration is needed. A transfer function and a Simulink model were determined for accelerometers with a capacitive sensing element. Using these models, the displacement was reproduced for a low-frequency input.
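
    A minimal numerical sketch of the double integration mentioned above, with a synthetic low-frequency acceleration signal and simple detrending to limit integration drift (sampling rate and amplitudes are hypothetical), is given below.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        fs = 1000.0                                    # sampling rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        accel = 0.5 * np.sin(2.0 * np.pi * 5.0 * t)    # simulated accelerometer output (m/s^2)

        # Remove the DC offset first: any bias integrates into large drift.
        accel = accel - np.mean(accel)

        velocity = cumulative_trapezoid(accel, t, initial=0.0)         # first integration
        velocity = velocity - np.mean(velocity)                        # crude drift removal
        displacement = cumulative_trapezoid(velocity, t, initial=0.0)  # second integration

        print(f"peak displacement = {np.max(np.abs(displacement)) * 1e3:.2f} mm")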

  1. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed......). Chapters 8 and 9 focus on the experimental part of this dissertation, analyzing different experimental procedures to determine salt solubility at high temperature and pressure, and developing a setup to perform those measurements. The motivation behind both parts of the Ph.D. project is the problem...... of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range...

  2. Measurements of temperature on LHC thermal models

    CERN Document Server

    Darve, C

    2001-01-01

    Full-scale thermal models for the Large Hadron Collider (LHC) accelerator cryogenic system have been studied at CERN and at Fermilab. Thermal measurements based on two different models permitted us to evaluate the performance of the LHC dipole cryostats as well as to validate the LHC Interaction Region (IR) inner triplet cooling scheme. The experimental procedures made use of temperature sensors supplied by industry and assembled on specially designed supports. The described thermal models took advantage of advances in cryogenic thermometry that will be implemented in the future LHC accelerator to meet the strict requirements of the LHC for precision, accuracy, reliability, and ease-of-use. The sensors used in the temperature measurement of the superfluid (He II) systems are the primary focus of this paper, although some aspects of the LHC control system and signal conditioning are also reviewed. (15 refs).

  3. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    Science.gov (United States)

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  4. Model tests in RAMONA and NEPTUN

    International Nuclear Information System (INIS)

    Hoffmann, H.; Ehrhard, P.; Weinberg, D.; Carteciano, L.; Dres, K.; Frey, H.H.; Hayafune, H.; Hoelle, C.; Marten, K.; Rust, K.; Thomauske, K.

    1995-01-01

    In order to demonstrate passive decay heat removal (DHR) in an LMR such as the European Fast Reactor, the RAMONA and NEPTUN facilities, with water as a coolant medium, were used to measure transient flow data corresponding to a transition from forced convection (under normal operation) to natural convection under DHR conditions. The facilities were 1:20 and 1:5 models, respectively, of a pool-type reactor including the IHXs, pumps, and immersed coolers. Important results: The decay heat can be removed from all parts of the primary system by natural convection, even if the primary fluid circulation through the IHX is interrupted. This result could be transferred to liquid metal cooling by experiments in models with thermohydraulic similarity. (orig.)

  5. Design and testing of an innovative solar radiation measurement device

    International Nuclear Information System (INIS)

    Badran, Omar; Al-Salaymeh, Ahmed; El-Tous, Yousif; Abdala, Wasfi

    2010-01-01

    After a review of studies on solar radiation measuring systems, an innovative instrument that helps in accurately measuring solar radiation on horizontal surfaces has been designed and tested. The instrument is advanced, easy to use and highly precise, and enables the user to take readings in terms of solar intensity (W/m2). It can record instantaneous readings of the solar intensity as well as average values of the solar radiation flux over certain periods of time. The design of the instrument is based on a programmable interfacing controller (PIC). Furthermore, the power supply circuit is fed by solar cells and does not need an external power source.

  6. Infrared thermography for temperature measurement and non-destructive testing.

    Science.gov (United States)

    Usamentiaga, Rubén; Venegas, Pablo; Guerediaga, Jon; Vega, Laura; Molleda, Julio; Bulnes, Francisco G

    2014-07-10

    The intensity of the infrared radiation emitted by objects is mainly a function of their temperature. In infrared thermography, this feature is used for multiple purposes: as a health indicator in medical applications, as a sign of malfunction in mechanical and electrical maintenance or as an indicator of heat loss in buildings. This paper presents a review of infrared thermography especially focused on two applications: temperature measurement and non-destructive testing, two of the main fields where infrared thermography-based sensors are used. A general introduction to infrared thermography and the common procedures for temperature measurement and non-destructive testing are presented. Furthermore, developments in these fields and recent advances are reviewed.

  7. Infrared Thermography for Temperature Measurement and Non-Destructive Testing

    Science.gov (United States)

    Usamentiaga, Rubén; Venegas, Pablo; Guerediaga, Jon; Vega, Laura; Molleda, Julio; Bulnes, Francisco G.

    2014-01-01

    The intensity of the infrared radiation emitted by objects is mainly a function of their temperature. In infrared thermography, this feature is used for multiple purposes: as a health indicator in medical applications, as a sign of malfunction in mechanical and electrical maintenance or as an indicator of heat loss in buildings. This paper presents a review of infrared thermography especially focused on two applications: temperature measurement and non-destructive testing, two of the main fields where infrared thermography-based sensors are used. A general introduction to infrared thermography and the common procedures for temperature measurement and non-destructive testing are presented. Furthermore, developments in these fields and recent advances are reviewed. PMID:25014096

  8. Infrared Thermography for Temperature Measurement and Non-Destructive Testing

    Directory of Open Access Journals (Sweden)

    Rubén Usamentiaga

    2014-07-01

    Full Text Available The intensity of the infrared radiation emitted by objects is mainly a function of their temperature. In infrared thermography, this feature is used for multiple purposes: as a health indicator in medical applications, as a sign of malfunction in mechanical and electrical maintenance or as an indicator of heat loss in buildings. This paper presents a review of infrared thermography especially focused on two applications: temperature measurement and non-destructive testing, two of the main fields where infrared thermography-based sensors are used. A general introduction to infrared thermography and the common procedures for temperature measurement and non-destructive testing are presented. Furthermore, developments in these fields and recent advances are reviewed.

  9. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...... of a households face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...

  10. Test of Flow Characteristics in Tubular Fuel Assembly I - Establishment of test loop and measurement validation test

    International Nuclear Information System (INIS)

    Park, Jong Hark; Chae, H. T.; Park, C.; Kim, H.

    2005-12-01

    Tubular type fuel has been developed as one of the candidates for the Advanced HANARO Reactor (AHR). It is necessary to test the flow characteristics of tubular type fuel, such as the velocity in each flow channel and the pressure drop. A hydraulic test loop to examine the hydraulic characteristics of a tubular type fuel has been designed and constructed. It consists of three parts: a) the piping loop, including the pump and motor, magnetic flow meter, valves, etc.; b) the test-section part, where a simulated tubular type fuel is located; and c) the data acquisition system that reads the signals from sensors and instruments. In this report, the considerations during the design and installation of the facility and the selection of data acquisition sensors and instruments are described in detail. Before the experiment to measure the flow velocities in the flow channels, preliminary tests were done to measure the coolant velocities using pitot tubes and to validate the measurement accuracy. Local velocities in the radial direction in circular tubes are measured at regular intervals of 60 degrees by three pitot tubes. The flow rate inside the circular flow channel can be obtained by integrating the velocity distribution in the radial direction. The measured flow rate was compared to that of the magnetic flow meter. According to the results, the two values were in good agreement, which means that the measurement of coolant velocity using pitot tubes and the flow rate measured by the magnetic flow meter are reliable. Uncertainty analysis showed that the error of velocity measurement by pitot tube is less than ±2.21%. The hydraulic test loop can also be adapted to other fuels, such as the HANARO 18 and 36 fuels, the in-pile system of the FTL (Fuel Test Loop), etc.
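
    The flow-rate cross-check described above amounts to integrating the measured radial velocity profile over the circular channel and comparing the result with the magnetic flow meter; a small sketch with made-up radii, velocities and meter reading is shown below.

        import numpy as np

        # Hypothetical pitot-tube velocities at several radii of a circular channel,
        # already averaged over the three 60-degree measurement positions.
        r = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])   # m
        v = np.array([3.10, 3.05, 2.95, 2.75, 2.40, 0.00])         # m/s (zero at the wall)

        # Flow rate from the velocity distribution: Q = integral of v(r) * 2*pi*r dr
        q_pitot = np.trapz(v * 2.0 * np.pi * r, r)                  # m^3/s

        q_magmeter = 6.9e-4                                         # m^3/s, hypothetical meter reading
        deviation = (q_pitot - q_magmeter) / q_magmeter * 100.0
        print(f"Q_pitot = {q_pitot * 1000:.3f} L/s, deviation from flow meter = {deviation:+.1f}%")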

  11. Clinical outcome measurement: Models, theory, psychometrics and practice.

    Science.gov (United States)

    McClimans, Leah; Browne, John; Cano, Stefan

    In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term 'psychometrics'. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers. Copyright © 2017. Published by Elsevier Ltd.
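
    For reference, the dichotomous Rasch model discussed here specifies the probability that person p endorses item i as a function of the person location θ_p and the item location b_i (a standard formulation rather than one specific to this paper):

        \[
        P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}.
        \]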

  12. Measuring vulnerability to depression: The Serbian scrambled sentences test - SSST

    Directory of Open Access Journals (Sweden)

    Novović Zdenka

    2014-01-01

    Full Text Available The goal of this study was to establish whether the SSST, a Serbian language scrambled sentences instrument, is a reliable measure of depressive cognitive bias, and whether it captures the suppression tendency as participants exert the additional cognitive effort of memorizing a six-digit number while completing the task. The sample consisted of 1071 students, randomly assigned into two groups. They completed the SSST divided into two blocks of 28 sentences, together with additional cognitive task during either the first or second block, and after that a number of instruments to establish validity of the SSST. The test was shown to be a reliable instrument of depressive cognitive bias. As a measure of suppression the SSST performed partly as expected, only when load was applied in the second half of the test, and fatigue and cognitive effort enhanced suppression. The advantages of the test versus self-description measures were discussed. [Projekat Ministarstva nauke Republike Srbije, br. 179006: Hereditary, environmental, and psychological factors of mental health

  13. Solutions for acceleration measurement in vehicle crash tests

    Science.gov (United States)

    Dima, D. S.; Covaciu, D.

    2017-10-01

    Crash tests are useful for validating computer simulations of road traffic accidents. One of the most important parameters measured is the acceleration. The evolution of acceleration versus time during a crash test forms a crash pulse. The correctness of the crash pulse determination depends on the data acquisition system used. Recommendations regarding the instrumentation for impact tests are given in standards, which are focused on the use of accelerometers as impact sensors. The goal of this paper is to present the device and software developed by the authors for data acquisition and processing. The system includes two accelerometers with different input ranges, a processing unit based on a 32-bit microcontroller, and a data logging unit with an SD card. Data collected on the card, as text files, are processed with dedicated software running on personal computers. The processing is based on diagrams and includes the digital filters recommended in the standards.
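
    The filtering step mentioned above is, in practice, a low-pass, zero-phase filter applied to the raw acceleration trace before the crash pulse is evaluated. The sketch below uses a second-order Butterworth filter with an assumed 1 kHz sampling rate and 100 Hz cut-off on a synthetic pulse; it is not the authors' exact filter class from the standards.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 1000.0                                  # assumed sampling rate, Hz
      t = np.arange(0.0, 0.3, 1.0 / fs)            # 300 ms record
      pulse = 30.0 * np.sin(np.pi * t / 0.1) * (t < 0.1)       # synthetic half-sine pulse, g
      raw = pulse + np.random.normal(0.0, 2.0, t.size)         # add measurement noise

      b, a = butter(2, 100.0 / (fs / 2.0), btype="low")        # 100 Hz low-pass filter
      filtered = filtfilt(b, a, raw)                           # zero-phase filtering

      print("peak raw = %.1f g, peak filtered = %.1f g" % (raw.max(), filtered.max()))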

  14. The role of test-retest reliability in measuring individual and group differences in executive functioning.

    Science.gov (United States)

    Paap, Kenneth R; Sawi, Oliver

    2016-12-01

    Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
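
    The trade-off described above can be made concrete with the classical-test-theory formula for the reliability of a difference score, which drops as the two component measures become more strongly correlated. A small sketch with hypothetical reliabilities and inter-correlation:

      def difference_score_reliability(r_xx, r_yy, r_xy):
          # Reliability of D = X - Y for components with equal variances (classical test theory).
          return (0.5 * (r_xx + r_yy) - r_xy) / (1.0 - r_xy)

      # Two conditions each measured with reliability .80 and correlated .60 across people:
      # the difference score is considerably less reliable than either component.
      print(difference_score_reliability(0.80, 0.80, 0.60))   # 0.5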

  15. Model tests on single piles in soft clay

    Energy Technology Data Exchange (ETDEWEB)

    Pan, J.L. [Durham Univ., Durham, (United Kingdom). School of Engineering; Goh, A.T.C.; Wong, K.S.; Teh, C.I. [Nanyang Technological Univ., (Singapore). Geotechnical Research Centre

    2000-08-04

    The behaviour of single stainless steel piles subjected to lateral soft clay soil movement was investigated in laboratory model tests in an effort to determine the ultimate soil pressure acting along the pile shaft. A custom designed apparatus was manufactured and calibrated for the test which measured the limiting soil pressures acting along the model pile shaft. The ultimate soil pressure was determined based on the maximum value of this measurement. The results show that the ultimate soil pressure for single passive piles was about 10 times the undrained shear strength, and the magnitude of the soil translation needed to fully mobilize the ultimate soil pressure on the single passive piles was about half the pile width. Further experimental study is needed to examine the effects of the pile end fixity, flexibility and shape and to confirm the effects of sample size and the disturbance due to soil sample preparation. 17 refs., 10 figs.

  16. Smart kinesthetic measurement model in dance composision

    Directory of Open Access Journals (Sweden)

    Dinny Devi Triana

    2017-06-01

    Full Text Available This research aimed to discover an assessment model that could measure kinesthetic intelligence in arranging a dance from several related variables, both direct and indirect. The research method was a qualitative method using path analysis to determine the direct and indirect variables, so that the dominant variables supporting the measurement model of kinesthetic intelligence in arranging a dance could be identified. The population consisted of students of the art of dance department and was chosen using a purposive sampling technique so that kinesthetic intelligence could be well measured. The results were as follows: the correlation between the ability to perceive movement and the ability to convey movement was 3.8048; between the ability to perceive movement and kinesthetic intelligence, 0.3137; between the ability to perceive movement and arranging a dance, -0.3751; between conveying movement and kinesthetic intelligence, 0.1333; between conveying movement and arranging a dance, -0.2399; and between kinesthetic intelligence and arranging a dance, 0.8529. These results show that kinesthetic intelligence has a significant influence on the ability to arrange a dance. It is concluded that an assessment model of kinesthetic intelligence in arranging a dance should measure kinesthetic intelligence first, with the ability to perceive and convey movement as supporting elements that strengthen kinesthetic intelligence in arranging a dance.

  17. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  18. Standard Test Method for Measuring Binocular Disparity in Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the amount of binocular disparity that is induced by transparent parts such as aircraft windscreens, canopies, HUD combining glasses, visors, or goggles. This test method may be applied to parts of any size, shape, or thickness, individually or in combination, so as to determine the contribution of each transparent part to the overall binocular disparity present in the total “viewing system” being used by a human operator. 1.2 This test method represents one of several techniques that are available for measuring binocular disparity, but is the only technique that yields a quantitative figure of merit that can be related to operator visual performance. 1.3 This test method employs apparatus currently being used in the measurement of optical angular deviation under Method F 801. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not con...

  19. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  20. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.
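
    The motivation for the corrected score equations above is the attenuation that additive measurement error in a covariate produces in a naive regression fit. The simulation below illustrates the effect; all distributions and parameter values are hypothetical and are not taken from the kidney data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      gfr_true = rng.normal(60.0, 15.0, n)                        # true GFR
      creatinine = 5.0 - 0.03 * gfr_true + rng.normal(0.0, 0.3, n)
      gfr_measured = gfr_true + rng.normal(0.0, 10.0, n)          # surrogate with additive error

      def slope(x, y):
          return np.polyfit(x, y, 1)[0]

      print("slope on true GFR     :", round(slope(gfr_true, creatinine), 4))
      print("slope on measured GFR :", round(slope(gfr_measured, creatinine), 4))
      # The naive slope shrinks by roughly var(true) / (var(true) + var(error)) = 225 / 325.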

  1. Modified SPC for short run test and measurement process in multi-stations

    Science.gov (United States)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z-bar chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and the rationed measurement uncertainty (U). An S-factor (Sf) is introduced into the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
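
    The abstract does not give the full charting equations, but the standardization step it describes (referencing each observation to its target T and scaling by the rationed uncertainty U) might look like the sketch below. The guard-banded acceptance limits and the S-factor value are assumptions made here for illustration only.

      def standardize(x, target, uncertainty):
          # Standardize an observed T&M reading against its target and rationed uncertainty.
          return (x - target) / uncertainty

      def guard_banded_limits(lower_spec, upper_spec, uncertainty):
          # Tighten the acceptance limits by the measurement uncertainty (simple guard band).
          return lower_spec + uncertainty, upper_spec - uncertainty

      readings = [10.02, 10.05, 9.97, 10.08]      # hypothetical short-run readings, one station
      target, u = 10.0, 0.05
      z_values = [standardize(x, target, u) for x in readings]

      s_factor = 0.8                              # assumed sensitivity factor on the control limits
      control_limits = (-3.0 * s_factor, 3.0 * s_factor)
      print(z_values, guard_banded_limits(9.8, 10.2, u), control_limits)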

  2. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    The present work describes a mathematical model that quantifies the time-dependent amount of ²²²Rn and ²²⁰Rn together, and their activities, within an ionization chamber such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the travel time Rn takes to reach the chamber; the release of Rn by the air exiting the chamber; and the radioactive decay of Rn within the chamber. The developed code quantifies the activities of the ²²²Rn and ²²⁰Rn isotopes separately. Following the standard methodology for measuring Rn activity in soil gas, the air pump is usually turned off over a period of time in order to avoid the influx of Rn into the chamber. Since ²²⁰Rn has a short half-life, approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is null; consequently, the measured activity refers to ²²²Rn only. Furthermore, the model also addresses the activity of the ²²⁰Rn and ²²²Rn progeny, which, being metals, represent a potential risk of ionization chamber contamination that could increase the background of further measurements. Some preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for planning Rn-in-soil-gas measurements, as well as for assessing the accuracy of the obtained results and evaluating the efficiency of the chosen measurement procedure. (author)
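
    The decay part of the model (what happens inside the chamber once the pump is off) reduces to exponential decay with very different half-lives, which is why the ²²⁰Rn contribution disappears within a few minutes. A minimal sketch with hypothetical initial activities; the in-flow and out-flow terms of the full model are omitted.

      import numpy as np

      HALF_LIFE_RN222 = 3.8235 * 24 * 3600.0   # s
      HALF_LIFE_RN220 = 55.6                    # s (approx. 56 s, as quoted in the abstract)

      def activity(a0, half_life, t):
          # Radioactive decay: A(t) = A0 * exp(-ln(2) * t / T_half).
          return a0 * np.exp(-np.log(2.0) * t / half_life)

      t = np.arange(0.0, 10 * 60.0 + 1.0, 60.0)          # first 10 minutes, 1-minute steps
      a222 = activity(1000.0, HALF_LIFE_RN222, t)        # hypothetical initial activities
      a220 = activity(1000.0, HALF_LIFE_RN220, t)
      print("Rn-220 remaining after 7 min: %.2f %%" % (100.0 * a220[7] / 1000.0))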

  3. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  4. Implementation of Moderator Circulation Test Temperature Measurement System

    International Nuclear Information System (INIS)

    Lim, Yeong Muk; Hong, Seok Boong; Kim, Min Seok; Choi, Hwa Rim; Kim, Hyung Shin

    2016-01-01

    The Moderator Circulation Test (MCT) facility is a 1/4-scale facility designed to reproduce the important characteristics of moderator circulation in a CANDU6 calandria under a range of operating conditions. The MCT is equipped with 380 acrylic pipes instead of the heater rods, and a preliminary measurement of the velocity field using PIV (Particle Image Velocimetry) was performed under iso-thermal test conditions. The Korea Atomic Energy Research Institute (KAERI) started the implementation of the MCT Temperature Measurement System (TMS) using multiple infrared sensors. To control the multiple infrared sensors, the MCT TMS is implemented using the National Instruments (NI) LabVIEW programming language. The MCT TMS measures the data from the multiple infrared sensors using LabVIEW. The 35 sensor pipes of the MCT TMS are divided into 2 ports to meet the minimum measurement time of 0.2 seconds. The software of the MCT TMS is designed around a collection function and a processing function. The MCT TMS also monitors the states of the multiple infrared sensors. The GUI screen of the MCT TMS is organized by sensor pipe categories for the user.

  5. Implementation of Moderator Circulation Test Temperature Measurement System

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Yeong Muk; Hong, Seok Boong; Kim, Min Seok; Choi, Hwa Rim [KAERI, Daejeon (Korea, Republic of); Kim, Hyung Shin [Chungnam University, Daejeon (Korea, Republic of)

    2016-05-15

    The Moderator Circulation Test (MCT) facility is a 1/4-scale facility designed to reproduce the important characteristics of moderator circulation in a CANDU6 calandria under a range of operating conditions. The MCT is equipped with 380 acrylic pipes instead of the heater rods, and a preliminary measurement of the velocity field using PIV (Particle Image Velocimetry) was performed under iso-thermal test conditions. The Korea Atomic Energy Research Institute (KAERI) started the implementation of the MCT Temperature Measurement System (TMS) using multiple infrared sensors. To control the multiple infrared sensors, the MCT TMS is implemented using the National Instruments (NI) LabVIEW programming language. The MCT TMS measures the data from the multiple infrared sensors using LabVIEW. The 35 sensor pipes of the MCT TMS are divided into 2 ports to meet the minimum measurement time of 0.2 seconds. The software of the MCT TMS is designed around a collection function and a processing function. The MCT TMS also monitors the states of the multiple infrared sensors. The GUI screen of the MCT TMS is organized by sensor pipe categories for the user.

  6. Development and testing of a community flood resilience measurement tool

    Science.gov (United States)

    Keating, Adriana; Campbell, Karen; Szoenyi, Michael; McQuistan, Colin; Nash, David; Burer, Meinrad

    2017-01-01

    Given the increased attention on resilience strengthening in international humanitarian and development work, there is a growing need to invest in its measurement and the overall accountability of resilience strengthening initiatives. The purpose of this article is to present our framework and tool for measuring community-level resilience to flooding and generating empirical evidence and to share our experience in the application of the resilience concept. At the time of writing the tool is being tested in 75 communities across eight countries. Currently 88 potential sources of resilience are measured at the baseline (initial state) and end line (final state) approximately 2 years later. If a flood occurs in the community during the study period, resilience outcome measures are recorded. By comparing pre-flood characteristics to post-flood outcomes, we aim to empirically verify sources of resilience, something which has never been done in this field. There is an urgent need for the continued development of theoretically anchored, empirically verified, and practically applicable disaster resilience measurement frameworks and tools so that the field may (a) deepen understanding of the key components of disaster resilience in order to better target resilience-enhancing initiatives, and (b) enhance our ability to benchmark and measure disaster resilience over time, and (c) compare how resilience changes as a result of different capacities, actions and hazards.

  7. Numerical Modeling and Experimental Testing of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Zurkinden, Andrew Stephen; Kramer, Morten; Ferri, Francesco

    numerical values for comparison with the experimental test results, which were obtained at the same time; this is why Chapter 4 consists exclusively of numerical values. Experimental values and measured time series of wave elevations have been used throughout the report in order to a) validate the numerical model and b) perform stochastic analysis. The latter technique is introduced in order to optimize the control parameters of the power take-off system....

  8. Development of an Upper Extremity Function Measurement Model.

    Science.gov (United States)

    Hong, Ickpyo; Simpson, Annie N; Li, Chih-Ying; Velozo, Craig A

    This study demonstrated the development of a measurement model for gross upper-extremity function (GUE). The dependent variable was the Rasch calibration of the 27 ICF-GUE test items. The predictors were object weight, lifting distance from the floor, carrying, and lifting. Multiple regression was used to investigate the contribution that each independent variable makes to the model with 203 outpatients. Object weight and lifting distance were the only statistically and clinically significant independent variables in the model, accounting for 83% of the variance. The model indicates that, with each one-pound increase in object weight, item challenge increases by 0.16. The measurement model for the ICF-GUE can thus be explained by object weight and distance lifted from the floor.
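
    The regression described above (item calibrations predicted from object weight and lifting distance) can be reproduced in outline as an ordinary least-squares fit. The data below are randomly generated stand-ins, not the study's 27 ICF-GUE item calibrations.

      import numpy as np

      rng = np.random.default_rng(1)
      n_items = 27
      weight = rng.uniform(1.0, 20.0, n_items)       # object weight in pounds (hypothetical)
      distance = rng.uniform(0.0, 60.0, n_items)     # lifting distance from the floor (hypothetical)
      calibration = 0.16 * weight + 0.02 * distance + rng.normal(0.0, 0.3, n_items)

      X = np.column_stack([np.ones(n_items), weight, distance])
      beta, *_ = np.linalg.lstsq(X, calibration, rcond=None)
      print("intercept, weight and distance coefficients:", np.round(beta, 3))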

  9. Performance measurements at the fast flux test facility

    International Nuclear Information System (INIS)

    In 1984, Fast Flux Test Facility (FFTF) management recognized the need to develop a measurement system that would quantify the operational performance of the FFTF and the human resources needed to operate it. Driven by declining budgets and the need to safely manage a manpower rampdown at FFTF, an early warning system was developed. Although the initiating event for the early warning system was the need to safely manage a manpower rampdown, many related uses have evolved. The initial desired objective for the FFTF performance measurements was to ensure safety and control of key performance trends. However, the early warning system has provided a more quantitative, supportable basis upon which to make decisions. From this initial narrow focus, efforts in the FFTF plant and supporting organizations are leading to measurement of and, subsequently, improvements in productivity. Pilot projects utilizing statistical process control have started with longer range productivity improvement

  10. The direct measurement of dose enhancement in gamma test facilities

    International Nuclear Information System (INIS)

    Burke, E.A.; Snowden, D.P.; Cappelli, J.R.; Mittleman, S.; Lowe, L.F.

    1989-01-01

    The design and use of a dual cavity ionization chamber for routine measurement of dose enhancement factors in Co-60 gamma test facilities is described. The enhancement factors can be derived directly from the chamber measurements without recourse to reference data that may be difficult to obtain. It is shown, in agreement with earlier work, that the maximum dose enhancement factors can be altered by a factor two as a result of Compton scatter from relatively small amounts of low or high atomic number materials next to the target. The dual chamber permits the ready detection of such effects. This relatively simple device reliably reproduced earlier results obtained by more involved equipment and procedures. Measured enhancement factors are reported for new material combinations not previously examined and compared with recent calculations

  11. Flavor release measurement from gum model system

    DEFF Research Database (Denmark)

    Ovejero-López, I.; Haahr, Anne-Mette; van den Berg, Frans W.J.

    2004-01-01

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectroscopy (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio...... composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The sweeteners' (sorbitol or xylitol) effect is less apparent. Sensory...

  12. AULA virtual reality test as an attention measure: convergent validity with Conners' Continuous Performance Test.

    Science.gov (United States)

    Díaz-Orueta, Unai; Garcia-López, Cristina; Crespo-Eguílaz, Nerea; Sánchez-Carpintero, Rocío; Climent, Gema; Narbona, Juan

    2014-01-01

    The majority of neuropsychological tests used to evaluate attention processes in children lack ecological validity. The AULA Nesplora (AULA) is a continuous performance test, developed in a virtual setting, very similar to a school classroom. The aim of the present study is to analyze the convergent validity between the AULA and the Continuous Performance Test (CPT) of Conners. The AULA and CPT were administered correlatively to 57 children, aged 6-16 years (26.3% female) with average cognitive ability (IQ mean = 100.56, SD = 10.38) who had a diagnosis of attention deficit/hyperactivity disorder (ADHD) according to DSM-IV-TR criteria. Spearman correlations analyses were conducted among the different variables. Significant correlations were observed between both tests in all the analyzed variables (omissions, commissions, reaction time, and variability of reaction time), including for those measures of the AULA based on different sensorial modalities, presentation of distractors, and task paradigms. Hence, convergent validity between both tests was confirmed. Moreover, the AULA showed differences by gender and correlation to Perceptual Reasoning and Working Memory indexes of the WISC-IV, supporting the relevance of IQ measures in the understanding of cognitive performance in ADHD. In addition, the AULA (but not Conners' CPT) was able to differentiate between ADHD children with and without pharmacological treatment for a wide range of measures related to inattention, impulsivity, processing speed, motor activity, and quality of attention focus. Additional measures and advantages of the AULA versus Conners' CPT are discussed.

  13. Model tests of geosynthetic reinforced slopes in a geotechnical centrifuge

    International Nuclear Information System (INIS)

    Aklik, P.

    2012-01-01

    Geosynthetic-reinforced slopes and walls have become very popular in recent years because of their financial, technical, and ecological advantages. Centrifuge modelling is a powerful tool for the physical modelling of reinforced slopes and offers the advantage of observing the failure mechanisms of the slopes. In order to replicate the gravity-induced stresses of a prototype structure in a geometrically 1/N reduced model, it is necessary to test the model in a gravitational field N times larger than that of the prototype structure. In this dissertation, geotextile-reinforced slope models were tested in a geotechnical centrifuge to identify the possible failure mechanisms. Slope models were tested by varying the slope inclination, the tensile strengths of the geotextiles, and the overlapping lengths. Photographs of the geotextile-reinforced slope models in flight were taken with a digital camera, and the soil deformations of the geotextile-reinforced slopes were evaluated with Particle Image Velocimetry (PIV). The experimental results showed that failure of the centrifuge models initiated at mid-height of the slope and occurred due to geotextile breakage instead of pullout. The location of the shear surface is independent of the tensile strength of the geotextile; it depends on the shear strength of the soil. As expected, the centrifuge acceleration required to bring the slope to failure decreased with increasing slope inclination. An important contribution to the stability of the slope models was provided by the overlapping of the geotextile layers, which has a secondary reinforcement effect when it is extended to pass through the shear surface. Moreover, the location of the shear surface observed with the PIV analysis exactly matches the tears in the retrieved geotextiles measured carefully after the centrifuge testing. It is concluded that PIV is an efficient tool for instrumenting slope failures in a geotechnical centrifuge. (author)

  14. Thermal tests for laser Doppler perfusion measurements in Raynaud's syndrome

    Science.gov (United States)

    Kacprzak, Michal; Skora, A.; Obidzinska, J.; Zbiec, A.; Maniewski, Roman; Staszkiewicz, W.

    2004-07-01

    The laser Doppler method offers a non-invasive, real-time technique for monitoring blood perfusion in the microcirculation. In practical measurements the perfusion index is given only in relative values; thus, accurate and reproducible results can only be obtained when using a well-controlled stimulation test. The aim of this study was to evaluate the thermal stimulation test, which is frequently used to investigate the microcirculation in patients with Raynaud's syndrome. Three types of thermal tests were used, in which air or water at temperatures in the range 5°C - 40°C was applied. Ten normal volunteers and fifteen patients with clinical symptoms of primary Raynaud's syndrome were enrolled in this study. To estimate the skin microcirculation changes during the thermal tests, a multichannel laser Doppler system and a laser Doppler scanner were used. The obtained results were analyzed with respect to the efficiency of these methods and of the thermal provocative tests in differentiating normal subjects from patients with Raynaud's syndrome.

  15. ACCURACY TEST OF MICROSOFT KINECT FOR HUMAN MORPHOLOGIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    B. Molnár

    2012-08-01

    Full Text Available The Microsoft Kinect sensor, a popular gaming console accessory, is widely used in a large number of applications, including close-range 3D measurements. This low-end device is rather inexpensive compared to similar active imaging systems. The Kinect sensors include an RGB camera, an IR projector, an IR camera and an audio unit. Human morphologic measurements require high accuracy with a fast data acquisition rate. To achieve the highest accuracy, the depth sensor and the RGB camera should be calibrated and co-registered to obtain a high-quality 3D point cloud as well as optical imagery. Since this is a low-end sensor developed for a different purpose, the accuracy could be critical for 3D measurement-based applications. Therefore, two types of accuracy test were performed: (1) to describe the absolute accuracy, the ranging accuracy of the device in the range of 0.4 to 15 m was estimated, and (2) the relative accuracy of points depending on the range was characterized. For the accuracy investigation, a test field was created with two spheres, and the relative accuracy is described by the sphere-fitting performance and the distance estimation between the sphere center points. Some other factors can also be considered, such as the angle of incidence or the material used in these tests. The non-ambiguity range of the sensor is from 0.3 to 4 m, but, based on our experience, it can be extended up to 20 m. Obviously, this methodology raises some accuracy issues which make accuracy testing really important.

  16. Functional testing using rapid prototyped components and optical measurement.

    Science.gov (United States)

    Wykes, Catherine; Buckberry, Clive; Dale, Martin; Reeves, Mark; Towers, David

    1999-11-01

    Rapid prototyping manufacturing methods such as stereolithography and fused deposition modelling enable real parts to be produced very quickly from CAD models, but because the parts are produced in materials which are different from the final component, they cannot readily be used for assessing structural integrity. Electronic speckle pattern interferometry (ESPI) enables full-field measurements of surface displacements, both static and dynamic, to be made rapidly. This paper proposes the use of these two techniques together to enable the response of parts to static and dynamic loading to be assessed early on in the design process. It should be possible to make a qualitative assessment by observing the form of the deformation or vibration pattern produced, and it may also be possible to make quantitative measurements by developing suitable scaling methods. Some initial experiments have been made looking at the vibration of flat plates, and further proposed work is outlined.

  17. Model tests on overall forces on the SSG pilot plant

    DEFF Research Database (Denmark)

    Margheritini, Lucia; Morris, Alex

    This report presents the results on the overall forces acting on the SSG structure in 3D wave conditions. This study was done according to the co-operation agreement between WEVEnergy AS (Norway) and Aalborg University, Department of Civil Engineering, of which the present report forms part of Phase 5. The tests were carried out at the Department of Civil Engineering, AAU, in the 3D deep-water tank, with a model at a scale of 1:60 to the prototype and a reproduction of the bathymetry of the selected location at the time of the experiments. Overall forces and moments were measured during the tests. The results are given...

  18. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring the visual closeness of 3-D models is an important issue for different problems, and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing the normal difference as a parameter in a weighted metric in Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
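
    The idea of mixing a geometric distance with a normal-difference term when sampling one model against another can be sketched as follows. The weight w, the nearest-neighbour correspondence search and the toy point sets are illustrative assumptions, not the thesis' exact metric or Metro's implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def weighted_closeness(points_a, normals_a, points_b, normals_b, w=0.5):
          # Mean and maximum of a distance mixing positional error and normal difference.
          tree = cKDTree(points_b)
          dist, idx = tree.query(points_a)                          # nearest-neighbour distances
          normal_diff = 1.0 - np.abs(np.sum(normals_a * normals_b[idx], axis=1))
          combined = (1.0 - w) * dist + w * normal_diff
          return combined.mean(), combined.max()

      # Toy example: two noisy samplings of the unit sphere (positions double as unit normals).
      rng = np.random.default_rng(0)
      a = rng.normal(size=(500, 3)); a /= np.linalg.norm(a, axis=1, keepdims=True)
      b = rng.normal(size=(500, 3)); b /= np.linalg.norm(b, axis=1, keepdims=True)
      print(weighted_closeness(a, a, b + 0.01, b))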

  19. Standard test method for measuring pH of soil for use in corrosion testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1995-01-01

    1.1 This test method covers a procedure for determining the pH of a soil in corrosion testing. The principal use of the test is to supplement soil resistivity measurements and thereby identify conditions under which the corrosion of metals in soil may be accentuated (see G 57 - 78 (1984)). 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  20. Emotional Intelligence Measured in a Highly Competitive Testing Situation

    OpenAIRE

    Sjöberg, Lennart

    2001-01-01

    This is a study in which emotional intelligence (EI) as well as several other personality dimensions were studied in a real, high-stakes, selection situation, N=190. Forty-one trait oriented personality scales were measured and factor analyzed. A factor pattern with four secondary factors was found: EI, emotional stability, rigidity/perfectionism and energy/dominance. These factors were related to standard FFM (Five Factor Model) dimensions, to Hogan's Development Survey ("the dark side of pe...

  1. Efficiency of Switch-Mode Power Audio Amplifiers - Test Signals and Measurement Techniques

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    Switch-mode technology is widely used for audio amplification, mainly because of the high efficiency this technology offers. Normally the efficiency of a switch-mode audio amplifier is measured using a sine wave input. However, this paper shows that sine waves represent real audio very poorly..... An alternative signal is proposed for test purposes. The efficiency of a switch-mode power audio amplifier is modelled and measured with both a sine wave and the proposed test signal as inputs. The results show that the choice of switching devices with low on-resistances is unfairly favored when measuring...

  2. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Experimental test of models of radio-frequency plasma sheaths

    International Nuclear Information System (INIS)

    Sobolewski, M.A.

    1997-01-01

    The ion current and sheath impedance were measured at the radio-frequency-powered electrode of an asymmetric, capacitively coupled plasma reactor, for discharges in argon at 1.33–133 Pa. The measurements were used to test the models of the radio frequency sheath derived by Lieberman [IEEE Trans. Plasma Sci. 17, 338 (1989)] and Godyak and Sternberg [Phys. Rev. A 42, 2299 (1990)], and establish the range of pressure and sheath voltage in which they are valid. copyright 1997 American Institute of Physics

  4. Relevant criteria for testing the quality of turbulence models

    DEFF Research Database (Denmark)

    Frandsen, Sten Tronæs; Ejsing Jørgensen, Hans; Sørensen, J.D.

    2007-01-01

    Seeking relevant criteria for testing the quality of turbulence models, the scale of turbulence and the gust factor have been estimated from data and compared with predictions from first-order models of these two quantities. It is found that the mean of the measured length scales is approx. 10% smaller than the IEC model for wind turbine hub height levels. The mean is only marginally dependent on trends in the time series. It is also found that the coefficient of variation of the measured length scales is about 50%. Pre-averaging of wind speed data over 3 s and 10 s is relevant for MW-size wind turbines when seeking wind characteristics that correspond to one blade and the entire rotor, respectively. For heights exceeding 50-60 m the gust factor increases with wind speed. For heights larger than 60-80 m, present assumptions on the value of the gust factor are significantly conservative, both for 3...

  5. Modeling measurement error in tumor characterization studies

    Directory of Open Access Journals (Sweden)

    Marjoram Paul

    2011-07-01

    Full Text Available Background: Etiologic studies of cancer increasingly use molecular features such as gene expression, DNA methylation and sequence mutation to subclassify the cancer type. In large population-based studies, the tumor tissues available for study are archival specimens that provide variable amounts of amplifiable DNA for molecular analysis. As molecular features measured from small amounts of tumor DNA are inherently noisy, we propose a novel approach to improve statistical efficiency when comparing groups of samples. We illustrate the phenomenon using the MethyLight technology, applying our proposed analysis to compare MLH1 DNA methylation levels in males and females studied in the Colon Cancer Family Registry. Results: We introduce two methods for computing empirical weights to model heteroscedasticity that is caused by sampling variable quantities of DNA for molecular analysis. In a simulation study, we show that using these weights in a linear regression model is more powerful for identifying differentially methylated loci than standard regression analysis. The increase in power depends on the underlying relationship between variation in outcome measure and input DNA quantity in the study samples. Conclusions: Tumor characteristics measured from small amounts of tumor DNA are inherently noisy. We propose a statistical analysis that accounts for the measurement error due to sampling variation of the molecular feature and show how it can improve the power to detect differential characteristics between patient groups.
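
    The weighting idea described above (down-weighting methylation values measured from smaller amounts of input DNA) corresponds to a weighted least-squares fit. A minimal sketch with simulated data; the precision weights and the variance-versus-input relationship used here are illustrative assumptions, not the paper's empirical weights.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      male = rng.integers(0, 2, n)                      # group indicator (0 = female, 1 = male)
      dna_input = rng.uniform(1.0, 50.0, n)             # amount of amplifiable DNA per sample
      noise_sd = 2.0 / np.sqrt(dna_input)               # smaller input -> noisier methylation value
      methylation = 5.0 + 0.5 * male + rng.normal(0.0, noise_sd)

      X = np.column_stack([np.ones(n), male])
      w = 1.0 / noise_sd ** 2                           # precision-style empirical weights
      W = np.diag(w)
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ methylation)
      print("weighted estimate of the group difference:", round(beta[1], 3))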

  6. Comparison Testings between Two High-temperature Strain Measurement Systems

    Science.gov (United States)

    Lei, J.-F.; Castelli, M. G.; Androjna, D.; Blue, C.; Blue, R.; Lin, R. Y.

    1996-01-01

    An experimental evaluation was conducted at NASA Lewis Research Center to compare and contrast the performance of a newly developed resistance strain gage, the PdCr temperature-compensated wire strain gage, to that of conventional high-temperature extensometry. The evaluation of the two strain measurement systems was conducted through the application of various thermal and mechanical loading spectra using a high-temperature thermomechanical uniaxial testing system equipped with quartz lamp heating. The purpose of the testing was not only to compare and contrast the two strain sensors but also to investigate the applicability of the PdCr strain gage to the testing environment typically employed when characterizing the high-temperature mechanical behavior of structural materials. Strain measurement capabilities to 800 C were investigated with a nickel-base superalloy IN100 substrate material, and application to titanium matrix composite (TMC) materials was examined with the SCS-6/Ti-15-3 08 system. PdCr strain gages installed by three attachment techniques, namely flame spraying, spot welding and rapid infrared joining, were investigated.

  7. Examining Method Effect of Synonym and Antonym Test in Verbal Abilities Measure

    Directory of Open Access Journals (Sweden)

    Wahyu Widhiarso

    2015-08-01

    Full Text Available Many researchers have assumed that different methods could be substituted to measure the same attributes in assessment. Various models have been developed to accommodate the amount of variance attributable to the methods, but the application of these models in empirical research is rare. The present study applied one of those models to examine whether method effects were present in synonym and antonym tests. Study participants were 3,469 applicants to graduate school. The instrument used was the Graduate Academic Potential Test (PAPS), which includes synonym and antonym questions to measure verbal abilities. Our analysis showed that a measurement model using correlated trait–correlated methods minus one, CT-C(M–1), which separates trait and method effects into distinct latent constructs, yielded slightly better values for multiple goodness-of-fit indices than a one-factor model. However, for both the synonym and antonym items, the proportion of variance accounted for by the method is smaller than the trait variance. The correlation between the factor scores of both methods is high (r = 0.994). These findings confirm that synonym and antonym tests represent the same attribute, so that both tests cannot be treated as two unique methods for measuring verbal ability.

  8. Examining Method Effect of Synonym and Antonym Test in Verbal Abilities Measure.

    Science.gov (United States)

    Widhiarso, Wahyu; Haryanta

    2015-08-01

    Many researchers have assumed that different methods could be substituted to measure the same attributes in assessment. Various models have been developed to accommodate the amount of variance attributable to the methods, but the application of these models in empirical research is rare. The present study applied one of those models to examine whether method effects were present in synonym and antonym tests. Study participants were 3,469 applicants to graduate school. The instrument used was the Graduate Academic Potential Test (PAPS), which includes synonym and antonym questions to measure verbal abilities. Our analysis showed that a measurement model using correlated trait-correlated methods minus one, CT-C(M-1), which separates trait and method effects into distinct latent constructs, yielded slightly better values for multiple goodness-of-fit indices than a one-factor model. However, for both the synonym and antonym items, the proportion of variance accounted for by the method is smaller than the trait variance. The correlation between the factor scores of both methods is high (r = 0.994). These findings confirm that synonym and antonym tests represent the same attribute, so that both tests cannot be treated as two unique methods for measuring verbal ability.

  9. Measured, modeled, and causal conceptions of fitness

    Science.gov (United States)

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  10. Reliable real-time applications - and how to use tests to model and understand

    DEFF Research Database (Denmark)

    Jensen, Peter Krogsgaard

    Test and analysis of real-time applications, where temporal properties are inspected, analyzed, and verified in a model developed from timed traces originating from measured test results on a running application.

  11. Test-Retest Reliability of Measures Commonly Used to Measure Striatal Dysfunction across Multiple Testing Sessions: A Longitudinal Study.

    Science.gov (United States)

    Palmer, Clare E; Langbehn, Douglas; Tabrizi, Sarah J; Papoutsi, Marina

    2017-01-01

    Cognitive impairment is common amongst many neurodegenerative movement disorders such as Huntington's disease (HD) and Parkinson's disease (PD) across multiple domains. There are many tasks available to assess different aspects of this dysfunction, however, it is imperative that these show high test-retest reliability if they are to be used to track disease progression or response to treatment in patient populations. Moreover, in order to ensure effects of practice across testing sessions are not misconstrued as clinical improvement in clinical trials, tasks which are particularly vulnerable to practice effects need to be highlighted. In this study we evaluated test-retest reliability in mean performance across three testing sessions of four tasks that are commonly used to measure cognitive dysfunction associated with striatal impairment: a combined Simon Stop-Signal Task; a modified emotion recognition task; a circle tracing task; and the trail making task. Practice effects were seen between sessions 1 and 2 across all tasks for the majority of dependent variables, particularly reaction time variables; some, but not all, diminished in the third session. Good test-retest reliability across all sessions was seen for the emotion recognition, circle tracing, and trail making test. The Simon interference effect and stop-signal reaction time (SSRT) from the combined-Simon-Stop-Signal task showed moderate test-retest reliability, however, the combined SSRT interference effect showed poor test-retest reliability. Our results emphasize the need to use control groups when tracking clinical progression or use pre-baseline training on tasks susceptible to practice effects.

  12. Thermal effects in shales: measurements and modeling

    International Nuclear Information System (INIS)

    McKinstry, H.A.

    1977-01-01

    Research is reported concerning thermal and physical measurements and theoretical modeling relevant to the storage of radioactive wastes in a shale. Reference thermal conductivity measurements are made at atmospheric pressure in a commercial apparatus; and equipment for permeability measurements has been developed, and is being extended with respect to measurement ranges. Thermal properties of shales are being determined as a function of temperature and pressures. Apparatus was developed to measure shales in two different experimental configurations. In the first, a disk 15 mm in diameter of the material is measured by a steady state technique using a reference material to measure the heat flow within the system. The sample is sandwiched between two disks of a reference material (single crystal quartz is being used initially as reference material). The heat flow is determined twice in order to determine that steady state conditions prevail; the temperature drop over the two references is measured. When these indicate an equal heat flow, the thermal conductivity of the sample can be calculated from the temperature difference of the two faces. The second technique is for determining effect of temperature in a water saturated shale on a larger scale. Cylindrical shale (or siltstone) specimens that are being studied (large for a laboratory sample) are to be heated electrically at the center, contained in a pressure vessel that will maintain a fixed water pressure around it. The temperature is monitored at many points within the shale sample. The sample dimensions are 25 cm diameter, 20 cm long. A micro computer system has been constructed to monitor 16 thermocouples to record variation of temperature distribution with time

  13. Stochastic Measurement Models for Quantifying Lymphocyte Responses Using Flow Cytometry

    Science.gov (United States)

    Kan, Andrey; Pavlyshyn, Damian; Markham, John F.; Dowling, Mark R.; Heinzel, Susanne; Zhou, Jie H. S.; Marchingo, Julia M.; Hodgkin, Philip D.

    2016-01-01

    Adaptive immune responses are complex dynamic processes whereby B and T cells undergo division and differentiation triggered by pathogenic stimuli. Deregulation of the response can lead to severe consequences for the host organism ranging from immune deficiencies to autoimmunity. Tracking cell division and differentiation by flow cytometry using fluorescent probes is a major method for measuring progression of lymphocyte responses, both in vitro and in vivo. In turn, mathematical modeling of cell numbers derived from such measurements has led to significant biological discoveries, and plays an increasingly important role in lymphocyte research. Fitting an appropriate parameterized model to such data is the goal of these studies but significant challenges are presented by the variability in measurements. This variation results from the sum of experimental noise and intrinsic probabilistic differences in cells and is difficult to characterize analytically. Current model fitting methods adopt different simplifying assumptions to describe the distribution of such measurements and these assumptions have not been tested directly. To help inform the choice and application of appropriate methods of model fitting to such data we studied the errors associated with flow cytometry measurements from a wide variety of experiments. We found that the mean and variance of the noise were related by a power law with an exponent between 1.3 and 1.8 for different datasets. This violated the assumptions inherent to commonly used least squares, linear variance scaling and log-transformation based methods. As a result of these findings we propose a new measurement model that we justify both theoretically, from the maximum entropy standpoint, and empirically using collected data. Our evaluation suggests that the new model can be reliably used for model fitting across a variety of conditions. Our work provides a foundation for modeling measurements in flow cytometry experiments thus
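
    The power-law mean-variance relationship reported above can be estimated by a straight-line fit on log-log axes. The replicate data below are synthetic placeholders used only to show the mechanics of the fit.

      import numpy as np

      rng = np.random.default_rng(3)
      true_means = np.logspace(1, 4, 20)                 # hypothetical cell counts per condition
      # Simulate replicates whose noise SD grows as mean**0.75, i.e. variance ~ mean**1.5.
      replicates = np.array([rng.normal(m, m ** 0.75, 8) for m in true_means])

      means = replicates.mean(axis=1)
      variances = replicates.var(axis=1, ddof=1)
      exponent, intercept = np.polyfit(np.log(means), np.log(variances), 1)
      print("estimated power-law exponent:", round(exponent, 2))   # should be close to 1.5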

  14. Computerized Classification Testing with the Rasch Model

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
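
    The Sequential Probability Ratio Test behind these classification CATs accumulates a log-likelihood ratio between "ability above the cut region" and "ability below it" item by item, and stops as soon as a decision bound is crossed. A minimal sketch for dichotomous Rasch-scored items; the thresholds, error rates and item difficulties are illustrative assumptions.

      import math

      def sprt_classify(responses, difficulties, theta_low, theta_high, alpha=0.10, beta=0.10):
          # Wald's SPRT: decide "above" or "below" the cut region, else keep testing.
          upper = math.log((1.0 - beta) / alpha)
          lower = math.log(beta / (1.0 - alpha))

          def p(theta, b):            # Rasch probability of a correct response
              return 1.0 / (1.0 + math.exp(-(theta - b)))

          log_lr = 0.0
          for x, b in zip(responses, difficulties):
              p_hi, p_lo = p(theta_high, b), p(theta_low, b)
              log_lr += math.log(p_hi / p_lo) if x == 1 else math.log((1.0 - p_hi) / (1.0 - p_lo))
              if log_lr >= upper:
                  return "above cut score"
              if log_lr <= lower:
                  return "below cut score"
          return "continue testing"

      print(sprt_classify([1, 1, 0, 1, 1, 1], [0.0, 0.2, -0.3, 0.5, 0.1, 0.4], -0.5, 0.5))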

  15. Teacher Competency in Classroom Testing, Measurement Preparation, and Classroom Testing Practices.

    Science.gov (United States)

    Newman, Dorothy C.; Stallings, William M.

    An assessment instrument and a questionnaire (Appendices A and B) were developed to determine how well teachers understand classroom testing principles and to gain information on the measurement preparation and classroom practices of teachers. Two hundred ninety-four inservice teachers, grades 1 through 12, from three urban school systems in…

  16. Reliability of skin testing as a measure of nutritional state

    International Nuclear Information System (INIS)

    Forse, R.A.; Christou, N.; Meakins, J.L.; MacLean, L.D.; Shizgal, H.M.

    1981-01-01

    The reliability of skin testing to assess the nutritional state was evaluated in 257 patients who received total parenteral nutrition (TPN). The nutritional state was assessed by determining body composition, by multiple-isotope dilution. Immunocompetence was simultaneously evaluated by skin testing with five recall antigens. These measurements were carried out before and at two-week intervals during TPN. A statistically significant relationship existed between the response to skin testing and the nutritional state. A body composition consistent with malnutrition was present in the anergic patients, while body composition was normal in the patients who reacted normally to skin testing. However, a considerable overlap existed as 43% of the reactive patients were malnourished, and 21% of the anergic patients were normally nourished. Thirty-seven (43%) of the 86 anergic patients converted and became reactive during TPN, and their body composition improved significantly. The remaining 49 anergic patients (57%) did not convert, and their body composition did not change despite similar nutritional support. The principal difference between the two groups of anergic patients was the nature of the therapy administered. In the anergic patients who converted, therapy was aggressive and appropriate, and clinical improvement occurred in 23 (62.2%) of the patients, with a mortality of 5.4%. In the 49 patients who remained anergic, therapy was often inappropriate or unsuccessful, with clinical improvement in only three (6.1%) of the patients and a mortality of 42.8%. The data demonstrated a significant relationship between the response to skin testing and the nutritional state. However, because of the wide overlap, skin testing does not accurately assess a person's nutritional state. The persistence of the anergic state is indicative of a lack of response to therapy

  17. Use of electromyography measurement in human body modeling

    Directory of Open Access Journals (Sweden)

    Valdmanová L.

    2011-06-01

    Full Text Available The aim of this study is to test the use of a human body model for muscle activity computation. This paper compares measured and simulated muscle activities. Active states of the biceps brachii muscle are monitored by electromyography (EMG) in a given position and for subsequently increasing loads. The same conditions are used for simulation with a human body model (Hynčík, L., Rigid Body Based Human Model for Crash Test Purposes, Engineering Mechanics, 5 (8) (2001) 1–6). This model consists of rigid body segments connected by kinematic joints and involves all major muscle bundles. Biceps brachii active states are evaluated by a special muscle balance solver. The simulation results show acceptable correlation with the experimental results, and the analysis shows that the validation procedure for determining muscle activities is usable.

  18. Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.; Kennedy, C.

    2010-10-01

    Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences between the laboratories of 0.007 in hemispherical reflectance and 0.004 in specular reflectance. These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which a better accuracy is expected. Improved instruments and reference standards are needed to reach the necessary accuracy for cost and efficiency calculations.

  19. Measuring and testing awareness of emotional face expressions.

    Science.gov (United States)

    Sandberg, Kristian; Bibby, Bo Martin; Overgaard, Morten

    2013-09-01

    Comparison of behavioural measures of consciousness has attracted much attention recently. In a recent article, Szczepanowski et al. conclude that confidence ratings (CR) predict accuracy better than both the perceptual awareness scale (PAS) and post-decision wagering (PDW) when using stimuli with emotional content (fearful vs. neutral faces). Although we find the study interesting, we disagree with the conclusion that CR is superior to PAS because of two methodological issues. First, the conclusion is not based on a formal test. We performed this test and found no evidence that CR predicted accuracy better than PAS (p=.4). Second, Szczepanowski et al. used the present version of PAS in a manner somewhat different from how it was originally intended, and the participants may not have been adequately instructed. We end our commentary with a set of recommendations for future studies using PAS. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Test-retest reliability of Attention Network Test measures in schizophrenia.

    Science.gov (United States)

    Hahn, Eric; Ta, Thi Minh Tam; Hahn, Constanze; Kuehl, Linn K; Ruehl, Claudia; Neuhaus, Andres H; Dettling, Michael

    2011-12-01

    The Attention Network Test (ANT) is a well established behavioral measure in neuropsychological research to assess three different facets of selective attention, i.e., alerting, orienting, and conflict processing. Although the ANT has been applied in healthy individuals and various clinical populations, data on retest reliability are scarce in healthy samples and lacking for clinical populations. The objective of the present study was a longitudinal assessment of relevant ANT network measures in healthy controls and schizophrenic patients. Forty-five schizophrenic patients and 55 healthy controls were tested with ANT in a test-retest design with an average interval of 7.4 months between test sessions. Test-retest reliability was analyzed with Pearson and Intra-class correlations. Healthy controls revealed moderate to high test-retest correlations for mean reaction time, mean accuracy, conflict effect, and conflict error rates. In schizophrenic patients, moderate test-retest correlations for mean reaction time, orienting effect, and conflict effect were found. The analysis of error rates in schizophrenic patients revealed very low test-retest correlations. The current study provides converging statistical evidence that the conflict effect and mean reaction time of ANT yield acceptable test-retest reliabilities in healthy controls and, investigated longitudinally for the first time, also in schizophrenia. Obtained differences of alerting and orienting effects in schizophrenia case-control studies should be considered more carefully. The analysis of error rates revealed heterogeneous results and therefore is not recommended for case control studies in schizophrenia. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Innovative testing and measurement solutions for smart grid

    CERN Document Server

    Huang, Qi; Yi, Jianbo; Zhen, Wei

    2015-01-01

    Focuses on sensor applications and smart meters in the newly developing interconnected smart grid Presents the most updated technological developments in the measurement and testing of power systems within the smart grid environment Reflects the modernization of electric utility power systems with the extensive use of computer, sensor, and data communications technologies, providing benefits to energy consumers and utility companies alike The leading author heads a group of researchers focusing on

  2. Skin test reactivity among Danish children measured 15 years apart

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Porsbjerg, C

    2006-01-01

    BACKGROUND: Knowledge of secular trends in the prevalence of allergy among children stems in large part from questionnaire surveys, whereas repeated cross-sectional studies using objective markers of atopic sensitization are sparse. OBJECTIVES: To investigate whether the prevalence of skin prick...... (n = 527) and the second in 2001 (n = 480). Skin test reactivity to nine common aeroallergens was measured on both occasions. RESULTS: The prevalence of positive SPT to at least one allergen decreased from 24.1% in 1986 to 18.9% in 2001 (p = 0.05). We found a declining prevalence of sensitization...

  3. Standard Test Method for Measured Speed of Oil Diffusion Pumps

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1982-01-01

    1.1 This test method covers the determination of the measured speed (volumetric flow rate) of oil diffusion pumps. 1.2 The values stated in inch-pound units are to be regarded as the standard. The metric equivalents of inch-pound units may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  4. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  5. [Using the Implicit Association Test (IAT) to measure implicit shyness].

    Science.gov (United States)

    Aikawa, Atsushi; Fujii, Tsutomu

    2011-04-01

    Previous research has shown that implicitly measured shyness predicted spontaneous shy behavior in social situations, while explicit self-ratings of shyness predicted controlled shy behavior (Asendorpf, Banse, & Mücke, 2002). The present study examined whether these same results would be replicated in Japan. In Study 1, college students (N=47) completed a shyness Implicit Association Test (IAT for shyness) and explicit self-ratings of shyness. In Study 2, friends (N=69) of the Study 1 participants rated those participants on various personality scales. Covariance structure analysis revealed that only implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). Also, only explicit self-concept predicted other-rated low praise seeking (controlled shy behavior). The results of this study are similar to the findings of the previous research.

  6. Modeling and testing treated tumor growth using cubic smoothing splines.

    Science.gov (United States)

    Kong, Maiying; Yan, Jun

    2011-07-01

    Human tumor xenograft models are often used in preclinical study to evaluate the therapeutic efficacy of a certain compound or a combination of certain compounds. In a typical human tumor xenograft model, human carcinoma cells are implanted into subjects such as severe combined immunodeficient (SCID) mice. Treatment with test compounds is initiated after a tumor nodule has appeared, and continued for a certain time period. Tumor volumes are measured over the duration of the experiment. It is well known that untreated tumor growth may follow certain patterns, which can be described by certain mathematical models. However, the growth patterns of the treated tumors with multiple treatment episodes are quite complex, and the usage of parametric models is limited. We propose using cubic smoothing splines to describe tumor growth for each treatment group and for each subject, respectively. The proposed smoothing splines are quite flexible in modeling different growth patterns. In addition, using this procedure, we can obtain tumor growth and growth rate over time for each treatment group and for each subject, and examine whether tumor growth follows a certain growth pattern. To examine the overall treatment effect and group differences, the scaled chi-squared test statistics based on the fitted group-level growth curves are proposed. A case study is provided to illustrate the application of this method, and simulations are carried out to examine the performance of the scaled chi-squared tests. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
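
    As an illustration of the spline-based description (not the authors' implementation), the sketch below fits a cubic smoothing spline to one subject's tumor volumes with SciPy and returns the fitted curve and its growth rate; the log transform and the names used are assumptions.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def fit_growth_curve(days, volumes, smoothing=None):
          """Fit a cubic smoothing spline to tumor volumes measured on
          the given days and return the fitted curve and growth rate."""
          spline = UnivariateSpline(days, np.log(volumes), k=3, s=smoothing)
          growth = lambda t: np.exp(spline(t))   # fitted volume over time
          rate = spline.derivative()             # d(log volume)/dt
          return growth, rate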

  7. Item Construction Using Reflective, Formative, or Rasch Measurement Models: Implications for Group Work

    Science.gov (United States)

    Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew

    2017-01-01

    Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…

  8. Perceived game realism: a test of three alternative models.

    Science.gov (United States)

    Ribbens, Wannes

    2013-01-01

    Perceived realism is considered a key concept in explaining the mental processing of media messages and the societal impact of media. Despite its importance, little is known about its conceptualization and dimensional structure, especially with regard to digital games. The aim of this study was to test a six-factor model of perceived game realism comprised of simulational realism, freedom of choice, perceptual pervasiveness, social realism, authenticity, and character involvement and to assess it against an alternative single- and five-factor model. Data were collected from 380 male digital game users who judged the realism of the first-person shooter Half-Life 2 based upon their previous experience with the game. Confirmatory factor analysis was applied to investigate which model fits the data best. The results support the six-factor model over the single- and five-factor solutions. The study contributes to our knowledge of perceived game realism by further developing its conceptualization and measurement.

  9. Catheter-based flow measurements in hemodialysis fistulas - Bench testing and clinical performance

    DEFF Research Database (Denmark)

    Heerwagen, Søren T; Lönn, Lars; Schroeder, Torben V

    2012-01-01

    Purpose: The purpose of this study was to perform bench and clinical testing of a catheter-based intravascular system capable of measuring blood flow in hemodialysis vascular accesses during endovascular procedures. Methods: We tested the Transonic ReoCath Flow Catheter System which uses...... the thermodilution method. A simulated vascular access model was constructed for the bench test. In total, 1960 measurements were conducted and the results were used to determine the accuracy and precision of the catheters, the effects of external factors (e.g., catheter placement, injection duration), and to test....... Blood flow measurements provide unique information on the hemodynamic status of a vascular access and have the potential to optimize results of interventions....

  10. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
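
    A minimal sketch of the kind of mixed-effects fit described, using statsmodels, is shown below; the column names and the random-intercept structure are assumptions for illustration, and the paper's model selection scheme is not reproduced.

      import statsmodels.formula.api as smf

      def fit_repeated_measures(df):
          """Fit a linear mixed-effects model with a random intercept per
          subject; the treatment-by-time interaction carries the overall
          treatment effect across the time spectrum."""
          model = smf.mixedlm("response ~ time * treatment", data=df,
                              groups=df["subject"])
          return model.fit(reml=False)  # ML fit so nested models can be compared

    Candidate mean and covariance structures could then be compared (for example by information criteria) and the treatment test taken from the selected model, in the spirit of the strategy described above.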

  11. Combination of the H1 and ZEUS inclusive cross-section measurements at proton beam energies of 460 GeV and 575 GeV and tests of low Bjorken-x phenomenological models

    Energy Technology Data Exchange (ETDEWEB)

    Belov, Pavel

    2013-06-15

    A combination is presented of the inclusive neutral current e±p scattering cross section data collected by the H1 and ZEUS collaborations during the last months of the HERA II operation period with proton beam energies E_p of 460 and 575 GeV. The kinematic range of the cross section data covers low absolute four-momentum transfers squared, 1.5 GeV² ≤ Q² ≤ 110 GeV², small values of Bjorken-x, 2.8·10⁻⁵ ≤ x ≤ 1.5·10⁻², and high inelasticity y ≤ 0.85. The combination algorithm is based on the method of least squares and takes into account correlations of the systematic uncertainties. The combined data are used in the QCD fits to extract the parton distribution functions. The phenomenological low-x dipole models are tested and parameters of the models are obtained. A good description of the data by the dipole model taking into account the evolution of the gluon distribution is observed. The longitudinal structure function F_L is extracted from the combination of the currently used H1 and ZEUS reduced proton beam energy data with previously published H1 nominal proton beam energy data of 920 GeV. The precision of the obtained values of F_L is improved at medium Q² compared to the published results of the H1 collaboration.

  12. Observational Tests of Magnetospheric Accretion Models in Young Stars

    Directory of Open Access Journals (Sweden)

    Johns–Krull Christopher M.

    2014-01-01

    Full Text Available Magnetically controlled accretion of disk material onto the surface of Classical T Tauri stars is the dominant paradigm in our understanding of how these young stars interact with their surrounding disks. These stars provide a powerful test of magnetically controlled accretion models since all of the relevant parameters, including the magnetic field strength and geometry, are in principle measurable. Both the strength and the field geometry are key for understanding how these stars interact with their disks. This talk will focus on recent advances in magnetic field measurements on a large number of T Tauri stars, as well as very recent studies of the accretion rates onto a sample of young stars in NGC 2264 with known rotation periods. We discuss how these observations provide critical tests of magnetospheric accretion models which predict a rotational equilibrium is reached. We find good support for the model predictions once the complex geometry of the stellar magnetic field is taken into account. We will also explore how the observations of the accretion properties of the NGC 2264 cluster stars can be used to test emerging ideas on how magnetic fields on young stars are generated and organized as a function of their internal structure (i.e., the presence of a radiative core). We do not find support for the hypothesis that large changes in the magnetic field geometry occur when a radiative core appears in these young stars.

  13. ATLAS Standard Model Measurements Using Jet Grooming and Substructure

    CERN Document Server

    Ucchielli, Giulia; The ATLAS collaboration

    2017-01-01

    Boosted topologies allow Standard Model processes to be explored in kinematical regimes never tested before. In such challenging LHC environments, standard reconstruction techniques quickly reach their limits. Targeting hadronic final states means properly reconstructing the energy and multiplicity of the jets in the event. In order to identify the decay products of boosted objects, i.e. W bosons, $t\bar{t}$ pairs or Higgs bosons produced in association with $t\bar{t}$ pairs, the ATLAS experiment is currently exploiting several algorithms using jet grooming and jet substructure. This contribution mainly covers the following ATLAS measurements: $t\bar{t}$ differential production cross sections and jet mass using the soft drop procedure. Standard Model measurements offer the perfect field to test the performance of new jet tagging techniques, which will become even more important in the search for new physics in highly boosted topologies.

  14. Electrostatic sensor modeling for torque measurements

    Science.gov (United States)

    Mika, Michał; Dannert, Mirjam; Mett, Felix; Weber, Harry; Mathis, Wolfgang; Nackenhorst, Udo

    2017-09-01

    Torque load measurements play an important part in various engineering applications, for example in the automotive industry, where the drive torque of a motor has to be determined. A widely used measuring method is strain gauges: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. When the torque load causes a deformation, the change in electrical resistance is measured, and in combination with constitutive equations the applied torque load is determined from that change. The creep of the glue and the foil material, together with the temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, an electrostatic, geometrical and mechanical model of this setup has been developed.

  15. Electrostatic sensor modeling for torque measurements

    Directory of Open Access Journals (Sweden)

    M. Mika

    2017-09-01

    Full Text Available Torque load measurements play an important part in various engineering applications, for example in the automotive industry, where the drive torque of a motor has to be determined. A widely used measuring method is strain gauges: a thin flexible foil, which supports a metallic pattern, is glued to the surface of the object the torque is applied to. When the torque load causes a deformation, the change in electrical resistance is measured, and in combination with constitutive equations the applied torque load is determined from that change. The creep of the glue and the foil material, together with the temperature and humidity dependence, may become an obstacle for some applications (Kapralov and Fesenko, 1984). Thus, optical and magnetic, as well as capacitive, sensors have been introduced. This paper discusses the general idea behind an electrostatic capacitive sensor based on a simple draft of an exemplary measurement setup. For better understanding, an electrostatic, geometrical and mechanical model of this setup has been developed.

  16. Quality of Life: Meaning, Measurement, and Models

    Science.gov (United States)

    1992-05-01

    Quality of Life: Meaning, Measurement, and Models. Elyse W. Kerce, Navy Personnel Research and Development Center, San Diego, California 92152-6800; NPRDC-TN-92-15; AD-A250 813; May 1992. Approved for public release; distribution is unlimited. The report examines quality of life in terms of its meaning, measurement, and models, considering demographic measures such as occupation of head of household, education, religion, and sex, and drawing on the Rosen and Moghadam (1988) study of the quality of life of Army wives.

  17. Towards testing the "honeycomb rippling model" in cerrado.

    Science.gov (United States)

    Gonçalves, C S; Batalha, M A

    2011-05-01

    Savannas are tropical formations in which trees and grasses coexist. According to the "honeycomb rippling model", inter-tree competition leads to an effect of trees growing and dying due to competition, which, at fine spatial scale, would resemble honeycomb rippling. The model predicts that the taller the trees, the higher the inter-tree distances and the evenness of inter-tree distances. The model had been corroborated in arid savannas, where the effect appears to be caused by the uneven distribution of rains, but had not yet been tested in seasonal savannas, such as the cerrado, where the effect could instead be caused by the irregular occurrence of fire. A basic assumption of the model is that strong inter-tree competition affects growth (estimated by height) and mortality (estimated by inter-tree distances). As a first step towards testing this model in the cerrado, we tested this assumption in a single cerrado patch in southeastern Brazil. We placed 80 quadrats, each one with 25 m², in which we sampled all shrubs and trees. For each individual, we measured its height and the distance to its nearest neighbour--the inter-tree distance. We did not find correlations between tree height and both inter-tree distances and evenness of inter-tree distances, refuting the honeycomb rippling model. Inter-tree distances were spatially autocorrelated, but height was not. According to our results, the basic assumption of the model does not apply to seasonal savannas. Whereas in arid savannas rainfall events are rare and unpredictable, in seasonal savannas the rainy season is well-defined and rainfall is considerable. We found horizontal structuring in the community, which may be due to soil nutrient heterogeneity. The absence of vertical structuring suggests that competition for light among adult trees is not as important as competition for nutrients in the soil. We tested the basic assumption of the model in a single patch and at a single moment. To test the model effectively, we suggest this assumption to

  18. Tests of the validity of a model relating frequency of contaminated items and increasing radiation dose

    International Nuclear Information System (INIS)

    Tallentire, A.; Khan, A.A.

    1975-01-01

    The ⁶⁰Co radiation response of Bacillus pumilus E601 spores has been characterized when present in a laboratory test system. The suitability of test vessels to act as both containers for irradiation and culture vessels in sterility testing has been checked. Tests have been done with these spores to verify assumptions basic to the general model described in a previous paper. First measurements indicate that the model holds with this laboratory test system. (author)

  19. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  20. Hydrogen recycle modeling and measurements in tokamaks

    International Nuclear Information System (INIS)

    Howe, H.C.

    1980-01-01

    A model for hydrogen recycling developed for use in a tokamak transport code is described and compared with measurements on ISX-B and DITE. The model includes kinetic reflection of charge-exchange neutrals from the wall and deposition, thermal diffusion, and desorption processes in the wall. In a tokamak with a limiter, the inferred recycle coefficient of 0.9-1.0 is due primarily to reflection (0.8-0.9) with the remainder (0.1-0.2) being due to desorption. Laboratory experiments supply much of the data for the model and several areas are discussed where additional data are needed, such as reflection from hydrogen-loaded walls at low (≈100 eV) energy. Simulation of ISX-B shows that the recently observed density decrease with neutral beam injection may be partially due to a decrease in recycling caused by hardening of the charge-exchange flux incident on the wall from the plasma. Modeling of isotopic exchange in DITE indicates the need for an ion-induced desorption process which responds on a timescale shorter than the wall thermal diffusion time. (orig.)

  1. Development of multiple choice pictorial test for measuring the dimensions of knowledge

    Science.gov (United States)

    Nahadi, Siswaningsih, Wiwi; Erna

    2017-05-01

    This study aims to develop a multiple choice pictorial test as a tool to measure the dimensions of knowledge in the chemical equilibrium subject. The method used is Research and Development with validation, conducted through preliminary studies and model development. The product is a multiple choice pictorial test. The test comprised 22 items and was administered to 64 high school students in grade XII. The quality of the test was determined by its validity, reliability, difficulty index, discrimination power, and distractor effectiveness. Validity was determined by CVR calculation using 8 validators (4 university teachers and 4 high school teachers), with an average CVR value of 0.89. Reliability was in the very high category, with a value of 0.87. Discrimination power was very good for 32% of the items, good for 59%, and sufficient for 20%. The test has a varying level of difficulty: 23% of the items are in the difficult category, 50% in the medium category, and 27% in the easy category. Distractor effectiveness was very poor for 1% of the items, poor for 1%, medium for 4%, good for 39%, and very good for 55%. The dimensions of knowledge measured consist of factual knowledge, conceptual knowledge, and procedural knowledge. Based on the questionnaire, students responded quite well to the developed test, and most students preferred this kind of multiple choice pictorial test, which includes pictures as an evaluation tool, to narration tests dominated by text.
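
    For reference, the classical item statistics named in the abstract (difficulty index and discrimination power) can be computed as in the sketch below; these are the standard classical-test-theory formulas and not necessarily the authors' exact procedure.

      import numpy as np

      def item_statistics(scores):
          """scores: 2-D array (students x items) of 0/1 responses."""
          scores = np.asarray(scores)
          difficulty = scores.mean(axis=0)  # proportion correct per item

          # Discrimination: upper 27% minus lower 27% proportion correct.
          totals = scores.sum(axis=1)
          order = np.argsort(totals)
          k = max(1, int(round(0.27 * len(totals))))
          lower, upper = scores[order[:k]], scores[order[-k:]]
          discrimination = upper.mean(axis=0) - lower.mean(axis=0)
          return difficulty, discrimination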

  2. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Full Text Available The image based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the process evaluation. The authors tested the method of SFM (Structure from Motion) to determine the stability of gabion walls. The results of photogrammetric measurements were compared to precise geodetic point measurements.

  3. Operational Testing of Satellite based Hydrological Model (SHM)

    Science.gov (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.

    2017-04-01

    gauging sites as reference, viz., Muri, Jamshedpur and Ghatshila. Individual model set-up has been prepared for these sub-basins and calibration and validation using Split-sample test, first level of operational testing scheme is in progress. Subsequently for geographic transposability, Proxy-basin test will be done using Muri and Jamshedpur as proxy basins. Climatic transposability will be tested for dry and wet years using Differential split-sample test. For incorporating both geographic and climatic transposability Proxy-basin differential split sample test will be used. For quantitative evaluation of SHM, during Split-sample test Nash-Sutcliffe efficiency (NSE), Coefficient of Determination (R²) and Percent BIAS (PBIAS) are being used. However, for transposability, a productive approach involving these performance measures, i.e. NSE*R²*PBIAS, will be used to decide the best value of parameters. Keywords: SHM, credibility, operational testing, transposability.
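
    For reference, the performance measures named above are commonly defined as in the following sketch (standard hydrological definitions, not the SHM code itself); sign conventions for PBIAS vary between authors.

      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
          sim, obs = np.asarray(sim), np.asarray(obs)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def pbias(sim, obs):
          """Percent bias; with this (obs - sim) convention, positive
          values indicate underestimation by the model."""
          sim, obs = np.asarray(sim), np.asarray(obs)
          return 100.0 * np.sum(obs - sim) / np.sum(obs)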

  4. Model year 2010 Ford Fusion Level-1 testing report.

    Energy Technology Data Exchange (ETDEWEB)

    Rask, E.; Bocci, D.; Duoba, M.; Lohse-Busch, H.; Energy Systems

    2010-11-23

    As a part of the US Department of Energy's Advanced Vehicle Testing Activity (AVTA), a model year 2010 Ford Fusion was procured by eTec (Phoenix, AZ) and sent to ANL's Advanced Powertrain Research Facility for the purposes of vehicle-level testing in support of the Advanced Vehicle Testing Activity. Data was acquired during testing using non-intrusive sensors, vehicle network information, and facilities equipment (emissions and dynamometer). Standard drive cycles, performance cycles, steady-state cycles, and A/C usage cycles were conducted. Much of this data is openly available for download in ANL's Downloadable Dynamometer Database. The major results are shown in this report. Given the benchmark nature of this assessment, the majority of the testing was done over standard regulatory cycles and sought to obtain a general overview of how the vehicle performs. These cycles include the US FTP cycle (Urban) and Highway Fuel Economy Test cycle as well as the US06, a more aggressive supplemental regulatory cycle. Data collection for this testing was kept at a fairly high level and includes emissions and fuel measurements from an exhaust emissions bench, high-voltage and accessory current/voltage from a DC power analyzer, and CAN bus data such as engine speed, engine load, and electric machine operation. The following sections will seek to explain some of the basic operating characteristics of the MY2010 Fusion and provide insight into unique features of its operation and design.

  5. Modeling ramp-hold indentation measurements based on Kelvin-Voigt fractional derivative model

    Science.gov (United States)

    Zhang, Hongmei; zhe Zhang, Qing; Ruan, Litao; Duan, Junbo; Wan, Mingxi; Insana, Michael F.

    2018-03-01

    Interpretation of experimental data from micro- and nano-scale indentation testing is highly dependent on the constitutive model selected to relate measurements to mechanical properties. The Kelvin-Voigt fractional derivative model (KVFD) offers a compact set of viscoelastic features appropriate for characterizing soft biological materials. This paper provides a set of KVFD solutions for converting indentation testing data acquired for different geometries and scales into viscoelastic properties of soft materials. These solutions, which are mostly in closed-form, apply to ramp-hold relaxation, load-unload and ramp-load creep-testing protocols. We report on applications of these model solutions to macro- and nano-indentation testing of hydrogels, gastric cancer cells and ex vivo breast tissue samples using an atomic force microscope (AFM). We also applied KVFD models to clinical ultrasonic breast data using a compression plate as required for elasticity imaging. Together the results show that KVFD models fit a broad range of experimental data with a correlation coefficient typically R² > 0.99. For hydrogel samples, estimation of KVFD model parameters from test data using spherical indentation versus plate compression as well as ramp relaxation versus load-unload compression all agree within one standard deviation. Results from measurements made using macro- and nano-scale indentation agree in trend. For gastric cell and ex vivo breast tissue measurements, KVFD moduli are, respectively, 1/3-1/2 and 1/6 of the elasticity modulus found from the Sneddon model. In vivo breast tissue measurements yield model parameters consistent with literature results. The consistency of results found for a broad range of experimental parameters suggest the KVFD model is a reliable tool for exploring intrinsic features of the cell/tissue microenvironments.
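
    For orientation, the Kelvin-Voigt fractional derivative constitutive law referred to above is commonly written (a standard form paraphrased here, not quoted from the paper) as

      $\sigma(t) = E\,\varepsilon(t) + E\,\tau^{\alpha}\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}}, \qquad 0 \le \alpha \le 1,$

    where $E$ is an elastic modulus, $\tau$ a characteristic time, and $d^{\alpha}/dt^{\alpha}$ a fractional derivative; $\alpha = 1$ recovers the classical Kelvin-Voigt solid, while $\alpha \to 0$ gives a purely elastic response.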

  6. Tipjet 80-inch Model Rotor Hover Test: Test No. 1198

    Science.gov (United States)

    1993-09-01

    Pressure was measured with a Druck PPCR 920-series 100-psig transducer (S/N 238784), calibrated to 70 psig (about 85 psia) in 7-psi increments; pressure in the rotor mast plenum was measured with a Druck PDCR 920-series 30-psig transducer (S/N 227480), calibrated to 30 psig (about 45 psia) in 3.0-psi increments.

  7. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
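
    As an illustration of the zoned fitting described above, the sketch below fits a skew-normal error distribution by maximum likelihood to SMBG errors whose reference glucose falls in one zone; the zone boundaries, the relative-error switch, and the names are assumptions, not values from the paper.

      import numpy as np
      from scipy import stats

      def fit_zone_error_model(reference, smbg, lo, hi, relative=False):
          """Fit a skew-normal PDF to SMBG errors in the zone [lo, hi)."""
          reference, smbg = np.asarray(reference), np.asarray(smbg)
          mask = (reference >= lo) & (reference < hi)
          err = smbg[mask] - reference[mask]
          if relative:                 # zone with constant-SD relative error
              err = err / reference[mask]
          a, loc, scale = stats.skewnorm.fit(err)   # maximum-likelihood fit
          return a, loc, scale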

  8. Penetration Testing Model for Web sites Hosted in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohd Dzul Aiman Aslan; Mohamad Safuan Sulaiman; Siti Nurbahyah Hamdan; Saaidi Ismail; Mohd Fauzi Haris; Norzalina Nasiruddin; Raja Murzaferi Mokhtar

    2012-01-01

    Nuclear Malaysia web sites have been crucial in providing important and useful information and services to clients and users worldwide. Furthermore, a web site is important because it reflects the organisation's image. To ensure the integrity of the web site content, a study was made and a penetration testing model was implemented to test the security of several web sites hosted at Nuclear Malaysia against malicious attempts. This study explains in detail how the security was tested and measured. The result determined the security level and the vulnerability of several web sites and is important for improving and hardening the security of web sites in Nuclear Malaysia. (author)

  9. Bayesian Test of Significance for Conditional Independence: The Multinomial Model

    Directory of Open Access Journals (Sweden)

    Pablo de Morais Andrade

    2014-03-01

    Full Text Available Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, as an alternative to the frequentist's significance tests (characterized by the calculation of the p-value).

  10. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…

  11. Estimating a DIF decomposition model using a random-weights linear logistic test model approach.

    Science.gov (United States)

    Paek, Insu; Fukuhara, Hirotaka

    2015-09-01

    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.

  12. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    Science.gov (United States)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.

    2018-03-01

    A method is outlined and tested to detect low level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to a determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.
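
    The screening idea described above can be pictured as sliding the simulated source signature along the measured concentration series and recording a goodness-of-fit metric for each alignment. The sketch below uses the squared correlation as that metric; it is an illustration of the general idea, not the authors' metrics or code.

      import numpy as np

      def signature_metric_series(measured, signature):
          """Return a series of R^2 values between the simulated signature
          and successive segments of the measured concentration series."""
          measured, signature = np.asarray(measured), np.asarray(signature)
          n, m = len(measured), len(signature)
          metrics = np.full(n - m + 1, np.nan)
          for start in range(n - m + 1):
              segment = measured[start:start + m]
              if segment.std() > 0 and signature.std() > 0:
                  r = np.corrcoef(segment, signature)[0, 1]
                  metrics[start] = r ** 2
          return metrics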

  13. A multivariate multilevel approach to the modeling of accuracy and speed of test takers

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, Gerardus J.A.; van der Linden, Willem J.

    2009-01-01

    Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, the model

  14. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (get the code to run on a computer), calibration (identify the parameters), evaluation (confronting

  15. Measurement of Masses in SUGRA Models at LHC

    CERN Document Server

    Bachacou, H; Paige, FE

    1999-01-01

    This note presents new measurements at "Point 5" in the minimal SUGRA model with $m_0 = 100\,\mathrm{GeV}$, $m_{1/2} = 300\,\mathrm{GeV}$, $A_0 = 0$, $\tan\beta = 2$, and $\mathrm{sgn}\,\mu = +$, based on four-body distributions from three-step decays and on minimum masses in such decays. These measurements allow masses to be determined without relying on a model. This note also contains an estimate of the possible statistical errors on the dilepton endpoint. Slepton universality can be tested at the $\sim 0.1\%$ level at high luminosity. In addition the effect of enlarging the parameter space of the minimal SUGRA model is discussed. The direct production of left handed sleptons and the non-observation of additional structure in the dilepton invariant mass distributions is shown to produce powerful additional constraints.

  16. Diagnostic Measures for the Cox Regression Model with Missing Covariates.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Chen, Ming-Hui

    2015-12-01

    This paper investigates diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for the Cox regression model. Our diagnostics include case-deletion measures, conditional martingale residuals, and score residuals. The Q-distance is proposed to examine the effects of deleting individual observations on the estimates of finite-dimensional and infinite-dimensional parameters. Conditional martingale residuals are used to construct goodness of fit statistics for testing possible misspecification of the model assumptions. A resampling method is developed to approximate the p-values of the goodness of fit statistics. Simulation studies are conducted to evaluate our methods, and a real data set is analyzed to illustrate their use.

  17. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  19. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  20. Experimental testing facilities for ultrasonic measurements in heavy liquid metal

    International Nuclear Information System (INIS)

    Cojocaru, V.; Ionescu, V.; Nicolescu, D.; Nitu, A.

    2016-01-01

    The thermo-physical properties of Heavy Liquid Metals (HLM), such as lead or its alloy Lead Bismuth Eutectic (LBE), make them attractive coolant candidates for advanced nuclear systems. The opaqueness common to all liquid metals rules out optical methods. For this reason ultrasound waves are used in different applications of heavy liquid metal technology, for example for flow and velocity measurements and for inspection techniques. The practical use of ultrasound in heavy liquid metals still needs to be demonstrated by experiments, a goal that requires a heavy liquid metal technology facility especially adapted to this task. This paper presents an experimental testing facility for investigating the acoustic properties of heavy liquid metals, designed and constructed at RATEN ICN. (authors)

  1. Should Soil Testing Services Measure Soil Biological Activity?

    Directory of Open Access Journals (Sweden)

    Alan J. Franzluebbers

    2016-02-01

    Full Text Available The health of agricultural soils depends largely on conservation management to promote soil organic matter accumulation. Total soil organic matter changes slowly, but active fractions are more dynamic. A key indicator of healthy soil is potential biological activity, which could be measured rapidly by soil testing services via the flush of CO2 during 1 to 3 d following rewetting of dried soil. The flush of CO2 is related to soil microbial biomass C and has repeatedly been shown to be strongly related to net N mineralization during standard aerobic incubations. New research is documenting a close association with plant N uptake under semicontrolled greenhouse conditions (r = 0.77, n = 36). Field calibrations are underway to relate the flush of CO2 to the in-season N requirement of a variety of crops. An index of soil biological activity can and should be determined to help predict soil health and soil N availability.

  2. Transport services quality measurement using SERVQUAL model

    Directory of Open Access Journals (Sweden)

    Maksimović Mlađan V.

    2017-01-01

    Full Text Available Quality is considered one of the most important phenomena of our age, and the emphasis placed on it continues to grow steadily. Many companies have come to the conclusion that high-quality services can provide them with a potential competitive advantage, leading to superior sales results and profits. The aim of this paper is to test the applicability of the SERVQUAL service-quality dimensions and to measure the quality of services in the public transport of passengers. Based on data obtained by surveying the views of public transport users in Kragujevac using the SERVQUAL methodology, and on statistical analysis of the defined service quality dimensions, this research shows the level of quality of urban transport services in Kragujevac and, on that basis, makes recommendations for improving the quality of service.

  3. A new bone surrogate model for testing interbody device subsidence.

    Science.gov (United States)

    Au, Anthony G; Aiyangar, Ameet K; Anderson, Paul A; Ploeg, Heidi-Lynn

    2011-07-15

    An in vitro biomechanical study investigating interbody device subsidence measures in synthetic vertebrae, polyurethane foam blocks, and human cadaveric vertebrae. To compare subsidence measures of bone surrogates with human vertebrae for interbody devices varying in size/placement. Bone surrogates are alternatives when human cadaveric vertebrae are unavailable. Synthetic vertebrae modeling cortices, endplates, and cancellous bone have been developed as an alternative to polyurethane foam blocks for testing interbody device subsidence. Indentors placed on the endplates of synthetic vertebrae, foam blocks, and human vertebrae were subjected to uniaxial compression. Subsidence, measured with custom-made extensometers, was evaluated for an indentor seated either centrally or peripherally on the endplate. Failure force and indentation stiffness were determined from force-displacement curves. Subsidence measures in human vertebrae varied with indentor placement: failure forces were higher and indentors subsided less with peripheral placement. Subsidence measures in foam blocks were insensitive to indentor size/placement; they were similar to human vertebrae for centrally placed but not for peripherally placed indentors. Although subsidence measures in synthetic vertebrae were sensitive to indentor size/placement, failure force and indentation stiffness were overestimated, and subsidence underestimated, for both centrally placed and peripherally placed indentors. The synthetic endplate correctly represented the human endplate geometry, and thus, failure force, stiffness, and subsidence in synthetic vertebrae were sensitive to indentor size/placement. However, the endplate was overly strong and thus synthetic vertebrae did not accurately model indentor subsidence in human cadaveric vertebrae. Foam blocks captured subsidence measures more accurately than synthetic vertebrae for centrally placed indentors, but because of their uniform density were not sufficiently robust to

  4. Fabrication and Testing of Viscosity Measuring Instrument (Viscometer)

    Directory of Open Access Journals (Sweden)

    A. B. HASSAN

    2006-01-01

    Full Text Available This paper presents the fabrication and testing of a simple and portable viscometer for measuring the bulk viscosity of different Newtonian fluids. It is aimed at making the instrument available in local markets and consequently reducing or eliminating the prohibitive cost of importation. The method employed uses a D.C. motor to rotate a disc with holes through which infra-red light passes to fall on a photo-diode; the resulting signal is amplified and displayed as a deflection on a moving-coil meter. The motor drive is held constant, but the disc speed varies with changes in the viscosity of the fluid being stirred, which alters the signal read on the meter. The faster the disc rotates (higher revolutions per minute), the smaller the deflection on the meter, and vice versa. From the results of tests conducted on various sample fluids, using data on standard Newtonian fluids as a reference, the efficiency of the viscometer was found to be 76.5%.

  5. Measuring and Modeling Shared Visual Attention

    Science.gov (United States)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
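
    The vector construction and the two comparison statistics described above can be sketched compactly. The snippet below is a simplified illustration under assumed parameters (four AOIs, a fixed 50-sample averaging window, random synthetic gaze streams); it is not the authors' implementation.

    # Sketch: shared visual attention from two AOI fixation sequences (assumed parameters, synthetic data).
    import numpy as np
    from scipy.stats import chi2_contingency

    N_AOI, WINDOW, T = 4, 50, 1000
    rng = np.random.default_rng(1)

    def one_hot(seq, n):
        """Turn a sequence of fixated AOI indices into one-hot vectors (T x n)."""
        vec = np.zeros((len(seq), n))
        vec[np.arange(len(seq)), seq] = 1.0
        return vec

    gaze_a = rng.integers(0, N_AOI, T)          # synthetic gaze stream, observer A
    gaze_b = rng.integers(0, N_AOI, T)          # synthetic gaze stream, observer B
    va, vb = one_hot(gaze_a, N_AOI), one_hot(gaze_b, N_AOI)

    for start in range(0, T, WINDOW):
        pa = va[start:start + WINDOW].mean(axis=0)      # proportion of time per AOI, observer A
        pb = vb[start:start + WINDOW].mean(axis=0)      # proportion of time per AOI, observer B
        r = np.corrcoef(pa, pb)[0, 1]                   # windowed correlation of the averaged vectors
        counts = np.vstack([pa, pb]) * WINDOW + 1e-9    # fixation counts as a contingency table
        chi2, p, _, _ = chi2_contingency(counts)        # H0: identical gaze-proportion distributions
        print(f"t={start:4d}  r={r:+.2f}  chi-square p={p:.3f}")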

  6. Fidelity of implementation: development and testing of a measure

    Directory of Open Access Journals (Sweden)

    Wiitala Wyndy

    2010-12-01

    Full Text Available Abstract Background Along with the increasing prevalence of chronic illness has been an increase in interventions, such as nurse case management programs, to improve outcomes for patients with chronic illness. Evidence supports the effectiveness of such interventions in reducing patient morbidity, mortality, and resource utilization, but other studies have produced equivocal results. Often, little is known about how implementation of an intervention actually occurs in clinical practice. While studies often assume that interventions are used in clinical practice exactly as originally designed, this may not be the case. Thus, fidelity of an intervention's implementation reflects how an intervention is, or is not, used in clinical practice and is an important factor in understanding intervention effectiveness and in replicating the intervention in dissemination efforts. The purpose of this paper is to contribute to the understanding of implementation science by (a) proposing a methodology for measuring fidelity of implementation (FOI) and (b) testing the measure by examining the association between FOI and intervention effectiveness. Methods We define and measure FOI based on organizational members' level of commitment to using the distinct components that make up an intervention as they were designed. Semistructured interviews were conducted among 18 organizational members in four medical centers, and the interviews were analyzed qualitatively to assess three dimensions of commitment to use--satisfaction, consistency, and quality--and to develop an overall rating of FOI. Mixed methods were used to explore the association between FOI and intervention effectiveness (inpatient resource utilization and mortality). Results Predictive validity of the FOI measure was supported based on the statistical significance of FOI as a predictor of intervention effectiveness. The strongest relationship between FOI and intervention effectiveness was found when an

  7. Fidelity of implementation: development and testing of a measure.

    Science.gov (United States)

    Keith, Rosalind E; Hopp, Faith P; Subramanian, Usha; Wiitala, Wyndy; Lowery, Julie C

    2010-12-30

    Along with the increasing prevalence of chronic illness has been an increase in interventions, such as nurse case management programs, to improve outcomes for patients with chronic illness. Evidence supports the effectiveness of such interventions in reducing patient morbidity, mortality, and resource utilization, but other studies have produced equivocal results. Often, little is known about how implementation of an intervention actually occurs in clinical practice. While studies often assume that interventions are used in clinical practice exactly as originally designed, this may not be the case. Thus, fidelity of an intervention's implementation reflects how an intervention is, or is not, used in clinical practice and is an important factor in understanding intervention effectiveness and in replicating the intervention in dissemination efforts. The purpose of this paper is to contribute to the understanding of implementation science by (a) proposing a methodology for measuring fidelity of implementation (FOI) and (b) testing the measure by examining the association between FOI and intervention effectiveness. We define and measure FOI based on organizational members' level of commitment to using the distinct components that make up an intervention as they were designed. Semistructured interviews were conducted among 18 organizational members in four medical centers, and the interviews were analyzed qualitatively to assess three dimensions of commitment to use--satisfaction, consistency, and quality--and to develop an overall rating of FOI. Mixed methods were used to explore the association between FOI and intervention effectiveness (inpatient resource utilization and mortality). Predictive validity of the FOI measure was supported based on the statistical significance of FOI as a predictor of intervention effectiveness. The strongest relationship between FOI and intervention effectiveness was found when an alternative measure of FOI was utilized based on

  8. Estimation of Continuous Velocity Model Variations in Rock Deformation Tests.

    Science.gov (United States)

    Flynn, J. W.; Tomas, R.; Benson, P. M.

    2017-12-01

    Seismic interferometry, using either seismic wave coda or ambient noise, is a passive technique to image the sub-surface seismic velocity structure, which directly relates to the physical properties of the material through which the waves travel. The methodology estimates the Green's function for the volume between two seismic stations by cross-correlating long time series of ambient noise recorded at both stations, with the Green's function being effectively the seismogram recorded at one station due to an impulsive or instantaneous energy source at the second station. In laboratory rock deformation experiments, changes in the velocity structure of the rock sample are generally measured through active surveys using an array of AE piezoelectric P-wave transducers, producing a time series of ultrasonic velocities in both axial and radial directions. The velocity information from the active surveys is used to provide a time dependent velocity model for the inversion of AE event source locations. These velocity measurements are carried out at regular intervals throughout the laboratory test, causing the interruption of passive AE monitoring for the length of the surveys. There is therefore a trade-off between the frequency at which the active velocity surveys are carried out to optimise the velocity model and the availability of a complete AE record during the rock deformation test. This study proposes to use noise interferometry to provide a continuous measurement of velocity variations in a rock sample during a laboratory rock deformation experiment without the need to carry out active velocity surveys while simultaneously passively monitoring AE activity. The continuous noise source in this test is an AE transducer fed with a white Gaussian noise signal from a function generator. Data from all AE transducers is continuously acquired and recorded during the deformation experiment. The cross correlation of the continuous AE record is used to produce a continuous velocity
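
    At its core, the approach cross-correlates windows of continuous noise recorded at two receivers and tracks shifts of the correlation peak as a proxy for velocity change (dv/v ≈ -dt/t). The bare-bones sketch below uses synthetic signals and an assumed sampling rate purely to illustrate the processing chain; it is not the acquisition or processing setup used in the experiments.

    # Sketch: travel-time monitoring by cross-correlating continuous noise (synthetic, assumed parameters).
    import numpy as np

    fs = 1_000_000                       # sampling rate in Hz (assumed for an ultrasonic AE setup)
    n = 10_000                           # samples per correlation window
    rng = np.random.default_rng(2)

    def record_pair(delay_s):
        """Synthetic noise at sensor 1 and its delayed copy at sensor 2."""
        src = rng.normal(size=n)
        return src, np.roll(src, int(delay_s * fs))

    for label, delay in [("baseline", 20e-6), ("deformed", 22e-6)]:
        s1, s2 = record_pair(delay)
        xcorr = np.correlate(s2, s1, mode="full")        # empirical Green's function estimate
        lag = (np.argmax(xcorr) - (n - 1)) / fs          # apparent travel time between sensors
        print(f"{label:8s}: apparent travel time = {lag * 1e6:5.1f} microseconds")

    # A relative travel-time increase dt/t corresponds to a relative velocity decrease dv/v = -dt/t.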

  9. Effect Size Measures for Differential Item Functioning in a Multidimensional IRT Model

    Science.gov (United States)

    Suh, Youngsuk

    2016-01-01

    This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…

  10. Measurement invariance of the Eating Attitudes Test-26 in Caucasian and Hispanic women.

    Science.gov (United States)

    Belon, Katherine E; Smith, Jane Ellen; Bryan, Angela D; Lash, Denise N; Winn, Jaime L; Gianini, Loren M

    2011-12-01

    To determine whether the EAT-26 functions similarly in Caucasian and Hispanic samples, the current study investigated the factor structure of the Eating Attitudes Test (EAT-26) in 235 undergraduate Caucasian (53.6%) and Hispanic (46.4%) women, and tested for measurement invariance across the two samples. A Confirmatory Factor Analysis (CFA) of the original 3-factor structure of the EAT resulted in a poor fit in both the Caucasian and Hispanic samples. We then performed a CFA using a previously discovered 4-factor, 16-item structure. This abbreviated measure was a good fit in both the Caucasian and Hispanic samples, and the model was invariant across all dimensions tested. The 16-item EAT is a better-fitting measure in Caucasian and Hispanic women than the commonly used EAT-26. This replicates an earlier finding and generalizes those conclusions to a Hispanic sample. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. An Improved Measure of Reading Skill: The Cognitive Structure Test

    National Research Council Canada - National Science Library

    Sorrells, Robert

    1997-01-01

    This study compared the construct validity and the predictive validity of a new test, called the Cognitive Structure Test, to multiple-choice tests of reading skill, namely the Armed Forces Vocational...

  12. Test Driven Development of Scientific Models

    Science.gov (United States)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By their own testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
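
    As a concrete illustration of the write-the-test-first workflow for numerical code, a minimal unit test in a standard Python testing framework is sketched below. The integration routine and tolerances are invented for the example and are not code from the Goddard activities mentioned above.

    # Sketch of a TDD-style unit test for a small numerical routine (invented example).
    import math
    import unittest

    def trapezoid(f, a, b, n=1000):
        """Composite trapezoid-rule integral of f on [a, b]."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return total * h

    class TestTrapezoid(unittest.TestCase):
        # In TDD, these tests are written *before* trapezoid() is implemented.
        def test_integrates_sine_over_half_period(self):
            self.assertAlmostEqual(trapezoid(math.sin, 0.0, math.pi), 2.0, places=5)

        def test_zero_width_interval_is_zero(self):
            self.assertEqual(trapezoid(math.sin, 1.0, 1.0), 0.0)

    if __name__ == "__main__":
        unittest.main()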

  13. A human cadaver fascial compartment pressure measurement model.

    Science.gov (United States)

    Messina, Frank C; Cooper, Dylan; Huffman, Gretchen; Bartkus, Edward; Wilbur, Lee

    2013-10-01

    Fresh human cadavers provide an effective model for procedural training. Currently, there are no realistic models to teach fascial compartment pressure measurement. We created a human cadaver fascial compartment pressure measurement model and studied its feasibility with a pre-post design. Three faculty members, following instructions from a common procedure textbook, used a standard handheld intra-compartment pressure monitor (Stryker(®), Kalamazoo, MI) to measure baseline pressures ("unembalmed") in the anterior, lateral, deep posterior, and superficial posterior compartments of the lower legs of a fresh human cadaver. The right femoral artery was then identified by superficial dissection, cannulated distally towards the lower leg, and connected to a standard embalming machine. After a 5-min infusion, the same three faculty members re-measured pressures ("embalmed") of the same compartments on the cannulated right leg. Unembalmed and embalmed readings for each compartment, and baseline readings for each leg, were compared using a two-sided paired t-test. The mean baseline compartment pressures did not differ between the right and left legs. Using the embalming machine, compartment pressure readings increased significantly over baseline for three of four fascial compartments; all in mm Hg (±SD): anterior from 40 (±9) to 143 (±44) (p = 0.08); lateral from 22 (±2.5) to 160 (±4.3) (p measurable fascial compartment pressure measurement model in a fresh human cadaver using a standard embalming machine. Set-up is minimal and the model can be incorporated into teaching curricula. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.
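
    The idea can be illustrated with a two-degree-of-freedom lumped-parameter idealization (chassis mass plus board mass) whose base-excitation transmissibility is evaluated directly. The masses, stiffnesses and damping values below are invented solely to show the parametric influence; they are not taken from the work described above.

    # Sketch: 2-DOF lumped-parameter model of a chassis (m1) carrying a board (m2) -- assumed values.
    import numpy as np

    m1, m2 = 2.0, 0.2            # kg, assumed chassis and PWB effective masses
    k1, k2 = 4.0e6, 2.0e5        # N/m, assumed mount and board stiffnesses
    c1, c2 = 200.0, 20.0         # N s/m, assumed damping

    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = np.array([[c1 + c2, -c2], [-c2, c2]])

    for f in (50, 200, 500, 1000):                             # excitation frequencies in Hz
        w = 2 * np.pi * f
        F = np.array([k1 + 1j * w * c1, 0.0])                  # base motion enters through the mount
        X = np.linalg.solve(-w**2 * M + 1j * w * C + K, F)     # response per unit base displacement
        print(f"{f:5d} Hz: chassis transmissibility {abs(X[0]):6.2f}, board {abs(X[1]):6.2f}")

    # Undamped natural frequencies from the eigenproblem K x = w^2 M x:
    wn = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)) / (2 * np.pi)
    print("undamped natural frequencies [Hz]:", np.round(wn, 1))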

  15. Ares I-X Launch Vehicle Modal Test Measurements and Data Quality Assessments

    Science.gov (United States)

    Templeton, Justin D.; Buehrle, Ralph D.; Gaspar, James L.; Parks, Russell A.; Lazor, Daniel R.

    2010-01-01

    The Ares I-X modal test program consisted of three modal tests conducted at the Vehicle Assembly Building at NASA's Kennedy Space Center. The first test was performed on the 71-foot 53,000-pound top segment of the Ares I-X launch vehicle known as Super Stack 5 and the second test was performed on the 66-foot 146,000-pound middle segment known as Super Stack 1. For these tests, two 250 lb-peak electro-dynamic shakers were used to excite bending and shell modes with the test articles resting on the floor. The third modal test was performed on the 327-foot 1,800,000-pound Ares I-X launch vehicle mounted to the Mobile Launcher Platform. The excitation for this test consisted of four 1000+ lb-peak hydraulic shakers arranged to excite the vehicle's cantilevered bending modes. Because the frequencies of interest for these modal tests ranged from 0.02 to 30 Hz, high sensitivity capacitive accelerometers were used. Excitation techniques included impact, burst random, pure random, and force controlled sine sweep. This paper provides the test details for the companion papers covering the Ares I-X finite element model calibration process. Topics to be discussed include test setups, procedures, measurements, data quality assessments, and consistency of modal parameter estimates.

  16. The physical interpretation of the parameters measured during the tensile testing of materials at elevated temperatures

    International Nuclear Information System (INIS)

    Burton, B.

    1984-01-01

    Hot tensile (or compression) testing, where the stress developed in a material is measured under an imposed strain rate, is often used as an alternative to conventional creep testing. The advantages of the hot tensile test are that its duration can be more closely controlled by the experimenter and also that the technique is more convenient, since high precision testing machines are available. The main disadvantage is that the interpretation of results is more complex. The present paper relates the parameters which are measured in hot tensile tests, to physical processes which occur in materials deforming by a variety of mechanisms. For cases where no significant structural changes occur, as in viscous or superplastic flow, analytical expressions are derived which relate the stresses measured in these tests to material constants. When deformation is controlled by recovery processes, account has to be taken of the structural changes which occur concurrently. A wide variety of behaviour may then be exhibited which depends on the initial dislocation density, the presence of second-phase particles and the relative values of the recovery rate parameters and the velocity imposed by the testing machine. Numerical examples are provided for simple recovery models. (author)

  17. Magnetic field measurements of model SSC [Superconducting Super Collider] dipoles

    International Nuclear Information System (INIS)

    Hassenzahl, W.V.; Gilbert, W.S.; Green, M.I.; Barale, P.J.

    1986-10-01

    To qualify for use in the Superconducting Super Collider, the 8000 or so 16 m long dipole magnets must pass a series of tests. One of these will be a set of warm measurements of field quality, which must be precise to about 0.001% of the 100 G field produced by 10 A, the maximum current the coils are allowed to carry for an extended period at room temperature. Field measurements of better than this accuracy have already been carried out on 1 m long model dipoles. These measurements have included determinations of the dipole fields and the higher harmonics in the central or two dimensional region and in the total magnet. In addition, axial scans of the dipole and higher harmonic magnetic fields have been made to determine the local variations, which might reflect fabrication and assembly tolerances. This paper describes the equipment developed for these measurements, the results of a representative set of measurements of the central and integral fields and axial scans, and a comparison between warm and cold measurements. Reproducibility, accuracy and precision will be described for some of the measurements. The significance of the warm measurements as a part of the certification process for the SSC dipoles will be discussed

  18. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  19. Measuring social health in the patient-reported outcomes measurement information system (PROMIS): item bank development and testing.

    Science.gov (United States)

    Hahn, Elizabeth A; Devellis, Robert F; Bode, Rita K; Garcia, Sofia F; Castel, Liana D; Eisen, Susan V; Bosworth, Hayden B; Heinemann, Allen W; Rothrock, Nan; Cella, David

    2010-09-01

    To develop a social health measurement framework, to test items in diverse populations and to develop item response theory (IRT) item banks. A literature review guided framework development of Social Function and Social Relationships sub-domains. Items were revised based on patient feedback, and Social Function items were field-tested. Analyses included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), two-parameter IRT modeling and evaluation of differential item functioning (DIF). The analytic sample included 956 general population respondents who answered 56 Ability to Participate and 56 Satisfaction with Participation items. EFA and CFA identified three Ability to Participate sub-domains. However, because of positive and negative wording, and content redundancy, many items did not fit the IRT model, so item banks do not yet exist. EFA, CFA and IRT identified two preliminary Satisfaction item banks. One item exhibited trivial age DIF. After extensive item preparation and review, EFA-, CFA- and IRT-guided item banks help provide increased measurement precision and flexibility. Two Satisfaction short forms are available for use in research and clinical practice. This initial validation study resulted in revised item pools that are currently undergoing testing in new clinical samples and populations.

  20. Pitfalls in the measurement and interpretation of thyroid function tests.

    Science.gov (United States)

    Koulouri, Olympia; Moran, Carla; Halsall, David; Chatterjee, Krishna; Gurnell, Mark

    2013-12-01

    Thyroid function tests (TFTs) are amongst the most commonly requested laboratory investigations in both primary and secondary care. Fortunately, most TFTs are straightforward to interpret and confirm the clinical impression of euthyroidism, hypothyroidism or hyperthyroidism. However, in an important subgroup of patients the results of TFTs can seem confusing, either by virtue of being discordant with the clinical picture or because they appear incongruent with each other [e.g. raised thyroid hormones (TH), but with non-suppressed thyrotropin (TSH); raised TSH, but with normal TH]. In such cases, it is important first to revisit the clinical context, and to consider potential confounding factors, including alterations in normal physiology (e.g. pregnancy), intercurrent (non-thyroidal) illness, and medication usage (e.g. thyroxine, amiodarone, heparin). Once these have been excluded, laboratory artefacts in commonly used TSH or TH immunoassays should be screened for, thus avoiding unnecessary further investigation and/or treatment in cases where there is assay interference. In the remainder, consideration should be given to screening for rare genetic and acquired disorders of the hypothalamic-pituitary-thyroid (HPT) axis [e.g. resistance to thyroid hormone (RTH), thyrotropinoma (TSHoma)]. Here, we discuss the main pitfalls in the measurement and interpretation of TFTs, and propose a structured algorithm for the investigation and management of patients with anomalous/discordant TFTs. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  1. Modelling, Measuring and Compensating Color Weak Vision.

    Science.gov (United States)

    Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2016-03-08

    We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color weak observers. The two main applications are the simulation of the perception of a color weak observer for a color normal observer and the compensation of color images in a way that a color weak observer has approximately the same perception as a color normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured with the help of color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color-differences for both observers. Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods this method is free from approximation errors due to local linearizations and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests.

  2. A verification system survival probability assessment model test methods

    International Nuclear Information System (INIS)

    Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan

    2014-01-01

    Subject to limitations of funding and test conditions, large complex systems can often be tested with only a small number of sub-samples. Under such single-sample conditions, making an accurate evaluation of performance is important for the reinforcement of complex systems. The technical maturity of an assessment model can be significantly improved if the model can be experimentally validated and evaluated. This paper presents a test method for verifying a system survival probability assessment model: using the sample test results of the system under test, the method verifies the correctness of the assessment model and of the a priori information. (authors)

  3. Multilevel Factor Analysis by Model Segregation: New Applications for Robust Test Statistics

    Science.gov (United States)

    Schweig, Jonathan

    2014-01-01

    Measures of classroom environments have become central to policy efforts that assess school and teacher quality. This has sparked a wide interest in using multilevel factor analysis to test measurement hypotheses about classroom-level variables. One approach partitions the total covariance matrix and tests models separately on the…

  4. A maximin model for test design with practical constraints

    NARCIS (Netherlands)

    van der Linden, Willem J.; Boekkooi-Timminga, Ellen

    1987-01-01

    A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of

  5. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  6. A SysML Test Model and Test Suite for the ETCS Ceiling Speed Monitor

    DEFF Research Database (Denmark)

    Braunstein, Cécile; Peleska, Jan; Schulze, Uwe

    2014-01-01

    dedicated to the publication of models that are of interest for the model-based testing (MBT) community, and may serve as benchmarks for comparing MBT tool capabilities. The model described here is of particular interest for analysing the capabilities of equivalence class testing strategies. The CSM...... application inputs velocity values from a domain which could not be completely enumerated for test purposes with reasonable effort. We describe a novel method for equivalence class testing that – despite the conceptually infinite cardinality of the input domains – is capable to produce finite test suites...... that are exhaustive under certain hypotheses about the internal structure of the system under test....

  7. Hydrologic modeling and field testing at Yucca mountain, Nevada

    International Nuclear Information System (INIS)

    Hoxie, D.T.

    1991-01-01

    Yucca Mountain, Nevada, is being evaluated as a possible site for a mined geologic repository for the disposal of high-level nuclear waste. The repository is proposed to be constructed in fractured, densely welded tuff within the thick (500 to 750 meters) unsaturated zone at the site. Characterization of the site unsaturated-zone hydrogeologic system requires quantitative specification of the existing state of the system and the development of numerical hydrologic models to predict probable evolution of the hydrogeologic system over the lifetime of the repository. To support development of hydrologic models for the system, a testing program has been designed to characterize the existing state of the system, to measure hydrologic properties for the system and to identify and quantify those processes that control system dynamics. 12 refs

  8. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Catani, Paul; Teräsvirta, Timo; Yin, Meiqun

    A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which...

  9. The shuttle walk test: a new approach to functional walking capacity measurements for patients after stroke?

    Science.gov (United States)

    van Bloemendaal, Maijke; Kokkeler, Astrid M; van de Port, Ingrid G

    2012-01-01

    To determine the construct validity, test-retest reliability, and measurement error of the shuttle walk test (SWT) for patients after stroke. Clinimetric study. Three rehabilitation centers in the Netherlands. A sample of patients after stroke (N=75; mean age ± SD, 58.8±9.8y) who are capable of walking without physical assistance. Patients were excluded if they had sustained a subarachnoid hemorrhage or a stroke in the cerebellum or brainstem, or had any other conditions that limited their walking capacity more than the current stroke, or had sensory aphasia. Not applicable. Construct validity (6-minute walk test [6MWT]) and test-retest reliability of the SWT were assessed. Measurement error was determined with the standard error of measurement (SEM), limits of agreement, and smallest detectable differences (SDDs). Construct validity was confirmed by highly significant correlations (rp≥.65) between the SWT and the 6MWT, with a difference in walking distance in favor of the 6MWT. Test-retest reliability was good (intraclass correlation coefficient model 2,1 [ICC(2,1)]=.961 [.936-.977]). SEM was 6.0%, and the SDDs for individual and group were 302.0m (37%) and 38.7m (5%), respectively. The SWT is a valid and reliable measure and therefore a feasible instrument to determine functional walking capacity of patients after stroke, especially in high-speed walkers. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
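
    The measurement-error quantities reported above are typically obtained from standard relations: SEM = SD x sqrt(1 - ICC), SDD for an individual = 1.96 x sqrt(2) x SEM, and SDD for a group = SDD individual / sqrt(n). The sketch below only illustrates these relations; the SD and sample size used are placeholders, not the study's raw data.

    # Sketch: SEM and smallest detectable difference (SDD) from test-retest statistics.
    # SEM = SD * sqrt(1 - ICC); SDD_individual = 1.96 * sqrt(2) * SEM; SDD_group = SDD_individual / sqrt(n).
    import math

    icc = 0.961        # test-retest ICC(2,1), as reported above
    sd = 550.0         # between-subject SD of walking distance in metres (placeholder)
    n = 60             # subjects contributing to the group-level estimate (placeholder)

    sem = sd * math.sqrt(1 - icc)
    sdd_individual = 1.96 * math.sqrt(2) * sem
    sdd_group = sdd_individual / math.sqrt(n)

    print(f"SEM            = {sem:.1f} m")
    print(f"SDD individual = {sdd_individual:.1f} m")
    print(f"SDD group      = {sdd_group:.1f} m")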

  10. Testing the race model inequality : a nonparametric approach

    NARCIS (Netherlands)

    Maris, G.K.J.; Maris, E.

    2003-01-01

    This paper introduces a nonparametric procedure for testing the race model explanation of the redundant signals effect. The null hypothesis is the race model inequality derived from the race model by Miller (Cognitive Psychol. 14 (1982) 247). The construction of a nonparametric test is made possible
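
    For context, Miller's race model inequality bounds the redundant-signals response-time distribution by the sum of the single-signal distributions at every latency t: F_R(t) <= F_1(t) + F_2(t). The sketch below performs a simple empirical check of this bound on simulated reaction times; it illustrates the inequality only and is not the nonparametric test developed in the paper.

    # Sketch: empirical check of Miller's race model inequality F_R(t) <= F_1(t) + F_2(t).
    # Simulated reaction times; an illustration, not the Maris & Maris nonparametric procedure.
    import numpy as np

    rng = np.random.default_rng(3)
    rt_single1 = rng.normal(420, 50, 500)      # RTs (ms), signal 1 alone
    rt_single2 = rng.normal(440, 50, 500)      # RTs (ms), signal 2 alone
    rt_redund  = rng.normal(380, 45, 500)      # RTs (ms), redundant (both) signals

    def ecdf(sample, t):
        """Empirical CDF of `sample` evaluated at times t."""
        return np.searchsorted(np.sort(sample), t, side="right") / sample.size

    t_grid = np.percentile(rt_redund, np.arange(5, 100, 5))    # redundant-condition RT quantiles
    excess = ecdf(rt_redund, t_grid) - (ecdf(rt_single1, t_grid) + ecdf(rt_single2, t_grid))

    for t, v in zip(t_grid, excess):
        flag = "violation" if v > 0 else "ok"
        print(f"t = {t:6.1f} ms   F_R - (F_1 + F_2) = {v:+.3f}   {flag}")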

  11. A Model for Random Student Drug Testing

    Science.gov (United States)

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  12. Modeling Reliability Growth in Accelerated Stress Testing

    Science.gov (United States)

    2013-12-01


  13. Revisiting test stability: further evidence relating to the measurement ...

    African Journals Online (AJOL)

    In several earlier analyses of two tests of academic literacy – the Test of Academic Literacy Levels (TALL) and its Afrikaans counterpart, the Toets vir Akademiese Geletterdheidsvlakke (TAG) – we have adopted an approach to the problem that tests may be abused (and therefore used to harm people) by discussing various ...

  14. Item Response Theory Model Empat Parameter Logistik Pada Computerized Adaptive Test

    Directory of Open Access Journals (Sweden)

    Aslam Fatkhudin

    2016-01-01

    Full Text Available One form of computer-based testing is the Computerized Adaptive Test (CAT), a computer-based testing system in which the items given to a participant are adapted to that participant's ability. The assessment method usually applied in CAT is Item Response Theory (IRT). The IRT model most commonly used today is the three-parameter logistic (3PL) model, with parameters for item discrimination, difficulty and guessing. However, 3PL IRT models do not yet provide fully objective information about a test taker's ability; the opinions of the test participants about the administered items should also be considered. This study therefore combines CAT with a 4PL IRT model, developing a CAT that uses four item parameters, namely discrimination, difficulty, guessing and a fourth parameter obtained from questionnaires. The questions used were UAS 1 questions for English subjects. Answer sheets of the 30 students with the best scores, out of a total of 172 students spread across 6 classes, were sampled to estimate the item parameters. The CAT application using the 4PL IRT model was then tested against a CAT using the 3PL IRT model. The research shows that the CAT combined with the 4PL IRT model can measure a test taker's ability with a shorter (faster) test, and that the probability of participants answering the administered items correctly tends to be better than with the 3PL IRT model. Keywords: Ability; CAT; IRT; 3PL; 4PL; Probability; Test
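
    For reference, the four-parameter logistic (4PL) item response function extends the 3PL discrimination a, difficulty b and guessing c with an upper asymptote d: P(theta) = c + (d - c) / (1 + exp(-a(theta - b))). The sketch below shows the function together with a naive grid-based ability estimate; the item parameters and response pattern are invented, not estimates from the UAS 1 English items.

    # Sketch: 4PL item response function and a naive maximum-likelihood ability estimate (invented items).
    import numpy as np

    def p_4pl(theta, a, b, c, d):
        """Probability of a correct response under the 4PL model."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    # Columns: a (discrimination), b (difficulty), c (guessing), d (upper asymptote) -- assumed values.
    items = np.array([[1.2, -0.5, 0.20, 0.98],
                      [0.9,  0.0, 0.25, 0.95],
                      [1.5,  0.8, 0.15, 0.97]])
    responses = np.array([1, 1, 0])                 # assumed response pattern for one examinee

    theta_grid = np.linspace(-4, 4, 801)
    p = p_4pl(theta_grid[:, None], *items.T)        # grid points x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    print(f"estimated ability theta = {theta_grid[np.argmax(loglik)]:.2f}")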

  15. Computational model for simulation small testing launcher, technical solution

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian

    2014-12-01

    The purpose of this paper is to present some aspects of the computational model and technical solutions for a multistage suborbital launcher for testing (SLT) used to test space equipment and scientific measurements. The computational model consists of a numerical simulation of the SLT evolution for different launch conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performances. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project "Suborbital Launcher for Testing" (SLT), which is based on hybrid propulsion and control systems obtained through an original design. While classical suborbital sounding rockets are unguided, use solid-fuel motors for propulsion and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. The project can therefore be considered an intermediate step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as indicated by its title, has two major objectives: a short-term objective, which consists in obtaining a suborbital launching system able to go into service within a predictable period of time, and a long-term objective, which consists in the development and testing of some unconventional sub-systems to be integrated later in the satellite launcher as part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital vehicle

  16. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    Full Text Available In this study, systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been analysed in order to determine their uncertainties. Experiments conducted within the framework of mathematical and physical rules for the solution of engineering problems, together with the associated measurements and calculations, involve uncertainty. To assess the reliability of the obtained values, the existing uncertainties should be expressed quantitatively. The results of a measurement system do not carry a universal value if its uncertainty is not known. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. Ship resistance cannot be determined precisely and reliably during the design phase because of the uncertainty sources involved in determining the resistance value. This may make it difficult to meet the required specifications in later design stages. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 procedures related to uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.

  17. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
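
    One common way to encode the postulated quadratic mean-variance relationship is Var(y) = s0^2 + (c*mu)^2, with the parameters estimated from replicate measurements and subsequently used for quasi-likelihood-style weighting. The sketch below fits such a variance function to simulated replicates; it is a simplified illustration of the variance structure, not the estimation procedure of the paper.

    # Sketch: fitting a quadratic mean-variance structure Var(y) = s0^2 + (c*mu)^2 to simulated replicates.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    true_s0, true_c = 30.0, 0.15
    mu = np.exp(rng.uniform(3, 9, 300))                            # "true" expression levels
    sigma = np.sqrt(true_s0**2 + (true_c * mu)**2)
    y = mu[:, None] + rng.normal(0.0, sigma[:, None], (300, 4))    # 4 replicates per gene

    means = y.mean(axis=1)
    variances = y.var(axis=1, ddof=1)

    def var_model(m, s0, c):
        return s0**2 + (c * m)**2

    (s0_hat, c_hat), _ = curve_fit(var_model, means, variances, p0=[10.0, 0.1])
    print(f"estimated s0 = {abs(s0_hat):.1f}, c = {abs(c_hat):.3f}")

    # The fitted variance function can then provide weights 1/Var(y) in quasi-likelihood estimation.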

  18. Air injection test on a Kaplan turbine: prototype - model comparison

    Science.gov (United States)

    Angulo, M.; Rivetti, A.; Díaz, L.; Liscia, S.

    2016-11-01

    Air injection is a well-known measure for reducing the magnitude of pressure pulsations in turbines, especially of the Francis type. In the case of large Kaplan designs, even though less usual, it could be a solution for mitigating the vibrations that arise when tip vortex cavitation becomes erosive and induces structural vibrations. In order to study this alternative, aeration tests were performed on a Kaplan turbine at model and prototype scales. The research focused on the efficiency of different injected air flow rates in reducing vibrations, especially at the draft tube and the discharge ring, as well as on the magnitude of the efficiency drop. It was found that the results at both scales present the same trend, in particular for vibration levels at the discharge ring. The efficiency drop was overestimated in the model tests, while on the prototype it was less than 0.2 % for all power outputs. On the prototype, air has a beneficial effect in reducing pressure fluctuations up to an air flow rate of 0.2 ‰. On the model, high-speed image processing helped to quantify the volume of tip vortex cavitation, which is strongly correlated with the vibration level. The hydrophone measurements did not capture the cavitation intensity when air was injected; on the prototype, however, it was detected by a sonometer installed at the draft tube access gallery.

  19. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in the measurement of the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. The chairside digital impression was then used for the acquisition of 3D digital models, and the software Picpick was employed for the measurement of the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, which was a manually measured experimental model cusp angle. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable in measuring the cusp inclination of a digital model.

  20. Modellering, test og fortolkning af indirekte revisionsbeviser

    DEFF Research Database (Denmark)

    Holm, Claus

    1999-01-01

    will be affected by the credibility of the source and the relevance of the report. This contribution shows how the normative theory behind the Bayesian multi-stage model makes it possible to derive and test hypotheses. A 2*2 experimental design was tested on 89 auditors with three main results, namely (1)......, when compared with the normative performance standard that they themselves establish.

  1. Made-to-measure modelling of observed galaxy dynamics

    Science.gov (United States)

    Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.

    2018-01-01

    Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is amongst the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
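
    In the standard M2M scheme the particle weights evolve under a force-of-change equation of the form dw_i/dt = -epsilon * w_i * sum_j K_j(z_i) * Delta_j, where Delta_j is the relative mismatch between the model and the target observables (entropy regularization and temporal smoothing are omitted here). The toy sketch below adapts weights to match a binned one-dimensional density; the target, kernel and step size are assumptions chosen only to show the update loop, not the set-up used in the paper.

    # Toy sketch of an M2M-style weight update matching a binned 1-D density (assumed target and step size).
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(-3, 3, 5000)                      # particle positions (held fixed in this toy)
    w = np.full(x.size, 1.0 / x.size)                 # particle weights to be adapted

    edges = np.linspace(-3, 3, 25)
    centers = 0.5 * (edges[:-1] + edges[1:])
    target = np.exp(-0.5 * centers**2)                # assumed target density (Gaussian)
    target /= target.sum()

    bin_of = np.digitize(x, edges) - 1                # kernel K_j(z_i): particle i contributes to bin j
    eps = 0.2                                         # assumed step size

    for _ in range(500):
        model = np.bincount(bin_of, weights=w, minlength=centers.size)
        delta = (model - target) / target             # relative mismatch per observable (bin)
        w *= 1.0 - eps * delta[bin_of]                # force-of-change update applied to each weight
        w = np.clip(w, 1e-12, None)
        w /= w.sum()

    model = np.bincount(bin_of, weights=w, minlength=centers.size)
    print("max relative residual:", np.max(np.abs((model - target) / target)))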

  2. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  3. Combinatorial QSAR modeling of chemical toxicants tested against Tetrahymena pyriformis.

    Science.gov (United States)

    Zhu, Hao; Tropsha, Alexander; Fourches, Denis; Varnek, Alexandre; Papa, Ester; Gramatica, Paola; Oberg, Tomas; Dao, Phuong; Cherkasov, Artem; Tetko, Igor V

    2008-04-01

    Selecting the most rigorous quantitative structure-activity relationship (QSAR) approaches is of great importance in the development of robust and predictive models of chemical toxicity. To address this issue in a systematic way, we have formed an international virtual collaboratory consisting of six independent groups with shared interests in computational chemical toxicology. We have compiled an aqueous toxicity data set containing 983 unique compounds tested in the same laboratory over a decade against Tetrahymena pyriformis. A modeling set including 644 compounds was selected randomly from the original set and distributed to all groups that used their own QSAR tools for model development. The remaining 339 compounds in the original set (external set I) as well as 110 additional compounds (external set II) published recently by the same laboratory (after this computational study was already in progress) were used as two independent validation sets to assess the external predictive power of individual models. In total, our virtual collaboratory has developed 15 different types of QSAR models of aquatic toxicity for the training set. The internal prediction accuracy for the modeling set ranged from 0.76 to 0.93 as measured by the leave-one-out cross-validation correlation coefficient (Q²abs). The prediction accuracy for the external validation sets I and II ranged from 0.71 to 0.85 (linear regression coefficient R²abs,I) and from 0.38 to 0.83 (linear regression coefficient R²abs,II), respectively. The use of an applicability domain threshold implemented in most models generally improved the external prediction accuracy but at the same time led to a decrease in chemical space coverage. Finally, several consensus models were developed by averaging the predicted aquatic toxicity for every compound using all 15 models, with or without taking into account their respective applicability domains. We find that consensus models afford higher prediction accuracy for the
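
    Consensus prediction as described above amounts to averaging the individual model predictions for each compound, optionally restricted to the models whose applicability domain covers that compound. The short sketch below shows both variants; the prediction matrix and domain mask are invented placeholders, not the collaboratory's models.

    # Sketch: consensus QSAR prediction by averaging individual model outputs (placeholder data).
    import numpy as np

    rng = np.random.default_rng(6)
    n_compounds, n_models = 8, 5
    preds = rng.normal(0.0, 1.0, (n_compounds, n_models))     # predicted toxicity, one column per model
    in_domain = rng.random((n_compounds, n_models)) > 0.2     # applicability-domain mask

    consensus_all = preds.mean(axis=1)                        # plain consensus over all models

    masked = np.where(in_domain, preds, np.nan)               # domain-aware consensus
    consensus_ad = np.nanmean(masked, axis=1)
    covered = in_domain.any(axis=1)                           # compounds predicted by at least one model

    for i in range(n_compounds):
        print(f"compound {i}: all-model {consensus_all[i]:+.2f}, "
              f"in-domain {consensus_ad[i]:+.2f}, covered={covered[i]}")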

  4. Measurement and modelling in anthropo-radiometry

    International Nuclear Information System (INIS)

    Carlan, Loic de

    2011-01-01

    In this HDR (accreditation to supervise research) report, the author gives an overview of his research activities, summarises his research thesis (a feasibility study of an actinide measurement system for the lungs), and presents a research report on the different aspects of anthropo-radiometric measurement: context (principles, significance, sampling phantoms), development of digital phantoms (software presentation and validation), interface development and validation, application to actinide measurement in the lung, and taking biokinetic data into account in anthropo-radiometric measurement.

  5. A Bilinear Multidimensional Measurement Model of Asian American Acculturation and Enculturation: Implications for Counseling Interventions

    Science.gov (United States)

    Miller, Matthew J.

    2007-01-01

    Several unilinear and bilinear dimensional measurement models of Asian American acculturation and enculturation were tested with confirmatory factor analysis. Bilinear models of acculturation consistently outperformed the unilinear model. In addition, models that articulated multiple dimensions (i.e., values and behavior) exhibited a better fit to…

  6. Measurement of indoor air quality in two new test houses

    Energy Technology Data Exchange (ETDEWEB)

    Hodgson, A.T.

    1996-01-01

    This study assessed indoor air quality in two similar, new houses being evaluated for energy performance. One house (A) was built conventionally. The other (B) was an energy-efficient structure. Air samples for individual volatile organic compounds (VOCs), total VOCs (TVOC) and formaldehyde were collected following completion of the interiors of the houses and on several occasions during the following year. Ventilation rates were also determined so that source strengths of airborne contaminants could be estimated with a mass-balance model. There were no substantial differences in indoor air quality between the houses. The TVOC concentrations in House A ranged from 1,700 - 4,400 µg m⁻³, with the highest value coinciding with the lowest ventilation rate. The TVOC concentrations in House B were 2,400 - 2,800 µg m⁻³. These values are elevated compared to a median value of 700 µg m⁻³ measured for a large residential study. Formaldehyde concentrations ranged up to 74 µg m⁻³. The dominant VOC in both houses was hexanal, an odorous chemical irritant. The concentrations of acetone, pentanal, toluene, alpha-pinene and other aldehydes were also relatively high in both houses. The source strengths of many of the analytes did not decline substantially over the course of the study. The OSB was estimated to contribute substantially to concentrations of formaldehyde and acetone in the houses. The results also suggested that OSB was not the dominant source of pentanal, hexanal and alpha-pinene, all of which had elevated emissions in the houses, possibly from a single source.
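
    Under a steady-state, well-mixed single-zone mass balance (neglecting outdoor concentrations and sinks), the whole-house source strength follows from the measured concentration, the air-change rate and the interior volume: S = C x ACH x V. The sketch below illustrates that estimate; the volume, air-change rate and the hexanal value are assumed example numbers, not the study's measurements.

    # Sketch: steady-state single-zone mass balance, S = C * ACH * V (outdoor level and sinks neglected).
    volume_m3 = 350.0      # interior volume of the house (assumed example value)
    ach_per_h = 0.35       # air changes per hour (assumed example value)

    concentrations_ug_m3 = {
        "TVOC": 2600.0,          # within the range reported above
        "formaldehyde": 74.0,    # upper value reported above
        "hexanal": 200.0,        # assumed example value
    }

    for compound, c in concentrations_ug_m3.items():
        source_ug_h = c * ach_per_h * volume_m3        # micrograms emitted per hour
        print(f"{compound:13s}: {source_ug_h / 1000.0:8.1f} mg/h")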

  7. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of TEC by IRI has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations

  8. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model CMAQ and a Lagrangian Particle Dispersion Model - FLEXPART-WRF. These two models share the same WRF

  9. A test battery measuring auditory capabilities of listening panels

    DEFF Research Database (Denmark)

    Ghani, Jody; Ellermeier, Wolfgang; Zimmer, Karin

    2005-01-01

    A battery of tests covering a large range of auditory capabilities was developed in order to assess individual listeners. The format of all tests is kept as 'objective' as possible by using a three-alternative forced-choice paradigm in which the subject must choose which of the sound samples is different, thus keeping the instruction to the subjects simple and common for all tests. Both basic (e.g. frequency discrimination) and complex (e.g. profile analysis) psychoacoustic tests are covered in the battery, and a threshold of discrimination or detection is obtained for each test. Data were collected on 24 listeners who had been recruited for participation in an expert listening panel for evaluating the sound quality of hi-fi audio systems. The test battery data were related to the actual performance of the listeners when judging the degradation in quality produced by audio codecs.
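
    To make the three-alternative forced-choice procedure concrete, the sketch below runs a simple adaptive staircase against a simulated listener; the 2-down/1-up rule, step size, and psychometric function are illustrative assumptions, not the battery's actual settings.

```python
# Minimal sketch of a 3-alternative forced-choice adaptive staircase of the kind
# used to estimate discrimination thresholds; all parameters are illustrative.
import random

def simulated_listener(delta, threshold=5.0):
    """Toy psychometric function: chance (1/3) at delta=0, near 1 above threshold."""
    p_correct = 1/3 + (2/3) * min(delta / threshold, 1.0)
    return random.random() < p_correct

def three_afc_staircase(start=20.0, step=2.0, n_reversals=8):
    delta, direction = start, -1
    correct_run, reversals = 0, []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_run += 1
            if correct_run == 2:                 # 2-down / 1-up rule (~70.7% correct)
                correct_run = 0
                if direction == +1:
                    reversals.append(delta)      # turning from ascending to descending
                direction = -1
                delta = max(delta - step, 0.1)
        else:
            correct_run = 0
            if direction == -1:
                reversals.append(delta)          # turning from descending to ascending
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)       # threshold estimate: mean of reversals

print("estimated threshold:", round(three_afc_staircase(), 2))
```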

  10. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test and the size distortion minimal. The test has reasonable power already in very short time series. It would thus serve as a test of constant variance in conditional mean...
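
    The quantities at stake, the unconditional variance and the kurtosis of a GARCH process, are easy to illustrate by simulation; the sketch below is not the authors' test statistic, only a reminder of what those quantities are for a GARCH(1,1) with assumed parameters.

```python
# Simulate a GARCH(1,1) series and compare sample variance and kurtosis with the
# model-implied unconditional variance; illustrative only, not the paper's test.
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=1):
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    h = omega / (1.0 - alpha - beta)        # start at the unconditional variance
    for t in range(n):
        y[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * y[t] ** 2 + beta * h
    return y

y = simulate_garch11(20_000)
uncond_var = 0.1 / (1.0 - 0.1 - 0.8)        # omega / (1 - alpha - beta)
kurtosis = np.mean((y - y.mean()) ** 4) / y.var() ** 2
print("sample variance:", round(y.var(), 3), "implied:", round(uncond_var, 3),
      "sample kurtosis:", round(kurtosis, 2))
```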

  11. Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model

    OpenAIRE

    Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori

    2011-01-01

    This research attempts to evaluate the emission measurement of on-road vehicles. In this regard, the research develops a failure-probability model of the vehicle emission test for passenger cars that utilizes a binomial logit model. The model focuses on failure of the CO and HC emission tests for the gasoline-fuelled car category and the opacity emission test for the diesel-fuelled car category as dependent variables, while vehicle age, engine size, brand, and type of the cars serve as independent variables. In order to imp...
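
    A binomial logit model of this kind can be fitted with standard tools; the sketch below uses synthetic data, and the predictors (vehicle age, engine size) and coefficients are hypothetical stand-ins for the variables described in the abstract.

```python
# Hedged sketch of a binomial logit model for emission-test failure on synthetic
# data; variable names and coefficients are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
age = rng.uniform(1.0, 20.0, n)              # vehicle age in years (made up)
engine = rng.uniform(1.0, 3.0, n)            # engine size in litres (made up)
logit = -4.0 + 0.25 * age + 0.5 * engine     # assumed "true" relationship
fail = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, engine]))
model = sm.Logit(fail, X).fit(disp=False)
print(model.params)                          # intercept, age effect, engine-size effect
```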

  12. Quake's motion modeled in relay test

    International Nuclear Information System (INIS)

    Calhoun, H.J.

    1978-01-01

    Relays in safety-related functions at nuclear plants can now be tested seismically at lower frequencies and with more meaningful force magnitudes than ever before. A massive, computer-controlled machine shakes the relays with a complex set of frequencies to determine their fragility. The NRC has examined the technique by which relay movement is programmed and has accepted it as an adequate means to do so. The machine, installed last fall, will improve the ability of manufacturers to produce more rugged, vibration-resistant relays, thus increasing system reliability.

  13. An overview of animal models of pain: disease models and outcome measures

    Science.gov (United States)

    Gregory, N; Harris, AL; Robinson, CR; Dougherty, PM; Fuchs, PN; Sluka, KA

    2013-01-01

    Pain is ultimately a perceptual phenomenon. It is built from information gathered by specialized pain receptors in tissue, modified by spinal and supraspinal mechanisms, and integrated into a discrete sensory experience with an emotional valence in the brain. Because of this, studying intact animals allows the multidimensional nature of pain to be examined. A number of animal models have been developed, reflecting observations that pain phenotypes are mediated by distinct mechanisms. Animal models of pain are designed to mimic distinct clinical diseases to better evaluate underlying mechanisms and potential treatments. Outcome measures are designed to measure multiple parts of the pain experience including reflexive hyperalgesia measures, sensory and affective dimensions of pain and impact of pain on function and quality of life. In this review we discuss the common methods used for inducing each of the pain phenotypes related to clinical pain syndromes, as well as the main behavioral tests for assessing pain in each model. PMID:24035349

  14. A magnetorheological actuation system: test and model

    International Nuclear Information System (INIS)

    John, Shaju; Chaudhuri, Anirban; Wereley, Norman M

    2008-01-01

    Self-contained actuation systems, based on frequency rectification of the high-frequency motion of an active material, can produce high force and stroke output. Magnetorheological (MR) fluids are active fluids whose rheological properties can be altered by the application of a magnetic field. By using MR fluids as the energy transmission medium in such hybrid devices, a valving system with no moving parts can be implemented and used to control the motion of an output cylinder shaft. The MR-fluid-based valves are configured in the form of an H-bridge to produce bi-directional motion in an output cylinder by alternately applying magnetic fields in the two opposite arms of the bridge. The rheological properties of the MR fluid are modeled using both Bingham plastic and bi-viscous models. In this study, the primary actuation is performed using a compact Terfenol-D rod-driven pump, and frequency rectification of the rod motion is done using passive reed valves. The pump and reed valve configuration, along with the MR fluidic valves, forms a compact hydraulic actuation system. Actuator design, analysis, and experimental results are presented in this paper. A time domain model of the actuator is developed and validated using experimental data.
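
    The two constitutive laws named above are compact enough to state directly; the sketch below writes each as shear stress versus shear rate, with parameter values that are purely illustrative.

```python
# Bingham plastic and bi-viscous shear stress models; parameters are illustrative.
import numpy as np

def bingham_plastic(gamma_dot, tau_y=1.0e3, mu=0.1):
    """Bingham plastic: yield stress plus Newtonian post-yield viscosity."""
    return np.sign(gamma_dot) * tau_y + mu * gamma_dot

def bi_viscous(gamma_dot, tau_y=1.0e3, mu=0.1, mu_pre=5.0):
    """Bi-viscous: high pre-yield viscosity below the yield point, lower above."""
    gamma_y = tau_y / (mu_pre - mu)          # shear rate where the two branches meet
    post = np.sign(gamma_dot) * tau_y + mu * gamma_dot
    return np.where(np.abs(gamma_dot) <= gamma_y, mu_pre * gamma_dot, post)

rates = np.linspace(-500.0, 500.0, 5)
print(bingham_plastic(rates))
print(bi_viscous(rates))
```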

  15. Academic Self-Concept: Modeling and Measuring for Science

    Science.gov (United States)

    Hardy, Graham

    2014-08-01

    In this study, the author developed a model to describe academic self-concept (ASC) in science and validated an instrument for its measurement. Unlike previous models of science ASC, which envisage science as a homogenous single global construct, this model took a multidimensional view by conceiving science self-concept as possessing distinctive facets including conceptual and procedural elements. In the first part of the study, data were collected from 1,483 students attending eight secondary schools in England, through the use of a newly devised Secondary Self-Concept Science Instrument, and structural equation modeling was employed to test and validate a model. In the second part of the study, the data were analysed within the new self-concept framework to examine learners' ASC profiles across the domains of science, with particular attention paid to age- and gender-related differences. The study found that the proposed science self-concept model exhibited robust measures of fit and construct validity, which were shown to be invariant across gender and age subgroups. The self-concept profiles were heterogeneous in nature with the component relating to self-concept in physics, being surprisingly positive in comparison to other aspects of science. This outcome is in stark contrast to data reported elsewhere and raises important issues about the nature of young learners' self-conceptions about science. The paper concludes with an analysis of the potential utility of the self-concept measurement instrument as a pedagogical device for science educators and learners of science.

  16. Earthquake induced rock shear through a deposition hole - modelling of three scale tests for validation of models

    International Nuclear Information System (INIS)

    Boergesson, Lennart; Hernelind, Jan

    2012-01-01

    Document available in extended abstract form only. Three model shear tests of very high quality, simulating a horizontal rock shear through a KBS-3V deposition hole at the centre of the canister, were performed in 1986. The tests simulated a deposition hole at the scale 1:10 with the reference density of the buffer, a very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical with the exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. by a factor of more than 5000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper, and the movement of the top of the simulated canister were measured continuously during the shear. After the shear was finished, the equipment was dismantled and careful sampling of the bentonite, with measurements of water ratio and density, was carried out. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus with the same models and techniques that were used for the full-scale cases in the Swedish safety assessment SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevance of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite, with stress-strain relations derived from laboratory tests. The material model is also described in another article at this conference. The material model is made a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density does not change during the tests. However, the strain rate varies strongly with both the location of the elements and time. This can be taken into account in Abaqus by making the material model a function of the strain
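
    One simple way to make a material model a function of strain rate, shown below, is to scale the undrained shear strength by a power law in the rate; this is a commonly assumed form for bentonite and is given only as a hedged illustration, not the relation actually calibrated by the authors.

```python
# Hedged illustration of folding strain-rate dependence into a shear-strength
# model via a power law; the reference rate and exponent are assumptions.
def shear_strength(tau_ref: float, strain_rate: float,
                   ref_rate: float = 1.0e-2, exponent: float = 0.065) -> float:
    """Scale a reference strength tau_ref (defined at ref_rate) to another strain rate."""
    return tau_ref * (strain_rate / ref_rate) ** exponent

# Example: a 5000-fold increase in rate raises the strength by roughly 75% for n = 0.065.
print(round(shear_strength(1.0, 5000 * 1.0e-2), 2))
```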

  17. Multifactorial assessment of measurement errors affecting intraoral quantitative sensory testing reliability.

    Science.gov (United States)

    Moana-Filho, Estephan J; Alonso, Aurelio A; Kapos, Flavia P; Leon-Salazar, Vladimir; Durand, Scott H; Hodges, James S; Nixdorf, Donald R

    2017-07-01

    Measurement error of intraoral quantitative sensory testing (QST) has been assessed using traditional methods for reliability, such as intraclass correlation coefficients (ICCs). Most studies reporting QST reliability focused on assessing one source of measurement error at a time, e.g., inter-examiner or intra-examiner (test-retest) reliability, and employed two examiners to test inter-examiner reliability. The present study used a complex design with multiple examiners with the aim of assessing the reliability of intraoral QST while taking multiple sources of error into account simultaneously. Four examiners of varied experience assessed 12 healthy participants in two visits separated by 48 h. Seven QST procedures to determine sensory thresholds were used: cold detection (CDT), warmth detection (WDT), cold pain (CPT), heat pain (HPT), mechanical detection (MDT), mechanical pain (MPT) and pressure pain (PPT). Mixed linear models were used to estimate variance components for reliability assessment; dependability coefficients were used to simulate alternative test scenarios. Most intraoral QST variability arose from differences between participants (8.8-30.5%), differences between visits within participant (4.6-52.8%), and error (13.3-28.3%). For QST procedures other than CDT and MDT, increasing the number of visits with a single examiner performing the procedures would lead to improved dependability (dependability coefficient ranges: single visit, four examiners = 0.12-0.54; four visits, single examiner = 0.27-0.68). A wide range of reliabilities for QST procedures, as measured by ICCs, was noted for inter- (0.39-0.80) and intra-examiner (0.10-0.62) variation. Reliability of sensory testing can be better assessed by measuring multiple sources of error simultaneously instead of focusing on one source at a time. In experimental settings, large numbers of participants are needed to obtain accurate estimates of treatment effects based on QST measurements. This is different from clinical
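
    The benefit of pooling over visits and examiners can be expressed with a generalizability-theory style dependability coefficient; the sketch below uses a simplified participant x visit x examiner decomposition with made-up variance components, so it only mirrors the structure of the analysis, not the paper's exact model.

```python
# Dependability coefficient (Phi) for the mean score in a simplified
# participant x visit x examiner design; variance components are made up.
def dependability(var_participant, var_visit, var_error, n_visits, n_examiners):
    denom = (var_participant
             + var_visit / n_visits
             + var_error / (n_visits * n_examiners))
    return var_participant / denom

vp, vv, ve = 0.25, 0.35, 0.25                     # hypothetical variance components
print("1 visit, 4 examiners:", round(dependability(vp, vv, ve, 1, 4), 2))
print("4 visits, 1 examiner:", round(dependability(vp, vv, ve, 4, 1), 2))
```

    With these made-up components, spreading measurements over more visits improves dependability more than adding examiners within a single visit, in line with the abstract's conclusion.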

  18. The verification tests of residual radioactivity measurement and assessment techniques for buildings and soils

    International Nuclear Information System (INIS)

    Onozawa, T.; Ishikura, T.; Yoshimura, Yukio; Nakazawa, M.; Makino, S.; Urayama, K.; Kawasaki, S.

    1996-01-01

    According to the standard procedure for decommissioning a commercial nuclear power plant (CNPP) in Japan, controlled areas will be released for unrestricted use before the dismantling of the reactor building. If manual survey and sampling techniques were applied to the measurements for unrestricted release on and in the extensive surfaces of the building, much time and much specialized labor would be required to assess the appropriateness of the release. Therefore the authors selected the following three techniques for demonstrating their reliability and applicability for CNPPs: (1) a technique for assessing the radioactive concentration distribution on the surfaces of buildings (ADB); (2) a technique for assessing the radioactive permeation distribution in the concrete structure of buildings (APB); (3) a technique for assessing the radioactive concentration distribution in soil (ADS). These tests include techniques for measuring and assessing very low radioactive concentration distributions on the extensive surfaces of buildings and in the soil surrounding a plant with automatic devices. Technical investigation and a preliminary study of the verification tests were started in 1990. In the study, preconditions were clarified for each technique and the performance requirements were set up. Moreover, simulation models have been constructed for several feasible measurement methods to assess their performance in terms of both measurement tests and simulation analysis. Fundamental tests have been under way using small-scale apparatuses since 1994.

  19. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2014-01-01

    Model criticism is an important stage of model building, and goodness-of-fit tests thus provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to assess the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure that is as yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
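
    A portmanteau-type statistic of this flavour can be sketched by combining Ljung-Box style sums over the residual autocorrelations and partial autocorrelations; the equal weighting and the chi-square reference below are simplifications, not the authors' statistic or its derived asymptotic distribution.

```python
# Illustrative mixed portmanteau statistic from residual ACF and PACF; the
# combination rule and degrees of freedom are simplifying assumptions.
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import acf, pacf

def mixed_portmanteau(residuals, m=10):
    n = len(residuals)
    lags = np.arange(1, m + 1)
    r = acf(residuals, nlags=m, fft=True)[1:]             # autocorrelations, lags 1..m
    p = pacf(residuals, nlags=m)[1:]                      # partial autocorrelations
    q_acf = n * (n + 2) * np.sum(r ** 2 / (n - lags))     # Ljung-Box form
    q_pacf = n * (n + 2) * np.sum(p ** 2 / (n - lags))    # analogous PACF term
    stat = 0.5 * (q_acf + q_pacf)                         # simple illustrative mixture
    return stat, chi2.sf(stat, df=m)                      # p-value vs chi-square(m)

rng = np.random.default_rng(3)
print(mixed_portmanteau(rng.standard_normal(500)))        # residuals of an adequate model
```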

  20. Measuring and testing awareness of emotional face expressions

    DEFF Research Database (Denmark)

    Sandberg, Kristian; Bibby, Bo Martin; Overgaard, Morten

    2013-01-01

    with emotional content (fearful vs. neutral faces). Although we find the study interesting, we disagree with the conclusion that CR is superior to PAS because of two methodological issues. First, the conclusion is not based on a formal test. We performed this test and found no evidence that CR predicted accuracy...