WorldWideScience

Sample records for model test measurements

  1. Testing substellar models with dynamical mass measurements

    Directory of Open Access Journals (Sweden)

    Liu M.C.

    2011-07-01

    We have been using Keck laser guide star adaptive optics to monitor the orbits of ultracool binaries, providing dynamical masses at lower luminosities and temperatures than previously available and enabling strong tests of theoretical models. We have identified three specific problems with theory: (1) model color–magnitude diagrams cannot be reliably used to infer masses, as they do not accurately reproduce the colors of ultracool dwarfs of known mass; (2) effective temperatures inferred from evolutionary model radii are typically inconsistent with temperatures derived from fitting atmospheric models to observed spectra by 100–300 K; and (3) for the only known pair of field brown dwarfs with a precise mass (3%) and age determination (≈25%), the measured luminosities are ~2–3× higher than predicted by model cooling rates (i.e., masses inferred from Lbol and age are 20–30% larger than measured). To make progress in understanding the observed discrepancies, more mass measurements spanning a wide range of luminosity, temperature, and age are needed, along with more accurate age determinations (e.g., via asteroseismology) for primary stars with brown dwarf binary companions. Also, resolved optical and infrared spectroscopy are needed to measure lithium depletion and to characterize the atmospheres of binary components in order to better assess model deficiencies.

  2. Rasch modeling of accuracy and confidence measures from cognitive tests.

    Science.gov (United States)

    Paek, Insu; Lee, Jihyun; Stankov, Lazar; Wilson, Mark

    2013-01-01

    IRT models have not been rigorously applied in studies of the relationship between test-takers' confidence and accuracy. This study applied Rasch measurement models to investigate the relationship between test-takers' confidence and accuracy on English proficiency tests, proposing potentially useful measures of under- or overconfidence. The Rasch approach provided the scaffolding to formulate indices that can assess the discrepancy between confidence and accuracy at the item or total test level, as well as locally at particular ability levels. In addition, a "disattenuated" measure of association between accuracy and confidence, which takes measurement error into account, was obtained through a multidimensional Rasch modeling of the two constructs in which the latent variance-covariance structure is estimated directly from the data. The results indicate that the participants tend to show overconfidence bias in their own cognitive abilities.
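
    For orientation, the dichotomous Rasch model underlying these analyses, and the classical disattenuation formula that the multidimensional approach replaces, can be written as follows (standard textbook forms, not notation taken from the paper itself):

      P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}, \qquad \rho(T_A, T_C) = \frac{r_{AC}}{\sqrt{r_{AA'}\, r_{CC'}}}

    where θ_p is person ability, b_i item difficulty, r_AC the observed accuracy–confidence correlation, and r_AA', r_CC' the reliabilities of the two scores. The multidimensional Rasch model estimates the latent correlation directly rather than applying this correction after the fact.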

  3. Data Modeling for Measurements in the Metrology and Testing Fields

    CERN Document Server

    Pavese, Franco

    2009-01-01

    Offers a comprehensive set of modeling methods for data and uncertainty analysis. This work develops methods and computational tools to address general models that arise in practice, allowing for a more valid treatment of calibration and test data and providing an understanding of complex situations in measurement science.

  4. Testing Geological Models with Terrestrial Antineutrino Flux Measurements

    CERN Document Server

    Dye, Steve

    2009-01-01

    Uranium and thorium are the main heat producing elements in the earth. Their quantities and distributions, which specify the flux of detectable antineutrinos generated by the beta decay of their daughter isotopes, remain unmeasured. Geological models of the continental crust and the mantle predict different quantities and distributions of uranium and thorium. Many of these differences are resolvable with precision measurements of the terrestrial antineutrino flux. This precision depends on both statistical and systematic uncertainties. An unavoidable background of antineutrinos from nuclear reactors typically dominates the systematic uncertainty. This report explores in detail the capability of various operating and proposed geo-neutrino detectors for testing geological models.

  5. Façade fire testsmeasurements and modeling

    Directory of Open Access Journals (Sweden)

    Anderson Johan

    2013-11-01

    In two recent papers [1, 2] the fire dynamics in a test rig for façade constructions according to the test method SP Brand 105 [3, 4] was investigated both experimentally and numerically. The experimental setup simulates a three-story apartment building (height 6.7 m, width 4 m and depth 1.6 m) with external wall-cladding and a "room fire" at the base. The numerical model was developed in the CFD program Fire Dynamics Simulator (FDS) [5] with analogous geometry and instrumentation. The general features of the fire test were well reproduced in the numerical model; however, temperatures close to the fire source could not be properly accounted for in the model. In this paper the bi-directional probe measurements are elaborated on and the test used in Ref. [1] is revisited using different heat release rates in the numerical model. The velocity of the hot gases along the façade was well reproduced by the simulations, although some deviations were found.

  6. Stereovision vibration measurement test of a masonry building model

    Science.gov (United States)

    Shan, Baohua; Gao, Yunli; Shen, Yu

    2016-04-01

    To monitor 3D deformations of the structural vibration response, a stereovision-based 3D deformation measurement method is proposed in this paper. The world coordinate system is established on the structural surface, and 3D displacement equations of the structural vibration response are obtained through coordinate transformation. Algorithms for edge detection, center fitting and matching constraints are developed for the circular targets. A shaking table test of a masonry building model under the Taft and El Centro earthquakes at different peak accelerations is performed in the laboratory, and 3D displacement time histories of the model are acquired by the integrated stereovision measurement system. In-plane displacement curves obtained by the two methods show good agreement, which suggests that the proposed method is reliable for monitoring structural vibration response. Out-of-plane displacement curves indicate that the proposed method is feasible and useful for monitoring 3D deformations of the vibration response.

  7. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.

    1994-01-01

    A numerical model and measurements of flow and settling in activated sludge suspension are presented. The numerical model is an attempt to describe the complex and interrelated hydraulic and sedimentation phenomena by describing the turbulent flow field and the transport/dispersion of suspended sl...

  8. Can atom-surface potential measurements test atomic structure models?

    Science.gov (United States)

    Lonij, Vincent P A; Klauss, Catherine E; Holmgren, William F; Cronin, Alexander D

    2011-06-30

    van der Waals (vdW) atom-surface potentials can be excellent benchmarks for atomic structure calculations. This is especially true if measurements are made with two different types of atoms interacting with the same surface sample. Here we show theoretically how ratios of vdW potential strengths (e.g., C₃(K)/C₃(Na)) depend sensitively on the properties of each atom, yet these ratios are relatively insensitive to properties of the surface. We discuss how C₃ ratios depend on atomic core electrons by using a two-oscillator model to represent the contribution from atomic valence electrons and core electrons separately. We explain why certain pairs of atoms are preferable to study for future experimental tests of atomic structure calculations. A well chosen pair of atoms (e.g., K and Na) will have a C₃ ratio that is insensitive to the permittivity of the surface, whereas a poorly chosen pair (e.g., K and He) will have a ratio of C₃ values that depends more strongly on the permittivity of the surface.
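
    A common starting point for the two-oscillator discussion above is the standard non-retarded (Lifshitz-type) expression for the atom-surface coefficient, written here only for orientation, with an illustrative two-term parameterisation of the dynamic polarizability:

      C_3 = \frac{\hbar}{4\pi} \int_0^{\infty} \alpha(i\omega)\, \frac{\varepsilon(i\omega) - 1}{\varepsilon(i\omega) + 1}\, d\omega, \qquad \alpha(i\omega) \approx \frac{\alpha_v}{1 + (\omega/\omega_v)^2} + \frac{\alpha_c}{1 + (\omega/\omega_c)^2}

    where the two Lorentzian terms stand in for the valence and core electron contributions; the subscripts and parameter values are illustrative rather than those used in the paper.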

  9. The Latent Class Model as a Measurement Model for Situational Judgment Tests

    Directory of Open Access Journals (Sweden)

    Frank Rijmen

    2011-11-01

    In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer, Salovey, Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which differentiation occurred between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
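
    The plain-vanilla latent class model referred to above has the standard form (generic notation, not specific to this subtest):

      P(\mathbf{y}) = \sum_{c=1}^{C} \pi_c \prod_{j=1}^{J} P(y_j \mid c)

    where π_c is the proportion of respondents in class c and the item responses y_j are assumed independent within a class; the second-order variant adds a scenario-level clustering layer on top of this structure.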

  10. A blast absorber test: measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Berg, F. van den; Hof, J. van 't; Arkel, E. van

    2006-01-01

    A blast absorber test was conducted at the Aberdeen Test Centre from 13 to 17 June 2005. The test was set up to determine the absorbing and shielding effect of a gravel pile, 1.5 meters high and 15 by 15 meters wide, on blasts from large weapons, e.g. armor, artillery or demolition. The blast was

  11. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  12. Ares I Scale Model Acoustic Tests Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116. The test article included a 5% scale Ares I vehicle model and tower mounted on the Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments located throughout the test article. There were four primary ASMAT instrument suites: ignition overpressure (IOP), lift-off acoustics (LOA), ground acoustics (GA), and spatial correlation (SC). Each instrumentation suite incorporated different sensor models which were selected based upon measurement requirements. These requirements included the type of measurement, exposure to the environment, instrumentation check-outs and data acquisition. The sensors were attached to the test article using different mounts and brackets dependent upon the location of the sensor. This presentation addresses the observed effect of the sensors and mounts on the acoustic and pressure measurements.

  13. Testing Measurement Invariance of the Students' Affective Characteristics Model across Gender Sub-Groups

    Science.gov (United States)

    Demir, Ergül

    2017-01-01

    In this study, the aim was to construct a significant structural measurement model comparing students' affective characteristics with their mathematics achievement. According to this model, the aim was to test the measurement invariances between gender sub-groups hierarchically. This study was conducted as basic and descriptive research. Secondary…

  14. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: the 5% model measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. Acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale. Ignition transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale. Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
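
    The frequency scaling quoted above follows directly from the 5% geometric scale, since frequency scales inversely with model size; a minimal illustrative sketch of the conversion:

      # Convert full-scale frequency bands to 5% model-scale bands
      # (frequency scales as 1/scale for a geometrically scaled acoustic test).
      SCALE = 0.05

      def to_model_scale(f_full_hz):
          return f_full_hz / SCALE

      print(to_model_scale(200), to_model_scale(2000))   # 4000.0 40000.0  (acoustics band, Hz)
      print(to_model_scale(100))                         # 2000.0          (ignition transient upper bound, Hz)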

  15. Moving Model Test of High-Speed Train Aerodynamic Drag Based on Stagnation Pressure Measurements.

    Science.gov (United States)

    Yang, Mingzhi; Du, Juntao; Li, Zhiwei; Huang, Sha; Zhou, Dan

    2017-01-01

    A moving model test method based on stagnation pressure measurements is proposed to measure the train aerodynamic drag coefficient. Because the front tip of a high-speed train has a high-pressure region with a stagnation point at its center, the pressure at the stagnation point equals the dynamic pressure, from which the train velocity is obtained via the sensor tube. The first derivative of the train velocity is taken to calculate the acceleration of the train model, which is ejected by the moving model system without additional power. According to Newton's second law, the aerodynamic drag coefficient can be resolved through many tests at different train speeds selected within a relatively narrow range. Comparisons are conducted with wind tunnel tests and numerical simulations, and good agreement is obtained, with differences of less than 6.1%. Therefore, the moving model test method proposed in this paper is feasible and reliable.
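
    A minimal sketch of the velocity and drag-coefficient reconstruction described above, assuming the only retarding force after ejection is aerodynamic drag (the variable names and the neglect of mechanical resistance are assumptions for illustration):

      import numpy as np

      def train_velocity(p_stagnation, rho=1.225):
          """Velocity from stagnation (dynamic) pressure: v = sqrt(2*p/rho)."""
          return np.sqrt(2.0 * p_stagnation / rho)

      def drag_coefficient(v, t, mass, area, rho=1.225):
          """Cd from Newton's second law, m*dv/dt = -0.5*rho*v^2*Cd*A (no propulsion after ejection)."""
          a = np.gradient(v, t)   # first derivative of the velocity time history
          return -2.0 * mass * a / (rho * v**2 * area)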

  16. Evaluating a technical university's placement test using the Rasch measurement model

    Science.gov (United States)

    Salleh, Tuan Salwani; Bakri, Norhayati; Zin, Zalhan Mohd

    2016-10-01

    This study discusses the process of validating a mathematics placement test at a technical university. The main objective is to produce a valid and reliable test to measure students' prerequisite knowledge for learning engineering technology mathematics. It is crucial to have a valid and reliable test, as the results will be used in a critical decision to assign students to different groups of Technical Mathematics 1. The placement test, which consists of 50 mathematics questions, was administered to 82 new diploma in engineering technology students at a technical university. This study employed the Rasch measurement model to analyze the data through the Winsteps software. The results revealed that ten test questions have difficulty levels below the ability of even the less able students. Nevertheless, all ten questions satisfied the standard infit and outfit values. Thus, all the questions can be reused in future placement tests at the technical university.

  17. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach.

    Science.gov (United States)

    Campitelli, Guillermo; Gerrans, Paul

    2014-04-01

    We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick, Journal of Economic Perspectives, 19, 25-42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research.
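
    One plausible reading of the two-parameter structure described above, shown only as an illustrative sketch (the exact likelihood specification is given in the paper itself):

      def crt_item_probs(i, m):
          """Response probabilities for one CRT item, given:
          i = probability of inhibiting the intuitive response,
          m = probability of applying an adequate mathematical procedure."""
          return {
              "intuitive_error": 1.0 - i,         # intuitive response not inhibited
              "correct":         i * m,           # inhibited, then solved correctly
              "other_error":     i * (1.0 - m),   # inhibited, but the math fails
          }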

  18. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    CERN Document Server

    Aghamousa, Amir

    2015-01-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances), based on Monte Carlo simulations, in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit $\Lambda$CDM model at $95\% (\sim 2\sigma)$ confidence distance from the center of the nonparametri...

  19. Precision measurements of θ12 for testing models of discrete leptonic flavour symmetries

    CERN Document Server

    Ballett, Peter; Luhn, Christoph; Pascoli, Silvia; Schmidt, Michael A

    2014-01-01

    Models of leptonic flavour with discrete symmetries can provide an attractive explanation of the pattern of elements found in the leptonic mixing matrix. The next generation of neutrino oscillation experiments will allow the mixing parameters to be tested to a new level of precision, crucially measuring the CP-violating phase δ for the first time. In this contribution, we present results of a systematic survey of the predictions of a class of models based on residual discrete symmetries and the prospects for excluding such models at medium- and long-term oscillation experiments. We place particular emphasis on the complementary role that a future circa 50 km reactor experiment, e.g. JUNO, can play in constraining these models.

  20. Extrapolation of model tests measurements of whipping to identify the dimensioning sea states for container ships

    DEFF Research Database (Denmark)

    Storhaug, Gaute; Andersen, Ingrid Marie Vincent

    2015-01-01

    Whipping can contribute to increased fatigue and extreme loading of container ships, and guidelines have been made available by the leading class societies. Reports concerning the hogging collapse of MSC Napoli and MOL Comfort suggest that whipping contributed. The accidents happened in moderate… to small storms. Model tests of three container ships have been carried out in different sea states under realistic assumptions. Preliminary extrapolation of the measured data suggested that moderate storms are dimensioning when whipping is included due to higher maximum speed in moderate storms...

  1. A TEST OF COSMOLOGICAL MODELS USING HIGH-z MEASUREMENTS OF H(z)

    Energy Technology Data Exchange (ETDEWEB)

    Melia, Fulvio [Department of Physics, The Applied Math Program, and Department of Astronomy, The University of Arizona, AZ 85721 (United States); McClintock, Thomas M., E-mail: fmelia@email.arizona.edu, E-mail: tmcclintock89@gmail.com [Department of Physics, The University of Arizona, AZ 85721 (United States)

    2015-10-15

    The recently constructed Hubble diagram using a combined sample of SNLS and SDSS-II SNe Ia, and an application of the Alcock–Paczyński (AP) test using model-independent Baryon Acoustic Oscillation (BAO) data, have suggested that the principal constraint underlying the cosmic expansion is the total equation-of-state of the cosmic fluid, rather than that of its dark energy. These studies have focused on the critical redshift range (0 ≲ z ≲ 2) within which the transition from decelerated to accelerated expansion is thought to have occurred, and they suggest that the cosmic fluid has zero active mass, consistent with a constant expansion rate. The evident impact of this conclusion on cosmological theory calls for an independent confirmation. In this paper, we carry out this crucial one-on-one comparison between the R_h = ct universe (a Friedmann–Robertson–Walker cosmology with zero active mass) and wCDM/ΛCDM, using the latest high-z measurements of H(z). Whereas the SNe Ia yield the integrated luminosity distance, while the AP diagnostic tests the geometry of the universe, the Hubble parameter directly samples the expansion rate itself. We find that the model-independent cosmic chronometer data prefer R_h = ct over wCDM/ΛCDM with a Bayes Information Criterion likelihood of ∼95% versus only ∼5%, in strong support of the earlier SNe Ia and AP results. This contrasts with a recent analysis of H(z) data based solely on BAO measurements which, however, strongly depend on the assumed cosmology. We discuss why the latter approach is inappropriate for model comparisons, and emphasize again the need for truly model-independent observations to be used in cosmological tests.
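
    For orientation, the Bayes Information Criterion comparison quoted above is of the following standard form; a minimal sketch (function and variable names are illustrative):

      import numpy as np

      def bic(log_likelihood, k, n):
          """Bayes Information Criterion: BIC = -2 ln L_max + k ln n."""
          return -2.0 * log_likelihood + k * np.log(n)

      def bic_weights(bics):
          """Relative model likelihoods from Delta-BIC; figures such as ~95% vs ~5% are of this kind."""
          delta = np.asarray(bics, dtype=float) - np.min(bics)
          w = np.exp(-0.5 * delta)
          return w / w.sum()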

  2. Lessons from wet gas flow metering systems using differential measurements devices: Testing and flow modelling results

    Energy Technology Data Exchange (ETDEWEB)

    Cazin, J.; Couput, J.P.; Dudezert, C. et al

    2005-07-01

    A significant number of wet gas meters used for high GVF and very high GVF are based on differential pressure (DP) measurements. Recent high-pressure tests performed on a variety of different DP devices on different flow loops are presented. The application of existing correlations is discussed for several DP devices, including Venturi meters. For Venturi meters, deviations vary from 9% when using the Murdock correlation to less than 3% with physics-based models. The use of DP systems over a large domain of conditions, especially for liquid estimation, will require information on the Water Liquid Ratio (WLR). This obviously raises the question of the gas and liquid flow metering accuracy in wet gas meters and highlights the need to understand ΔP system behaviour in wet gas flows (annular / mist / annular-mist). As an example, experimental results obtained on the influence of liquid film characteristics on a Venturi meter are presented. Visualizations of the film upstream of and inside the Venturi meter are shown. They are complemented by film characterization. The ΔP measurements indicate that, for the same Lockhart-Martinelli parameter, the characteristics of the two-phase flow have a major influence on the correlation coefficient. A 1D model is defined and the results are compared with the experiments. These results indicate that the flow regime influences the ΔP measurements and that better modelling of the flow phenomena is needed, even for allocation purposes. Based on that, lessons learned and the way forward for improving wet gas metering systems for allocation and well metering are discussed and proposed. (author)
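
    For reference, the Murdock-type correction mentioned above is commonly written as a simple over-reading factor in the Lockhart-Martinelli parameter X; a hedged sketch (the 1.26 coefficient is the value usually quoted for orifice plates and is used here purely as an illustration, not as the correlation fitted in this work):

      def murdock_gas_massflow(m_apparent, x_lm, c=1.26):
          """Correct the apparent (over-read) gas mass flow for wet gas:
          m_gas = m_apparent / (1 + c * X_LM)."""
          return m_apparent / (1.0 + c * x_lm)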

  3. Evaluation of electrolytic tilt sensors for measuring model angle of attack in wind tunnel tests

    Science.gov (United States)

    Wong, Douglas T.

    1992-01-01

    The results of a laboratory evaluation of electrolytic tilt sensors as potential candidates for measuring model attitude or angle of attack in wind tunnel tests are presented. The performance of eight electrolytic tilt sensors was compared with that of typical servo accelerometers used for angle-of-attack measurements. The areas evaluated included linearity, hysteresis, repeatability, temperature characteristics, roll-on-pitch interaction, sensitivity to lead-wire resistance, step response time, and rectification. Among the sensors being evaluated, the Spectron model RG-37 electrolytic tilt sensors have the highest overall accuracy in terms of linearity, hysteresis, repeatability, temperature sensitivity, and roll sensitivity. A comparison of the sensors with the servo accelerometers revealed that the accuracy of the RG-37 sensors was on the average about one order of magnitude worse. Even though a comparison indicates that the cost of each tilt sensor is about one-third the cost of each servo accelerometer, the sensors are considered unsuitable for angle-of-attack measurements. However, the potential exists for other applications such as wind tunnel wall-attitude measurements where the errors resulting from roll interaction, vibration, and response time are less and sensor temperature can be controlled.

  4. Nondestructive Testing of Metallic Cables Based on a Homogenized Model and Global Measurements

    Directory of Open Access Journals (Sweden)

    Valdemar Melicher

    2010-01-01

    We propose a simple, quick, and cost-effective method for nondestructive eddy-current testing of metallic cables. Inclusions in the cross section of the cable are detected on the basis of certain global data: hysteresis loop measurements at different frequencies. We detect air-gap inclusions inside the cross section using a homogenized model. The problem, which can be understood as an inverse spectral problem, is posed in two dimensions. We consider its reduction to one dimension. The identifiability is studied. We obtain a uniqueness result for a single inclusion in 1D from two measurements at sufficiently low frequency. We study the sensitivity of the inverse problem numerically. A case study with real data is performed to confirm the usefulness of the method.

  5. Detailed measurements and modelling of thermo active components using a room size test facility

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes an investigation of thermo active components based on prefabricated hollow core concrete decks. Recent years have seen an increased awareness of the use of thermo active components as an alternative to mechanical cooling systems in office buildings. The investigation covers… measurements in an office-sized test facility with thermo active ceiling and floor, as well as modelling of similar conditions in a computer program designed for analysis of building-integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described… based on measurements of the energy balance of the thermo active deck. A cooling capacity of around 60 W/m² at a temperature difference of 10 K between room and fluid temperature has been found. It is also shown that installing a lowered acoustic ceiling covering around 50% of the ceiling surface area...

  6. Testing of a measurement model for baccalaureate nursing students' self-evaluation of core competencies.

    Science.gov (United States)

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2009-11-01

    This paper is a report of a study to test the psychometric properties of the Self-Evaluated Core Competencies Scale for baccalaureate nursing students. Baccalaureate nursing students receive basic nursing education and continue to build competency in practice settings after graduation. Nursing students today face great challenges. Society demands analytic, critical, reflective and transformative attitudes from graduates. It also demands that institutions of higher education take the responsibility to encourage students, through academic work, to acquire knowledge and skills that meet the needs of the modern workplace, which favours highly skilled and qualified workers. A survey of 802 senior nursing students in their last semester at college or university was conducted in Taiwan in 2007 using the Self-Evaluated Core Competencies Scale. Half of the participants were randomly assigned to principal components analysis with varimax rotation and the other half to confirmatory factor analysis. Principal components analysis revealed two components of core competencies, named humanity/responsibility and cognitive/performance. The initial confirmatory factor analysis model converged to an acceptable solution but did not show a good fit; the final model, however, converged to an acceptable solution with acceptable fit. The final model has two components, namely humanity/responsibility and cognitive/performance. Both components have four indicators. In addition, six indicators have correlated measurement errors. The Self-Evaluated Core Competencies Scale could be used to assess the core competencies of undergraduate nursing students. In addition, it should be used as a teaching guide to increase students' competencies to ensure quality patient care in hospitals.

  7. Using a Differential Emission Measure and Density Measurements in an Active Region Core to Test a Steady Heating Model

    Science.gov (United States)

    Winebarger, Amy R.; Schmelz, Joan T.; Warren, Harry P.; Saar, Steve H.; Kashyap, Vinay L.

    2011-10-01

    The frequency of heating events in the corona is an important constraint on the coronal heating mechanisms. Observations indicate that the intensities and velocities measured in active region cores are effectively steady, suggesting that heating events occur rapidly enough to keep high-temperature active region loops close to equilibrium. In this paper, we couple observations of active region (AR) 10955 made with the X-Ray Telescope and the EUV Imaging Spectrometer on board Hinode to test a simple steady heating model. First we calculate the differential emission measure (DEM) of the apex region of the loops in the active region core. We find the DEM to be broad and peaked around 3 MK. We then determine the densities in the corresponding footpoint regions. Using potential field extrapolations to approximate the loop lengths and the density-sensitive line ratios to infer the magnitude of the heating, we build a steady heating model for the active region core and find that we can match the general properties of the observed DEM for the temperature range of 6.3 accounts for the base pressure, loop length, and distribution of apex temperatures of the core loops. We find that the density-sensitive spectral line intensities and the bulk of the hot emission in the active region core are consistent with steady heating. We also find, however, that the steady heating model cannot address the emission observed at lower temperatures. This emission may be due to foreground or background structures, or may indicate that the heating in the core is more complicated. Different heating scenarios must be tested to determine if they have the same level of agreement.

  8. Using a Differential Emission Measure and Density Measurements in an Active Region Core to Test a Steady Heating Model

    CERN Document Server

    Winebarger, Amy; Warren, Harry; Saar, Steve; Kashyap, Vinay

    2011-01-01

    The frequency of heating events in the corona is an important constraint on the coronal heating mechanisms. Observations indicate that the intensities and velocities measured in active region cores are effectively steady, suggesting that heating events occur rapidly enough to keep high temperature active region loops close to equilibrium. In this paper, we couple observations of Active Region 10955 made with XRT and EIS on Hinode to test a simple steady heating model. First we calculate the differential emission measure of the apex region of the loops in the active region core. We find the DEM to be broad and peaked around 3 MK. We then determine the densities in the corresponding footpoint regions. Using potential field extrapolations to approximate the loop lengths and the density-sensitive line ratios to infer the magnitude of the heating, we build a steady heating model for the active region core and find that we can match the general properties of the observed DEM for the temperature range of 6...

  9. Social Cognitive Model of College Satisfaction: A Test of Measurement and Path Models

    Science.gov (United States)

    Feldt, Ronald C.

    2012-01-01

    The study examined a model that integrates social-cognitive and trait-personality constructs to examine two domains of college satisfaction. Direct and indirect effects were observed for conscientiousness, perception of institutional resources, self-efficacy, and goal progress. Paths differed for personal and institutional satisfaction. Most…

  10. Uncertainties in façade fire testsmeasurements and modeling

    Directory of Open Access Journals (Sweden)

    Anderson Johan

    2016-01-01

    In this paper a comparison between test and modelling results is performed for two large-scale façade fire testing methods, namely SP Fire 105 and BS 8414-1. In order to be able to compare tests and modelling, the uncertainties have to be quantified both in the tests and in the modelling. Here we present a methodology based on deterministic sampling to quantify uncertainties in the modelling input. We find, in general, good agreement between the models and the test results. Moreover, temperatures estimated by plate thermometers are indicated to be less sensitive to small variations in model input and are thus suitable for this kind of comparison.

  11. Bayesian tests of measurement invariance.

    Science.gov (United States)

    Verhagen, A J; Fox, J P

    2013-11-01

    Random item effects models provide a natural framework for the exploration of violations of measurement invariance without the need for anchor items. Within the random item effects modelling framework, Bayesian tests (Bayes factor, deviance information criterion) are proposed which enable multiple marginal invariance hypotheses to be tested simultaneously. The performance of the tests is evaluated with a simulation study which shows that the tests have high power and low Type I error rate. Data from the European Social Survey are used to test for measurement invariance of attitude towards immigrant items and to show that background information can be used to explain cross-national variation in item functioning.

  12. Development of Neural Network Model for Predicting Peak Ground Acceleration Based on Microtremor Measurement and Soil Boring Test Data

    National Research Council Canada - National Science Library

    Kerh, T; Lin, J. S; Gunaratnam, D

    2012-01-01

    … This paper is therefore aimed at developing a neural network model, based on available microtremor measurement and on-site soil boring test data, for predicting peak ground acceleration at a site...

  13. Soil water dynamics at a midlatitude test site: Field measurements and box modeling approaches

    NARCIS (Netherlands)

    Baudena, M.; Bevilacqua, I.; Canone, D.; Ferraris, S.; Previati, M.; Provenzale, A.

    2012-01-01

    We test the ability of three box models (Milly, 1993; Kim et al., 1996; Laio et al., 2001b) to describe soil moisture dynamics in a regularly monitored experimental site in northwestern Italy. The models include increasingly complex representations of leakage and evapotranspiration processes. We for

  14. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
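
    In essence, the algebraic adjustment discussed above rescales a fit chi-square from the actual sample size to a nominal one; a schematic sketch only (this simplifies what the RUMM software actually implements):

      def adjusted_chi_square(chi2_observed, n_actual, n_nominal=500):
          """Rescale a fit chi-square to a nominal sample size (schematic;
          chi-square fit statistics grow roughly linearly with sample size)."""
          return chi2_observed * (n_nominal / n_actual)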

  15. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa, and not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
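
    The general log-likelihood ratio statistic referred to above has the standard form

      G = 2 \sum_i O_i \ln\!\left(\frac{O_i}{E_i}\right)

    where O_i and E_i are the observed and expected counts of each site pattern under the multinomial and tree models, respectively (a textbook form, stated here only for orientation).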

  16. Advantages of the Rasch Measurement Model in Analysing Educational Tests: An Applicator's Reflection

    Science.gov (United States)

    Tormakangas, Kari

    2011-01-01

    Educational achievement is a very important issue for parents, teachers, and the government. An accurate measurement plays a very important role in evaluating achievement fairly, and, therefore, analysis methods have been developed considerably in recent years. Education based on long-time learning processes forms a fruitful base for item tests,…

  17. Measuring Japanese EFL Student Perceptions of Internet-Based Tests with the Technology Acceptance Model

    Science.gov (United States)

    Dizon, Gilbert

    2016-01-01

    The Internet has made it possible for teachers to administer online assessments with affordability and ease. However, little is known about Japanese English as a Foreign Language (EFL) students' attitudes of internet-based tests (IBTs). Therefore, this study aimed to measure the perceptions of IBTs among Japanese English language learners with the…

  18. Psychometric Characteristics of a Measure of Emotional Dispositions Developed to Test a Developmental Propensity Model of Conduct Disorder

    Science.gov (United States)

    Lahey, Benjamin B.; Applegate, Brooks; Chronis, Andrea M.; Jones, Heather A.; Williams, Stephanie Hall; Loney, Jan; Waldman, Irwin D.

    2008-01-01

    Lahey and Waldman proposed a developmental propensity model in which three dimensions of children's emotional dispositions are hypothesized to transact with the environment to influence risk for conduct disorder, heterogeneity in conduct disorder, and comorbidity with other disorders. To prepare for future tests of this model, a new measure of…

  19. Bayesian tests of measurement invariance

    NARCIS (Netherlands)

    Verhagen, A.J.; Fox, J.P.

    2013-01-01

    Random item effects models provide a natural framework for the exploration of violations of measurement invariance without the need for anchor items. Within the random item effects modelling framework, Bayesian tests (Bayes factor, deviance information criterion) are proposed which enable multiple m

  20. Testing the Standard Model by precision measurement of the weak charges of quarks

    Energy Technology Data Exchange (ETDEWEB)

    Ross Young; Roger Carlini; Anthony Thomas; Julie Roche

    2007-05-01

    In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model, setting a model-independent lower bound on the scale of new physics at ~1 TeV.

  1. Pilot Wave Model for Impulsive Thrust from RF Test Device Measured in Vacuum

    Science.gov (United States)

    White, Harold; Lawrence, James; Sylvester, Andre; Vera, Jerry; Chap, Andrew; George, Jeff

    2017-01-01

    A physics model is developed in detail and its place in the taxonomy of ideas about the nature of the quantum vacuum is discussed. The experimental results from the recently completed vacuum test campaign evaluating the impulsive thrust performance of a tapered RF test article excited in the TM212 mode at 1,937 megahertz (MHz) are summarized. The empirical data from this campaign is compared to the predictions from the physics model tools. A discussion is provided to further elaborate on the possible implications of the proposed model if it is physically valid. Based on the correlation of analysis prediction with experimental data collected, it is proposed that the observed anomalous thrust forces are real, not due to experimental error, and are due to a new type of interaction with quantum vacuum fluctuations.

  2. Correction of magnetotelluric static shift by analysis of 3D forward modelling and measured test data

    Science.gov (United States)

    Zhang, Kun; Wei, Wenbo; Lu, Qingtian; Wang, Huafeng; Zhang, Yawei

    2016-06-01

    To solve the problem of correction of magnetotelluric (MT) static shift, we quantise factors that influence geological environments and observation conditions and study MT static shift according to 3D MT numerical forward modelling and field tests with real data collection. We find that static shift distortions affect both the apparent resistivity and the impedance phase. The distortion results are also related to the frequency. On the basis of synthetic and real data analysis, we propose the concept of generalised static shift resistivity (GSSR) and a new method for correcting MT static shift. The approach is verified by studying 2D inversion models using synthetic and real data.

  3. Modified likelihood ratio tests in heteroskedastic multivariate regression models with measurement error

    CERN Document Server

    Melo, Tatiane F N; Patriota, Alexandre G

    2012-01-01

    In this paper, we develop a modified version of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class of distributions, which has the normal distribution as a special case. We derive the Skovgaard adjusted likelihood ratio statistic, which follows a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed test displays superior finite sample behavior as compared to the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

  4. Testing the standard model by precision measurement of the weak charges of quarks.

    Science.gov (United States)

    Young, R D; Carlini, R D; Thomas, A W; Roche, J

    2007-09-21

    In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, places tight constraints on the size of possible contributions from physics beyond the standard model. Consequently, this result improves the lower-bound on the scale of relevant new physics to approximately 1 TeV.

  5. New porcine test-model reveals remarkable differences between algorithms for spectrophotometrical haemoglobin saturation measurements with VLS

    DEFF Research Database (Denmark)

    Gade, John; Greisen, Gorm

    2016-01-01

    The study created an 'ex vivo' model to test different algorithms for measurements of mucosal haemoglobin saturation with visible light spectrophotometry (VLS). The model allowed comparison between algorithms, but it also allowed comparison with co-oximetry using a 'gold standard' method. This has… -32.8 to +29.9 percentage points and from -5.0 to +9.2 percentage points, respectively. CONCLUSION: the algorithms showed remarkable differences between one another when tested on raw spectra from an 'ex vivo' model. All algorithms had bias, more marked at high oxygenation than at low oxygenation. Three...

  6. Testing of models of stomatal ozone fluxes with field measurements in a mixed Mediterranean forest

    Science.gov (United States)

    Fares, S.; Matteucci, G.; Scarascia Mugnozza, G.; Morani, A.; Calfapietra, C.; Salvatori, E.; Fusaro, L.; Manes, F.; Loreto, F.

    2013-03-01

    Mediterranean forests close to urban areas are exposed to polluted plumes loaded with tropospheric ozone. This is the case of the Castelporziano Estate, a 6000 ha Mediterranean forest 25 km from downtown Rome on the coast of the Mediterranean Sea. In September 2011 we started an intensive field campaign aimed at investigating ozone deposition in a mixed Mediterranean forest, mainly composed of Quercus suber, Quercus ilex and Pinus pinea. Measurements at canopy level with the eddy covariance technique were supported by a vegetation survey and the measurement of all environmental parameters, which allowed us to calculate stomatal ozone fluxes. Leaf-level measurements were used to parameterize models for calculating stomatal conductance based on Jarvis-type and Ball-Berry approaches. We show changes in the magnitude of ozone fluxes from a warm (September) to a cold period (October-December). The stomatal component explained almost all of the ozone flux during cold days, but contributed only up to 50% of total ozone deposition during warm days, suggesting that other sinks (e.g. chemistry in the gas phase) play a major role. Modeled stomatal ozone fluxes based on a Jarvis-type approach (DO3SE) correlated with measured fluxes better than those based on a Ball-Berry approach. A third model based on a modified Ball-Berry equation is proposed to account for the non-linear dependency of stomatal conductance on relative humidity. This research will help the development of metrics for ozone-risk assessment and advance our understanding of the role of mixed Mediterranean forests in biosphere-atmosphere exchange.
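
    The two conductance formulations compared above have the following generic forms; a minimal sketch with illustrative parameter values (g0, g1 and the stress functions are placeholders, not the values fitted in this study):

      def jarvis_gs(gs_max, f_par, f_temp, f_vpd, f_swc):
          """Jarvis-type multiplicative model: each stress function f() is scaled to [0, 1]."""
          return gs_max * f_par * f_temp * f_vpd * f_swc

      def ball_berry_gs(a_net, rh_surface, co2_surface, g0=0.01, g1=9.0):
          """Ball-Berry model: gs = g0 + g1 * A_net * h_s / C_s."""
          return g0 + g1 * a_net * rh_surface / co2_surface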

  7. Do Two or More Multicomponent Instruments Measure the Same Construct? Testing Construct Congruence Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Tong, Bing

    2016-01-01

    A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…

  8. Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members

    NARCIS (Netherlands)

    Rusman, Ellen; Van Bruggen, Jan; Valcke, Martin

    2009-01-01

    Rusman, E., Van Bruggen, J., & Valcke, M. (2009). Empirical Testing of a Conceptual Model and Measurement Instrument for the Assessment of Trustworthiness of Project Team Members. Paper presented at the Trust Workshop at the Eighth International Conference on Autonomous Agents and Multiagent Systems

  9. Finite Element Modeling of the Bulk Magnetization of Railroad Wheels to Improve Test Conditions for Magnetoacoustic Residual Stress Measurements

    Science.gov (United States)

    Fulton, J. P.; Wincheski, B.; Namkung, M.; Utrata, D.

    1992-01-01

    The magnetoacoustic measurement technique has been used successfully for residual stress measurements in laboratory samples. However, when used to field test samples with complex geometries, such as railroad wheels, the sensitivity of the method declines dramatically. It has been suggested that the decrease in performance may be due, in part, to an insufficient or nonuniform magnetic induction in the test sample. The purpose of this paper is to optimize the test conditions by using finite element modeling to predict the distribution of the induced bulk magnetization of railroad wheels. The results suggest that it is possible to obtain a sufficiently large and uniform bulk magnetization by altering the shape of the electromagnet used in the tests. Consequently, problems associated with bulk magnetization can be overcome, and should not prohibit the magnetoacoustic technique from being used to make residual stress measurements in railroad wheels. We begin by giving a brief overview of the magnetoacoustic technique as it applies to residual stress measurements of railroad wheels. We then define the finite element model used to predict the behavior of the current test configuration along with the nonlinear constitutive relations which we obtained experimentally through measurements on materials typically used to construct both railroad wheels and electromagnets. Finally, we show that by modifying the pole of the electromagnet it is possible to obtain a significantly more uniform bulk magnetization in the region of interest.

  10. Modeling and Experimental Tests of a Mechatronic Device to Measure Road Profiles Considering Impact Dynamics

    DEFF Research Database (Denmark)

    Souza, A.; Santos, Ilmar

    2002-01-01

    Vehicles travel at different speeds and, as a consequence, experience a broad spectrum of vibrations. One of the most important sources of vehicle vibration is the road profile. Hence the knowledge of the characteristics of a road profile enables engineers to predict the dynamic behavior… to highlight that the aim of this device is to independently measure two road profiles, without the influence of the dynamics of the vehicle to which the mechanism is attached. Before the mechatronic mechanism is attached to a real vehicle, its dynamic behavior must be known. A theoretical analysis of the mechanism… the mechanism components. By modeling impacts between a wheel and the road using Newton's law, the complete dynamics of the system can be predicted, and the operational range (velocity limits) of the mechanism can be defined based on the mathematical model. Key words: multibody dynamics, impact dynamics and road...

  11. Does Test Anxiety Induce Measurement Bias in Cognitive Ability Tests?

    Science.gov (United States)

    Reeve, Charlie L.; Bonaccio, Silvia

    2008-01-01

    Although test anxiety is typically negatively related to performance on cognitive ability tests, little research has systematically investigated whether differences in test anxiety result in measurement bias on cognitive ability tests. The current paper uses a structural equation modeling technique to explicitly test for measurement bias due to…

  12. Testing different decoupling coefficients with measurements and models of contrasting canopies and soil water conditions

    Directory of Open Access Journals (Sweden)

    V. Goldberg

    2008-07-01

    Four different approaches for the calculation of the well-established decoupling coefficient Ω are compared using measurements at three experimental sites (Tharandt – spruce forest; Grillenburg and Melpitz – grass) and simulations from the soil-vegetation boundary layer model HIRVAC. These investigations aimed to quantify differences between the calculation routines regarding their ability to describe the vegetation-atmosphere coupling of grass and forest with and without water stress.

    The HIRVAC model used here is a vertically highly resolved atmospheric boundary layer model that includes vegetation. It is coupled with a single-leaf gas exchange model to simulate physiologically based reactions of different vegetation types to changing atmospheric conditions. A multilayer soil water module and a functional parameterisation form the basis for linking the stomatal reaction of the gas exchange model to changes in soil water.

    The omega factor was calculated for the basic formulation according to McNaughton and Jarvis (1983) and three modifications. To compare measurements and simulations for the above-mentioned spruce and grass sites, the summer period of 2007 as well as a dry period in June 2000 were used. Additionally, a developing water stress situation was simulated for three forest canopies (spruce, pine and beech) and for a grass site. The results showed large differences between the omega approaches, which depend on the vegetation type and the soil moisture.

    The ranking of the omega values calculated with the different approaches was always the same, not only for the measurements but also for the adapted simulations. The lowest values came from the first modification, which includes doubling factors and summands in all parts of the omega equation relative to the original approach, and the highest values were calculated with the second modification, which is missing one doubling factor in the denominator of the
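
    For reference, the basic McNaughton and Jarvis (1983) decoupling coefficient (the modifications tested above alter individual factors in this expression) is usually written as

      \Omega = \frac{1 + \varepsilon}{1 + \varepsilon + g_a / g_c}, \qquad \varepsilon = \frac{s}{\gamma}

    where s is the slope of the saturation vapour pressure curve, γ the psychrometric constant, g_a the aerodynamic conductance and g_c the canopy (surface) conductance.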

  13. Testing Yasso07 and CENTURY soil C models with boreal forest soil C stocks and CO2 efflux measurements

    Science.gov (United States)

    Tupek, Boris; Peltoniemi, Mikko; Launiainen, Samuli; Kulmala, Liisa; Penttilä, Timo; Lehtonen, Aleksi

    2017-04-01

    Soil C models need further development, especially in terms of factors influencing the spatial variability of soil C stocks and soil C stock changes. In this study we tested the estimates of soil C stocks and C stock changes of two widely used soil C models (Yasso07 and CENTURY) against measurements of the boreal forest soil C stock and CO2 efflux at four forest sites in Finland. In addition, we evaluated the effects of using coarse versus detailed meteorological, soil, and plant litter input data on modeled monthly CO2 estimates. We found that the CO2 estimates of both models showed a seasonal CO2 efflux pattern similar to the upscaled monthly measurements, regardless of whether the models used soil properties as input data. Winter and early summer CO2 fluxes agreed somewhat better between estimates and measurements than the summer CO2 peaks and autumn CO2 levels, which were underestimated by the models. Both models also underestimated equilibrium soil carbon (SOC) stocks, although the SOC stocks of CENTURY were larger than those of Yasso07. CENTURY was more sensitive than Yasso07 to variation in the meteorological input data and also to the functional form of the temperature response of decomposition. In conclusion, for modeling boreal forest soil C, Yasso07 would benefit from including soil properties in the model structure, while CENTURY would benefit from a reformulation of its temperature and moisture functions.

  14. Development of Neural Network Model for Predicting Peak Ground Acceleration Based on Microtremor Measurement and Soil Boring Test Data

    Directory of Open Access Journals (Sweden)

    T. Kerh

    2012-01-01

    Full Text Available It may not be possible to collect adequate records of strong ground motions in a short period of time; hence microtremor survey is frequently conducted to reveal the stratum structure and earthquake characteristics at a specified construction site. This paper is therefore aimed at developing a neural network model, based on available microtremor measurement and on-site soil boring test data, for predicting peak ground acceleration at a site, in a science park of Taiwan. The four key parameters used as inputs for the model are soil values of the standard penetration test, the medium grain size, the safety factor against liquefaction, and the distance between soil depth and measuring station. The results show that a neural network model with four neurons in the hidden layer can achieve better performance than other models presently available. Also, a weight-based neural network model is developed to provide reliable prediction of peak ground acceleration at an unmeasured site based on data at three nearby measuring stations. The method employed in this paper provides a new way to treat this type of seismic-related problem, and it may be applicable to other areas of interest around the world.
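
    As an illustration of the network architecture described above (four inputs, one hidden layer with four neurons, a single peak ground acceleration output), a minimal sketch in Python is given below. The layer sizes follow the abstract, but the training values, the scaling step and the library choice (scikit-learn) are illustrative assumptions, not details taken from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      # Hypothetical training rows with the four inputs named in the abstract:
      # SPT N-value, median grain size D50 [mm], factor of safety against
      # liquefaction, and distance between soil depth and measuring station [km].
      X = np.array([[12, 0.25, 1.1, 3.2],
                    [30, 0.40, 1.8, 5.0],
                    [ 8, 0.15, 0.9, 1.5],
                    [22, 0.30, 1.4, 4.1]])
      y = np.array([0.18, 0.09, 0.24, 0.12])   # peak ground acceleration [g], invented

      scaler = StandardScaler()
      Xs = scaler.fit_transform(X)

      # One hidden layer with four neurons, as in the model the abstract describes.
      model = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                           solver="lbfgs", max_iter=5000, random_state=0)
      model.fit(Xs, y)
      print(model.predict(scaler.transform([[15, 0.28, 1.2, 2.8]])))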

  15. Measuring English Language Workplace Proficiency across Subgroups: Using CFA Models to Validate Test Score Interpretation

    Science.gov (United States)

    Yoo, Hanwook; Manna, Venessa F.

    2017-01-01

    This study assessed the factor structure of the Test of English for International Communication (TOEIC®) Listening and Reading test, and its invariance across subgroups of test-takers. The subgroups were defined by (a) gender, (b) age, (c) employment status, (d) time spent studying English, and (e) having lived in a country where English is the…

  17. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  18. Advantages of a 3-parameter reduced constitutive model for the measurement of polymers elastic modulus using tensile tests

    Science.gov (United States)

    Blaise, A.; André, S.; Delobelle, P.; Meshaka, Y.; Cunat, C.

    2016-11-01

    Exact measurements of the rheological parameters of time-dependent materials are crucial to improve our understanding of their intimate relation to the internal bulk microstructure. Concerning solid polymers and the apparently simple determination of Young's modulus in tensile tests, international standards rely on basic protocols that are known to lead to erroneous values. This paper describes an approach allowing a correct measurement of the instantaneous elastic modulus of polymers by a tensile test. It is based on the use of an appropriate reduced model to describe the behavior of the material up to great strains, together with well-established principles of parameter estimation in engineering science. These principles are objective tools that are used to determine which parameters of a model can be correctly identified according to the informational content of a given data set. The assessment of the methodology and of the measurements is accomplished by comparing the results with those obtained from two other physical experiments, probing the material response at small temporal and length scales, namely, ultrasound measurements with excitation at 5 MHz and modulated nanoindentation tests over a few nanometers of amplitude.
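
    The parameter estimation step described above can be sketched generically as a least-squares fit of a reduced constitutive model to tensile data. The snippet below fits a three-parameter standard linear solid (Zener) model under a constant strain rate; this is only a stand-in for the reduced model used by the authors, and the data values and strain rate are invented for illustration. The instantaneous modulus E0 is then one of the identified parameters.

      import numpy as np
      from scipy.optimize import curve_fit

      edot = 1e-3  # assumed constant strain rate [1/s]

      def zener_stress(t, E0, Einf, tau):
          """Stress response of a standard linear solid loaded at constant strain rate."""
          return edot * (Einf * t + (E0 - Einf) * tau * (1.0 - np.exp(-t / tau)))

      # Synthetic 'measured' stress-time data [MPa], illustrative only.
      t = np.linspace(0.0, 200.0, 60)
      sigma = zener_stress(t, 2800.0, 1200.0, 35.0) + np.random.normal(0.0, 0.5, t.size)

      (E0, Einf, tau), _ = curve_fit(zener_stress, t, sigma, p0=(2000.0, 1000.0, 20.0))
      print(f"instantaneous modulus E0 = {E0:.0f} MPa, long-term Einf = {Einf:.0f} MPa, tau = {tau:.1f} s")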

  19. Construction and Pre-Test of a Semantic Expressiveness Measure for Conceptual Models

    OpenAIRE

    Poels, G.; Maes, A.; Gailly, F.; Paemeleire, R.

    2004-01-01

    One quality attributed to McCarthy’s Resources-Events-Agents (REA) accounting model is semantic expressiveness. Compared to accounting models without a semantic orientation like the Debit-Credit-Account (DCA) model, the REA model is claimed to better represent the economic phenomena underlying an accounting system. The alleged benefits of this increased semantic expressiveness include easier integration with representations of non-accounting information and better user understanding of accoun...

  20. Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects

    Science.gov (United States)

    Lockwood, J. R.; McCaffrey, Daniel F.

    2014-01-01

    A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…

  1. Software Testing Measures

    Science.gov (United States)

    1982-05-01

    would also have to formulate his own test input data. In reality, structural and functional testing methods are available, and we deal with their...taken up describing the algorithms to detect these anomalies. The algorithms are based on a new data structure called a "process augmented

  2. Understanding water uptake in bioaerosols using laboratory measurements, field tests, and modeling

    Science.gov (United States)

    Chaudhry, Zahra; Ratnesar-Shumate, Shanna A.; Buckley, Thomas J.; Kalter, Jeffrey M.; Gilberry, Jerome U.; Eshbaugh, Jonathan P.; Corson, Elizabeth C.; Santarpia, Joshua L.; Carter, Christopher C.

    2013-05-01

    Uptake of water by biological aerosols can impact their physical and chemical characteristics. The water content in a bioaerosol can affect the backscatter cross-section as measured by LIDAR systems. Better understanding of the water content in controlled-release clouds of bioaerosols can aid in the development of improved standoff detection systems. This study includes three methods to improve understanding of how bioaerosols take up water. The laboratory method measures hygroscopic growth of biological material after it is aerosolized and dried. Hygroscopicity curves are created as the humidity is increased in small increments to observe the deliquescence point, then the humidity is decreased to observe the efflorescence point. The field component of the study measures particle size distributions of biological material disseminated into a large humidified chamber. Measurements are made with a Twin Aerodynamic Particle Sizer (APS, TSI, Inc.)-Relative Humidity apparatus, in which two APS units measure the same aerosol cloud side by side. The first operated under dry conditions by sampling downstream of desiccant dryers, while the second operated under ambient conditions. Relative humidity was measured within the sampling systems to determine the difference in the aerosol water content between the two sampling trains. The water content of the bioaerosols was calculated from the twin APS units following Khlystov et al. 2005 [1]. Biological material is measured dried and wet and compared to laboratory curves of the same material. Lastly, theoretical curves are constructed from literature values for components of the bioaerosol material.
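
    The dry/ambient comparison described above reduces to a simple volume balance: assuming spherical particles, the aerosol water volume fraction is the relative difference between the volume concentrations seen by the ambient and the dried APS. The sketch below illustrates only that bookkeeping; the size bins and counts are invented, and it is not the exact formulation of Khlystov et al. 2005 [1].

      import numpy as np

      # Hypothetical APS bin mid-point diameters [um] and number concentrations
      # [#/cm^3] from the ambient and the dried sampling trains.
      d = np.array([0.8, 1.0, 1.3, 1.6, 2.0, 2.5])
      n_ambient = np.array([120.0, 180.0, 210.0, 160.0, 90.0, 40.0])
      n_dry = np.array([150.0, 200.0, 190.0, 110.0, 50.0, 15.0])

      def volume_conc(d_um, n):
          """Total particle volume concentration, spheres assumed [um^3/cm^3]."""
          return np.sum(np.pi / 6.0 * d_um**3 * n)

      v_amb, v_dry = volume_conc(d, n_ambient), volume_conc(d, n_dry)
      print(f"aerosol water volume fraction = {(v_amb - v_dry) / v_amb:.2f}")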

  3. Comparing Science Virtual and Paper-Based Test to Measure Students’ Critical Thinking based on VAK Learning Style Model

    Science.gov (United States)

    Rosyidah, T. H.; Firman, H.; Rusyati, L.

    2017-02-01

    This research was comparing virtual and paper-based test to measure students’ critical thinking based on VAK (Visual-Auditory-Kynesthetic) learning style model. Quasi experiment method with one group post-test only design is applied in this research in order to analyze the data. There was 40 eight grade students at one of public junior high school in Bandung becoming the sample in this research. The quantitative data was obtained through 26 questions about living thing and environment sustainability which is constructed based on the eight elements of critical thinking and be provided in the form of virtual and paper-based test. Based on analysis of the result, it is shown that within visual, auditory, and kinesthetic were not significantly difference in virtual and paper-based test. Besides, all result was supported by quistionnaire about students’ respond on virtual test which shows 3.47 in the scale of 4. Means that student showed positive respond in all aspet measured, which are interest, impression, and expectation.

  4. Modeling and Experimental Tests of a Mechatronic Device to Measure Road Profiles Considering Impact Dynamics

    DEFF Research Database (Denmark)

    Souza, A.; Santos, Ilmar

    2002-01-01

    profile by means of gravitational and spring forces. Accelerometers are attached above the rolling wheels and the wheels follow the profile of the rough ground. After integrating the acceleration signal twice and measuring the vehicle displacement, the road profiles can be obtained. It is important...... to highlight that the aim of this device is to independently measure two road profiles, without the influence of the vehicle dynamics where the mechanism is attached. Before the mechatronic mechanism is attached to a real vehicle, its dynamic behavior must be known. A theoretical analysis of the mechanism......

  5. Testing the dark matter scenario in the inert doublet model by future precision measurements of the Higgs boson couplings

    Science.gov (United States)

    Kanemura, Shinya; Kikuchi, Mariko; Sakurai, Kodai

    2016-12-01

    We evaluate radiative corrections to the Higgs boson couplings in the inert doublet model, in which the lightest component of the Z2-odd scalar doublet field can be a dark matter candidate. The one-loop contributions to the hVV, hff, and hhh couplings are calculated in the on-shell scheme, where h is the Higgs boson with the mass 125 GeV, V represents a weak gauge boson, and f is a fermion. We investigate how the one-loop corrected Higgs boson couplings can be deviated from the predictions in the standard model under the constraints from perturbative unitarity and vacuum stability in the scenario where the model can explain current dark matter data. When the mass of the dark matter is slightly above a half of the Higgs boson mass, it would be difficult to test the model by the direct search experiments for dark matter. We find that in such a case the model can be tested at future collider experiments by either the direct search of heavier inert particles or precision measurements of the Higgs boson couplings.

  6. Testing the dark matter scenario in the inert doublet model by future precision measurements of the Higgs boson couplings

    CERN Document Server

    Kanemura, Shinya; Sakurai, Kodai

    2016-01-01

    We evaluate radiative corrections to the Higgs boson couplings in the inert doublet model, in which the lightest component of the $Z_2^{}$ odd scalar doublet field can be a dark matter candidate. The one-loop contributions to the $hVV$, $hff$ and $hhh$ couplings are calculated in the on-shell scheme, where $h$ is the Higgs boson with the mass 125 GeV, $V$ represents a weak gauge boson and $f$ is a fermion. We investigate how the one-loop corrected Higgs boson couplings can be deviated from the predictions in the standard model under the constraints from perturbative unitarity and vacuum stability in the scenario where the model can explain current dark matter data. When the mass of the dark matter is slightly above a half of the Higgs boson mass, it would be difficult to test the model by the direct search experiments for dark matter. We find that in such a case the model can be tested at future collider experiments by either the direct search of heavier inert particles or precision measurements of the Higgs ...

  7. Testing Earth System Model Assumptions of Photosynthetic Parameters with in situ Leaf Measurements from a Temperate Zone Forest.

    Science.gov (United States)

    Cheng, S. J.; Thomas, R. Q.; Wilkening, J. V.; Curtis, P.; Sharkey, T. D.; Nadelhoffer, K. J.

    2015-12-01

    Estimates of global land CO2 uptake vary widely across Earth system models. This uncertainty around model estimates of land-atmosphere CO2 fluxes may result from differences in how models parameterize and scale photosynthesis from the leaf-to-global level. To test model assumptions about photosynthesis, we derive rates of maximum carboxylation (Vc,max), electron transport (J), and triose phosphate utilization (TPU) from in situ leaf measurements from a forest representative of the Great Lakes region. Leaf-level gas exchange measurements were collected across a temperature range from sun and shade leaves of canopy-dominant tree species typically grouped into the same plant functional type. We evaluate the influence of short-term increases in leaf temperature, nitrogen per leaf area (Narea), species, and leaf light environment on Vc,max, J, and TPU by testing contrasting model equations that isolate the influence of these factors on these rate-limiting steps in leaf photosynthesis. Results indicate that patterns in Vc,max are best explained by a model that includes temperature and Narea. However, J varied with species and leaf light environment in addition to temperature. TPU also varied with leaf light environment and possibly with temperature. These variations in J and TPU with species or between sun and shade leaves suggest that plant traits outside of Narea are needed to explain patterns in J and TPU. This study provides in situ evidence on how Vc,max, J, and TPU vary within a forest canopy and highlights how leaf responses to changes in climate, forest species composition, and canopy structure may alter forest CO2 uptake.

  8. Understanding Solar Eruptions with SDO/HMI Measuring Photospheric Flows, Testing Models, and Steps Towards Forecasting Solar Eruptions

    Science.gov (United States)

    Schuck, Peter W.; Linton, Mark; Muglach, Karin; Welsch, Brian; Hageman, Jacob

    2010-01-01

    The imminent launch of the Solar Dynamics Observatory (SDO) will carry the first full-disk imaging vector magnetograph, the Helioseismic and Magnetic Imager (HMI), into an inclined geosynchronous orbit. This magnetograph will provide nearly continuous measurements of photospheric vector magnetic fields at cadences of 90 seconds to 12 minutes with 1″ resolution, precise pointing, and unfettered by atmospheric seeing. The enormous data stream of 1.5 Terabytes per day from SDO will provide an unprecedented opportunity to understand the mysteries of solar eruptions. These ground-breaking observations will permit the application of a new technique, the differential affine velocity estimator for vector magnetograms (DAVE4VM), to measure photospheric plasma flows in active regions. These measurements will permit, for the first time, accurate assessments of the coronal free energy available for driving CMEs and flares. The details of photospheric plasma flows, particularly along magnetic neutral lines, are critical to testing models for initiating coronal mass ejections (CMEs) and flares. Assimilating flows and fields into state-of-the-art 3D MHD simulations that model the highly stratified solar atmosphere from the convection zone to the corona represents the next step towards achieving NASA's Living with a Star forecasting goals of predicting "when a solar eruption leading to a CME will occur." This talk will describe these major science and predictive advances that will be delivered by SDO/HMI.

  9. An evaluation of the construct validity of the Developmental Test of Visual-Motor Integration using the Rasch Measurement Model.

    Science.gov (United States)

    Brown, Ted; Unsworth, Carolyn; Lyons, Carissa

    2009-12-01

    One method of evaluating the construct validity of instruments is the Rasch Measurement Model (RMM), an increasingly popular method used for test construction and validation. The aim was to examine the construct validity of the Developmental Test of Visual-Motor Integration 5th Edition (VMI) by applying the RMM to evaluate its scalability, dimensionality, differential item functioning and hierarchical ordering. The participants were 400 children aged 5 to 12 years, recruited from six schools in Melbourne, Victoria, who completed the VMI under the supervision of an occupational therapist. VMI items 1, 2 and 3 were excluded from the Rasch analysis since all of the children achieved a perfect score on these items. None of the items exhibited RMM misfit due to goodness-of-fit mean square (MnSq) infit statistics and standardised z (ZStd) scores being outside the specified acceptable range. VMI item 9 (copied circle) exhibited differential item functioning based on gender. In relation to hierarchical ordering of items, several were found to have similar logit difficulty values. For example, VMI items 26, 27 and 29; items 18, 22 and 24; and items 4, 5 and 11 were found to have the same level of challenge. As well, the VMI scale item logit measure order did not match that presented in the VMI test manual. Theoretically, the VMI items are developmentally ordered; however, this ordering was not mirrored by the item logit difficulty scores obtained. This has scoring implications, where scoring a respondent's VMI test booklet is terminated after three consecutive items are not passed. Clinicians should also be aware that item 9 may exhibit bias related to gender.
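
    The dichotomous Rasch model underlying the analysis above specifies the probability that child $n$ passes VMI item $i$ in terms of the person ability $\theta_n$ and the item difficulty $\delta_i$, both expressed on the logit scale referred to in the abstract:

      $P(X_{ni} = 1) = \dfrac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}$

    The item difficulty logits estimated under this model define the hierarchical ordering that the study compares with the developmental ordering given in the VMI manual.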

  10. Offering a model for measuring service brand equity in the field of services : Testing and implementation in a virtual university.

    Directory of Open Access Journals (Sweden)

    Mehdi Giahchin

    2013-09-01

    Full Text Available Nowadays the brand has a special role and great importance in businesses around the world, including service provider companies. Brand equity is also a powerful tool in marketing competition; thus managing the measurement of brand equity is consequential. In this study we try to find the effective dimensions of brand equity in service firms and companies, and then offer the significant dimensions as a comprehensive model for education and e-learning at the level of higher education, considering the different aspects of products and services as well as the position of service businesses on the services spectrum; the model is then tested statistically in a virtual university. Data collection was done through a questionnaire distributed to all 1031 faculty, students and staff of Mehralborz University, and 300 responses were received. Sampling was done by a random method and the sample size based on Cochran's formula was 280 persons. Cronbach's alpha was used to ensure the reliability of the questionnaire and structural equation modeling was used to test the model. The results of the statistical analysis showed that, among the factors related to the customers, only the experiences and psychological characteristics of faculty, students and staff affect the service brand equity of Mehralborz University. Among the factors related to brand awareness, only marketing activities affect the service brand equity of Mehralborz University. Among the characteristics offered for the brand image, only the symbolic characteristics, the servicescape and the service provider characteristics affect the service brand equity of Mehralborz University.
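
    For reference, the reliability check mentioned above (Cronbach's alpha) is conventionally computed from the $k$ questionnaire items as

      $\alpha = \dfrac{k}{k-1}\left(1 - \dfrac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_t^2}\right)$

    where $\sigma_i^2$ is the variance of item $i$ and $\sigma_t^2$ the variance of the total score; this is the standard definition, not a value or formula reported in the study itself.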

  11. Philosophy of Testing and Measurement.

    Science.gov (United States)

    Ediger, Marlow

    This paper considers several philosophies as they relate to student assessment. Realists believe that one can know the real world as it truly is. As a philosophy of testing and measurement, realism is characterized by behaviorally stated objectives, measurement-driven instruction, and report cards, along with the use of programmed materials.…

  12. Conformance Testing: Measurement Decision Rules

    Science.gov (United States)

    Mimbs, Scott M.

    2010-01-01

    The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement

  13. Impact of air traffic emissions on airport air quality. Multi-scale modeling, test bed and field measurements

    Science.gov (United States)

    Ramaroson, R.; Vuillot, F.; Durand, Y.; Courbet, B.; Janin, F.; Copalle, A.; Guin, C.; Paux, E.; Vannier, F.; Talbaut, M.; Weill, M.

    2004-12-01

    Air traffic emissions play a significant role in airport air quality. Engine emissions contribute to ozone and PM formation. There is an emerging need to develop advanced numerical tools and airport emission databases for air pollution studies. Field monitoring at airports, necessary to support model assessment, is still limited in time and space. The French ONERA AIRPUR project has focused on three objectives: emission inventories, dispersion models and field measurements. Results are presented and discussed in this paper. The ground spatial distribution of LTO emissions, using realistic aircraft trajectories, the ICAO aircraft-engine classification, the fuel flow methodology and diurnal variations of fleet number, is presented and discussed. The time evolution of exhaust species is simulated using a chemical-dispersion model. Results show high emissions of NOx during LTO and a maximum of CO and hydrocarbons during taxi. The NOx lifetime varies with season; lower concentrations are calculated far away from the LTO emissions. Longer-lived pollutants such as ozone are formed downstream and require the use of advanced dispersion models. For this reason, two interactive models coupling the micro and the regional scales are developed and used in this work. A 3D CFD model (CEDRE) simulates the flow characteristics around buildings and the dispersion of emissions. CEDRE boundary conditions are provided by the 3D nested dispersion model MEDIUM/MM5, which includes surface boundary layer chemistry and calculates the concentrations of pollutants from the local scale to the airport vicinity. The CFD results show a tracer accumulation downstream beside terminals, consistent with observations at some mega-airports. Sensitivity studies are conducted with MEDIUM to highlight the impact of emissions on ozone formation. Results show that longer-lived species are produced downstream, their concentration depending on NOx, aromatics and VOC released by

  14. Modal analysis of measurements from a large-scale VIV model test of a riser in linearly sheared flow

    Science.gov (United States)

    Lie, H.; Kaasen, K. E.

    2006-05-01

    Large-scale model testing of a tensioned steel riser in well-defined sheared current was performed at Hanøytangen outside Bergen, Norway in 1997. The length of the model was 90 m and the diameter was 3 cm. The aim of the present work is to look into this information and try to improve the understanding of vortex-induced vibrations (VIV) for cases with very high order of responding modes, and in particular to study if and under which circumstances the riser motions would be single-mode or multi-mode. The measurement system consisted of 29 biaxial gauges for bending moment. The signals are processed to yield curvature and displacement and further to identify modes of vibration. A modal approach is used successfully employing a combination of signal filtering and least-squares fitting of precalculated mode-shapes. As a part of the modal analysis, it is demonstrated that the equally spaced instrumentation limited the maximum mode number to be extracted to be equal to the number of instrumentation locations. This imposed a constraint on the analysis of in-line (IL) vibration, which occurs at higher frequencies and involves higher modes than cross-flow (CF). The analysis has shown that in general the riser response was irregular (i.e. broad-banded) and that the degree of irregularity increases with the flow speed. In some tests distinct spectral peaks could be seen, corresponding to a dominating mode. No occurrences of single-mode (lock-in) were seen. The IL response is more broad-banded than the CF response and contains higher frequencies. The average value of the displacement r.m.s over the length of the riser is computed to indicate the magnitude of VIV motion during one test. In the CF direction the average displacement is typically 1/4 of the diameter, almost independent of the flow speed. For the IL direction the values are in the range 0.05-0.08 of the diameter. The peak frequency taken from the spectra of the CF displacement at riser midpoint shows approximately
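
    The modal approach summarized above, a least-squares fit of precalculated mode shapes to the gauge signals, can be illustrated with a minimal sketch. Here the mode shapes are approximated by the sine shapes of a tensioned riser and the displacements at the 29 equally spaced gauge positions are projected onto them; the signals, the dominant mode and the number of retained modes are invented and do not reproduce the Hanøytangen processing chain.

      import numpy as np

      L = 90.0                                 # riser length [m]
      z = np.linspace(L / 30, L - L / 30, 29)  # 29 equally spaced gauge positions
      n_modes = 10                             # number of precalculated modes kept

      # Precalculated mode shapes approximated as sine shapes of a tensioned riser.
      Phi = np.array([np.sin(k * np.pi * z / L) for k in range(1, n_modes + 1)]).T

      # Synthetic 'measured' cross-flow displacements at one time instant [m].
      true_weights = np.zeros(n_modes)
      true_weights[6] = 0.007                  # a dominant 7th mode, purely illustrative
      y = Phi @ true_weights + np.random.normal(0.0, 5e-4, z.size)

      # Least-squares fit of the modal weights, as in the modal analysis above.
      weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
      print("identified dominant mode:", int(np.argmax(np.abs(weights))) + 1)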

  15. Comparison between traditional laboratory tests, permeability measurements and CT-based fluid flow modelling for cultural heritage applications.

    Science.gov (United States)

    De Boever, Wesley; Bultreys, Tom; Derluyn, Hannelore; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-06-01

    In this paper, we examine the possibility to use on-site permeability measurements for cultural heritage applications as an alternative for traditional laboratory tests such as determination of the capillary absorption coefficient. These on-site measurements, performed with a portable air permeameter, were correlated with the pore network properties of eight sandstones and one granular limestone that are discussed in this paper. The network properties of the 9 materials tested in this study were obtained from micro-computed tomography (μCT) and compared to measurements and calculations of permeability and the capillary absorption rate of the stones under investigation, in order to find the correlation between pore network characteristics and fluid management characteristics of these sandstones. Results show a good correlation between capillary absorption, permeability and network properties, opening the possibility of using on-site permeability measurements as a standard method in cultural heritage applications.

  16. Test speed and other factors affecting the measurements of tree root properties used in soil reinforcement models

    NARCIS (Netherlands)

    Cofie, P.; Koolen, A.J.

    2001-01-01

    Measured values of the mechanical properties of tree roots are found to be affected by a number of factors. Shear properties of tree roots are found to be partly influenced by size of the testing equipment, level of soil compaction, deformation of the root material and estimated width of the shear z

  17. Comparison between traditional laboratory tests, permeability measurements and CT-based fluid flow modelling for cultural heritage applications

    Energy Technology Data Exchange (ETDEWEB)

    De Boever, Wesley, E-mail: Wesley.deboever@ugent.be [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Bultreys, Tom; Derluyn, Hannelore [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium); Van Hoorebeke, Luc [UGCT/Radiation Physics, Dept. of Physics & Astronomy, Ghent University, Proeftuinstraat 86, 9000 Ghent (Belgium); Cnudde, Veerle [UGCT/PProGRess, Dept. of Geology, Ghent University, Krijgslaan 281, 9000 Ghent (Belgium)

    2016-06-01

    In this paper, we examine the possibility to use on-site permeability measurements for cultural heritage applications as an alternative for traditional laboratory tests such as determination of the capillary absorption coefficient. These on-site measurements, performed with a portable air permeameter, were correlated with the pore network properties of eight sandstones and one granular limestone that are discussed in this paper. The network properties of the 9 materials tested in this study were obtained from micro-computed tomography (μCT) and compared to measurements and calculations of permeability and the capillary absorption rate of the stones under investigation, in order to find the correlation between pore network characteristics and fluid management characteristics of these sandstones. Results show a good correlation between capillary absorption, permeability and network properties, opening the possibility of using on-site permeability measurements as a standard method in cultural heritage applications. - Highlights: • Measurements of capillary absorption are compared to in-situ permeability. • We obtain pore size distribution and connectivity by using micro-CT. • These properties explain correlation between permeability and capillarity. • Correlation between both methods is good to excellent. • Permeability measurements could be a good alternative to capillarity measurement.

  18. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with inlet and outlet geometry. The filling method is a very important parameter for the strength of the mass and thereby the pressures...

  19. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes by simulation and, in particular, automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques to further increase the efficiency and flexibility of development, beginning early in the requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  20. A proposed experimental platform for measuring the properties of warm dense mixtures: Testing the applicability of the linear mixing model

    Science.gov (United States)

    Hawreliak, James

    2017-06-01

    This paper presents a proposed experimental technique for investigating the impact of chemical interactions in warm dense liquid mixtures. It uses experimental equation of state (EOS) measurements of warm dense liquid mixtures with different compositions to determine the deviation from the linear mixing model. Statistical mechanics is used to derive the EOS of a mixture with a constant pressure linear mixing term (Amagat's rule) and an interspecies interaction term. A ratio between the particle densities of two different compositions of mixtures, K(P, T)i:ii, is defined. By comparing this ratio for a range of mixtures, the impact of interspecies interactions can be studied. Hydrodynamic simulations of mixtures with different carbon/hydrogen ratios are used to demonstrate the application of this proposed technique to multiple shock and ramp compression experiments. The limit of the pressure correction that can be measured due to interspecies interactions using this methodology is determined by the uncertainty in the density measurement.

  1. Tests of the electroweak standard model and measurement of the weak mixing angle with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, M.

    2011-09-15

    In this thesis the global Standard Model (SM) fit to the electroweak precision observables is revisited with respect to the newest experimental results. Various consistency checks are performed, showing no significant deviation from the SM. The Higgs boson mass is estimated by the electroweak fit to be $M_H = 94^{+30}_{-24}$ GeV without any information from the direct Higgs searches at LEP, Tevatron, and the LHC, and the result is $M_H = 125^{+8}_{-10}$ GeV when including the direct Higgs mass constraints. The strong coupling constant is extracted at fourth perturbative order as $\alpha_s(M_Z^2) = 0.1194 \pm 0.0028\,\mathrm{(exp)} \pm 0.0001\,\mathrm{(theo)}$. From the fit including the direct Higgs constraints the effective weak mixing angle is determined indirectly to be $\sin^2\theta^l_{\mathrm{eff}} = 0.23147^{+0.00012}_{-0.00010}$. For the W mass the value $M_W = 80.360^{+0.012}_{-0.011}$ GeV is obtained indirectly from the fit including the direct Higgs constraints. The electroweak precision data are also exploited to constrain new physics models by using the concept of oblique parameters. In this thesis the following models are investigated: models with a sequential fourth fermion generation, the inert-Higgs doublet model, the littlest Higgs model with T-parity conservation, and models with large extra dimensions. In contrast to the SM, in these models heavy Higgs bosons are in agreement with the electroweak precision data. The forward-backward asymmetry as a function of the invariant mass is measured for $pp \to Z/\gamma^* \to e^+e^-$ events collected with the ATLAS detector at the LHC. The data taken in 2010 at a center-of-mass energy of $\sqrt{s} = 7$ TeV, corresponding to an integrated luminosity of 37.4 pb$^{-1}$, are analyzed. The measured forward-backward asymmetry is in agreement with the SM expectation. From the measured forward-backward asymmetry the effective weak mixing angle is extracted as $\sin^2\theta^l_{\mathrm{eff}}$

  2. Modeling typical performance measures

    NARCIS (Netherlands)

    Weekers, Anke Martine

    2009-01-01

    In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as the models used in maximum performance measurement are appl

  3. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  5. Measuring Galactic Extinction: A Test

    CERN Document Server

    Arce, Hector G.; Goodman, Alyssa A.

    1999-01-01

    We test the recently published all-sky reddening map of Schlegel, Finkbeiner & Davis (1998 [SFD]) using the extinction study of a region in the Taurus dark cloud complex by Arce & Goodman (1999 [AG]). In their study, AG use four different techniques to measure the amount and structure of the extinction toward Taurus, and all four techniques agree very well. Thus we believe that the AG results are a truthful representation of the extinction in the region and can be used to test the reliability of the SFD reddening map. The results of our test show that the SFD all-sky reddening map, which is based on data from COBE/DIRBE and IRAS/ISSA, overestimates the reddening by a factor of 1.3 to 1.5 in regions of smooth extinction with A_V > 0.5 mag. In some regions of steep extinction gradients the SFD map underestimates the reddening value, probably due to its low spatial resolution. We expect that the astronomical community will be using the SFD reddening map extensively. We offer this Letter as a cautionary n...

  6. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was, first of all, to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly...

  7. Testing the Multidimensionality in Teacher Interpersonal Behavior: Validating the Questionnaire on Teacher Interaction Using the Rasch Measurement Model.

    Science.gov (United States)

    Fulmer, Gavin W; Lang, Quek Choon

    2015-01-01

    This study investigated the perceptions of 1235 students of their form teachers' interpersonal behaviors across 40 classrooms in 24 Singaporean secondary schools. The 32-item Questionnaire on Teacher Interaction (QTI) survey was administered to obtain the initial quantitative data of teacher behaviors perceived by the students in these classrooms. The eight scales of QTI are: Leadership, Helping/Friendly, Understanding, Student Responsibility/ Freedom, Uncertain, Dissatisfied, Admonishing, and Strict. The Rasch measurement model was used to estimate students' traits with respect to each subscale, and then to examine its proposed multidimensional structure. Findings demonstrate overall good fit of the responses with the Rasch model for each subscale. Findings also support the hypothesized relationships among the eight dimensions proposed for the QTI.

  8. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with inlet and outlet geometry. The filling method is a very important parameter for the strength of the mass and thereby the pressures...... as well as the flow pattern during discharge of the silo. During discharge a mixed flow pattern has been identified...

  9. Testing of the QUIC-plume model with wind-tunnel measurements for a high-rise building

    Energy Technology Data Exchange (ETDEWEB)

    Williams, M. D. (Michael D.); Brown, M. J. (Michael J.); Boswell, D. (David); Singh, B. (Balwinder); Pardyjak, E. M. (Eric M.)

    2004-01-01

    ), QUIC-PLUME, a model that describes dispersion near buildings (Williams et al., 2003), and a graphical user interface, QUIC-GUI (Boswell et al., 2004). The QUIC dispersion code is currently being used for building-scale to neighborhood-scale transport and diffusion problems with domains on the order of several kilometers. Figure 1 illustrates the modeled dispersion for a release in downtown Salt Lake City. This paper describes the QUIC-PLUME random-walk dispersion model formulation and the turbulence parameterization assumptions, and shows comparisons of model-computed concentration fields with measurements from a single-building wind-tunnel experiment. It is shown that the traditional three-term random walk model with a turbulence scheme based on gradients of the mean wind performs poorly for dispersion in the cavity of the single building, and that model-experiment comparisons are improved significantly when additional drift terms are added and a non-local mixing scheme is implemented.
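
    The 'three-term random walk' wording above refers to Langevin-type particle models in which each velocity fluctuation is partly remembered, partly randomized and, where turbulence is inhomogeneous, corrected by a drift term. A minimal one-dimensional version of such a step is sketched below; it is a generic textbook form, not the QUIC-PLUME formulation, and all parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, n_steps = 0.5, 200     # time step [s] and number of steps (assumed)
      T_L = 20.0                 # Lagrangian time scale [s] (assumed)
      sigma_w = 0.3              # std of vertical velocity fluctuations [m/s] (assumed)
      U = 2.0                    # mean horizontal wind [m/s] (assumed)

      x, z, w = 0.0, 10.0, 0.0   # particle position [m] and vertical fluctuation [m/s]
      for _ in range(n_steps):
          # Memory term + random forcing; a drift term would be added here when
          # sigma_w varies in space (the 'additional drift terms' mentioned above).
          w = w * (1.0 - dt / T_L) + np.sqrt(2.0 * sigma_w**2 * dt / T_L) * rng.standard_normal()
          x += U * dt
          z += w * dt

      print(f"final particle position: x = {x:.1f} m, z = {z:.1f} m")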

  10. Measurement invariance within and between subjects: A distinct problem in testing the equivalence of intra- and inter-individual model structures.

    Directory of Open Access Journals (Sweden)

    Janne Adolf

    2014-09-01

    Full Text Available We address the question of equivalence between modeling results obtained on intra-individual and inter-individual levels of psychometric analysis. Our focus is on the concept of measurement invariance and the role it may play in this context. We discuss this in general against the background of the latent variable paradigm, complemented by an operational demonstration in terms of a linear state-space model, i.e., a time series model with latent variables. Implemented in a multiple-occasion and multiple-subject setting, the model simultaneously accounts for intra-individual and inter-individual differences. We consider the conditions – in terms of invariance constraints – under which modeling results are generalizable (a) over time within subjects, (b) over subjects within occasions, and (c) over time and subjects simultaneously, thus implying an equivalence relationship between both dimensions. Since we distinguish the measurement model from the structural model governing relations between the latent variables of interest, we decompose the invariance constraints into those that involve structural parameters and those that involve measurement parameters and relate to measurement invariance. Within the resulting taxonomy of models, we show that, under the condition of measurement invariance over time and subjects, there exists a form of structural equivalence between levels of analysis that is distinct from full structural equivalence, i.e., ergodicity. We demonstrate how measurement invariance between and within subjects can be tested in the context of high-frequency repeated measures in personality research. Finally, we discuss problems of measurement variance in relation to problems of non-ergodicity as currently discussed and approached in the literature.

  11. Comparison of road traffic emission factors and testing by comparison of modelled and measured ambient air quality data.

    Science.gov (United States)

    Peace, H; Owen, B; Raper, D W

    2004-12-01

    This paper describes a comparison of three different sets of road traffic emission factors released by the UK government for use in air quality review and assessment. The air quality management process of review and assessment began in 1997 in the UK. During this period of ongoing review and assessment, a number of changes have been made to the emission factors provided by the government. The use of different sets of emission factors during the assessment process has led to some inconsistencies between results from neighbouring local authorities and also between different modelling exercises undertaken by the same local authorities. One purpose of this study has been to compare three different sets of emission factors, including the most recent set, and to some degree highlight the uncertainty associated with the use of factors, such as the shift of emphasis in terms of emissions from cars to heavy goods vehicles. The most recently released emission factors are the most comprehensive to date, and theoretically more accurate than previous sets due to the larger database of emission measurements that they have been based on. Therefore, the most recent set of emission factors has been additionally used in a validation exercise between modelled and monitored data. Comparison has been undertaken with monitoring data at a variety of urban background, urban centre and roadside sites. This work has shown some differences between the predicted trends in emission factors and measured trends in ambient air pollution levels, especially at roadside sites, indicating an under-prediction of the air pollution contribution from road traffic.

  12. Do different circadian typology measures modulate their relationship with personality? A test using the Alternative Five Factor Model.

    Science.gov (United States)

    Randler, Christoph; Gomà-i-Freixanet, Montserrat; Muro, Anna; Knauber, Christina; Adan, Ana

    2015-03-01

    The relationship between personality and circadian typology shows some inconsistent results and it has been hypothesized that the model used to measure personality might have a moderating effect on this relationship. However, it has never been explored if this inconsistency was dependent on the questionnaire used to measure differences in circadian rhythms as well. We explored this issue in a sample of 564 university students (32% men; 19-40 years) using the Zuckerman-Kuhlman Personality Questionnaire, which is based on an evolutionary-biological approach, in combination with the Composite Scale of Morningness (CSM) and the reduced Morningness-Eveningness Questionnaire (rMEQ). Both questionnaires detected differences between circadian typologies in Sociability (highest in evening types; ET) and Impulsive Sensation-Seeking scales (highest in ET), while the CSM also detected differences in Activity (lowest in ET) and Aggression-Hostility (highest in ET). Further, both questionnaires detected differences between circadian typologies in the subscales General Activity (morning types [MT] higher than ET), Impulsivity (ET highest) and Sensation-Seeking (highest in ET). Differences between circadian typologies/groups in the subscales Parties (highest in ET) and Isolation Intolerance (lowest in MT) were only detected by the rMEQ. The CSM clearly separated evening types from neither and morning types while the rMEQ showed that neither types are not intermediate but closer to evening types in General Activity and Isolation Intolerance, and closer to morning types in Impulsive Sensation-Seeking, Parties, Impulsivity and Sensation Seeking. The obtained results indicate that the relationship between circadian typology and personality may be dependent on the instrument used to assess circadian typology. This fact may help to explain some of the conflicting data available on the relationship between these two concepts.

  13. Dual-porosity modeling of groundwater recharge: testing a quick calibration using in situ moisture measurements, Areuse River Delta, Switzerland

    Science.gov (United States)

    Alaoui, Abdallah; Eugster, Werner

    A simple method for calibrating the dual-porosity MACRO model via in situ TDR measurements during a brief infiltration run (2.8 h) is proposed with the aim of estimating local groundwater recharge (GR). The recharge was modeled first by considering the entire 3 m of unsaturated soil, and second by considering only the topsoil down to the zero-flux plane (0-0.70 m). The modeled recharge was compared against the GR obtained from field measurements. Measured GR was 313 mm during a 1-year period (15 October 1990-15 October 1991). The best simulation results were obtained when considering the entire unsaturated soil under equilibrium conditions excluding the macropore flow effect (330 mm), whereas under non-equilibrium conditions GR was overestimated (378 mm). Sensitivity analyses showed that investigation of the topsoil is sufficient for estimating local GR in this case, since the water stored below this depth appears to be below the typical rooting depth of the vegetation and is not available for evapotranspiration. The modeled recharge under equilibrium conditions for the 0.7-m topsoil layer was found to be 364 mm, which is in acceptable agreement with the measurements.

  14. Testing the model for testing competency.

    Science.gov (United States)

    Keating, Sarah B; Rutledge, Dana N; Sargent, Arlene; Walker, Polly

    2003-05-01

    The pilot study to demonstrate the utility of the CBRDM in the practice setting was successful. Using a matrix evaluation tool based on the model's competencies, evaluators were able to observe specific performance behaviors of senior nursing students and new graduates at either the novice or competent levels. The study faced the usual perils of pilot studies, including small sample size, a limited number of items from the total CBRDM, restricted financial resources, inexperienced researchers, unexpected barriers, and untested evaluation tools. It was understood from the beginning of the study that the research would be based on a program evaluation model, analyzing both processes and outcomes. However, the meager data findings led to the desire to continue to study use of the model for practice setting job expectations, career planning for nurses, and curriculum development for educators. Although the California Strategic Planning Committee for Nursing no longer has funding, we hope that others interested in role differentiation issues will take the results of this study and test the model in other practice settings. Its ability to measure higher levels of competency as well as novice and competent should be studied, i.e., proficient, expert, and advanced practice. The CBRDM may be useful in evaluating student and nurse performance, defining role expectations, and identifying the preparation necessary for the roles. The initial findings related to the two functions as leader and teacher in the care provider and care coordinator roles led to much discussion about helping students and nurses develop competence. Additional discussion focused on the roles as they apply to settings such as critical care or primary health care. The model is useful for all of nursing as it continues to define its levels of practice and their relationship to on-the-job performance, curriculum development, and career planning.

  15. Recent tests of realistic models

    Energy Technology Data Exchange (ETDEWEB)

    Brida, Giorgio; Degiovanni, Ivo Pietro; Genovese, Marco; Gramegna, Marco; Piacentini, Fabrizio; Schettini, Valentina; Traina, Paolo, E-mail: m.genovese@inrim.i [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Torino (Italy)

    2009-06-01

    In this article we present recent activity of our laboratories on testing specific hidden variable models and in particular we discuss the realizations of the Alicki-van Ryn test and tests of SED and of Santos' models.

  16. Laboratory Tests of Chameleon Models

    CERN Document Server

    Brax, Philippe; Davis, Anne-Christine; Shaw, Douglas

    2009-01-01

    We present a cursory overview of chameleon models of dark energy and their laboratory tests with an emphasis on optical and Casimir experiments. Optical experiments measuring the ellipticity of an initially polarised laser beam are sensitive to the coupling of chameleons to photons. The next generation of Casimir experiments may be able to unravel the nature of the scalar force mediated by the chameleon between parallel plates.

  17. Strain measurement based battery testing

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk

    2017-05-23

    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
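
    The estimate described in the last sentence is a simple relative difference between the two strain readings taken at the same selected charge capacity. A minimal sketch is given below; the variable names and numbers are illustrative, not taken from the patent text.

      def capacity_degradation(strain_initial: float, strain_aged: float) -> float:
          """Relative capacity degradation from two strain readings at the same charge capacity."""
          return (strain_initial - strain_aged) / strain_initial

      # Example with made-up strain readings at the selected charge capacity.
      print(f"estimated degradation: {capacity_degradation(1250e-6, 1100e-6):.1%}")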

  18. A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model. Measurement of Earth's dragging of inertial frames

    Energy Technology Data Exchange (ETDEWEB)

    Ciufolini, Ignazio [Universita del Salento, Dipartimento Ingegneria dell' Innovazione, Lecce (Italy); Sapienza Universita di Roma, Scuola di Ingegneria Aerospaziale, Rome (Italy); Paolozzi, Antonio; Paris, Claudio [Sapienza Universita di Roma, Scuola di Ingegneria Aerospaziale, Rome (Italy); Museo della Fisica e Centro Studi e Ricerche Enrico Fermi, Rome (Italy); Pavlis, Erricos C. [University of Maryland, Joint Center for Earth Systems Technology (JCET), Baltimore County (United States); Koenig, Rolf [GFZ German Research Centre for Geosciences, Helmholtz Centre Potsdam, Potsdam (Germany); Ries, John [University of Texas at Austin, Center for Space Research, Austin (United States); Gurzadyan, Vahe; Khachatryan, Harutyun; Mirzoyan, Sergey [Alikhanian National Laboratory and Yerevan State University, Center for Cosmology and Astrophysics, Yerevan (Armenia); Matzner, Richard [University of Texas at Austin, Theory Center, Austin (United States); Penrose, Roger [University of Oxford, Mathematical Institute, Oxford (United Kingdom); Sindoni, Giampiero [Sapienza Universita di Roma, DIAEE, Rome (Italy)

    2016-03-15

    We present a test of general relativity, the measurement of the Earth's dragging of inertial frames. Our result is obtained using about 3.5 years of laser-ranged observations of the LARES, LAGEOS, and LAGEOS 2 laser-ranged satellites together with the Earth gravity field model GGM05S produced by the space geodesy mission GRACE. We measure μ = (0.994 ± 0.002) ± 0.05, where μ is the Earth's dragging of inertial frames normalized to its general relativity value, 0.002 is the 1-sigma formal error and 0.05 is our preliminary estimate of systematic error mainly due to the uncertainties in the Earth gravity model GGM05S. Our result is in agreement with the prediction of general relativity. (orig.)

  19. A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model: Measurement of Earth's dragging of inertial frames.

    Science.gov (United States)

    Ciufolini, Ignazio; Paolozzi, Antonio; Pavlis, Erricos C; Koenig, Rolf; Ries, John; Gurzadyan, Vahe; Matzner, Richard; Penrose, Roger; Sindoni, Giampiero; Paris, Claudio; Khachatryan, Harutyun; Mirzoyan, Sergey

    2016-01-01

    We present a test of general relativity, the measurement of the Earth's dragging of inertial frames. Our result is obtained using about 3.5 years of laser-ranged observations of the LARES, LAGEOS, and LAGEOS 2 laser-ranged satellites together with the Earth gravity field model GGM05S produced by the space geodesy mission GRACE. We measure μ = (0.994 ± 0.002) ± 0.05, where μ is the Earth's dragging of inertial frames normalized to its general relativity value, 0.002 is the 1-sigma formal error and 0.05 is our preliminary estimate of systematic error mainly due to the uncertainties in the Earth gravity model GGM05S. Our result is in agreement with the prediction of general relativity.

  20. FABASOFT BEST PRACTICES AND TEST METRICS MODEL

    Directory of Open Access Journals (Sweden)

    Nadica Hrgarek

    2007-06-01

    Full Text Available Software companies face serious problems in measuring the progress of test activities and the quality of software products, in order to estimate test completion criteria and whether the shipment milestone will be reached on time. Measurement is a key activity in the testing life cycle and requires an established, managed and well-documented test process, defined software quality attributes, quantitative measures, and the use of test management and bug tracking tools. Test metrics are a subset of software metrics (product metrics, process metrics) and enable the measurement and quality improvement of the test process and/or software product. The goal of this paper is to briefly present Fabasoft best practices and lessons learned during functional and system testing of big complex software products, and to describe a simple test metrics model applied to the software test process with the purpose of better controlling software projects and measuring and increasing software quality.
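    The abstract does not reproduce Fabasoft's metrics themselves, so the sketch below only illustrates the kind of quantities a simple test metrics model typically tracks (test pass rate, defect density, defect removal efficiency). The formulas are generic industry definitions, and all names and figures are hypothetical.

        def pass_rate(passed: int, executed: int) -> float:
            """Share of executed test cases that passed."""
            return passed / executed if executed else 0.0

        def defect_density(defects: int, size_kloc: float) -> float:
            """Defects found per thousand lines of code."""
            return defects / size_kloc

        def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
            """Fraction of all known defects caught before shipment."""
            total = found_before_release + found_after_release
            return found_before_release / total if total else 0.0

        # Hypothetical project snapshot.
        print(pass_rate(passed=412, executed=450))              # ~0.92
        print(defect_density(defects=37, size_kloc=120.0))      # ~0.31 defects per KLOC
        print(defect_removal_efficiency(95, 5))                 # 0.95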

  1. Computational modeling as part of alternative testing strategies in the respiratory and cardiovascular systems: inhaled nanoparticle dose modeling based on representative aerosol measurements and corresponding toxicological analysis.

    Science.gov (United States)

    Pilou, Marika; Mavrofrydi, Olga; Housiadas, Christos; Eleftheriadis, Kostas; Papazafiri, Panagiota

    2015-05-01

    The objectives of modeling in this work were (a) the integration of two existing numerical models in order to connect external exposure to nanoparticles (NPs) with internal dose through inhalation, and (b) to use computational fluid-particle dynamics (CFPD) to analyze the behavior of NPs in the respiratory and the cardiovascular system. Regarding the first objective, a lung transport and deposition model was combined with a lung clearance/retention model to estimate NPs dose in the different regions of the human respiratory tract and some adjacent tissues. On the other hand, CFPD was used to estimate particle transport and deposition of particles in a physiologically based bifurcation created by the third and fourth lung generations (respiratory system), as well as to predict the fate of super-paramagnetic particles suspended in a liquid under the influence of an external magnetic field (cardiovascular system). All the above studies showed that, with proper refinement, the developed computational models and methodologies may serve as an alternative testing strategy, replacing transport/deposition experiments that are expensive both in time and resources and contribute to risk assessment.

  2. Accelerated Wafer-Level Integrated Circuit Reliability Testing for Electromigration in Metal Interconnects with Enhanced Thermal Modeling, Structure Design, Control of Stress, and Experimental Measurements.

    Science.gov (United States)

    Shih, Chih-Ching

    Wafer-level electromigration tests have been developed recently to fulfill the need for rapid testing in integrated circuit production facilities. We have developed an improved thermal model, TEARS (Thermal Energy Accounts for the Resistance of the System), that supports these tests. Our model is enhanced by treatments for determination of the thermal conductivity of metal, K_{m}, heat-sinking effects of the voltage probes and current lead terminations, and thermoelectric power. Our TEARS analysis of multi-element SWEAT (Standard Wafer-level Electromigration Acceleration Test) structures yields design criteria for the length of current injection leads and the choice of voltage probe locations to isolate test units from the heat-sinking effect of current lead terminations. This also provides greater insight into the current for thermal runaway. From our TEARS model and Black's equation for lifetime prediction, we have developed an algorithm for fast and accurate control of stress in SWEAT tests. We have developed a lookup table approach for precise electromigration characterizations without complicated calculations. It determines the peak temperature in the metal, T_{max}, and the thermal conductivity of the insulator, K_{i}, from an experimental resistance measurement at a given current. We introduce a characteristic temperature, T_{EO}, which is much simpler to use than the conventional temperature coefficient of the electrical resistivity of metal for calibration and transfer of calibration data of metallic films as their own temperature sensors. The use of T_{EO} also allows us to establish system specifications for a desirable accuracy in temperature measurement. Our experimental results are the first to show the effects of series elemental SWEAT units on the system failure distribution, spatial failure distribution in SWEAT structures, and bimodal distributions for straight-line structures. The adaptive approach of our TEARS-based SWEAT test decides the value of Black
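    Black's equation, which the abstract combines with the TEARS thermal model to control stress in SWEAT tests, has the standard textbook form MTTF = A * J^(-n) * exp(Ea / (k*T)). The sketch below simply evaluates that expression; the prefactor, current-density exponent and activation energy are hypothetical fitting constants, not values reported in this work.

        import math

        BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

        def black_mttf(current_density: float, temperature_k: float,
                       prefactor: float, exponent_n: float, activation_energy_ev: float) -> float:
            """Median time to failure from Black's equation:
            MTTF = A * J**(-n) * exp(Ea / (k*T)), with J the current density,
            T the metal temperature, and A, n, Ea empirically fitted constants."""
            return (prefactor * current_density ** (-exponent_n)
                    * math.exp(activation_energy_ev / (BOLTZMANN_EV * temperature_k)))

        # Hypothetical aluminium-line parameters: n ~ 2, Ea ~ 0.7 eV, J in A/cm^2.
        print(black_mttf(current_density=2e6, temperature_k=500.0,
                         prefactor=1e5, exponent_n=2.0, activation_energy_ev=0.7))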

  3. Tests of the electroweak standard model and measurement of the weak mixing angle with the ATLAS detector

    CERN Document Server

    Goebel, Martin; Mnich, Joachim; Schleper, Peter

    In this thesis the global Standard Model (SM) fit to the electroweak precision observables is revisited with respect to the newest experimental results. Various consistency checks are performed, showing no significant deviation from the SM. The Higgs boson mass is estimated by the electroweak fit to be M_H = 94 (+30, −24) GeV without any information from direct Higgs searches at LEP, Tevatron, and the LHC, and the result is M_H = 125 (+8, −10) GeV when including the direct Higgs mass constraints. The strong coupling constant is extracted at fourth perturbative order as α_s(M_Z²) = 0.1194 ± 0.0028 (exp) ± 0.0001 (theo). From the fit including the direct Higgs constraints, the effective weak mixing angle is determined indirectly to be sin²θ^l_eff = 0.23147 (+0.00012, −0.00010). For the W mass the value M_W = 80.360 (+0.012, −0.011) GeV is obtained indirectly from the fit including the direct Higgs constraints. The electroweak precision data is also exploited to constrain new physics models by using the concept of oblique paramet...

  4. Hybrid choice model to disentangle the effect of awareness from attitudes: Application test of soft measures in medium size city

    DEFF Research Database (Denmark)

    Sottile, Eleonora; Meloni, Italo; Cherchi, Elisabetta

    2017-01-01

    The need to reduce private vehicle use has led to the development of soft measures aimed at re-educating car users through information processes that raise their awareness about the benefits of environmentally friendly modes, encouraging them to voluntarily change their travel choice behaviour...... for fostering changes toward more pro-environmental modes. The objective of this work is to provide empirical evidence of the effect of awareness and individual attitudes on the switch from car driver to more sustainable modes such as Park and Ride. In particular we attempt to discriminate the effect...... of awareness due to the information provided in a Stated Preference experiment from the effect of individuals’ attitudes toward stress and social norms with respect to sustainable transport modes. The case study refers to the implementation of a Voluntary Travel Behaviour Change programme in Cagliari (Italy...

  5. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  6. Model test of reactor vessel wall. Pt. 2. Test performance, measurement and partial evaluation; Modellkoerperversuch zur Reaktortankwand. T. 2; Versuchsdurchfuehrung, Messung und Teilauswertung

    Energy Technology Data Exchange (ETDEWEB)

    Maile, K.; Eckert, W.; Theofel, H.; Purper, H.

    1992-07-01

    Because the tests had to be broken off when funding was cut, the original objective of the project, verification of the reactor vessel wall design, could not be achieved: the slabs (DIN 1.4948 = X 6 CrNi 18 11) had not yet failed by that point in time. Considering the elongation curves, however, in particular that of the flawed slab, failure at a lower load cycle count than calculated is highly probable. (orig./HP)

  7. Study of the effects of low-fluence laser irradiation on wall paintings: Test measurements on fresco model samples

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, Valentina, E-mail: v.raimondi@ifac.cnr.it [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cucci, Costanza [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cuzman, Oana [Institute for the Conservation and Promotion of Cultural Heritage-National Research Council (CNR-ICVBC), Firenze (Italy); Fornacelli, Cristina [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Galeotti, Monica [Opificio delle Pietre Dure (OPD), Firenze (Italy); Gomoiu, Ioana [National University of Art, Bucharest (Romania); Lognoli, David [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Mohanu, Dan [National University of Art, Bucharest (Romania); Palombi, Lorenzo; Picollo, Marcello [‘Nello Carrara’Applied Physics Institute-National Research Council of Italy (CNR-IFAC), Firenze (Italy); Tiano, Piero [Institute for the Conservation and Promotion of Cultural Heritage-National Research Council (CNR-ICVBC), Firenze (Italy)

    2013-11-01

    Laser-induced fluorescence is widely applied in several fields as a diagnostic tool to characterise organic and inorganic materials and could also be exploited for non-invasive remote investigation of wall paintings using the fluorescence lidar technique. The latter relies on the use of a low-fluence pulsed UV laser and a telescope to carry out remote spectroscopy on a given target. A first step in investigating the applicability of this technique is to assess the effects of low-fluence laser radiation on wall paintings. This paper presents a study devoted to investigating the effects of pulsed UV laser radiation on a set of fresco model samples prepared using different pigments. To irradiate the samples we used a frequency-tripled Q-switched Nd:YAG laser (emission wavelength: 355 nm; pulse width: 5 ns). We varied the laser fluence from 0.1 mJ/cm² to 1 mJ/cm² and the number of laser pulses from 1 to 500 shots. We characterised the investigated materials using several diagnostic and analytical techniques (colorimetry, optical microscopy, fibre optic reflectance spectroscopy and ATR-FT-IR microscopy) to compare the surface texture and composition before and after laser irradiation. The results open good prospects for non-invasive investigation of wall paintings using the fluorescence lidar technique.

  8. Force Measurements in Vibration and Acoustic Tests

    Science.gov (United States)

    Scharton, T. D.

    1996-01-01

    The advent of triaxial, piezoelectric force gages and the associated signal processing is a precursor to several dynamics testing innovations. This new technology is applicable to spacecraft programs that JPL manages. An application of force measurement is force limiting (when testing spacecraft in random vibration tests). Base-drive and acoustic modal testing is a potential application.

  9. Results of first field tests of the improved open-path and enclosed models of CO2 and H2O flux measurements systems

    Science.gov (United States)

    Begashaw, Israel; Fratini, Gerardo; Griessbaum, Frank; Kathilankal, James; Xu, Liukang; Franz, Daniela; Joseph, Everett; Larmanou, Eric; Miller, Scott; Papale, Dario; Sabbatini, Simone; Sachs, Torsten; Sakai, Ricardo; McDermitt, Dayle; Burba, George

    2016-04-01

    In 2014-2015, improved open-path and enclosed-path flux measurement systems were developed, based on established LI-7500A and LI-7200 gas analyzer models, with the focus on improving stability in the presence of contamination, refining temperature control and compensation, and providing more accurate gas concentration measurements. In addition to optical and electronic redesign, both systems incorporate automated on-site flux calculations using EddyPro® software run by a weatherized remotely-accessible microcomputer, SmartFlux 2, with fully digital inputs. The ultimate goal of such development was to reduce errors in CO2 and H2O hourly fluxes and in long-term carbon and water budgets. Field tests of both systems were conducted over six periods, each 5-14 months long, at 6 sites with diverse environments, setups, and types of contamination, using 26 gas analyzers. The open-path LI-7500RS system performed significantly better than the original LI-7500A model in terms of contamination-related drifts in mean concentrations. Improvements in CO2 drifts were strong, with RS models often drifting few-to-tens of times less than the original. Improvements in H2O contamination-related drifts were particularly significant, with modified models often drifting many tens of times less than the original. The enclosed-path LI-7200RS system performed substantially better than the original LI-7200 in terms of the drifts in H2O, sometimes drifting few-to-tens of times less than the original. Improvements in CO2 contamination-related drifts were modest, being similar or just a bit better than the original. Results from field tests suggest that both RS systems can help improve flux data coverage and potentially reduce site maintenance: (i) Frequency of cleaning and site visits for service and maintenance should decrease, especially for the open-path design (ii) Amount of highest quality data with smallest error bars on fluxes is expected to increase for both open-path and enclosed

  10. Ship Model Testing

    Science.gov (United States)

    2016-01-15

    New laboratory additions include a material testing machine with an environmental chamber and a dual-fuel test bed for the Haeberle Laboratory, together with upgrades to existing equipment and plans to purchase additional data acquisition equipment (e.g., FARO laser scanner, data telemetry, and velocity profiler). Table 1: Spending vs. budget.

  11. Testing and validating environmental models

    Science.gov (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series
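    The argument for explicit benchmarks can be made concrete with a small numerical check: report a model's error relative to the best available alternative (here a naive persistence forecast) rather than declaring agreement 'acceptable'. The data and the choice of benchmark below are illustrative assumptions, not taken from the cited study.

        import numpy as np

        def rmse(predicted: np.ndarray, observed: np.ndarray) -> float:
            """Root-mean-square error between predictions and observations."""
            return float(np.sqrt(np.mean((predicted - observed) ** 2)))

        # Hypothetical daily observations, model predictions, and a persistence benchmark
        # (tomorrow equals today) serving as the explicit alternative to beat.
        observed = np.array([3.1, 3.4, 2.9, 3.8, 4.0, 3.6, 3.2])
        model = np.array([3.0, 3.5, 3.1, 3.6, 4.2, 3.5, 3.3])
        benchmark = np.concatenate(([observed[0]], observed[:-1]))

        skill = 1.0 - rmse(model, observed) / rmse(benchmark, observed)
        print(f"model RMSE={rmse(model, observed):.3f}, "
              f"benchmark RMSE={rmse(benchmark, observed):.3f}, skill={skill:.2f}")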

  12. Measuring and modelling concurrency

    Directory of Open Access Journals (Sweden)

    Larry Sawers

    2013-02-01

    Full Text Available This article explores three critical topics discussed in the recent debate over concurrency (overlapping sexual partnerships: measurement of the prevalence of concurrency, mathematical modelling of concurrency and HIV epidemic dynamics, and measuring the correlation between HIV and concurrency. The focus of the article is the concurrency hypothesis – the proposition that presumed high prevalence of concurrency explains sub-Saharan Africa's exceptionally high HIV prevalence. Recent surveys using improved questionnaire design show reported concurrency ranging from 0.8% to 7.6% in the region. Even after adjusting for plausible levels of reporting errors, appropriately parameterized sexual network models of HIV epidemics do not generate sustainable epidemic trajectories (avoid epidemic extinction at levels of concurrency found in recent surveys in sub-Saharan Africa. Efforts to support the concurrency hypothesis with a statistical correlation between HIV incidence and concurrency prevalence are not yet successful. Two decades of efforts to find evidence in support of the concurrency hypothesis have failed to build a convincing case.

  13. Measurement of children's creativity by tests

    Directory of Open Access Journals (Sweden)

    Maksić Slavica B.

    2003-01-01

    Full Text Available After more than 50 years of continuous development of tests designed to measure creativity, and in view of the results they have produced, the question arises whether creativity can be measured by tests at all. A special problem is posed by procedures for measuring creative potential in younger children, because children, unlike adults, do not possess creative products, which are the single reliable evidence of creativity in the real world. The paper considers test reliability and validity in measuring creativity, as well as the dilemma of how justifiable it is to measure children's creativity by tests if it is not clear what they measure and if there is no significant relationship between creativity scores and creativity in life. Unsatisfactory creativity test reliability and validity does not mean those tests should be given up, the majority of researchers agree. Of the tests of creativity administered in work with the young, the Urban-Jellen Test of Creative Thinking - Drawing Production (TCT-DP) is given prominence due to the fact that over the past ten years or so it has been used in a larger number of studies, including some carried out in this country. In the TCT-DP, scoring is not based on the statistical uncommonness of the figures produced but on a number of criteria derived from Gestalt psychology. Factor analyses of the defined criteria of creativity, applied to samples in various settings, showed that the test contains an essential factor of creativity, "novelty".

  14. Model testing of Wave Dragon

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    Prior to this project a scale model 1:50 of the wave energy converter (WEC) Wave Dragon was built by the Danish Maritime Institute and tested in a wave tank at Aalborg University (AAU). The test programs investigated the movements of the floating structure, mooring forces and forces in the reflectors. The first test was followed by tests establishing the efficiency in different sea states. The scale model has also been extensively tested in the EU Joule Craft project JOR-CT98-7027 (Low-Pressure Turbine and Control Equipment for Wave Energy Converters / Wave Dragon) at University College Cork, Hydraulics and Maritime Research Centre, Ireland. The results of the previous model tests have formed the basis for a redesign of the WEC. In this project a reconstruction of the scale 1:50 model and sequential tests of changes to the model geometry and mass distribution parameters will be performed. AAU will make the modifications to the model based on the revised Loewenmark design and perform the tests in their wave tank. Grid connection requirements have been established. A hydro turbine with no movable parts besides the rotor has been developed and a scale model 1:3.5 tested, showing high efficiency over the whole head range. The turbine itself could also be used in river systems with low head and variable flow, an area of interest for many countries around the world. Finally, a regulation strategy for the turbines has been developed, which is essential for the future deployment of Wave Dragon. The video includes the following: 1. Title, 2. Introduction of the Wave Dragon, 3. Model test series H, Hs = 3 m, Rc = 3 m, 4. Model test series H, Hs = 5 m, Rc = 4 m, 5. Model test series I, Hs = 7 m, Rc = 1.25 m, 6. Model test series I, Hs = 7 m, Rc = 4 m, 7. Rolling title. On this VCD additional versions of the video can be found in the directory 'addvideo' for playing on PCs. These versions are: Model testing of Wave Dragon, DVD version

  15. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  16. How integrated are behavioral and endocrine stress response traits? A repeated measures approach to testing the stress-coping style model.

    Science.gov (United States)

    Boulton, Kay; Couto, Elsa; Grimmer, Andrew J; Earley, Ryan L; Canario, Adelino V M; Wilson, Alastair J; Walling, Craig A

    2015-02-01

    It is widely expected that physiological and behavioral stress responses will be integrated within divergent stress-coping styles (SCS) and that these may represent opposite ends of a continuously varying reactive-proactive axis. If such a model is valid, then stress response traits should be repeatable and physiological and behavioral responses should also change in an integrated manner along a major axis of among-individual variation. While there is some evidence of association between endocrine and behavioral stress response traits, few studies incorporate repeated observations of both. To test this model, we use a multivariate, repeated measures approach in a captive-bred population of Xiphophorus birchmanni. We quantify among-individual variation in behavioral stress response to an open field trial (OFT) with simulated predator attack (SPA) and measure waterborne steroid hormone levels (cortisol, 11-ketotestosterone) before and after exposure. Under the mild stress stimulus (OFT), (multivariate) behavioral variation among individuals was consistent with a strong axis of personality (shy-bold) or coping style (reactive-proactive) variation. However, behavioral responses to a moderate stressor (SPA) were less repeatable, and robust statistical support for repeatable endocrine state over the full sampling period was limited to 11-ketotestosterone. Although post hoc analysis suggested cortisol expression was repeatable over short time periods, qualitative relationships between behavior and glucocorticoid levels were counter to our a priori expectations. Thus, while our results clearly show among-individual differences in behavioral and endocrine traits associated with stress response, the correlation structure between these is not consistent with a simple proactive-reactive axis of integrated stress-coping style. Additionally, the low repeatability of cortisol suggests caution is warranted if single observations (or indeed repeat measures over short sampling

  17. Quantitative renal perfusion measurements in a rat model of acute kidney injury at 3T: testing inter- and intramethodical significance of ASL and DCE-MRI.

    Directory of Open Access Journals (Sweden)

    Fabian Zimmer

    Full Text Available OBJECTIVES: To establish arterial spin labelling (ASL) for quantitative renal perfusion measurements in a rat model at 3 Tesla and to test the diagnostic significance of ASL and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a model of acute kidney injury (AKI). MATERIAL AND METHODS: ASL and DCE-MRI were consecutively employed on six Lewis rats, five of which had a unilateral ischaemic AKI. All measurements in this study were performed on a 3 Tesla MR scanner using a FAIR True-FISP approach and a TWIST sequence for ASL and DCE-MRI, respectively. Perfusion maps were calculated for both methods and the cortical perfusion of healthy and diseased kidneys was inter- and intramethodically compared using a region-of-interest based analysis. RESULTS/SIGNIFICANCE: Both methods produce significantly different values for the healthy and the diseased kidneys (P<0.01). The mean difference was 147±47 ml/100 g/min and 141±46 ml/100 g/min for ASL and DCE-MRI, respectively. ASL measurements yielded a mean cortical perfusion of 416±124 ml/100 g/min for the healthy and 316±102 ml/100 g/min for the diseased kidneys. The DCE-MRI values were systematically higher and the mean cortical renal blood flow (RBF) was found to be 542±85 ml/100 g/min (healthy) and 407±119 ml/100 g/min (AKI). CONCLUSION: Both methods are equally able to detect abnormal perfusion in diseased (AKI) kidneys. This shows that ASL is a capable alternative to DCE-MRI regarding the detection of abnormal renal blood flow. Regarding absolute perfusion values, nontrivial differences and variations remain when comparing the two methods.

  18. The C-Test: An Integrative Measure of Crystallized Intelligence

    Directory of Open Access Journals (Sweden)

    Purya Baghaei

    2015-05-01

    Full Text Available Crystallized intelligence is a pivotal broad ability factor in the major theories of intelligence including the Cattell-Horn-Carroll (CHC) model, the three-stratum model, and the extended Gf-Gc (fluid intelligence-crystallized intelligence) model and is usually measured by means of vocabulary tests and other verbal tasks. In this paper the C-Test, a text completion test originally proposed as a test of general proficiency in a foreign language, is introduced as an integrative measure of crystallized intelligence. Based on the existing evidence in the literature, it is argued that the construct underlying the C-Test closely matches the abilities underlying the language component of crystallized intelligence, as defined in the well-established theories of intelligence. It is also suggested that by carefully selecting texts from pertinent knowledge domains, the factual knowledge component of crystallized intelligence could also be measured by the C-Test.

  19. Practical application of the vanishing tetrad test for causal indicator measurement models: an example from health-related quality of life.

    Science.gov (United States)

    Bollen, Kenneth A; Lennox, Richard D; Dahly, Darren L

    2009-05-01

    Researchers are often faced with the task of trying to measure abstract concepts. The most common approach is to use multiple indicators that reflect an underlying latent variable. However, this 'effect indicator' measurement model is not always appropriate; sometimes the indicators instead cause the construct of interest. While the notion of 'causal indicators' has been known for some time, it is still too often ignored. However, there are limited means to determine whether a possible indicator should be treated as a cause or an effect of the latent construct of interest. Perhaps the best empirical approach is to use the vanishing tetrad test (VTT), yet this method is still often overlooked. We speculate that one reason for this is the lack of published examples of its use in practice written for an audience without extensive statistical training. The goal of this paper was to help fill this gap in the literature: to provide a basic example of how to use the VTT. We illustrated the VTT by looking at multiple items from a health-related quality of life instrument that seem more likely to cause the latent variable than the other way around.
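    For readers unfamiliar with the quantity being tested: a tetrad for four observed variables is a difference of products of covariances, e.g. tau = s12*s34 - s13*s24, and an effect-indicator (single-factor) model implies that such tetrads vanish in the population. The sketch below computes the three tetrads of a four-indicator set from sample data; it omits the sampling-variance machinery of the full VTT, and the data are simulated rather than drawn from the quality-of-life instrument discussed in the paper.

        import numpy as np

        def tetrads(x: np.ndarray) -> tuple:
            """Three tetrads of four observed variables (columns of x):
            t1 = s12*s34 - s13*s24, t2 = s13*s24 - s14*s23, t3 = s12*s34 - s14*s23.
            Under a one-factor effect-indicator model all three are expected to vanish
            (only two are independent); the full VTT adds a statistical test of that claim."""
            s = np.cov(x, rowvar=False)
            t1 = s[0, 1] * s[2, 3] - s[0, 2] * s[1, 3]
            t2 = s[0, 2] * s[1, 3] - s[0, 3] * s[1, 2]
            t3 = s[0, 1] * s[2, 3] - s[0, 3] * s[1, 2]
            return float(t1), float(t2), float(t3)

        # Simulated data: four indicators driven by one latent factor plus noise.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=1000)
        x = np.column_stack([latent + rng.normal(scale=0.5, size=1000) for _ in range(4)])
        print(tetrads(x))  # all three should be close to zero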

  20. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes...

  1. Phishing IQ Tests Measure Fear, Not Ability

    Science.gov (United States)

    Anandpara, Vivek; Dingman, Andrew; Jakobsson, Markus; Liu, Debin; Roinestad, Heather

    We argue that phishing IQ tests fail to measure susceptibility to phishing attacks. We conducted a study where 40 subjects were asked to answer a selection of questions from existing phishing IQ tests in which we varied the portion (from 25% to 100%) of the questions that corresponded to phishing emails. We did not find any correlation between the actual number of phishing emails and the number of emails that the subjects indicated were phishing. Therefore, the tests did not measure the ability of the subjects. To further confirm this, we exposed all the subjects to existing phishing education after they had taken the test, after which each subject was asked to take a second phishing test, with the same design as the first one, but with different questions. The number of stimuli that were indicated as being phishing in the second test was, again, independent of the actual number of phishing stimuli in the test. However, a substantially larger portion of stimuli was indicated as being phishing in the second test, suggesting that the only measurable effect of the phishing education (from the point of view of the phishing IQ test) was an increased concern—not an increased ability.

  2. Locating Tests and Measurement Instruments for Assessment

    Science.gov (United States)

    Mastel, Kristen; Morris-Knower, Jim; Marsalis, Scott

    2016-01-01

    Extension educators, staff, and specialists need to use surveys and other measurement instruments to assess their programming and conduct other research. Challenges in locating tests and measurement tools, however, include lack of time and lack of familiarity with techniques that can be used to find them. This article discusses library resources…

  3. Measuring and Test Equipment through barcode technology

    Energy Technology Data Exchange (ETDEWEB)

    Crockett, J.D.; Carr, C.C.

    1993-06-01

    Over the past several years, the use, trace methodology, and documentation of Measuring and Test Equipment have become major concerns. Current regulations are forcing companies to develop new policies that provide use history and traceability of Measuring and Test Equipment. The US Department of Energy and environmental organizations are driving Westinghouse Hanford Company to comply with the more stringent environmental guidelines and recent modifications to Department of Energy Orders. This paper discusses how the Fast Flux Test Facility at Westinghouse Hanford Company overcame these obstacles by using a computerized system based on barcode technology.

  4. What do educational test scores really measure?

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    Latent class Poisson count models are used to analyze a sample of Danish test score results from a cohort of individuals born in 1954-55 and tested in 1968. The procedure takes account of unobservable effects as well as excessive zeros in the data. The bulk of unobservable effects are uncorrelate...

  5. High-voltage test and measuring techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hauschild, Wolfgang; Lemke, Eberhard

    2014-04-01

    Reflects the unity of HV testing and measuring technique. Intended as an "application guide" for the relevant IEC standards. Also refers to future trends in HV testing and measuring technique. With numerous illustrations. It is the intent of this book to combine high-voltage (HV) engineering with HV testing technique and HV measuring technique. Based on long-term experience gained by the authors as lecturers and researchers, as well as members of international organizations such as IEC and CIGRE, the book reflects the state of the art as well as future trends in the testing and diagnostics of HV equipment, to ensure reliable generation, transmission and distribution of electrical energy. The book is intended not only for experts but also for students in electrical engineering and high-voltage engineering.

  6. Direct friction measurement in draw bead testing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan Lasson

    2005-01-01

    have been reported in literature. A major drawback in all these studies is that friction is not directly measured, but requires repeated measurements of the drawing force with and without relative sliding between the draw beads and the sheet material. This implies two tests with a fixed draw bead tool...... and a freely rotating tool respectively, an approach which inevitably implies large uncertainties due to scatter in the experimental conditions. In order to avoid this problem a new draw bead test is proposed by the authors, measuring the friction force acting on the tool radius directly by a built-in piezoelectric torque transducer. This technique results in a very sensitive measurement of friction, which furthermore enables recording of lubricant film breakdown as a function of drawing distance. The proposed test is validated in an experimental investigation of the influence of lubricant viscosity

  7. Use of the Oslo-Potsdam Solution to test the effect of an environmental education model on tangible measures of environmental protection

    Science.gov (United States)

    Short, Philip Craig

    The fundamental goals of environmental education include the creation of an environmentally literate citizenry possessing the knowledge, skills, and motivation to objectively analyze environmental issues and engage in responsible behaviors leading to issue resolution and improved or maintained environmental quality. No existing research, however, has linked educational practices and environmental protection. In an original attempt to quantify the pedagogy - environmental protection relationship, both qualitative and quantitative methods were used to investigate local environmental records and environmental quality indices that reflected the results of student actions. The data were analyzed using an educational adaptation of the "Oslo-Potsdam Solution for International Environmental Regime Effectiveness." The new model, termed the Environmental Education Performance Indicator (EEPI), was developed and evaluated as a quantitative tool for testing and fairly comparing the efficacy of student-initiated environmental projects in terms of environmental quality measures. Five case studies were developed from descriptions of student actions and environmental impacts as revealed by surveys and interviews with environmental education teachers using the IEEIA (Investigating and Evaluating Environmental Issues and Actions) curriculum, former students, community members, and agency officials. Archival information was also used to triangulate the data. In addition to evaluating case study data on the basis of the EEPI model, an expert panel of evaluators consisting of professionals from environmental education, natural sciences, environmental policy, and environmental advocacy provided subjective assessments on the effectiveness of each case study. The results from this study suggest that environmental education interventions can equip and empower students to act on their own conclusions in a manner that leads to improved or maintained environmental conditions. The EEPI model

  8. Testing gravity with pulsar scintillation measurements

    Science.gov (United States)

    Yang, Huan; Nishizawa, Atsushi; Pen, Ue-Li

    2017-04-01

    We propose to use pulsar scintillation measurements to test predictions of alternative theories of gravity. Compared to single-path pulsar timing measurements, the scintillation measurements can achieve an accuracy of one part in a thousand within one wave period, which means picosecond scale resolution in time, due to the effect of multipath interference. Previous scintillation measurements of PSR B 0834 +06 have hours of data acquisition, making this approach sensitive to mHz gravitational waves. Therefore it has unique advantages in measuring the effect of gravity or other mechanisms on light propagation. We illustrate its application in constraining the scalar gravitational-wave background, in which case the sensitivities can be greatly improved with respect to previous limits. We expect much broader applications in testing gravity with existing and future pulsar scintillation observations.

  9. Testing Gravity with Pulsar Scintillation Measurements

    CERN Document Server

    Yang, Huan; Pen, Ue-Li

    2016-01-01

    We propose to use pulsar scintillation measurements to test predictions of alternative theories of gravity. Compared to single-path pulsar timing measurements, the scintillation measurements can achieve a factor of 10^5 improvement in timing accuracy, due to the effect of multi-path interference. Previous scintillation measurements of PSR B0834+06 have data acquisition lasting hours, making this approach sensitive to mHz gravitational waves. Therefore it has unique advantages in measuring gravitational effects or other mechanisms (at mHz and above frequencies) on light propagation. We illustrate its application in constraining the scalar gravitational-wave background, in which case the sensitivities can be greatly improved with respect to previous limits. We expect much broader applications in testing gravity with existing and future pulsar scintillation observations.

  10. Measurement Error in Maximal Oxygen Uptake Tests

    Science.gov (United States)

    2003-11-14

    Alternative models were defined by imposing constraints on the standard errors. Every model imposed the constraint that the SEM was the same for both tests within each sample. Different models were obtained by varying whether equality constraints were imposed across samples.

  11. Anaerobic fitness tests: what are we measuring?

    Science.gov (United States)

    Van Praagh, Emmanuel

    2007-01-01

    Anaerobic fitness, during growth and development, has not received the same attention from researchers as aerobic fitness. This is surprising given the level of anaerobic energy used daily during childhood and adolescence. During physical activity and sport, the child is spontaneously more attracted to short-burst movements than to long-term activities. It is, however, well known that in anaerobic activities such as sprint cycling, sprint running or sprint swimming, the child's performance is distinctly poorer than that of the adult. This partly reflects the child's lesser ability to generate mechanical energy from chemical energy sources during short-term high-intensity work or exercise. Direct measurements of the rate or capacity of anaerobic pathways for energy turnover presents several ethical and methodological difficulties. Therefore, rather than measure energy supply, pediatric exercise scientists have concentrated on measuring short-term power output by means of standardized protocol tests such as short-term cycling power tests, running tests or vertical jump tests. There is, however, no perfect test and, therefore, it is important to acknowledge the benefits and limitations of each testing method. Mass-related short-term power output was shown to increase dramatically during growth and development, whereas the corresponding increase in peak blood lactate was considerably lower. This suggests that the observed difference between children and adolescents during short-term power output testing may be related to neuromuscular factors, hormonal factors and improved motor coordination.

  12. Chromatic aberration measurement for transmission interferometric testing.

    Science.gov (United States)

    Seong, Kibyung; Greivenkamp, John E

    2008-12-10

    A method of chromatic aberration measurement is described based on the transmitted wavefront of an optical element obtained from a Mach-Zehnder interferometer. The chromatic aberration is derived from transmitted wavefronts measured at five different wavelengths. Reverse ray tracing is used to remove induced aberrations associated with the interferometer from the measurement. In the interferometer, the wavefront transmitted through the sample is tested against a plano reference, allowing for the absolute determination of the wavefront radius of curvature. The chromatic aberrations of a singlet and a doublet have been measured.

  13. A checklist for testing measurement invariance.

    NARCIS (Netherlands)

    Van de Schoot, R.; Lugtig, P.J.; Hox, J.J.

    2012-01-01

    The analysis of measurement invariance of latent constructs is important in research across groups, or across time. By establishing whether factor loadings, intercepts and residual variances are equivalent in a factor model that measures a latent concept, we can assure that comparisons that are made

  14. Direct friction measurement in draw bead testing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan Lasson

    2005-01-01

    The application of draw beads in sheet metal stamping ensures controlled drawing-in of flange parts. Lubrication conditions in draw beads are severe due to sliding under simultaneous bending. Based on the original draw bead test design by Nine [1] comprehensive studies of friction in draw beads...... have been reported in literature. A major drawback in all these studies is that friction is not directly measured, but requires repeated measurements of the drawing force with and without relative sliding between the draw beads and the sheet material. This implies two tests with a fixed draw bead tool...... and a freely rotating tool respectively, an approach, which inevitably implies large uncertainties due to scatter in the experimental conditions. In order to avoid this problem a new draw bead test is proposed by the authors measuring the friction force acting on the tool radius directly by a build...

  15. Criteria for Analyzing a Test Measuring Learning Progress

    Directory of Open Access Journals (Sweden)

    Jürgen Wilbert

    2011-11-01

    Full Text Available Evaluating learning progress is a vital element of educational interventions for students with learning disabilities. Measuring change imposes considerably different requirements on test construction compared to traditional psychometric diagnostic instruments. The present paper discusses four theoretical challenges for test construction, namely high reliability, unidimensionality, constant item difficulty, and high test fairness. A procedure is proposed for analyzing tests against these four criteria, making use of item analyses, confirmatory factor analyses, Rasch modeling, and analyses of differential item functioning. The suggested procedure is exemplified on a newly developed test, based on c-tests, for measuring the language proficiency of students with learning difficulties. The results show that c-tests are highly suitable for measuring differences in general language development.
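    The item-analysis step mentioned above can be illustrated with classical item statistics (difficulty as the proportion correct, discrimination as the corrected item-total correlation); the Rasch and differential-item-functioning analyses require dedicated modelling and are not sketched here. The response matrix below is hypothetical.

        import numpy as np

        def item_statistics(responses: np.ndarray) -> list:
            """Classical item analysis for a 0/1-scored test.
            responses: persons x items matrix of 0/1 scores. Returns per-item
            difficulty (proportion correct) and discrimination (correlation of the
            item with the total score of the remaining items)."""
            stats = []
            total = responses.sum(axis=1)
            for j in range(responses.shape[1]):
                rest = total - responses[:, j]  # corrected item-total score
                stats.append({
                    "item": j,
                    "difficulty": float(responses[:, j].mean()),
                    "discrimination": float(np.corrcoef(responses[:, j], rest)[0, 1]),
                })
            return stats

        # Hypothetical 0/1 response matrix: 6 students x 4 items.
        resp = np.array([[1, 1, 0, 1],
                         [1, 0, 0, 1],
                         [1, 1, 1, 1],
                         [0, 0, 0, 1],
                         [1, 1, 0, 0],
                         [1, 1, 1, 1]])
        for s in item_statistics(resp):
            print(s)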

  16. Population Propensity Measurement Model

    Science.gov (United States)

    1993-12-01

    Binary coursework indicators: DQ702 (taken elementary algebra), DQ703 (taken plane geometry), DQ70 (taken computer science), DQ706 (taken intermediate algebra), DQ707 (taken trigonometry); each is coded 1 if the course was taken and 0 otherwise. The model can be combined with separate models for distributing the arrival of applicants over fiscal years, quarters, or months. The primary obstacle in these models is shifting the...

  17. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (2008) to detect whether the widely used class of Residual Income based...... performance measures —such as Economic Value Added (EVA)— is distorted, leading to unintended agent behavior. The paper uses a difference-in-differences approach to account for changes in economic circumstances and the self-selection of firms using EVA. Our findings indicate that EVA is a distorted...... performance measure that elicits the gaming response....

  18. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    2015-01-01

    used class of residual income-based performance measures-such as economic value added (EVA)-is distorted, leading to unintended agent behavior. The paper uses a difference-in-differences approach to account for changes in economic circumstances and the self-selection of firms using EVA. Our findings......Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (Review of Economics and Statistics, 90, 428-441) to detect whether the widely...... indicate that EVA is a distorted performance measure that elicits the gaming response....

  19. Measurement system analysis for binary tests

    NARCIS (Netherlands)

    Akkerhuis, T.S.

    2016-01-01

    Binary tests classify items into two categories such as reject/accept, positive/negative or guilty/innocent. A binary test’s proneness to measurement error is usually expressed in terms of the misclassification probabilities FAP (false acceptance probability) and FRP (false rejection probability).
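    Given reference items of known status that are run through the binary test, FAP and FRP can be estimated directly as misclassification rates. The sketch below does exactly that on made-up data; it is a generic illustration under those assumptions, not the measurement system analysis developed in the thesis.

        import numpy as np

        def estimate_fap_frp(truly_conforming, accepted) -> tuple:
            """Estimate the misclassification probabilities of a binary test.
            truly_conforming: 1 if the item is truly conforming, 0 if nonconforming.
            accepted: 1 if the test accepted the item, 0 if it rejected it.
            FAP = P(accept | nonconforming), FRP = P(reject | conforming)."""
            truth = np.asarray(truly_conforming, dtype=bool)
            accept = np.asarray(accepted, dtype=bool)
            fap = float(accept[~truth].mean())    # nonconforming items wrongly accepted
            frp = float((~accept)[truth].mean())  # conforming items wrongly rejected
            return fap, frp

        # Hypothetical repeated inspections of reference items with known status.
        truth = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
        result = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
        print(estimate_fap_frp(truth, result))  # -> (0.2, 0.2)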

  20. Objectively-Measured Impulsivity and Attention-Deficit/Hyperactivity Disorder (ADHD): Testing Competing Predictions from the Working Memory and Behavioral Inhibition Models of ADHD

    Science.gov (United States)

    Raiker, Joseph S.; Rapport, Mark D.; Kofler, Michael J.; Sarver, Dustin E.

    2012-01-01

    Impulsivity is a hallmark of two of the three DSM-IV ADHD subtypes and is associated with myriad adverse outcomes. Limited research, however, is available concerning the mechanisms and processes that contribute to impulsive responding by children with ADHD. The current study tested predictions from two competing models of ADHD--working memory (WM)…

  1. Computational Modeling of Simulation Tests.

    Science.gov (United States)

    1980-06-01

    Computational Modeling of Simulation Tests. G. Leigh, W. Chown, B. Harrison. Eric H. Wang Civil Engineering Research Facility, University of New Mexico, Albuquerque, June 1980. (Only report cover information and fragmentary reference entries, citing Kinney 1962 and Courant and Friedrichs, are recoverable from this record.)

  2. Measurement of Diameter Changes during Irradiation Testing

    Energy Technology Data Exchange (ETDEWEB)

    Davis, K. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Knudson, D. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Crepeau, J. C. [Univ. of Idaho, Idaho Falls, ID (United States); Solstad, S. [Inst. for Energy Technologoy, Halden (Norway)

    2015-03-01

    New materials are being considered for fuel, cladding, and structures in advanced and existing nuclear reactors. Such materials can experience significant dimensional and physical changes during irradiation. Currently in the US, such changes are measured by repeatedly irradiating a specimen for a specified period of time and then removing it from the reactor for evaluation. The time and labor to remove, examine, and return irradiated samples for each measurement makes this approach very expensive. In addition, such techniques provide limited data and handling may disturb the phenomena of interest. In-pile detection of changes in geometry is sorely needed to understand real-time behavior during irradiation testing of fuels and materials in high flux US Material and Test Reactors (MTRs). This paper presents development results of an advanced Linear Variable Differential Transformer-based test rig capable of detecting real-time changes in diameter of fuel rods or material samples during irradiation in US MTRs. This test rig is being developed at the Idaho National Laboratory and will provide experimenters with a unique capability to measure diameter changes associated with fuel and cladding swelling, pellet-clad interaction, and crud buildup.

  3. Testing models of tree canopy structure

    Energy Technology Data Exchange (ETDEWEB)

    Martens, S.N. (Los Alamos National Laboratory, NM (United States))

    1994-06-01

    Models of tree canopy structure are difficult to test because of a lack of data which are suitably detailed. Previously, I have made three-dimensional reconstructions of individual trees from measured data. These reconstructions have been used to test assumptions about the dispersion of canopy elements in two- and three-dimensional space. Lacunarity analysis has also been used to describe the texture of the reconstructed canopies. Further tests regarding models of the nature of tree branching structures have been made. Results using probability distribution functions for branching measured from real trees show that branching in Juglans is not Markovian. Specific constraints or rules are necessary to achieve simulations of branching structure which are faithful to the originally measured trees.

  4. High-voltage test and measuring techniques

    CERN Document Server

    Hauschild, Wolfgang

    2014-01-01

    It is the intent of this book to combine high-voltage (HV) engineering with HV testing technique and HV measuring technique. Based on long-term experience gained by the authors as lecturers and researchers, as well as members of international organizations such as IEC and CIGRE, the book reflects the state of the art as well as future trends in the testing and diagnostics of HV equipment, to ensure reliable generation, transmission and distribution of electrical energy. The book is intended not only for experts but also for students in electrical engineering and high-voltage engineering.

  5. Educational Testing as an Accountability Measure

    DEFF Research Database (Denmark)

    Ydesen, Christian

    2013-01-01

    This article reveals perspectives based on experiences from twentieth-century Danish educational history by outlining contemporary, test-based accountability regime characteristics and their implications for education policy. The article introduces one such characteristic, followed by an empirical analysis of the origins and impacts of test-based accountability measures applying both top-down and bottom-up perspectives. These historical perspectives offer the opportunity to gain a fuller understanding of this contemporary accountability concept and its potential, appeal, and implications for continued use in contemporary educational settings. Accountability measures and practices serve as a way to govern schools; by analysing the history of accountability as the concept has been practised in the education sphere, the article will discuss both pros and cons of such a methodology, particularly...

  6. Educational Testing as an Accountability Measure

    DEFF Research Database (Denmark)

    Ydesen, Christian

    2013-01-01

    This article reveals perspectives based on experiences from twentieth-century Danish educational history by outlining contemporary, test-based accountability regime characteristics and their implications for education policy. The article introduces one such characteristic, followed by an empirical analysis of the origins and impacts of test-based accountability measures applying both top-down and bottom-up perspectives. These historical perspectives offer the opportunity to gain a fuller understanding of this contemporary accountability concept and its potential, appeal, and implications for continued use in contemporary educational settings. Accountability measures and practices serve as a way to govern schools; by analysing the history of accountability as the concept has been practised in the education sphere, the article will discuss both pros and cons of such a methodology, particularly …

  7. What does the Wonderlic Personnel Test measure?

    Science.gov (United States)

    Matthews, T Darin; Lassiter, Kerry S

    2007-06-01

    The present investigation examined the concurrent validity of the Wonderlic Personnel Test and Woodcock-Johnson-Revised Tests of Cognitive Ability which were administered to 37 college students, 27 women and 10 men, who ranged in age from 18 to 54 years (M=27.1, SD=8.7). Analysis yielded significant correlation coefficients between the Wonderlic Total score and the score for the WJ-R Broad Cognitive Ability Standard Battery (r = .55) and the Comprehensive Knowledge score (r= .34). Performance on the Wonderlic was not significantly correlated with fluid reasoning skills (r=.26) but was most strongly associated with overall intellectual functioning, as measured by the Woodcock-Johnson Standard Battery IQ score. While scores on the Wonderlic were more strongly associated with crystallized than fluid reasoning abilities, the Wonderlic test scores did not clearly show convergent and divergent validity evidence across these two broad domains of cognitive ability.

  8. Spectral Test Instrument for Color Vision Measurement

    Institute of Scientific and Technical Information of China (English)

    Balázs Vince Nagy; György Ábrahám

    2005-01-01

    Common displays such as CRT or LCD screens have limited capabilities in displaying most color spectra correctly. The main disadvantage of these devices is that they work with three primaries and the colors displayed are the mixture of these three colours. Consequently these devices can be confusing in testing human color identification, because the spectral distribution of the colors displayed is the combined spectrum of the three primaries. We have developed a new instrument for spectrally correct color vision measurement. This instrument uses light emitting diodes (LEDs) and is capable of producing all spectra of perceivable colors, thus with appropriate test methods this instrument can be a reliable and useful tool in testing human color vision and in verifying color vision correction.

  9. [Thurstone model application to difference sensory tests].

    Science.gov (United States)

    Angulo, Ofelia; O'Mahony, Michael

    2009-12-01

    Part of understanding why judges perform better on some difference tests than others requires an understanding of how information coming from the mouth to the brain is processed. For some tests it is processed more efficiently than others. This is described by what has been called Thurstonian modeling. This brief review introduces the concepts and ideas involved in Thurstonian modeling as applied to sensory difference measurement. It summarizes the literature concerned with the theorizing and confirmation of Thurstonian models. It introduces the important concept of stimulus variability and the fundamental measure of sensory difference: d'. It indicates how the paradox of discriminatory non-discriminators, which had puzzled researchers for years, can be simply explained using the model. It considers how memory effects and the complex interactions in the mouth can reduce d' by increasing the variance of sensory distributions.
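
    As a minimal worked example of the Thurstonian approach, the sketch below converts the proportion of correct responses in a two-alternative forced-choice (2-AFC) difference test into d′ under the standard equal-variance Gaussian assumption; other protocols (triangle, duo-trio) require their own psychometric functions, which are not shown here.

```python
# Minimal Thurstonian sketch: convert the proportion correct in a 2-AFC
# difference test into d', assuming equal-variance Gaussian sensory
# distributions (d' = sqrt(2) * Phi^-1(Pc)).
from math import sqrt
from scipy.stats import norm

def d_prime_2afc(proportion_correct):
    if not 0.5 < proportion_correct < 1.0:
        raise ValueError("2-AFC proportion correct must lie in (0.5, 1.0)")
    return sqrt(2.0) * norm.ppf(proportion_correct)

print(f"Pc = 0.75  ->  d' = {d_prime_2afc(0.75):.2f}")   # about 0.95
```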

  10. Improved testing inference in mixed linear models

    CERN Document Server

    Melo, Tatiane F N; Cribari-Neto, Francisco; 10.1016/j.csda.2008.12.007

    2011-01-01

    Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Oftentimes, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test and also to a test obtained from a modified profile likelihood function. Our results generalize those in Zucker et al. (Journal of the Royal Statistical Society B, 2000, 62, 827-838) by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for random effects nonlinear covariance matrix structure. We report numerical evidence which shows that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented.
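
    A rough sketch of where such a correction plugs in is given below: two nested mixed linear models are fitted by maximum likelihood with statsmodels, the ordinary likelihood-ratio statistic is formed, and a Bartlett-type adjustment LR/(1 + c) is applied. The data and the correction factor c are placeholders; the paper derives c analytically rather than assuming it.

```python
# Sketch of a likelihood-ratio test for a fixed effect in a mixed linear model,
# with a placeholder Bartlett-type adjustment LR_adj = LR / (1 + c). The data
# are simulated and c is an arbitrary stand-in for an analytically derived factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_subj, n_obs = 20, 5
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "x": rng.normal(size=n_subj * n_obs),
})
u = rng.normal(scale=1.0, size=n_subj)                    # random intercepts
data["y"] = 0.5 * data["x"] + u[data["subject"]] + rng.normal(size=len(data))

full = smf.mixedlm("y ~ x", data, groups=data["subject"]).fit(reml=False)
reduced = smf.mixedlm("y ~ 1", data, groups=data["subject"]).fit(reml=False)

lr = 2.0 * (full.llf - reduced.llf)
q = 1                                                     # number of restrictions tested
c = 0.05                                                  # placeholder Bartlett factor
lr_adj = lr / (1.0 + c)
print(f"LR = {lr:.2f} (p = {chi2.sf(lr, q):.4f}); adjusted LR = {lr_adj:.2f} (p = {chi2.sf(lr_adj, q):.4f})")
```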

  11. Test Beam Measurements on Picosec Gaseous Detector.

    CERN Document Server

    Sohl, Lukas

    2017-01-01

    In the Picosec project, micro-pattern gaseous detectors with a time resolution of some tens of picoseconds are developed. The detectors are based on Micromegas detectors. With a Cherenkov window and a photocathode, the time jitter from the varying positions of the primary ionization clusters can be eliminated. This report describes the beam setup and measurements of different Picosec prototypes. A time resolution of under 30 ps has been measured during the test beam. This report gives an overview of my work as a Summer Student: I set up and operated a triple-GEM tracker and a trigger system for the beam, and during the beam I measured different prototypes of Picosec detectors and analysed the data.

  12. Nevada Test Site seismic telemetry measurements

    Energy Technology Data Exchange (ETDEWEB)

    Albright, J N; Parker, L E; Horton, E H

    1983-08-01

    The feasibility and limitations of surface-to-tunnel seismic telemetry at the Nevada Test Site were explored through field measurements using current technology. Range functions for signaling were determined through analysis of monofrequency seismic signals injected into the earth at various sites as far as 70 km (43 mi) from installations of seismometers in the G-Tunnel complex of Rainier Mesa. Transmitted signal power at 16, 24, and 32 Hz was measured at two locations in G-Tunnel separated by 670 m (2200 ft). Transmissions from 58 surface sites distributed primarily along three azimuths from G-Tunnel were studied. The G-Tunnel noise environment was monitored over the 20-day duration of the field tests. Noise-power probability functions were calculated for 20-s and 280-s seismic-record populations. Signaling rates were calculated for signals transmitted from superior transmitter sites to G-Tunnel. A detection threshold of 13 dB re 1 nm² displacement power at 95% reliability was demanded. Consideration of field results suggests that even for the frequency range used in this study, substantially higher signaling rates are likely to be obtained in future work in view of the present lack of information relevant to hardware-siting criteria and the seismic propagation paths at the Nevada Test Site. 12 references.

  13. Empirical Measurement of the Software Testing and Reliability

    Institute of Scientific and Technical Information of China (English)

    Zou Feng-zhong; Xu Ren-zuo

    2004-01-01

    The meanings of the parameters of software reliability models are investigated in terms of the software testing process and of other measurements of the software. Based on this investigation, the empirical estimation of the parameters is addressed. On one hand, these empirical estimates are themselves measurements of the software, which can be used to control and optimize the software development process. On the other hand, by treating these empirical estimates as Bayes priors, software reliability models are extended so that the engineers' experience can be integrated into, and hence improve, the models.

  14. Controlling acquiescence bias in measurement invariance tests

    Directory of Open Access Journals (Sweden)

    Aichholzer Julian

    2015-01-01

    Assessing measurement invariance (MI) is an important cornerstone in establishing equivalence of instruments and comparability of constructs. However, a common concern is that respondent differences in acquiescence response style (ARS) behavior could entail a lack of MI for the measured constructs. This study investigates if and how ARS impacts MI and the level of MI achieved. Data from two representative samples and two popular short Big Five personality scales were analyzed to study hypothesized ARS differences among educational groups. Multiple-group factor analysis and the random intercept method for controlling ARS are used to investigate MI with and without controlling for ARS. Results suggest that, contrary to expectations, controlling for ARS had little impact on conclusions regarding the level of MI of the instruments. Thus, the results suggest that testing MI is not an appropriate means for detecting ARS differences per se. Implications and further research areas are discussed.

  15. Models Used for Measuring Customer Engagement

    Directory of Open Access Journals (Sweden)

    Mihai TICHINDELEAN

    2013-12-01

    The purpose of the paper is to define and measure customer engagement as a forming element of relationship marketing theory. In the first part of the paper, the authors review the marketing literature regarding the concept of customer engagement and summarize the main models for measuring it. One probability model (the Pareto/NBD model) and one parametric model (the RFM model), specific for the customer acquisition phase, are theoretically detailed. The second part of the paper is an application of the RFM model; the authors demonstrate that there is no statistically significant variation within the clusters formed on two different data sets (training and test set) if the cluster centroids of the training set are used as initial cluster centroids for the second test set.
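
    The centroid-transfer step described above can be sketched as follows (with synthetic RFM data standing in for the two data sets): the training set is clustered with k-means and the resulting centroids are used to initialize the clustering of the test set.

```python
# Sketch of the centroid-transfer step: cluster the training set's RFM scores,
# then cluster the test set using the training centroids as initial centroids.
# The RFM values are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
train_rfm = rng.random((200, 3))          # columns: recency, frequency, monetary
test_rfm = rng.random((150, 3))

scaler = StandardScaler().fit(train_rfm)
km_train = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(train_rfm))

# Initialize the test-set clustering with the training centroids (single run).
km_test = KMeans(n_clusters=4, init=km_train.cluster_centers_, n_init=1).fit(
    scaler.transform(test_rfm))

print("train inertia:", round(km_train.inertia_, 2), "test inertia:", round(km_test.inertia_, 2))
```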

  16. Influence of test and person characteristics on nonparametric appropriateness measurement

    NARCIS (Netherlands)

    Meijer, Rob R.; Molenaar, Ivo W.; Sijtsma, Klaas

    1994-01-01

    Appropriateness measurement in nonparametric item response theory modeling is affected by the reliability of the items, the test length, the type of aberrant response behavior, and the percentage of aberrant persons in the group. The percentage of simulees defined a priori as aberrant responders tha

  17. Influence of Test and Person Characteristics on Nonparametric Appropriateness Measurement

    NARCIS (Netherlands)

    Meijer, Rob R; Molenaar, Ivo W; Sijtsma, Klaas

    1994-01-01

    Appropriateness measurement in nonparametric item response theory modeling is affected by the reliability of the items, the test length, the type of aberrant response behavior, and the percentage of aberrant persons in the group. The percentage of simulees defined a priori as aberrant responders tha

  18. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yin Peili

    2017-08-01

    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.

  19. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Science.gov (United States)

    Yin, Peili; Wang, Jianhua; Lu, Chunxia

    2017-08-01

    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.
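
    The triangular-patch virtual workpiece itself is beyond a short example, but the nominal involute profile that an involute-measuring procedure is checked against follows directly from the base-circle radius; the sketch below only generates such nominal profile points and is not part of the VGMI.

```python
# Generate nominal involute-of-a-circle profile points from a base radius r_b:
#   x(t) = r_b (cos t + t sin t),  y(t) = r_b (sin t - t cos t).
# This is only the nominal curve a profile check could be compared against,
# not the triangular-patch virtual workpiece used by the VGMI.
import numpy as np

def involute_profile(r_base, t_max, n=200):
    t = np.linspace(0.0, t_max, n)                  # roll angle in radians
    x = r_base * (np.cos(t) + t * np.sin(t))
    y = r_base * (np.sin(t) - t * np.cos(t))
    return np.column_stack((x, y))

pts = involute_profile(r_base=30.0, t_max=0.8)      # e.g. a 30 mm base circle
print(pts[:3])
```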

  20. Proceedings Tenth Workshop on Model Based Testing

    OpenAIRE

    Pakulin, Nikolay; Petrenko, Alexander K.; Schlingloff, Bernd-Holger

    2015-01-01

    The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but is also capable of finding errors in the models themselves. The intent of this workshop is to bring together researchers and users of model...

  1. Remote control missile model test

    Science.gov (United States)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  2. Intelligence Is What the Intelligence Test Measures. Seriously

    Directory of Open Access Journals (Sweden)

    Han L. J. van der Maas

    2014-02-01

    The mutualism model, an alternative to the g-factor model of intelligence, implies a formative measurement model in which “g” is an index variable without a causal role. If this model is accurate, the search for a genetic or brain instantiation of “g” is deemed useless. This also implies that the (weighted) sum score of items of an intelligence test is just what it is: a weighted sum score. Preference for one index above another is a pragmatic issue that rests mainly on predictive value.

  3. State of the art hydraulic turbine model test

    Science.gov (United States)

    Fabre, Violaine; Duparchy, Alexandre; Andre, Francois; Larroze, Pierre-Yves

    2016-11-01

    Model tests are essential in hydraulic turbine development and related fields. The methods and technologies used to perform these tests show constant progress and provide access to further information. In addition, due to its contractual nature, the test demand evolves continuously in terms of quantity and accuracy. Keeping in mind that the principal aim of model testing is the transposition of the model measurements to the real machine, the measurements should be performed accurately, and a critical analysis of the model test results is required to distinguish the transposable hydraulic phenomena from the test rig interactions. Although the resonances’ effects are known and described in the IEC standard, their identification is difficult. Drawing on extensive model-testing experience, we illustrate with a few examples how to identify the potential problems induced by the test rig. This paper contains some of our best practices to obtain the most accurate, relevant, and independent test-rig measurements.

  4. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answer could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  5. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
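
    The simplest moment-based idea mentioned above can be illustrated as follows: with homoscedastic measurement error of known variance in the covariate, the naive regression slope is attenuated by the reliability ratio, and dividing by an estimate of that ratio corrects it. The simulated data and error variance below are placeholders.

```python
# Moment-based correction for attenuation in simple linear regression when the
# covariate is observed with known, homoscedastic measurement error: the naive
# OLS slope is divided by the estimated reliability ratio var_true / var_observed.
import numpy as np

rng = np.random.default_rng(42)
n, sigma_err = 500, 0.8
x_true = rng.normal(size=n)
w = x_true + rng.normal(scale=sigma_err, size=n)    # observed, error-contaminated covariate
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)

slope_naive = np.cov(w, y, ddof=1)[0, 1] / np.var(w, ddof=1)
reliability = (np.var(w, ddof=1) - sigma_err**2) / np.var(w, ddof=1)
slope_corrected = slope_naive / reliability

print(f"naive slope = {slope_naive:.2f}, corrected = {slope_corrected:.2f} (truth = 2.0)")
```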

  6. Block-based test data adequacy measurement criteria and test complexity metrics

    Institute of Scientific and Technical Information of China (English)

    陈卫东; 杨建军; 叶澄清; 潘云鹤

    2002-01-01

    On the basis of software testing tools we developed for programming languages, we firstly present a new control flowgraph model based on block. In view of the notion of block, we extend the traditional program-based software test data adequacy measurement criteria, and empirically analyze the subsume relation between these measurement criteria. Then, we define four test complexity metrics based on block. They are J-complexity 0; J-complexity 1 ; J-complexity 1 + ; J-complexity 2. Finally, we show the Kiviat diagram that makes software quality visible.

  7. Block-based test data adequacy measurement criteria and test complexity metrics

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    On the basis of software testing tools we developed for programming languages, we firstly present a new control flowgraph model based on block. In view of the notion of block, we extend the traditional program-based software test data adequacy measurement criteria, and empirically analyze the subsume relation between these measurement criteria. Then, we define four test complexity metrics based on block. They are J-complexity 0; J-complexity 1; J-complexity 1 +; J-complexity 2. Finally, we show the Kiviat diagram that makes software quality visible.

  8. A Blind Test of Hapke's Photometric Model

    Science.gov (United States)

    Helfenstein, P.; Shepard, M. K.

    2003-01-01

    Hapke's bidirectional reflectance equation is a versatile analytical tool for predicting (i.e. forward modeling) the photometric behavior of a particulate surface from the observed optical and structural properties of its constituents. Remote sensing applications of Hapke's model, however, generally seek to predict the optical and structural properties of particulate soil constituents from the observed photometric behavior of a planetary surface (i.e. inverse-modeling). Our confidence in the latter approach can be established only if we ruthlessly test and optimize it. Here, we summarize preliminary results from a blind test of the Hapke model using laboratory measurements obtained with the Bloomsburg University Goniometer (B.U.G.). The first author selected eleven well-characterized powder samples and measured the spectrophotometric behavior of each. A subset of twenty undisclosed examples of the photometric measurement sets was sent to the second author, who fit the data using the Hapke model and attempted to interpret their optical and mechanical properties from photometry alone.

  9. A randomized controlled pilot study of VO2 max testing: a potential model for measuring relative in vivo efficacy of different red blood cell products.

    Science.gov (United States)

    Bennett-Guerrero, Elliott; Lockhart, Evelyn L; Bandarenko, Nicholas; Campbell, Mary L; Natoli, Michael J; Jamnik, Veronika K; Carter, Timothy R; Moon, Richard E

    2017-03-01

    Randomized trials, for example, RECESS, comparing "young" (median, 7-day) versus "middle-aged" (median, 28-day) red blood cells (RBCs), showed no difference in outcome. These data are important; however, they do not inform us about the safety and effectiveness of the oldest RBCs, which some patients receive. It may not be feasible to conduct a clinical trial randomizing patients to receive the oldest blood. Therefore, we propose strenuous exercise (VO2 max testing) as a model to study the relative efficacy to increase oxygen delivery to tissue of different RBC products, for example, extremes of storage duration. In this pilot study, eight healthy subjects had 2 units of leukoreduced RBCs collected by apheresis in AS-3 using standard methods. Subjects were randomized to receive both (2) units of their autologous RBCs at either 7 or 42 days after blood collection. VO2 max testing on a cycle ergometer was performed 2 days before (Monday) and 2 days after (Friday) the transfusion visit (Wednesday). This design avoids confounding effects on intravascular volume from the 2-unit blood transfusion. The primary outcome was the difference in VO2 max between Friday and Monday (delta VO2 max). VO2 max increased more in the 7-day RBC arm (8.7 ± 6.9% vs. 1.9 ± 6.5%, p = 0.202 for comparison between arms). Exercise duration (seconds) increased in the 7-day RBC arm (8.4 ± 1.7%) but actually decreased in the 42-day arm (-2.6 ± 3.6%, p = 0.002). This pilot study suggests that VO2 max testing has potential as a rigorous and quantitative in vivo functional assay of RBC function. Our preliminary results suggest that 42-day RBCs are inferior to 7-day RBCs at delivering oxygen to tissues. © 2016 AABB.

  10. Models of Credit Risk Measurement

    OpenAIRE

    Hagiu Alina

    2011-01-01

    Credit risk is defined as the risk of financial loss caused by the failure of a counterparty. According to statistics, for financial institutions credit risk is much more important than market risk, and reduced diversification of credit risk is the main cause of bank failures. Only recently did the banking industry begin to measure credit risk in the context of a portfolio, along with the development of risk management that started with value at risk (VaR) models. Once measured, credit risk can be diversified ...

  11. Cold Test Measurements on the GTF Prototype RF Gun

    Energy Technology Data Exchange (ETDEWEB)

    Gierman, S.M.

    2010-12-03

    The SSRL Gun Test Facility (GTF) was built to develop a high brightness electron injector for the LCLS and has been operational since 1996. Based on longitudinal phase space measurements showing a correlated energy spread the gun was removed and re-characterized in 2002. The low power RF measurements performed on the gun are described below. Perturbative bead measurements were performed to determine the field ratio in the two-cell gun, and network analyzer measurements were made to characterize the mode structure. A second probe was installed to monitor the RF field in the first cell, and a diagnostic was developed to monitor the high-power field ratio. Calibration of the RF probes, a model for analyzing RF measurements, and Superfish simulations of bead and RF measurements are described.

  12. Mechanical Vibrations Modeling and Measurement

    CERN Document Server

    Schmitz, Tony L

    2012-01-01

    Mechanical Vibrations: Modeling and Measurement describes essential concepts in vibration analysis of mechanical systems. It incorporates the required mathematics, experimental techniques, fundamentals of modal analysis, and beam theory into a unified framework that is written to be accessible to undergraduate students, researchers, and practicing engineers. To unify the various concepts, a single experimental platform is used throughout the text to provide experimental data and evaluation. Engineering drawings for the platform are included in an appendix. Additionally, MATLAB programming solutions are integrated into the content throughout the text. This book also: discusses model development using frequency response function measurements; presents a clear connection between continuous beam models and finite degree of freedom models; includes MATLAB code to support numerical examples that are integrated into the text narrative; and uses mathematics to support vibrations theory and emphasizes the practical significance ...

  13. Validation of measured friction by process tests

    DEFF Research Database (Denmark)

    Eriksen, Morten; Henningsen, Poul; Tan, Xincai;

    The objective of sub-task 3.3 is to evaluate under actual process conditions the friction formulations determined by simulative testing. As regards task 3.3 the following tests have been used according to the original project plan: 1. standard ring test and 2. double cup extrusion test. The task ...

  14. Division Quilts: A Measurement Model

    Science.gov (United States)

    Pratt, Sarah S.; Lupton, Tina M.; Richardson, Kerri

    2015-01-01

    As teachers seek activities to assist students in understanding division as more than just the algorithm, they find many examples of division as fair sharing. However, teachers have few activities to engage students in a quotative (measurement) model of division. Efraim Fischbein and his colleagues (1985) defined two types of whole-number…

  15. Testing Strategies for Model-Based Development

    Science.gov (United States)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  16. Acceptance Testing--Course Readiness Measurement.

    Science.gov (United States)

    Kaczka, Eugene; Singer, Frank

    For several semesters, the teaching staff of the administrative statistics course at The University of Massachusetts have engaged in "acceptance testing"--the screening of incoming students for some very basic skills with an open book, unlimited time, repeatable test. Test results demonstrated that successful completion of prerequisite…

  17. Constructing three emotion knowledge tests from the invariant measurement approach

    Directory of Open Access Journals (Sweden)

    Ana R. Delgado

    2017-09-01

    Background: Psychological constructionist models like the Conceptual Act Theory (CAT) postulate that complex states such as emotions are composed of basic psychological ingredients that are more clearly respected by the brain than basic emotions. The objective of this study was the construction and initial validation of Emotion Knowledge measures from the CAT frame by means of an invariant measurement approach, the Rasch Model (RM). Psychological distance theory was used to inform item generation. Methods: Three EK tests—emotion vocabulary (EV), close emotional situations (CES) and far emotional situations (FES)—were constructed and tested with the RM in a community sample of 100 females and 100 males (age range: 18–65), both separately and conjointly. Results: It was corroborated that data-RM fit was sufficient. Then, the effect of type of test and emotion on Rasch-modelled item difficulty was tested. Significant effects of emotion on EK item difficulty were found, but the only statistically significant difference was that between “happiness” and the remaining emotions; neither type of test nor interaction effects on EK item difficulty were statistically significant. The testing of gender differences was carried out after corroborating that differential item functioning (DIF) would not be a plausible alternative hypothesis for the results. No statistically significant sex-related differences were found in EV, CES, FES, or total EK. However, the sign of d indicates that female participants were consistently better than male ones, a result that will be of interest for future meta-analyses. Discussion: The three EK tests are ready to be used as components of a higher-level measurement process.

  18. Measurement of ability emotional intelligence: results for two new tests.

    Science.gov (United States)

    Austin, Elizabeth J

    2010-08-01

    Emotional intelligence (EI) has attracted considerable interest amongst both individual differences researchers and those in other areas of psychology who are interested in how EI relates to criteria such as well-being and career success. Both trait (self-report) and ability EI measures have been developed; the focus of this paper is on ability EI. The associations of two new ability EI tests with psychometric intelligence, emotion perception, and the Mayer-Salovey-Caruso EI test (MSCEIT) were examined. The new EI tests were the Situational Test of Emotion Management (STEM) and the Situational Test of Emotional Understanding (STEU). Only the STEU and the MSCEIT Understanding Emotions branch were significantly correlated with psychometric intelligence, suggesting that only understanding emotions can be regarded as a candidate new intelligence component. These understanding emotions tests were also positively correlated with emotion perception tests, and STEM and STEU scores were positively correlated with MSCEIT total score and most branch scores. Neither the STEM nor the STEU were significantly correlated with trait EI tests, confirming the distinctness of trait and ability EI. Taking the present results as a starting-point, approaches to the development of new ability EI tests and models of EI are suggested.

  19. Laser Tracker Calibration - Testing the Angle Measurement System -

    Energy Technology Data Exchange (ETDEWEB)

    Gassner, Georg; Ruland, Robert; /SLAC

    2008-12-05

    Physics experiments at the SLAC National Accelerator Laboratory (SLAC) usually require high accuracy positioning, e.g. 100 µm over a distance of 150 m or 25 µm in a 10 x 10 x 3 meter volume. Laser tracker measurement systems have become one of the most important tools for achieving these accuracies when mapping components. The accuracy of these measurements is related to the manufacturing tolerances of various individual components, the resolutions of measurement systems, the overall precision of the assembly, and how well imperfections can be modeled. As with theodolites and total stations, one can remove the effects of most assembly and calibration errors by measuring targets in both direct and reverse positions and computing the mean to obtain the result. However, this approach does not compensate for errors originating from the encoder system. In order to improve and gain a better understanding of laser tracker angle measurement tolerances we extended our laboratory's capabilities with the addition of a horizontal angle calibration test stand. This setup is based on the use of a high precision rotary table providing an angular accuracy of better than 0.2 arcsec. Presently, our setup permits only tests of the horizontal angle measurement system. A test stand for vertical angle calibration is under construction. Distance measurements (LECOCQ & FUSS, 2000) are compared to an interferometer bench for distances of up to 32 m. Together both tests provide a better understanding of the instrument and how it should be operated. The observations also provide a reasonable estimate of covariance information of the measurements according to their actual performance for network adjustments.
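
    The direct/reverse averaging mentioned above can be sketched in a few lines: a reverse-face horizontal reading sits roughly 180° away, and averaging the two faces on the circle cancels collimation-type instrument errors. The readings below are invented.

```python
# Two-face (direct/reverse) averaging of a horizontal-angle reading: the reverse
# face sits roughly 180 deg away, and the circular mean of the two faces cancels
# collimation-type instrument errors. The readings are invented.
def two_face_mean(face1_deg, face2_deg):
    reduced = (face2_deg - 180.0) % 360.0                    # bring face II onto face I
    diff = ((reduced - face1_deg + 180.0) % 360.0) - 180.0   # signed difference on the circle
    return (face1_deg + diff / 2.0) % 360.0

print(f"{two_face_mean(45.0021, 225.0035):.4f} deg")         # -> 45.0028
```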

  20. Linear Logistic Test Modeling with R

    Directory of Open Access Journals (Sweden)

    Purya Baghaei

    2014-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The applications of the model in test validation, hypothesis testing, cross-cultural studies of test bias, rule-based item generation, and investigating construct-irrelevant factors which contribute to item difficulty are explained. The model is applied to an English as a foreign language reading comprehension test and the results are discussed.
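
    The paper works with the eRm package in R; as a language-neutral sketch of the model's core constraint only, the snippet below expresses item difficulties as a linear combination of basic parameters through a weight (Q) matrix, β = Qη, and recovers η from given difficulties by least squares. The Q matrix and parameter values are invented.

```python
# The LLTM constrains Rasch item difficulties to beta = Q @ eta, where Q weights
# the basic (e.g. cognitive-operation) parameters per item. Given item
# difficulties, eta can be recovered by least squares. This only illustrates the
# constraint; it is not the eRm estimation used in the paper.
import numpy as np

Q = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [0, 0, 1]], dtype=float)               # 5 items x 3 basic parameters
eta_true = np.array([0.5, -0.3, 1.2])                # invented basic parameters
beta = Q @ eta_true                                  # implied item difficulties

eta_hat, *_ = np.linalg.lstsq(Q, beta, rcond=None)
print("recovered basic parameters:", np.round(eta_hat, 3))
```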

  1. Testing Linear Models for Ability Parameters in Item Response Models

    NARCIS (Netherlands)

    Glas, Cees A.W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood ...

  2. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small-sample performance of the proposed tests is evaluated in a Monte Carlo study and compared

  3. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
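
    Goodness-of-fit to utility data is commonly summarized with the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)); the sketch below computes both in the form typically stated in calibration guidance. The monthly figures are invented, and the governing standard should be consulted for exact definitions and acceptance thresholds.

```python
# Commonly used calibration fit indices: normalized mean bias error (NMBE) and
# coefficient of variation of the RMSE (CV(RMSE)), both in percent. The monthly
# energy figures are invented; consult the governing standard for exact
# definitions and acceptance thresholds.
import numpy as np

def nmbe(measured, simulated, n_params=1):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / ((len(m) - n_params) * np.mean(m))

def cv_rmse(measured, simulated, n_params=1):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.sum((m - s) ** 2) / (len(m) - n_params))
    return 100.0 * rmse / np.mean(m)

metered = [820, 790, 700, 620, 540, 610, 700, 720, 640, 600, 690, 800]   # kWh per month
modeled = [830, 770, 690, 640, 560, 590, 710, 700, 660, 590, 700, 790]
print(f"NMBE = {nmbe(metered, modeled):.2f} %, CV(RMSE) = {cv_rmse(metered, modeled):.2f} %")
```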

  4. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described by the same model description language (MDL). Then, the requirements are transformed into a specification model and the programs into an implementation model. Thus, the elements and structures of the two models are compared, and the differences between them are obtained. Based on the differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usages of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, are described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  5. Dynamic model of Fast Breeder Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Vaidyanathan, G., E-mail: vaidya@igcar.gov.i [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Kasinathan, N.; Velusamy, K. [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India)

    2010-04-15

    Fast Breeder Test Reactor (FBTR) is a 40 MWt/13.2 MWe sodium-cooled reactor operating since 1985. It is a loop type reactor. As part of the safety analysis the response of the plant to various transients is needed. In this connection a computer code named DYNAM was developed to model the reactor core, the intermediate heat exchanger, steam generator, piping, etc. This paper deals with the mathematical model of the various components of FBTR, the numerical techniques to solve the model, and comparison of the predictions of the code with plant measurements. Also presented is the benign response of the plant to a station blackout condition, which brings out the role of the various reactivity feedback mechanisms combined with a gradual coast-down of reactor sodium flow.
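
    The DYNAM code itself is not reproduced here; as a heavily simplified illustration of the kind of coupled equations such a plant-dynamics model integrates, the sketch below solves point kinetics with one delayed-neutron group and a first-order temperature feedback. Every coefficient is a placeholder, not an FBTR parameter.

```python
# Heavily simplified illustration (NOT the DYNAM model): point kinetics with one
# delayed-neutron group coupled to a first-order temperature feedback, with a
# small step reactivity insertion at t = 1 s. All coefficients are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

beta, Lambda, lam = 0.0035, 1e-5, 0.08     # delayed fraction, generation time (s), decay const (1/s)
alpha_T = -2e-5                             # temperature reactivity coefficient (dk/k per K)
tau_T, k_heat = 5.0, 300.0                  # thermal time constant (s), K rise per unit relative power

def rho_step(t):
    return 0.1 * beta if t >= 1.0 else 0.0

def rhs(t, y, rho_ext):
    n, c, dT = y                            # relative power, precursor density, temperature rise
    rho = rho_ext(t) + alpha_T * dT
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    ddT = (k_heat * (n - 1.0) - dT) / tau_T
    return [dn, dc, ddT]

c0 = beta / (Lambda * lam)                  # precursor equilibrium for n = 1
sol = solve_ivp(rhs, (0.0, 60.0), [1.0, c0, 0.0], args=(rho_step,), method="LSODA", max_step=0.05)
print(f"relative power after step: {sol.y[0, -1]:.3f}, temperature rise: {sol.y[2, -1]:.1f} K")
```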

  6. Developing and Testing a Bayesian Analysis of Fluorescence Lifetime Measurements

    Science.gov (United States)

    Needleman, Daniel J.

    2017-01-01

    FRET measurements can provide dynamic spatial information on length scales smaller than the diffraction limit of light. Several methods exist to measure FRET between fluorophores, including Fluorescence Lifetime Imaging Microscopy (FLIM), which relies on the reduction of fluorescence lifetime when a fluorophore is undergoing FRET. FLIM measurements take the form of histograms of photon arrival times, containing contributions from a mixed population of fluorophores both undergoing and not undergoing FRET, with the measured distribution being a mixture of exponentials of different lifetimes. Here, we present an analysis method based on Bayesian inference that rigorously takes into account several experimental complications. We test the precision and accuracy of our analysis on controlled experimental data and verify that we can faithfully extract model parameters, both in the low-photon and low-fraction regimes. PMID:28060890
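
    A toy version of the core inference can be written down directly: photon arrival times are modeled as a two-component exponential mixture (FRET and non-FRET lifetimes) and a posterior over the FRET fraction is evaluated on a grid under a flat prior. The real analysis also accounts for the instrument response function and background, which are ignored in this sketch; all numbers are simulated.

```python
# Toy version of the core inference: photon arrival times as a two-component
# exponential mixture (FRET and non-FRET lifetimes, here assumed known) with a
# grid posterior over the FRET fraction f under a flat prior. The instrument
# response function and background handled by the full method are ignored.
import numpy as np

rng = np.random.default_rng(3)
tau_fret, tau_free, f_true = 1.0, 2.5, 0.3            # lifetimes in ns, true FRET fraction
n = 2000
is_fret = rng.random(n) < f_true
t = np.where(is_fret, rng.exponential(tau_fret, n), rng.exponential(tau_free, n))

f_grid = np.linspace(0.005, 0.995, 199)
mix = (f_grid[:, None] / tau_fret * np.exp(-t / tau_fret)
       + (1.0 - f_grid[:, None]) / tau_free * np.exp(-t / tau_free))
log_post = np.sum(np.log(mix), axis=1)                 # flat prior -> posterior ~ likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_f = np.sum(f_grid * post)
print(f"posterior mean FRET fraction = {mean_f:.3f} (simulated truth {f_true})")
```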

  7. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van

    2007-01-01

    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension ki

  8. Propfan test assessment testbed aircraft flutter model test report

    Science.gov (United States)

    Jenness, C. M. J.

    1987-01-01

    The PropFan Test Assessment (PTA) program includes flight tests of a propfan power plant mounted on the left wing of a modified Gulfstream II testbed aircraft. A static balance boom is mounted on the right wing tip for lateral balance. Flutter analyses indicate that these installations reduce the wing flutter stabilizing speed and that torsional stiffening and the installation of a flutter stabilizing tip boom are required on the left wing for adequate flutter safety margins. Wind tunnel tests of a 1/9th-scale high-speed flutter model of the testbed aircraft were conducted. The test program included the design, fabrication, and testing of the flutter model and the correlation of the flutter test data with analysis results. Excellent correlations with the test data were achieved in posttest flutter analysis using actual model properties. It was concluded that the flutter analysis method used was capable of accurate flutter predictions for both the (symmetric) twin propfan configuration and the (unsymmetric) single propfan configuration. The flutter analysis also revealed that the differences between the tested model configurations and the current aircraft design caused the (scaled) model flutter speed to be significantly higher than that of the aircraft, at least for the single propfan configuration without a flutter boom. Verification of the aircraft final design should, therefore, be based on flutter predictions made with the test validated analysis methods.

  9. 40 CFR 401.13 - Test procedures for measurement.

    Science.gov (United States)

    2010-07-01

    § 401.13 Test procedures for measurement. The test procedures for measurement which are prescribed at part 136 of this chapter shall apply to expressions of pollutant...

  10. Refine test items for accurate measurement: six valuable tips.

    Science.gov (United States)

    Siroky, Karen; Di Leonardi, Bette Case

    2015-01-01

    Nursing Professional Development (NPD) specialists frequently design test items to assess competence, to measure learning outcomes, and to create active learning experiences. This article presents six valuable tips for improving test items and using test results to strengthen validity of measurement. NPD specialists can readily apply these tips and examples to measure knowledge with greater accuracy.

  11. Finds in Testing Experiments for Model Evaluation

    Institute of Scientific and Technical Information of China (English)

    WU Ji; JIA Xiaoxia; LIU Chang; YANG Haiyan; LIU Chao

    2005-01-01

    To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.

  12. Axial force measurement for esophageal function testing

    DEFF Research Database (Denmark)

    Gravesen, Flemming Holbæk; Funch-Jensen, Peter; Gregersen, Hans

    2009-01-01

    … force (force in radial direction) whereas the bolus moves along the length of esophagus in a distal direction. Force measurements in the longitudinal (axial) direction provide a more direct measure of esophageal transport function. The technique used to record axial force has developed from external force transducers over in-vivo strain gauges of various sizes to electrical impedance based measurements. The amplitude and duration of the axial force has been shown to be as reliable as manometry. Normal, as well as abnormal, manometric recordings occur with normal bolus transit, which have been documented using imaging modalities such as radiography and scintigraphy. This inconsistency using manometry has also been documented by axial force recordings. This underlines the lack of information when diagnostics are based on manometry alone. Increasing the volume of a bag mounted on a probe …

  13. Testing of constitutive models in LAME.

    Energy Technology Data Exchange (ETDEWEB)

    Hammerand, Daniel Carl; Scherzinger, William Mark

    2007-09-01

    Constitutive models for computational solid mechanics codes are in LAME--the Library of Advanced Materials for Engineering. These models describe complex material behavior and are used in our finite deformation solid mechanics codes. To ensure the correct implementation of these models, regression tests have been created for constitutive models in LAME. A selection of these tests is documented here. Constitutive models are an important part of any solid mechanics code. If an analysis code is meant to provide accurate results, the constitutive models that describe the material behavior need to be implemented correctly. Ensuring the correct implementation of constitutive models is the goal of a testing procedure that is used with the Library of Advanced Materials for Engineering (LAME) (see [1] and [2]). A test suite for constitutive models can serve three purposes. First, the test problems provide the constitutive model developer a means to test the model implementation. This is an activity that is always done by any responsible constitutive model developer. Retaining the test problem in a repository where the problem can be run periodically is an excellent means of ensuring that the model continues to behave correctly. A second purpose of a test suite for constitutive models is that it gives application code developers confidence that the constitutive models work correctly. This is extremely important since any analyst that uses an application code for an engineering analysis will associate a constitutive model in LAME with the application code, not LAME. Therefore, ensuring the correct implementation of constitutive models is essential for application code teams. A third purpose of a constitutive model test suite is that it provides analysts with example problems that they can look at to understand the behavior of a specific model. Since the choice of a constitutive model, and the properties that are used in that model, have an enormous effect on the results of an

  14. Measurement Invariance Testing of a Three-Factor Model of Parental Warmth, Psychological Control, and Knowledge across European and Asian/Pacific Islander American Youth.

    Science.gov (United States)

    Luk, Jeremy W; King, Kevin M; McCarty, Carolyn A; Stoep, Ann Vander; McCauley, Elizabeth

    2016-06-01

    While the interpretation and effects of parenting on developmental outcomes may be different across European and Asian/Pacific Islander (API) American youth, measurement invariance of parenting constructs has rarely been examined. Utilizing multiple-group confirmatory factor analysis, we examined whether the latent structure of parenting measures are equivalent or different across European and API American youth. Perceived parental warmth, psychological control, and knowledge were reported by a community sample of 325 adolescents (242 Europeans and 83 APIs). Results indicated that one item did not load on mother psychological control for API American youth. After removing this item, we found metric invariance for all parenting dimensions, providing support for cross-cultural consistency in the interpretation of parenting items. Scalar invariance was found for father parenting, whereas three mother parenting items were non-invariant across groups at the scalar level. After taking into account several minor forms of measurement non-invariance, non-invariant factor means suggested that API Americans perceived lower parental warmth and knowledge but higher parental psychological control than European Americans. Overall, the degree of measurement non-invariance was not extensive and was primarily driven by a few parenting items. All but one parenting item included in this study may be used for future studies across European and API American youth.

  15. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  16. Tin Whisker Testing and Modeling

    Science.gov (United States)

    2015-11-01

    The roughness is partly a result of the shrinkage of the liquid between the primary tin dendrites. The roughness tends to trap contamination, which … diameter, density, and distribution were measured. Metallurgical analysis: to examine contamination levels and distributions, as-received …

  17. Integrated outburst detector sensor-model tests

    Institute of Scientific and Technical Information of China (English)

    DZIURZYŃSKI Wacław; WASILEWSKI Stanisław

    2011-01-01

    Outbursts of methane and rocks are, similarly to rock bursts, the biggest hazards in deep mines and are equally difficult to predict. The violent process of the outburst itself, along with the scale and range of hazards following the rapid discharge of gas and rocks, requires solutions which would enable quick and unambiguous detection of the hazard, immediate power supply cut-off, and evacuation of personnel from potentially hazardous areas. For this purpose, an integrated outburst detector was developed. Functions were assumed for the sensor, which was equipped with three measuring and detection elements: a chamber for continuous measurement of methane concentration, a pressure sensor, and a microphone. Tests of the sensor model were carried out to estimate the parameters which characterize the dynamic properties of the sensor. Given the impossibility of carrying out a full-scale experimental outburst, the sensor was tested during methane and coal dust explosions in the testing gallery at KD Barbara. The obtained results proved that the applied solutions are appropriate.

  18. Package Hermeticity Testing with Thermal Transient Measurements

    CERN Document Server

    Vass-Varnai, Andras

    2008-01-01

    The rapid incursion of new technologies such as MEMS and smart sensor device manufacturing requires new tailor-made packaging designs. In many applications these devices are exposed to humid environments. Since the penetration of moisture into the package may result in internal corrosion or shift of the operating parameters, the reliability testing of hermetically sealed packages has become a crucial question in the semiconductor industry.

  19. GEOCHEMICAL TESTING AND MODEL DEVELOPMENT - RESIDUAL TANK WASTE TEST PLAN

    Energy Technology Data Exchange (ETDEWEB)

    CANTRELL KJ; CONNELLY MP

    2010-03-09

    This Test Plan describes the testing and chemical analyses release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt cake single-shell tanks. The data are intended for use in the long-term performance assessment and conceptual model development.

  20. Modeling and testing of ethernet transformers

    Science.gov (United States)

    Bowen, David

    2011-12-01

    Twisted-pair Ethernet is now the standard home and office last-mile network technology. For decades, the IEEE standard that defines Ethernet has required electrical isolation between the twisted pair cable and the Ethernet device. So, for decades, every Ethernet interface has used magnetic core Ethernet transformers to isolate Ethernet devices and keep users safe in the event of a potentially dangerous fault on the network media. The current state-of-the-art Ethernet transformers are miniature devices; alternatives are explored which are capable of exceptional miniaturization or on-chip fabrication. This dissertation thoroughly explores the performance of the current commercial Ethernet transformers to both increase understanding of the device's behavior and outline performance parameters for replacement devices. Lumped element and distributed circuit models are derived; testing schemes are developed and used to extract model parameters from commercial Ethernet devices. Transfer relation measurements of the commercial Ethernet transformers are compared against the models' behavior, and it is found that the tuned, distributed models produce the best transfer-relation match to the measured data. Process descriptions and testing results on fabricated thin-film dielectric-core toroid transformers are presented. The best results were found for a 32-turn transformer loaded with 100 Ω, the impedance of twisted pair cable. This transformer gave a flat response from about 10 MHz to 40 MHz with a height of approximately 0.45. For the fabricated transformer structures, theoretical methods to determine resistance, capacitance and inductance are presented. A special analytical and numerical analysis of the fabricated transformer inductance is presented. Planar cuts of magnetic slope fields around the dielectric-core toroid are shown that describe the effect of core height and winding density on flux uniformity without a magnetic core.
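
    The abstract mentions theoretical methods for the fabricated toroids' resistance, capacitance and inductance without reproducing them. As a point of reference only (a textbook result, not taken from the dissertation), the low-frequency inductance of an N-turn toroid with a rectangular cross-section is:

```latex
% N-turn toroid, rectangular cross-section: core height h, inner radius a,
% outer radius b; for the dielectric-core devices described above, \mu_r \approx 1.
L \;=\; \frac{\mu_0 \mu_r N^2 h}{2\pi}\,\ln\frac{b}{a}
```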

  1. Direct Mass Measurements on the Superallowed Emitter $^{74}$Rb and its Daughter $^{74}$Kr Isospin-Symmetry-Breaking Correction for Standard-Model Tests

    CERN Document Server

    Kellerbauer, A G; Beck, D; Blaum, K; Bollen, G; Brown, B A; Delahaye, P; Guénaut, C; Herfurth, F; Kluge, H J; Lunney, M D; Schwarz, S; Schweikhard, L; Yazidjian, C

    2004-01-01

    The decay energy of the superallowed $\beta$-decay $^{74}$Rb($\beta^{+}$)$^{74}$Kr was determined by direct Penning trap mass measurements on both the mother and the daughter nuclide using the time-of-flight resonance technique and found to be $Q = 10416.8(4.5)$ keV. The exotic nuclide $^{74}$Rb, with a half-life of only 65 ms, is the shortest-lived nuclide on which a high-precision mass measurement in a Penning trap has been carried out. Together with existing data for the partial half-life as well as theoretical corrections, the decay energy yields a comparative half-life of $Ft$ = 3084(15) s for this decay, in agreement with the mean value for the series of the lighter nuclides from $^{10}$C to $^{54}$Co. Assuming conserved vector current, this result allows for an experimental determination of the isospin-symmetry-breaking correction $\delta_C$.
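
    Since both atomic masses were measured directly, the quoted decay energy is, schematically, just their difference (writing m for atomic masses and Q for the electron-capture Q value):

```latex
Q \;=\; \bigl[\,m(^{74}\mathrm{Rb}) - m(^{74}\mathrm{Kr})\,\bigr]\,c^{2} \;=\; 10416.8(4.5)\ \mathrm{keV}
```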

  2. Axial force measurement for esophageal function testing

    Institute of Scientific and Technical Information of China (English)

    Flemming H Gravesen; Peter Funch-Jensen; Hans Gregersen; Asbjørn Mohr Drewes

    2009-01-01

    The esophagus serves to transport food and fluid from the pharynx to the stomach. Manometry has been the "golden standard" for the diagnosis of esophageal motility diseases for many decades. Hence, esophageal function is normally evaluated by means of manometry even though it reflects the squeeze force (force in radial direction) whereas the bolus moves along the length of esophagus in a distal direction. Force measurements in the longitudinal (axial) direction provide a more direct measure of esophageal transport function. The technique used to record axial force has developed from external force transducers over in-vivo strain gauges of various sizes to electrical impedance based measurements. The amplitude and duration of the axial force has been shown to be as reliable as manometry. Normal, as well as abnormal, manometric recordings occur with normal bolus transit, which have been documented using imaging modalities such as radiography and scintigraphy. This inconsistency using manometry has also been documented by axial force recordings. This underlines the lack of information when diagnostics are based on manometry alone. Increasing the volume of a bag mounted on a probe with combined axial force and manometry recordings showed that axial force amplitude increased by 130% in contrast to an increase of 30% using manometry. Using axial force in combination with manometry provides a more complete picture of esophageal motility, and the current paper outlines the advantages of using this method.

  3. Axial force measurement for esophageal function testing.

    Science.gov (United States)

    Gravesen, Flemming H; Funch-Jensen, Peter; Gregersen, Hans; Drewes, Asbjørn Mohr

    2009-01-14

    The esophagus serves to transport food and fluid from the pharynx to the stomach. Manometry has been the "golden standard" for the diagnosis of esophageal motility diseases for many decades. Hence, esophageal function is normally evaluated by means of manometry even though it reflects the squeeze force (force in radial direction) whereas the bolus moves along the length of esophagus in a distal direction. Force measurements in the longitudinal (axial) direction provide a more direct measure of esophageal transport function. The technique used to record axial force has developed from external force transducers over in-vivo strain gauges of various sizes to electrical impedance based measurements. The amplitude and duration of the axial force has been shown to be as reliable as manometry. Normal, as well as abnormal, manometric recordings occur with normal bolus transit, which have been documented using imaging modalities such as radiography and scintigraphy. This inconsistency using manometry has also been documented by axial force recordings. This underlines the lack of information when diagnostics are based on manometry alone. Increasing the volume of a bag mounted on a probe with combined axial force and manometry recordings showed that axial force amplitude increased by 130% in contrast to an increase of 30% using manometry. Using axial force in combination with manometry provides a more complete picture of esophageal motility, and the current paper outlines the advantages of using this method.

  4. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    A floating model of the Wave Dragon (WD) was built in autumn 1998 by the Danish Maritime Institute in scale 1:50, see Sørensen and Friis-Madsen (1999) for reference. This model was subjected to a series of model tests and subsequent modifications at Aalborg University and in the following...... are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...

  5. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand increases in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  6. Testing the consistency between cosmological measurements of distance and age

    Directory of Open Access Journals (Sweden)

    Remya Nair

    2015-05-01

    We present a model-independent method to test the consistency between cosmological measurements of distance and age, assuming the distance duality relation. We use type Ia supernovae, baryon acoustic oscillations, and observational Hubble data to reconstruct the luminosity distance DL(z), the angle-averaged distance DV(z) and the Hubble rate H(z), using the Gaussian process regression technique. We obtain an estimate of the distance duality relation in the redshift range 0.1
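
    For reference, the two standard relations invoked above (not restated in the abstract) are the Etherington distance duality relation and the BAO volume-averaged distance:

```latex
% Distance duality holds if photon number is conserved and photons travel on
% null geodesics of a metric theory; D_V is the volume-averaged BAO distance.
\eta(z) \;=\; \frac{D_L(z)}{(1+z)^{2} D_A(z)} \;=\; 1,
\qquad
D_V(z) \;=\; \left[(1+z)^{2} D_A^{2}(z)\,\frac{c\,z}{H(z)}\right]^{1/3}
```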

  7. Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

    Directory of Open Access Journals (Sweden)

    Ahmad A. Saifan

    2016-04-01

    Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all the test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases to be used in the regression testing process. The approach has been evaluated using three Java applications. To measure the effectiveness of the proposed approach, we compare the results against the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.

  8. Testing whether the DSM-5 personality disorder trait model can be measured with a reduced set of items: An item response theory investigation of the Personality Inventory for DSM-5.

    Science.gov (United States)

    Maples, Jessica L; Carter, Nathan T; Few, Lauren R; Crego, Cristina; Gore, Whitney L; Samuel, Douglas B; Williamson, Rachel L; Lynam, Donald R; Widiger, Thomas A; Markon, Kristian E; Krueger, Robert F; Miller, Joshua D

    2015-12-01

    The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes an alternative model of personality disorders (PDs) in Section III, consisting in part of a pathological personality trait model. To date, the 220-item Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012) is the only extant self-report instrument explicitly developed to measure this pathological trait model. The present study used item response theory-based analyses in a large sample (n = 1,417) to investigate whether a reduced set of 100 items could be identified from the PID-5 that could measure the 25 traits and 5 domains. This reduced set of PID-5 items was then tested in a community sample of adults currently receiving psychological treatment (n = 109). Across a wide range of criterion variables including NEO PI-R domains and facets, DSM-5 Section II PD scores, and externalizing and internalizing outcomes, the correlational profiles of the original and reduced versions of the PID-5 were nearly identical (rICC = .995). These results provide strong support for the hypothesis that an abbreviated set of PID-5 items can be used to reliably, validly, and efficiently assess these personality disorder traits. The ability to assess the DSM-5 Section III traits using only 100 items has important implications in that it suggests these traits could still be measured in settings in which assessment-related resources (e.g., time, compensation) are limited.
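
    The profile agreement statistic quoted above (rICC = .995) is commonly computed as a double-entry intraclass correlation between the two instruments' columns of criterion correlations; the exact implementation is not given in the abstract. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def double_entry_icc(profile_a, profile_b):
    """Profile similarity of two vectors of criterion correlations.

    Double-entry intraclass correlation: stack the pairs in both orders and
    take a single Pearson correlation, which penalises differences in
    elevation as well as in shape.
    """
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    x = np.concatenate([a, b])  # first column of the double-entered table
    y = np.concatenate([b, a])  # second column, orders swapped
    return np.corrcoef(x, y)[0, 1]

# Hypothetical correlations of the full and reduced scales with the same criteria.
full_form    = [0.52, 0.31, -0.18, 0.44, 0.07]
reduced_form = [0.50, 0.33, -0.20, 0.41, 0.09]
print(round(double_entry_icc(full_form, reduced_form), 3))
```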

  9. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Best, Ralph E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Maheras, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jensen, Philip J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); England, Jeffery L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); LeDuc, Dan [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-09-25

    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  10. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B.; Best, Ralph E.; Maheras, Steven J.; Jensen, Philip J.; England, Jeffery L.; LeDuc, Dan

    2014-09-24

    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  11. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  12. Colour Reconnection - Models and Tests

    CERN Document Server

    Christiansen, Jesper R

    2015-01-01

    Recent progress on colour reconnection within the Pythia framework is presented. A new model is introduced, based on the SU(3) structure of QCD and a minimization of the potential string energy. The inclusion of the epsilon structure of SU(3) gives a new baryon production mechanism and makes it possible simultaneously to describe hyperon production at both $e^+e^-$ and pp colliders. Finally, predictions for $e^+e^-$ colliders, both past and potential future ones, are presented.

  13. Model Based Testing for Agent Systems

    Science.gov (United States)

    Zhang, Zhiyong; Thangarajah, John; Padgham, Lin

    Although agent technology is gaining worldwide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent-based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent-based systems. The testing framework is a model-based approach using the design models of the Prometheus agent development methodology. In this paper we focus on model-based unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, and present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus, the approach is suitable for any plan- and event-based agent system.

  14. TESTING MONETARY EXCHANGE RATE MODELS WITH PANEL COINTEGRATION TESTS

    Directory of Open Access Journals (Sweden)

    Szabo Andrea

    2015-07-01

    The monetary exchange rate models explain the long-run behaviour of the nominal exchange rate. Their central assertion is that there is a long-run equilibrium relationship between the nominal exchange rate and monetary macro-fundamentals. Although these models are essential tools of international macroeconomics, their empirical validity is ambiguous. Previously, time series testing was prevalent in the literature, but it did not bring convincing results. The power of the unit root and cointegration tests is too low to reject the null hypothesis of no cointegration between the variables. This power can be enhanced by arranging our data in a panel data set, which allows us to analyse several time series simultaneously and increases the number of observations. We conducted a weak empirical test of the monetary exchange rate models by testing the existence of cointegration between the variables in three panels. We investigated 6, 10 and 15 OECD countries during the following periods: 1976Q1-2011Q4, 1985Q1-2011Q4 and 1996Q1-2011Q4. We tested the reduced form of the monetary exchange rate models in three specifications: two restricted models and an unrestricted model. Since cointegration can only be interpreted among non-stationary processes, we investigate the order of integration of our variables with the IPS, Fisher-ADF and Fisher-PP panel unit root tests and the Hadri panel stationarity test. All the variables can be unit root processes; therefore we analyse cointegration with the Pedroni and Kao panel cointegration tests. The restricted models performed better than the unrestricted one, and we obtained the best results with the 1985Q1-2011Q4 panel. The Kao test rejects the null hypothesis of no cointegration between the variables in all specifications and all panels, but the Pedroni test does not show such a positive picture. Hence we found only moderate support for the monetary exchange rate models.
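
    The study's panel unit root and cointegration tests (IPS, Fisher-ADF, Fisher-PP, Hadri, Pedroni, Kao) require specialized routines; as an illustration of the first step only, a Fisher-type (Maddala-Wu) combination of per-country ADF tests can be sketched as below. The data and country codes are hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

def fisher_adf_panel_test(panel, regression="c"):
    """Fisher-type (Maddala-Wu) panel unit root test.

    `panel` maps a country code to a 1-D series (e.g. the log nominal
    exchange rate). An ADF test is run per country and the p-values are
    combined as -2 * sum(log p), chi-squared with 2N degrees of freedom
    under the joint null of a unit root in every series.
    """
    pvals = []
    for country, series in panel.items():
        _stat, pval, *_ = adfuller(np.asarray(series, dtype=float), regression=regression)
        pvals.append(pval)
    fisher_stat = -2.0 * np.sum(np.log(pvals))
    return fisher_stat, chi2.sf(fisher_stat, 2 * len(pvals))

# Toy panel of random walks (should not reject the unit-root null).
rng = np.random.default_rng(0)
panel = {c: np.cumsum(rng.normal(size=144)) for c in ["AUT", "CAN", "JPN"]}
print(fisher_adf_panel_test(panel))
```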

  15. Test of the Standard Model of electroweak interactions by measuring the anomalous W W gamma couplings at √s = 1.8-TeV

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Michael Lawrence

    1996-04-01

    An analysis of $W\gamma$ events found in 73.0 pb$^{-1}$ collected with the D0 detector during Tevatron Run 1b is presented. Forty-six candidate events are observed with a predicted background of 13.2 events. The total cross section for $p\bar{p} \to W\gamma + X$ (for $p_T^{\gamma} > 10$ GeV/c and $\Delta R_{e\gamma} > 0.7$) times the branching ratio of W bosons to electrons is measured to be: $\sigma(p\bar{p} \to W\gamma + X) \times BR(W \to e\nu) = 11.19^{+2.66}_{-2.32} \pm 0.61$ (syst) $\pm 0.56$ (lum) pb. 95% confidence level limits on the CP-conserving anomalous coupling parameters are set using a fit to the photon transverse energy spectrum of the events with a three-body transverse cluster mass greater than 90 GeV/c$^2$. The results are $-1.4 < \Delta\kappa < 1.4$ ($\lambda = 0$) and $-0.5 < \lambda < 0.5$ ($\Delta\kappa = 0$), with similar limits set on the CP-violating coupling parameters $\bar{\kappa}$ and $\bar{\lambda}$. These limits were set by assuming a dipole form factor with a scale factor of $\Lambda = 1.5$ TeV.

  16. Test of the Standard Model of electroweak interactions by measuring the anomalous W W gamma couplings at s**(1/2) = 1.8-TeV

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Michael Lawrence

    1996-04-01

    An analysis of $W\gamma$ events found in 73.0 pb$^{-1}$ collected with the D0 detector during Tevatron Run 1b is presented. Forty-six candidate events are observed with a predicted background of 13.2 events. The total cross section for $p\bar{p} \to W\gamma + X$ (for $p_T^{\gamma} > 10$ GeV/c and $\Delta R_{e\gamma} > 0.7$) times the branching ratio of W bosons to electrons is measured to be: $\sigma(p\bar{p} \to W\gamma + X) \times BR(W \to e\nu) = 11.19^{+2.66}_{-2.32} \pm 0.61$ (syst) $\pm 0.56$ (lum) pb. 95% confidence level limits on the CP-conserving anomalous coupling parameters are set using a fit to the photon transverse energy spectrum of the events with a three-body transverse cluster mass greater than 90 GeV/c$^2$. The results are $-1.4 < \Delta\kappa < 1.4$ ($\lambda = 0$) and $-0.5 < \lambda < 0.5$ ($\Delta\kappa = 0$), with similar limits set on the CP-violating coupling parameters $\bar{\kappa}$ and $\bar{\lambda}$. These limits were set by assuming a dipole form factor with a scale factor of $\Lambda = 1.5$ TeV.

  17. Test measurements on a secco white-lead containing model samples to assess the effects of exposure to low-fluence UV laser radiation

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, Valentina, E-mail: v.raimondi@ifac.cnr.it [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Andreotti, Alessia; Colombini, Maria Perla [Chemistry and Industrial Chemistry Department (DCCI) - University of Pisa, Pisa (Italy); Cucci, Costanza [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Cuzman, Oana [Institute for the Conservation and Promotion of Cultural Heritage - National Research Council (CNR-ICVBC), Firenze (Italy); Galeotti, Monica [Opificio delle Pietre Dure (OPD), Firenze (Italy); Lognoli, David; Palombi, Lorenzo; Picollo, Marcello [‘Nello Carrara’ Applied Physics Institute - National Research Council of Italy (CNR-IFAC), Firenze (Italy); Tiano, Piero [Institute for the Conservation and Promotion of Cultural Heritage - National Research Council (CNR-ICVBC), Firenze (Italy)

    2015-05-15

    Highlights: • A set of a secco model samples was prepared using white lead and four different organic binders (animal glue and whole egg, whole egg, skimmed milk, egg-oil tempera). • The samples were irradiated with low-fluence UV laser pulses (0.1–1 mJ/cm²). • The effects of laser irradiation were analysed by using different techniques. • The analysis did not point out changes due to low-fluence laser irradiation. • High-fluence (88 mJ/cm²) laser radiation instead yielded a chromatic change ascribed to the inorganic component. - Abstract: The laser-induced fluorescence technique is widely used for diagnostic purposes in several applications, and its use could be of advantage for non-invasive on-site characterisation of pigments or other compounds in wall paintings. However, it is well known that long-time exposure to UV and VIS radiation can cause damage to wall paintings. Several studies have investigated the effects of lighting, e.g., in museums; however, the effects of low-fluence laser radiation have not been studied much so far. This paper investigates the effects of UV laser radiation at fluences in the range of 0.1–1 mJ/cm² on a set of a secco model samples prepared with lead white and different types of binders (animal glue and whole egg, whole egg, skimmed milk, egg-oil tempera). The samples were irradiated using a Nd:YAG laser (emission wavelength at 355 nm; pulse width: 5 ns) by applying laser fluences between 0.1 mJ/cm² and 1 mJ/cm² and a number of laser pulses between 1 and 500. The samples were characterised before and after laser irradiation by using several techniques (colorimetry, optical microscopy, fibre optic reflectance spectroscopy, FT-IR Attenuated Total Reflectance micro-spectroscopy and gas chromatography/mass spectrometry) to detect variations in the morphological and physico-chemical properties. The results did not point out significant changes in the sample properties after

  18. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
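
    For orientation, the LLTM keeps the Rasch response function but decomposes each item difficulty into a weighted sum of a smaller number of basic parameters (weights fixed a priori, c a normalization constant):

```latex
P(X_{vi}=1 \mid \theta_v) \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
\beta_i \;=\; \sum_{j=1}^{q} w_{ij}\,\eta_j \;+\; c
```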

  19. Test device for measuring permeability of a barrier material

    Science.gov (United States)

    Reese, Matthew; Dameron, Arrelaine; Kempe, Michael

    2014-03-04

    A test device for measuring permeability of a barrier material. An exemplary device comprises a test card having a thin-film conductor pattern formed thereon and an edge seal which seals the test card to the barrier material. Another exemplary embodiment is an electrical calcium test device comprising: a test card, an impermeable spacer, an edge seal which seals the test card to the spacer, and an edge seal which seals the spacer to the barrier material.

  20. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team ["Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA" (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA ["Seepage Model for PA Including Drift Collapse" (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see "Drift-Scale Coupled Processes (DST and TH Seepage) Models" (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  1. Cost Modeling for SOC Modules Testing

    Directory of Open Access Journals (Sweden)

    Balwinder Singh

    2013-08-01

    The complexity of system design is increasing very rapidly as the number of transistors on integrated circuits (ICs) doubles as per Moore’s law. Testing such complex VLSI circuits, in which a whole system is integrated into a single chip called a System on Chip (SOC), is a major challenge, and the cost of testing the SOC also increases with complexity. Cost modeling plays a vital role in reducing test cost and time to market. This paper covers cost modeling for testing SOC modules that contain both analog and digital blocks. The various test cost parameters and equations are taken from previous work. Mathematical relations are developed for modeling the cost of testing the SOC, and the cost modeling equations are implemented in a graphical user interface (GUI) in MATLAB that can be used as a cost estimation tool. A case study calculates the cost of SOC testing due to Logic Built-In Self-Test (LBIST) and Memory Built-In Self-Test (MBIST). VLSI test engineers can benefit from such cost estimation tools for test planning.
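
    The paper's actual cost equations are not reproduced in the abstract; purely to illustrate the general shape such a model can take, a toy version with entirely hypothetical parameters and numbers might look like this:

```python
def soc_test_cost(n_dies, test_time_s, tester_rate_per_s,
                  fixed_test_dev_cost, bist_area_overhead, die_cost):
    """Toy SOC test-cost model (illustrative only, not the paper's equations).

    total = one-off test development cost
          + tester time cost (seconds per die x dies x tester rate)
          + silicon cost of the BIST area overhead per die
    """
    tester_cost = n_dies * test_time_s * tester_rate_per_s
    bist_silicon_cost = n_dies * bist_area_overhead * die_cost
    return fixed_test_dev_cost + tester_cost + bist_silicon_cost

# Hypothetical numbers: 1M dies, 2 s of tester time per die at $0.03/s,
# $150k test development, 3% BIST area overhead on a $1.20 die.
print(soc_test_cost(1_000_000, 2.0, 0.03, 150_000, 0.03, 1.20))
```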

  2. Biglan Model Test Based on Institutional Diversity.

    Science.gov (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies them according to three dimensions: adherence to a common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  3. Graphical Models and Computerized Adaptive Testing.

    Science.gov (United States)

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and educational testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and for disentangling evidence from complex performances. IRT-CAT can offer…

  4. Multivariate Model for Test Response Analysis

    NARCIS (Netherlands)

    Krishnan, Shaji; Krishnan, Shaji; Kerkhoff, Hans G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage

  5. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have a sim...

  6. Multivariate model for test response analysis

    NARCIS (Netherlands)

    Krishnan, S.; Kerkhoff, H.G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage attai

  7. Port Adriano, 2D-Model Tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Jensen, Palle Meinert

    This report presents the results of 2D physical model tests (length scale 1:50) carried out in a wave flume at the Dept. of Civil Engineering, Aalborg University (AAU).

  8. Multivariate model for test response analysis

    NARCIS (Netherlands)

    Krishnan, S.; Kerkhoff, H.G.

    2010-01-01

    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage attai

  9. Designing healthy communities: Testing the walkability model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran

    2017-03-01

    Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environment and physical activity. We explored correlations between the walkability categories and physical activity reported through a survey of residents of Tucson, Arizona (n=486). The results include significant correlations between the walkability categories and physical activity, as well as between the walkability categories and the two motivations for walking (recreation and transportation). To our knowledge, this is the first study that reports links between walkability and walking for recreation. Additionally, the use of the Walkability Framework allowed us to identify the walkability categories most strongly correlated with the two motivations for walking. The results of this study support the use of the Walkability Framework as a model to measure the built environment in relation to its ability to promote physical activity.
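
    A minimal sketch of the kind of correlation analysis described, using hypothetical category names and toy data rather than the Tucson survey itself (rank correlations are one reasonable choice for ordinal survey scores):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical survey data: one row per respondent, a score per walkability
# category plus minutes of weekly physical activity.
df = pd.DataFrame({
    "connectivity":      [3.2, 4.1, 2.8, 3.9, 4.4],
    "density":           [2.9, 3.8, 2.5, 3.4, 4.0],
    "traffic_safety":    [3.5, 4.0, 3.1, 3.7, 4.2],
    "physical_activity": [150, 260, 90, 210, 300],
})

# Rank correlation of each walkability category with reported activity.
for category in ["connectivity", "density", "traffic_safety"]:
    rho, p = spearmanr(df[category], df["physical_activity"])
    print(f"{category}: rho={rho:.2f}, p={p:.3f}")
```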

  10. Testing Models for Structure Formation

    CERN Document Server

    Kaiser, N

    1993-01-01

    I review a number of tests of theories for structure formation. Large-scale flows and IRAS galaxies indicate a high density parameter $\Omega \simeq 1$, in accord with inflationary predictions, but it is not clear how this meshes with the uniformly low values obtained from virial analysis on scales $\sim$ 1Mpc. Gravitational distortion of faint galaxies behind clusters allows one to construct maps of the mass surface density, and this should shed some light on the large vs small-scale $\Omega$ discrepancy. Power spectrum analysis reveals too red a spectrum (compared to standard CDM) on scales $\lambda \sim 10-100$ $h^{-1}$Mpc, but the gaussian fluctuation hypothesis appears to be in good shape. These results suggest that the problem for CDM lies not in the very early universe --- the inflationary predictions of $\Omega = 1$ and gaussianity both seem to be OK; furthermore, the COBE result severely restricts modifications such as tilting the primordial spectrum --- but in the assumed matter content. The power s...

  11. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)

    2003-07-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  12. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))

    1990-11-01

    The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high-precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since that relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, $\Omega_b$, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard model conclusion that $\Omega_b \approx 0.06$. This latter point is the driving force behind the need for non-baryonic dark matter (assuming $\Omega_{total} = 1$) and the need for dark baryonic matter, since $\Omega_{visible} < \Omega_b$. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass $M_x \gtrsim 20$ GeV and an interaction weaker than the $Z^0$ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for $\nu$-masses may imply that the $\nu_{\tau}$ is a good hot dark matter candidate. 73 refs., 5 figs.

  13. The Vanishing Tetrad Test: Another Test of Model Misspecification

    Science.gov (United States)

    Roos, J. Micah

    2014-01-01

    The Vanishing Tetrad Test (VTT) (Bollen, Lennox, & Dahly, 2009; Bollen & Ting, 2000; Hipp, Bauer, & Bollen, 2005) is an extension of the Confirmatory Tetrad Analysis (CTA) proposed by Bollen and Ting (Bollen & Ting, 1993). VTT is a powerful tool for detecting model misspecification and can be particularly useful in cases in which…

  14. The Vanishing Tetrad Test: Another Test of Model Misspecification

    Science.gov (United States)

    Roos, J. Micah

    2014-01-01

    The Vanishing Tetrad Test (VTT) (Bollen, Lennox, & Dahly, 2009; Bollen & Ting, 2000; Hipp, Bauer, & Bollen, 2005) is an extension of the Confirmatory Tetrad Analysis (CTA) proposed by Bollen and Ting (Bollen & Ting, 1993). VTT is a powerful tool for detecting model misspecification and can be particularly useful in cases in which…

  15. Computer-adaptive test to measure community reintegration of Veterans.

    Science.gov (United States)

    Resnik, Linda; Tian, Feng; Ni, Pengsheng; Jette, Alan

    2012-01-01

    The Community Reintegration of Injured Service Members (CRIS) measure consists of three scales measuring extent of, perceived limitations in, and satisfaction with community reintegration. Length of the CRIS may be a barrier to its widespread use. Using item response theory (IRT) and computer-adaptive test (CAT) methodologies, this study developed and evaluated a briefer community reintegration measure called the CRIS-CAT. Large item banks for each CRIS scale were constructed. A convenience sample of 517 Veterans responded to all items. Exploratory and confirmatory factor analyses (CFAs) were used to identify the dimensionality within each domain, and IRT methods were used to calibrate items. Accuracy and precision of CATs of different lengths were compared with the full-item bank, and data were examined for differential item functioning (DIF). CFAs supported unidimensionality of scales. Acceptable item fit statistics were found for final models. Accuracy of 10-, 15-, 20-, and variable-item CATs for all three scales was 0.88 or above. CAT precision increased with number of items administered and decreased at the upper ranges of each scale. Three items exhibited moderate DIF by sex. The CRIS-CAT demonstrated promising measurement properties and is recommended for use in community reintegration assessment.
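
    To make the CAT mechanics concrete, the sketch below implements a minimal adaptive loop under a dichotomous 2PL model; this is an illustrative assumption, since the CRIS item banks are polytomous and were calibrated with IRT models suited to them.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of endorsing an item at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def run_cat(item_bank, answer_fn, n_items=10, grid=np.linspace(-4, 4, 161)):
    """Minimal adaptive-testing loop (sketch).

    item_bank: list of (a, b) tuples; answer_fn(item_index) -> 0/1.
    Picks the unused item with maximum information at the current ability
    estimate, then re-estimates theta by EAP over a grid with a
    standard-normal prior.
    """
    posterior = np.exp(-0.5 * grid ** 2)          # N(0,1) prior, unnormalised
    used, theta = set(), 0.0
    for _ in range(n_items):
        candidates = [i for i in range(len(item_bank)) if i not in used]
        nxt = max(candidates, key=lambda i: item_information(theta, *item_bank[i]))
        used.add(nxt)
        resp = answer_fn(nxt)                     # 0/1 response from the examinee
        a, b = item_bank[nxt]
        like = p_2pl(grid, a, b) if resp == 1 else 1.0 - p_2pl(grid, a, b)
        posterior = posterior * like
        theta = float(np.sum(grid * posterior) / np.sum(posterior))  # EAP estimate
    return theta

# Hypothetical usage: a 50-item bank and a simulated examinee of true ability 1.0.
rng = np.random.default_rng(2)
bank = [(rng.uniform(0.8, 2.0), rng.normal()) for _ in range(50)]
answer = lambda i: int(rng.random() < p_2pl(1.0, *bank[i]))
print(run_cat(bank, answer))
```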

  16. Measurement and Modeling: Infectious Disease Modeling

    NARCIS (Netherlands)

    Kretzschmar, MEE

    2016-01-01

    After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th

  17. Results from laboratory and field testing of nitrate measuring spectrophotometers

    Science.gov (United States)

    Snazelle, Teri T.

    2015-01-01

    Five ultraviolet (UV) spectrophotometer nitrate analyzers were evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) during a two-phase evaluation. In Phase I, the TriOS ProPs (10-millimeter (mm) path length), Hach NITRATAX plus sc (5-mm path length), Satlantic Submersible UV Nitrate Analyzer (SUNA, 10-mm path length), and S::CAN Spectro::lyser (5-mm path length) were evaluated in the HIF Water-Quality Servicing Laboratory to determine the validity of the manufacturer's technical specifications for accuracy, limit of linearity (LOL), drift, and range of operating temperature. Accuracy specifications were met in the TriOS, Hach, and SUNA. The stock calibration of the S::CAN required two offset adjustments before the analyzer met the manufacturer's accuracy specification. Instrument drift was observed only in the S::CAN and was the result of leaching from the optical path insert seals. All tested models, except for the Hach, met their specified LOL in the laboratory testing. The Hach's range was found to be approximately 18 milligrams nitrogen per liter (mg-N/L) and not the manufacturer-specified 25 mg-N/L. Measurements by all of the tested analyzers showed signs of hysteresis in the operating temperature tests. Only the SUNA measurements demonstrated excessive noise and instability in temperatures above 20 degrees Celsius (°C). The SUNA analyzer was returned to the manufacturer at the completion of the Phase II field deployment evaluation for repair and recalibration, and the performance of the sensor improved significantly.
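
    The path lengths quoted above (5 mm vs. 10 mm) matter because UV nitrate photometry rests on the Beer-Lambert law: at a fixed usable absorbance range, a shorter path extends the linear concentration range at the cost of sensitivity.

```latex
% Absorbance A for incident intensity I_0 and transmitted intensity I:
% molar absorptivity \varepsilon, optical path length l, nitrate concentration c.
A \;=\; \log_{10}\frac{I_0}{I} \;=\; \varepsilon\, l\, c
```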

  18. Measuring Writing Ability with the Cloze Test is not Closed.

    Science.gov (United States)

    Esau, Helmut; Yost, Carlson

    This paper describes an experiment that was undertaken to examine the usefulness of the cloze test as an objective measure of a native speaker's writing ability. A modified version of the cloze test used by Oller and others to measure integrative language skills in non-native speakers was given to 100 freshman English students. The test…

  19. Cost Modeling for SOC Modules Testing

    OpenAIRE

    Balwinder Singh; Arun Khosla; Sukhleen B. Narang

    2013-01-01

    The complexity of the system design is increasing very rapidly as the number of transistors on Integrated Circuits (IC) doubles as per Moore’s law.There is big challenge of testing this complex VLSI circuit, in which whole system is integrated into a single chip called System on Chip (SOC). Cost of testing the SOC is also increasing with complexity. Cost modeling plays a vital role in reduction of test cost and time to market. This paper includes the cost modeling of the SOC Module testing...

  20. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  1. From specification to measurement: the bottleneck in analog industrial testing

    NARCIS (Netherlands)

    Rijsinge, van R.J.; Haggenburg, A.A.R.M.; Vries, de C.; Wallinga, H.

    1990-01-01

    The translation of the specification of an analog device into the necessary set of measurements to be carried out by an industrial test facility is discussed. Algorithms are developed to compute the number of test vectors needed to guarantee a certain parameter and to compare several possible test m

  2. Fan Noise Source Diagnostic Test: LDV Measured Flow Field Results

    Science.gov (United States)

    Podboy, Gary C.; Krupar, Martin J.; Hughes, Christopher E.; Woodward, Richard P.

    2003-01-01

    Results are presented of an experiment conducted to investigate potential sources of noise in the flow developed by two 22-in. diameter turbofan models. The R4 and M5 rotors that were tested were designed to operate at nominal take-off speeds of 12,657 and 14,064 RPMC, respectively. Both fans were tested with a common set of swept stators installed downstream of the rotors. Detailed measurements of the flows generated by the two were made using a laser Doppler velocimeter system. The wake flows generated by the two rotors are illustrated through a series of contour plots. These show that the two wake flows are quite different, especially in the tip region. These data are used to explain some of the differences in the rotor/stator interaction noise generated by the two fan stages. In addition to these wake data, measurements were also made in the R4 rotor blade passages. These results illustrate the tip flow development within the blade passages, its migration downstream, and (at high rotor speeds) its merging with the blade wake of the adjacent (following) blade. Data also depict the variation of this tip flow with tip clearance. Data obtained within the rotor blade passages at high rotational speeds illustrate the variation of the mean shock position across the different blade passages.

  3. Bell tests with arbitrarily low photodetection efficiency and homodyne measurements

    CERN Document Server

    Araújo, Mateus; Cavalcanti, Daniel; Santos, Marcelo França; Cabello, Adán; Cunha, Marcelo Terra

    2011-01-01

    We show that hybrid local measurements combining homodyne measurements and photodetection provide violations of a Bell inequality with arbitrarily low photodetection efficiency. This is shown in two different scenarios: when one party receives an atom entangled with the field mode to be measured by the other party, and when both parties make similar photonic measurements. Our findings firmly establish the hybrid measurement scenario as a strong candidate for the implementation of a loophole-free Bell test.

  4. FIM measurement properties and Rasch model details.

    Science.gov (United States)

    Wright, B D; Linacre, J M; Smith, R M; Heinemann, A W; Granger, C V

    1997-12-01

    To summarize, we take issue with the criticisms of Dickson & Köhler for two main reasons: 1. Rasch analysis provides a model from which to approach the analysis of the FIM, an ordinal scale, as an interval scale. The existence of examples of items or individuals which do not fit the model does not disprove the overall efficacy of the model; and 2. the principal components analysis of FIM motor items as presented by Dickson & Köhler tends to undermine rather than support their argument. Their own analyses produce a single major factor explaining between 58.5 and 67.1% of the variance, depending upon the sample, with secondary factors explaining much less variance. Finally, analysis of item response, or latent trait, is a powerful method for understanding the meaning of a measure. However, it presumes that item scores are accurate. Another concern is that Dickson & Köhler do not address the issue of reliability of scoring the FIM items on which they report, a critical point in comparing results. The Uniform Data System for Medical Rehabilitation (UDSMRSM) expends extensive effort in the training of clinicians of subscribing facilities to score items accurately. This is followed up with a credentialing process. Phase 1 involves the testing of individual clinicians who are submitting data to determine if they have achieved mastery over the use of the FIM instrument. Phase 2 involves examining the data for outlying values. When Dickson & Köhler investigate more carefully the application of the Rasch model to their FIM data, they will discover that the results presented in their paper support rather than contradict their application of the Rasch model! This paper is typical of supposed refutations of Rasch model applications. Dickson & Köhler will find that idiosyncrasies in their data and misunderstandings of the Rasch model are the only basis for a claim to have disproven the relevance of the model to FIM data. The Rasch model is a mathematical theorem (like
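
    For readers unfamiliar with the model under discussion, the dichotomous Rasch response function is shown below; FIM items are polytomous, so in practice rating-scale or partial-credit extensions of this form are used, but the measurement argument is the same.

```latex
% Probability that person v with ability \theta_v passes item i of difficulty \beta_i.
P(X_{vi}=1 \mid \theta_v, \beta_i) \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}
```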

  5. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  6. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  7. Model tests on a semi-axial pump turbine

    Energy Technology Data Exchange (ETDEWEB)

    Strohmer, F.; Horacek, G.

    1984-03-01

    Due to their good hydraulic characteristic semi-axial pump turbines are used in the medium head range of pumped storage plants. This paper describes model tests performed on a semiaxial pump turbine model and shows the results of these tests. The aim of the model tests was the optimization of the hydraulic water passage, the measurement of the hydraulic characteristics over the whole operating range, the investigation of the cavitation behaviour, the investigation of the hydraulic forces and torques as well as the proof of the values guaranteed to the customer.

  8. Standardized Tests and Froebel's Original Kindergarten Model

    Science.gov (United States)

    Jeynes, William H.

    2006-01-01

    The author argues that American educators rely on standardized tests at too early an age when administered in kindergarten, particularly given the original intent of kindergarten as envisioned by its founder, Friedrich Froebel. The author examines the current use of standardized tests in kindergarten and the Froebel model, including his emphasis…

  9. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report present the results of 2D physical model tests carried out in the shallow wave flume at Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S part of DONG Energy A/S, Denmark. The objective of the tests was: to investigate the combined influence of the pile...

  10. Sample Size Determination for Rasch Model Tests

    Science.gov (United States)

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  11. Intelligence is what the intelligence test measures. Seriously

    NARCIS (Netherlands)

    H.L.J. van der Maas; K.-J. Kan; D. Borsboom

    2014-01-01

    The mutualism model, an alternative to the g-factor model of intelligence, implies a formative measurement model in which "g" is an index variable without a causal role. If this model is accurate, the search for a genetic or brain instantiation of "g" is deemed useless. This also implies that the (

  12. Testing for Equivalence: A Methodology for Computational Cognitive Modelling

    Science.gov (United States)

    Stewart, Terrence; West, Robert

    2010-12-01

    The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
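
    One simple reading of such an equivalence-style summary, sketched under assumptions rather than reproducing Stewart and West's published definition, is the proportion of empirical measures whose confidence intervals contain the model's prediction:

```python
import numpy as np
from scipy import stats

def equivalence_summary(model_values, human_samples, confidence=0.95):
    """Rough equivalence-style comparison of a model to data (sketch).

    For each measure, compute the confidence interval of the human mean and
    check whether the model's value falls inside it. The returned proportion
    is an easily interpreted coverage summary.
    """
    hits = []
    for value, samples in zip(model_values, human_samples):
        samples = np.asarray(samples, dtype=float)
        mean = samples.mean()
        half_width = stats.sem(samples) * stats.t.ppf((1 + confidence) / 2, len(samples) - 1)
        hits.append(mean - half_width <= value <= mean + half_width)
    return sum(hits) / len(hits)

# Hypothetical example: three behavioural measures, model vs. 20 participants each.
rng = np.random.default_rng(1)
human = [rng.normal(mu, 0.1, size=20) for mu in (0.62, 0.48, 0.55)]
model = [0.60, 0.50, 0.70]
print(equivalence_summary(model, human))   # e.g. 0.67 -> 2 of 3 measures covered
```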

  13. Assimilation of measurement data in hydrodynamic modeling

    Science.gov (United States)

    Karamuz, Emilia; Romanowicz, Renata J.

    2016-04-01

    This study focuses on developing methods to combine ground-based data from operational monitoring with data from satellite imaging to obtain a more accurate evaluation of flood inundation extents. The distributed flow model MIKE 11 was used to determine the flooded areas for a flood event with available satellite data. Model conditioning was based on the integrated use of data from remote measurement techniques and traditional data from gauging stations. Such conditioning improves the quality of fit of the model results. The use of high-resolution satellite images (from IKONOS, QuickBird, etc.) and a LiDAR Digital Elevation Model (DEM) allows information on water levels to be extended to practically any chosen cross-section of the river reach under study. This approach allows for a better assessment of inundation extent, particularly in areas with a scarce network of gauging stations. We apply approximate Bayesian analysis to integrate the information on flood extent originating from different sources. The approach described above was applied to the Middle River Vistula reach, from the Zawichost to Warsaw gauging stations. For this part of the river, detailed geometry of the river bed and floodplain data were available. Finally, three selected sub-sections were analyzed with the most suitable satellite images of the inundation area. ACKNOWLEDGEMENTS: This research was supported by the Institute of Geophysics Polish Academy of Sciences through the Young Scientist Grant no. 3b/IGF PAN/2015.

  14. A test of the International Personality Item Pool representation of the Revised NEO Personality Inventory and development of a 120-item IPIP-based measure of the five-factor model.

    Science.gov (United States)

    Maples, Jessica L; Guan, Li; Carter, Nathan T; Miller, Joshua D

    2014-12-01

    There has been a substantial increase in the use of personality assessment measures constructed using items from the International Personality Item Pool (IPIP) such as the 300-item IPIP-NEO (Goldberg, 1999), a representation of the Revised NEO Personality Inventory (NEO PI-R; Costa & McCrae, 1992). The IPIP-NEO is free to use and can be modified to accommodate its users' needs. Despite the substantial interest in this measure, there is still a dearth of data demonstrating its convergence with the NEO PI-R. The present study represents an investigation of the reliability and validity of scores on the IPIP-NEO. Additionally, we used item response theory (IRT) methodology to create a 120-item version of the IPIP-NEO. Using an undergraduate sample (n = 359), we examined the reliability, as well as the convergent and criterion validity, of scores from the 300-item IPIP-NEO, a previously constructed 120-item version of the IPIP-NEO (Johnson, 2011), and the newly created IRT-based IPIP-120 in comparison to the NEO PI-R across a range of outcomes. Scores from all 3 IPIP measures demonstrated strong reliability and convergence with the NEO PI-R and a high degree of similarity with regard to their correlational profiles across the criterion variables (rICC = .983, .972, and .976, respectively). The replicability of these findings was then tested in a community sample (n = 757), and the results closely mirrored the findings from Sample 1. These results provide support for the use of the IPIP-NEO and both 120-item IPIP-NEO measures as assessment tools for measurement of the five-factor model. (c) 2014 APA, all rights reserved.

  15. Testing of a Buran flight-model fuel cell

    Science.gov (United States)

    Schautz, M.; Dudley, G.; Baron, F.; Popov, V.; Pospelov, B.

    A demonstration test program has been performed at European Space Research & Technology Center (ESTEC) on a flight-model Russian 'Photon' fuel cell. The tests, conducted at various power levels up to 23 kW, included current/voltage characteristics, transient behavior, autothermal startup, and impedance measurements. In addition, the product water and the purge gas were analyzed. All test goals were met and no electrochemical limitations were apparent.

  16. Directional wave measurements and modelling

    Digital Repository Service at National Institute of Oceanography (India)

    Anand, N.M.; Nayak, B.U.; Bhat, S.S.; SanilKumar, V.

    Some of the results obtained from analysis of the monsoon directional wave data measured over 4 years in shallow waters off the west coast of India are presented. The directional spectrum computed from the time series data seems to indicate...

  17. Application Trend of On-line Measuring and Testing Technology in Engine Plants Adopting a Flexible Production Model

    Institute of Scientific and Technical Information of China (English)

    朱正德

    2012-01-01

    The paper points out that modern plants, especially powertrain plants, must adopt multi-product mixed-line production, i.e. a flexible production model, to meet the car market's demand for product variety. On-line measuring and testing technology is an important part of manufacturing technology. The paper therefore argues that, to suit flexible production, on-line testing techniques and equipment should possess certain features, including flexible in-process inspection fixtures, coordinate measuring machines (CMM) located on the production floor, dedicated production measuring rooms, and generalized, flexible gauges. These points are introduced and explained through a series of targeted examples.

  18. Cluster formation as a measure of interpretability in multiple testing.

    Science.gov (United States)

    Shaffer, Juliet Popper

    2008-10-01

    Multiple test procedures are usually compared on various aspects of error control and power. Power is measured as some function of the number of false hypotheses correctly identified as false. However, given equal numbers of rejected false hypotheses, the pattern of rejections, i.e. the particular set of false hypotheses identified, may be crucial in interpreting the results for potential application. In an important area of application, comparisons among a set of treatments based on random samples from populations, two different approaches, cluster analysis and model selection, deal implicitly with such patterns, while traditional multiple testing procedures generally focus on the outcomes of subset and pairwise equality hypothesis tests, without considering the overall pattern of results in comparing methods. An important feature involving the pattern of rejections is their relevance for dividing the treatments into distinct subsets based on some parameter of interest, for example their means. This paper introduces some new measures relating to the potential of methods for achieving such divisions. Following Hartley (1955), sets of treatments with equal parameter values will be called clusters. Because it is necessary to distinguish between clusters in the populations and clustering in sample outcomes, the population clusters will be referred to as P-clusters; any related concepts defined in terms of the sample outcome will be referred to with the prefix outcome. Outcomes of multiple comparison procedures will be studied in terms of their probabilities of leading to separation of treatments into outcome clusters, with various measures relating to the number of such outcome clusters and the proportion of true vs. false outcome clusters. The definitions of true and false outcome clusters and related concepts, and the approach taken here, is in the tradition of hypothesis testing with attention to overall error control and power, but with added consideration of

  19. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels

    2007-01-01

    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  20. Testing inequality constrained hypotheses in SEM Models

    NARCIS (Netherlands)

    Van de Schoot, R.; Hoijtink, H.J.A.; Dekovic, M.

    2010-01-01

    Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem…

  1. Modeling Answer Changes on Test Items

    Science.gov (United States)

    van der Linden, Wim J.; Jeon, Minjeong

    2012-01-01

    The probability of test takers changing answers upon review of their initial choices is modeled. The primary purpose of the model is to check erasures on answer sheets recorded by an optical scanner for numbers and patterns that may be indicative of irregular behavior, such as teachers or school administrators changing answer sheets after their…

  2. Modeling Nonignorable Missing Data in Speeded Tests

    Science.gov (United States)

    Glas, Cees A. W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…

  3. Rolling Resistance Measurement and Model Development

    DEFF Research Database (Denmark)

    Andersen, Lasse Grinderslev; Larsen, Jesper; Fraser, Elsje Sophia;

    2015-01-01

    There is an increased focus worldwide on understanding and modeling rolling resistance because reducing the rolling resistance by just a few percent will lead to substantial energy savings. This paper reviews the state of the art of rolling resistance research, focusing on measuring techniques, surface and texture modeling, contact models, tire models, and macro-modeling of rolling resistance...

  4. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    Science.gov (United States)

    1992-04-01

    test experiments. Of the three static techniques (code review, error/anomaly detection, structure…)... An anomaly is an unforeseen event, which may not be detected by error-protection mechanisms in time to prevent system failure. The existence of extensive... event, the more difficult it is to make a meaningful prediction. As an example, it can be seen that the reliability of an electronic equipment is known

  5. Development of the GPM Observatory Thermal Vacuum Test Model

    Science.gov (United States)

    Yang, Kan; Peabody, Hume

    2012-01-01

    A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japanese Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in Early 2014. From preliminary results, the thermal test model generated from this process shows that the heat flows and temperatures match fairly well with the flight thermal model, indicating that the test model can simulate fairly accurately the conditions on-orbit. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings which accurately simulate on-orbit thermal environments.
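
    A minimal sketch of step (2) above, assuming a simple gray-body radiative balance: the "equivalent sink" is the temperature of a surrounding sink that would deliver the same absorbed environmental heat to a surface of given emissivity and area. The function name and numbers are illustrative assumptions, not GPM values.

        # Back out an equivalent sink temperature from a gray-body balance
        # Q_env = sigma * eps * A * T_sink**4 (a common test-design simplification).
        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def equivalent_sink_temperature(q_env_w, emissivity, area_m2):
            """Sink temperature delivering the same absorbed environmental heat."""
            return (q_env_w / (SIGMA * emissivity * area_m2)) ** 0.25

        # e.g. 150 W absorbed by a 1.2 m^2 radiator with emissivity 0.85 -> ~226 K
        print(equivalent_sink_temperature(q_env_w=150.0, emissivity=0.85, area_m2=1.2))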

  6. Radiative and temperature effects of aerosol simulated by the COSMO-Ru model for different atmospheric conditions and their testing against ground-based measurements and accurate RT simulations

    Science.gov (United States)

    Chubarova, Nataly; Poliukhov, Alexei; Shatunova, Marina; Rivin, Gdali; Becker, Ralf; Muskatel, Harel; Blahak, Ulrich; Kinne, Stefan; Tarasova, Tatiana

    2017-04-01

    We use the operational Russian COSMO-Ru weather forecast model (Ritter and Geleyn, 1992) with different aerosol input data for the evaluation of radiative and temperature effects of aerosol in different atmospheric conditions. Various aerosol datasets were utilized, including the Tegen climatology (Tegen et al., 1997), the updated MACv2 climatology (Kinne et al., 2013), the Tanre climatology (Tanre et al., 1984), as well as the MACC data (Morcrette et al., 2009). For clear-sky conditions we compare the radiative effects from the COSMO-Ru model over Moscow (55.7N, 37.5E) and the Lindenberg/Falkenberg sites (52.2N, 14.1E) with the results obtained using long-term aerosol measurements. Additional tests of the COSMO RT code were performed against the (FC05)-SW model (Tarasova and Fomin, 2007); an overestimation of about 5-8% by the COSMO RT code was obtained. The study of the aerosol effect on temperature at 2 meters revealed a sensitivity of about 0.7-1.1 degrees C per 100 W/m2 change in shortwave net radiation due to aerosol variations. We also discuss the radiative impact of urban aerosol properties according to the long-term AERONET measurements in Moscow and a Moscow suburb, as well as long-term aerosol trends over Moscow from the measurements and the MACv2 dataset. References: Kinne, S., O'Donnel, D., Stier, P., et al., J. Adv. Model. Earth Syst., 5, 704-740, 2013. Morcrette, J.-J., Boucher, O., Jones, L., et al., J. Geophys. Res., 114, D06206, doi:10.1029/2008JD011235, 2009. Ritter, B. and Geleyn, J., Monthly Weather Review, 120, 303-325, 1992. Tanre, D., Geleyn, J., and Slingo, J., A. Deepak Publ., Hampton, Virginia, 133-177, 1984. Tarasova, T. and Fomin, B., Journal of Atmospheric and Oceanic Technology, 24, 1157-1162, 2007. Tegen, I., Hollrig, P., Chin, M., et al., Journal of Geophysical Research - Atmospheres, 102, 23895-23915, 1997.

  7. Shannon Entropy based Randomness Measurement and Test for Image Encryption

    CERN Document Server

    Wu, Yue; Agaian, Sos

    2011-01-01

    The quality of image encryption is commonly measured by the Shannon entropy over the ciphertext image. However, this measurement does not consider to the randomness of local image blocks and is inappropriate for scrambling based image encryption methods. In this paper, a new information entropy-based randomness measurement for image encryption is introduced which, for the first time, answers the question of whether a given ciphertext image is sufficiently random-like. It measures the randomness over the ciphertext in a fairer way by calculating the averaged entropy of a series of small image blocks within the entire test image. In order to fulfill both quantitative and qualitative measurement, the expectation and the variance of this averaged block entropy for a true-random image are strictly derived and corresponding numerical reference tables are also provided. Moreover, a hypothesis test at significance?-level is given to help accept or reject the hypothesis that the test image is ideally encrypted/random-...
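
    A minimal sketch of the block-averaged entropy idea described above; the block size and the synthetic test image are illustrative assumptions, not the authors' reference implementation.

        # Average the Shannon entropy of non-overlapping blocks of a ciphertext image.
        import numpy as np

        def shannon_entropy(block, levels=256):
            counts = np.bincount(block.ravel(), minlength=levels)
            p = counts[counts > 0] / block.size
            return -np.sum(p * np.log2(p))

        def averaged_block_entropy(image, block=16):
            h, w = image.shape
            entropies = [
                shannon_entropy(image[i:i + block, j:j + block])
                for i in range(0, h - block + 1, block)
                for j in range(0, w - block + 1, block)
            ]
            return np.mean(entropies), np.std(entropies, ddof=1)

        # A near-random 8-bit "ciphertext" should give a mean block entropy close to 8 bits.
        rng = np.random.default_rng(0)
        cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
        print(averaged_block_entropy(cipher))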

  8. Initialization and Setup of the Coastal Model Test Bed: STWAVE

    Science.gov (United States)

    2017-01-01

    ...coastal numerical models. Pertinent data types, including waves, water levels, nearshore currents, bathymetry, and meteorological measurements, are... Correlation coefficients and other statistics can be calculated between the observed data and the model output for any duration of time using the... (ERDC/CHL CHETN-I-93, January 2017; approved for public release, distribution unlimited.)

  9. Crew Autonomy Measures and Models (CAMM) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SA Technologies will employ a two-part solution including measures and models for evaluating crew autonomy in exploratory space missions. An integrated measurement...

  10. Graded CTL Model Checking for Test Generation

    CERN Document Server

    Napoli, Margherita

    2011-01-01

    Recently there has been great attention from the scientific community towards the use of the model-checking technique as a tool for test generation in the simulation field. This paper aims to provide a useful means of gaining more insight along these lines. By applying recent results in the field of graded temporal logics, we present a new efficient model-checking algorithm for Hierarchical Finite State Machines (HSM), a well-established formalism long and widely used for representing hierarchical models of discrete systems. Performing model checking against specifications expressed using graded temporal logics has the peculiarity of returning more counterexamples within a single run. We think that this can greatly improve the efficacy of automatically obtaining test cases. In particular, we verify two different models of HSM against branching-time temporal properties.

  11. Recognition or Recall: What Reading Comprehension Tests Really Measure.

    Science.gov (United States)

    Lubliner, Shira; Smetana, Linda

    This study examined format differences in the measurement of fifth grade students' reading comprehension achievement. Children were given a multiple-choice reading comprehension test, followed 4 weeks later by a constructed response test on 2 of the same text passages. Results indicated that little comprehension of text content was transferred…

  12. 76 FR 1136 - Electroshock Weapons Test and Measurement Workshop

    Science.gov (United States)

    2011-01-07

    ... National Institute of Standards and Technology Electroshock Weapons Test and Measurement Workshop AGENCY..., academia, military, test instrument manufacturers, etc.) of electroshock weapons that provide stand-off... requirements for electroshock weapons, the Law Enforcement Standards Office (OLES) at NIST has developed...

  13. The "Test of Financial Literacy": Development and Measurement Characteristics

    Science.gov (United States)

    Walstad, William B.; Rebeck, Ken

    2017-01-01

    The "Test of Financial Literacy" (TFL) was created to measure the financial knowledge of high school students. Its content is based on the standards and benchmarks stated in the "National Standards for Financial Literacy" (Council for Economic Education 2013). The test development process involved extensive item writing and…

  14. Measuring Intelligence with the Goodenough-Harris Drawing Test.

    Science.gov (United States)

    Scott, Linda Howard

    1981-01-01

    Critically evaluates the literature through 1977 on the Goodenough-Harris Drawing Test. Areas reviewed are administration and standardization of the man and woman scales, test ceiling, sex differences, the Quality scale, reliability, criterion validity, validity with measures of academic achievement, cultural variables, and use with the learning…

  15. Laser shaft alignment measurement model

    Science.gov (United States)

    Mo, Chang-tao; Chen, Changzheng; Hou, Xiang-lin; Zhang, Guoyu

    2007-12-01

    The track of a laser beam on the photosensitive surface of a receiver forms a closed curve when the driving shaft and the driven shaft rotate with the same angular velocity and direction of rotation. The coordinates of an arbitrary point on this curve are determined by the relative position of the two shafts. Based on this observation, a mathematical model of laser alignment is set up. Using a data acquisition system and a data processing model for a laser alignment meter with a single laser beam and a single detector, and given the installation parameters entered into the computer, the state parameters between the two shafts can be obtained through calculation and correction. The correction data for the four supports under the chassis of the machine being adjusted, i.e. the required movements in the horizontal and vertical planes, can then be calculated. This guides the moving of the apparatus to align the shafts.

  16. Vadose zone measurement and modeling

    OpenAIRE

    Hopmans, J.W.; V. Clausnitzer; K.I. Kosugi; Nielsen,D.R.; Somma, F.

    1997-01-01

    The following treatise is a summary of some of the ongoing research activities in the soil physics program at the University of California in Davis. Each of the four listed areas will be presented at the Workshop on special topics on soil physics and crop modeling in Piracicaba at the University of Sao Paulo. We limited ourselves to a general overview of each area, but will present a more thorough discussion with examples at the Workshop.

  17. Measuring Vocabulary: An overview of four types of vocabulary tests

    OpenAIRE

    Helga Hilmarsdóttir 1985

    2010-01-01

    In this essay four types of vocabulary tests are examined and the focus is on the variety in vocabulary tests. The main incentive for writing this essay was to provide an overview of vocabulary measurement tools and to examine whether a standardized vocabulary test exists. In the first chapter an attempt is made to answer the question of what vocabulary knowledge is. Receptive and productive knowledge of vocabulary is discussed as well as the distinction of vocabulary into breadth and...

  18. Objective measurement of chronic pain by a complex concentration test

    OpenAIRE

    Berg, Anja; Oster, Karen; Janig, Herbert; Likar, Rudolf; Pipam, Wolfgang; Scholz, Anja; Westhoff, Karl

    2009-01-01

    Higher intensity of chronic pain occurs together with the subjective experience of impaired concentration. With a complex test of concentration two facets of concentrated work can be measured reliably and validly: speed of concentrated work and percentage of concentration errors. Two studies were conducted to test whether the Complex-Concentration-Test is suitable for assessing the cognitive deficit caused by chronic pain. In Study I, 60 chronic pain patients in Germany, and in Study II, 86 p...

  19. A Comparison of the Abilities Measured by the Cambridge and Educational Testing Service EFL Test Batteries.

    Science.gov (United States)

    Bachman, Lyle F.; And Others

    1990-01-01

    The abilities measured by the First Certificate of English (FCE) administered by the Cambridge Local Examinations Syndicate are compared with the Test of English as a Foreign Language (TOEFL) administered by the Educational Testing Service. The analyses suggest that the FCE and TOEFL appear to measure the same common aspect of language…

  1. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as representations…

  2. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    One of the single most important reasons that modeling and modelbased testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through suffi

  4. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus

    2009-01-01

    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet, and testing is still state of the art in falsification. We show how user-defined abstractions can be integrated into a Java PathFinder setting with tools like AspectJ or Javassist and discuss implications of remaining weaknesses of these tools. We believe that a principled engineering approach to designing and implementing abstractions will improve the applicability of model checking in practice.

  5. Computerized adaptive testing for measuring development of young children

    NARCIS (Netherlands)

    Jacobusse, G.; Buuren, S. van

    2007-01-01

    Developmental indicators that are used for routine measurement in The Netherlands are usually chosen to optimally identify delayed children. Measurements on the majority of children without problems are therefore quite imprecise. This study explores the use of computerized adaptive testing (CAT) to

  6. [Electronic eikonometer: Measurement tests displayed on stereoscopic screen].

    Science.gov (United States)

    Bourdy, C; James, Y

    2016-05-01

    We propose the presentation on a stereoscopic screen of electronic eikonometer tests intended for the analysis and measurement of the perceptual effects of binocular disparity. These tests, so-called "built-in magnification tests", are constructed according to the same principle as those of preceding eikonometers: the disparity variation parameters are included in each test presentation, which removes the need for any intermediate optical system during test observation and measurement. The images of these tests are presented separately to each eye, using active or passive stereoscopic screen technology: (1) the Ogle Spatial Test to measure aniseikonia; (2) a fixation disparity test: binocular nonius; (3) a retinal correspondence test evaluated by nonius horopter; (4) a stereoscopic test using Julesz' random-dot stereograms (RDS). All of these tests, with their variable parameters included, are preprogrammed by means of an associated mini-computer. This new system (a single screen for the presentation of tests for the right eye and left eye) will be much simpler to reproduce and install for all practitioners interested in the functional exploration of binocular vision. We describe the methodology suited to each type of examination, as well as the manipulations to be performed by the operator. We then recall the possibilities for reducing aniseikonia offered by theoretical studies previously performed by matrix calculation of the size of the retinal images for different types of eye (emmetropia, axial or conformation anisometropia, aphakia) and for different means of correction (glasses, contact lenses, implants). Software for performing these different tests is available, on request, at this address: eiconometre.electronique@gmail.com. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  7. Radiometric instrumentation and measurements guide for photovoltaic performance testing

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D.

    1997-04-01

    The Photovoltaic Module and Systems Performance and Engineering Project at the National Renewable Energy Laboratory performs indoor and outdoor standardization, testing, and monitoring of the performance of a wide range of photovoltaic (PV) energy conversion devices and systems. The PV Radiometric Measurements and Evaluation Team (PVSRME) within that project is responsible for measurement and characterization of natural and artificial optical radiation which stimulates the PV effect. The PV manufacturing and research and development community often approaches project members for technical information and guidance. A great area of interest is radiometric instrumentation, measurement techniques, and data analysis applied to understanding and improving PV cell, module, and system performance. At the Photovoltaic Radiometric Measurements Workshop conducted by the PVSRME team in July 1995, the need to communicate knowledge of solar and optical radiometric measurements and instrumentation, gained as a result of NREL's long-term experiences, was identified as an activity that would promote improved measurement processes and measurement quality in the PV research and manufacturing community. The purpose of this document is to address the practical and engineering need to understand optical and solar radiometric instrument performance, selection, calibration, installation, and maintenance applicable to indoor and outdoor radiometric measurements for PV calibration, performance, and testing applications. An introductory section addresses radiometric concepts and definitions. Next, concepts essential to spectral radiometric measurements are discussed. Broadband radiometric instrumentation and measurement concepts are then discussed. Each type of measurement serves as an important component of the PV cell, module, and system performance measurement and characterization process.

  8. Psychometric Testing of the Method of Generations’ Mentality Type Measurement

    Directory of Open Access Journals (Sweden)

    Vlada I. Pishchik

    2013-01-01

    The article analyzes the phenomenon of mentality in psychology and notes the lack of clear, definite positions concerning mentality. It presents the author's concept of a mentality system comprising nuclear and peripheral elements. On this basis, an initial survey was conducted to help develop a method of mentality measurement. The article presents the author's method for measuring generations' mentality type and carries out psychometric testing of the method's items. The items were tested for reliability-stability, reliability-conformity, and factorial, convergent, construct and empirical validity. Data from testing the method on different samples are reported.

  9. Surface moisture measurement system hardware acceptance test report

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-05-28

    This document summarizes the results of the hardware acceptance test for the Surface Moisture Measurement System (SMMS). This test verified that the mechanical and electrical features of the SMMS functioned as designed and that the unit is ready for field service. The bulk of hardware testing was performed at the 306E Facility in the 300 Area and the Fuels and Materials Examination Facility in the 400 Area. The SMMS was developed primarily in support of Tank Waste Remediation System (TWRS) Safety Programs for moisture measurement in organic and ferrocyanide watch list tanks.

  10. INFORMATION-MEASURING TEST SYSTEM OF DIESEL LOCOMOTIVE HYDRAULIC TRANSMISSIONS

    Directory of Open Access Journals (Sweden)

    I. V. Zhukovytskyy

    2015-08-01

    Purpose. The article describes the process of developing an information-measuring test system for diesel locomotive hydraulic transmissions, which makes it possible to obtain baseline data for further studies to determine the technical condition of the transmissions. The improvement of the factory technology for post-repair tests of hydraulic transmissions, by automating the existing test stands according to the specifications of diesel locomotive repair enterprises, was analyzed. This is achieved on the basis of a detailed review of existing foreign information-measuring test systems for hydraulic transmissions of diesel locomotives, BelAZ earthmovers, aircraft tugs, slag cars, trucks, BelAZ wheel dozers, some brands of tractors, etc. The problem of creating an information-measuring test system for diesel locomotive hydraulic transmissions is solved, starting from the possibility of automating the existing hydraulic transmission test stand at the Dnipropetrovsk Diesel Locomotive Repair Plant "Promteplovoz". Methodology. The researchers propose a method for creating a microprocessor-based automated system for stand testing of diesel locomotive hydraulic transmissions under locomotive plant conditions, justifying the selection of the necessary sensors as well as the required hardware and software for the information-measuring system. Findings. Based on the analysis, the necessity of improving the plant's hydraulic transmission stand testing by creating a microprocessor testing system was substantiated, supported by the experience of developing such systems abroad. Further research should aim to improve the accuracy and frequency of data collection by adopting more modern and reliable sensors together with software filtering of electromagnetic and other interference. Originality. The

  11. Unit testing, model validation, and biological simulation

    Science.gov (United States)

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models. PMID:27635225
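
    A minimal sketch of the distinction drawn above between a conventional unit test and a model validation test, written for a pytest-style runner; the simulation stub and the "observed" range are hypothetical and are not taken from the OpenWorm codebase.

        # Unit test: checks a code-level contract.
        # Validation test: checks a simulation output against an experimentally observed range.
        def simulate_membrane_potential():
            # stand-in for a real neuron simulation; returns a resting potential in mV
            return -64.8

        def test_unit_returns_float():
            assert isinstance(simulate_membrane_potential(), float)

        def test_validation_resting_potential_in_observed_range():
            observed_low, observed_high = -75.0, -55.0   # assumed literature range
            assert observed_low <= simulate_membrane_potential() <= observed_high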

  12. Robust Design of Reliability Test Plans Using Degradation Measures.

    Energy Technology Data Exchange (ETDEWEB)

    Lane, Jonathan Wesley; Crowder, Stephen V.

    2014-10-01

    With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.

  13. A Specification Test of Stochastic Diffusion Models

    Institute of Scientific and Technical Information of China (English)

    Shu-lin ZHANG; Zheng-hong WEI; Qiu-xiang BI

    2013-01-01

    In this paper, we propose a hypothesis testing approach to checking model mis-specification in continuous-time stochastic diffusion models. The key idea behind the development of our test statistic is rooted in the generalized information equality in the context of martingale estimating equations. We propose a bootstrap resampling method to implement the proposed diagnostic procedure numerically. Through intensive simulation studies, we show that our approach performs well in terms of type I error control, power improvement, and computational efficiency.
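
    The resampling idea can be illustrated with a generic parametric-bootstrap calibration of a specification-test statistic; the Ornstein-Uhlenbeck toy model, estimator, and statistic below are simple stand-ins for illustration, not the authors' martingale-estimating-equation construction.

        # Parametric bootstrap: simulate from the fitted model to approximate the null
        # distribution of a (toy) mis-specification statistic, then compute a p-value.
        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_ou(theta, x0=0.0, n=500, dt=0.01):
            """Euler scheme for dX = -theta*X dt + dW."""
            x = np.empty(n); x[0] = x0
            for i in range(1, n):
                x[i] = x[i-1] - theta * x[i-1] * dt + np.sqrt(dt) * rng.standard_normal()
            return x

        def estimate_theta(x, dt=0.01):
            # least-squares estimate from the discretized dynamics
            return -np.sum(x[:-1] * np.diff(x)) / (np.sum(x[:-1] ** 2) * dt)

        def test_statistic(x, dt=0.01):
            theta = estimate_theta(x, dt)
            resid = np.diff(x) + theta * x[:-1] * dt
            return abs(np.mean(resid ** 2) / dt - 1.0)  # crude check of the implied noise variance

        data = simulate_ou(theta=0.5)
        t_obs = test_statistic(data)
        boot = [test_statistic(simulate_ou(estimate_theta(data))) for _ in range(200)]
        p_value = np.mean([b >= t_obs for b in boot])
        print(t_obs, p_value)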

  14. Testing cosmological models with COBE data

    Energy Technology Data Exchange (ETDEWEB)

    Torres, S. [Observatorio Astronomico, Bogota (Colombia); Centro Internacional de Fisica, Bogota (Colombia)]; Cayon, L. [Lawrence Berkeley Laboratory and Center for Particle Astrophysics, Berkeley (United States)]; Martinez-Gonzalez, E.; Sanz, J. L. [Univ. de Cantabria, Santander (Spain), Instituto de Fisica, Consejo Superior de Investigaciones Cientificas]

    1997-02-01

    The authors test cosmological models with Ω < 1 using the COBE two-year cross-correlation function by means of a maximum-likelihood test with Monte Carlo realizations of several Ω models. Assuming a Harrison-Zel'dovich primordial power spectrum with amplitude ∝ Q, it is found that there is a large region in the (Ω, Q) parameter space that fits the data equally well. They find that the flatness of the universe is not implied by the data. A summary of other analyses of COBE data to constrain the shape of the primordial spectrum is presented.

  15. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of the state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is put on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data conversion design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  16. Experimental Concepts for Testing Seismic Hazard Models

    Science.gov (United States)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  17. Are vocabulary tests measurement invariant between age groups? An item response analysis of three popular tests.

    Science.gov (United States)

    Fox, Mark C; Berry, Jane M; Freeman, Sara P

    2014-12-01

    Relatively high vocabulary scores of older adults are generally interpreted as evidence that older adults possess more of a common ability than younger adults. Yet, this interpretation rests on empirical assumptions about the uniformity of item-response functions between groups. In this article, we test item response models of differential responding against datasets containing younger-, middle-aged-, and older-adult responses to three popular vocabulary tests (the Shipley, Ekstrom, and WAIS-R) to determine whether members of different age groups who achieve the same scores have the same probability of responding in the same categories (e.g., correct vs. incorrect) under the same conditions. Contrary to the null hypothesis of measurement invariance, datasets for all three tests exhibit substantial differential responding. Members of different age groups who achieve the same overall scores exhibit differing response probabilities in relation to the same items (differential item functioning) and appear to approach the tests in qualitatively different ways that generalize across items. Specifically, younger adults are more likely than older adults to leave items unanswered for partial credit on the Ekstrom, and to produce 2-point definitions on the WAIS-R. Yet, older adults score higher than younger adults, consistent with most reports of vocabulary outcomes in the cognitive aging literature. In light of these findings, the most generalizable conclusion to be drawn from the cognitive aging literature on vocabulary tests is simply that older adults tend to score higher than younger adults, and not that older adults possess more of a common ability.

  18. Electroweak tests of the Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  19. Tests of the Electroweak Standard Model

    CERN Document Server

    Erler, Jens

    2012-01-01

    Electroweak precision tests of the Standard Model of the fundamental interactions are reviewed ranging from the lowest to the highest energy experiments. Results from global fits are presented with particular emphasis on the extraction of fundamental parameters such as the Fermi constant, the strong coupling constant, the electroweak mixing angle, and the mass of the Higgs boson. Constraints on physics beyond the Standard Model are also discussed.

  20. Testing mechanistic models of growth in insects

    OpenAIRE

    Maino, James L.; Kearney, Michael R.

    2015-01-01

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compare...

  1. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  2. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Luk, V.K.; Hessheimer, M.F. [Sandia National Labs., Albuquerque, NM (United States); Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan); Costello, J.F. [Nuclear Regulatory Commission, Washington, DC (United States)

    1998-04-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  3. Testing measurement invariance of composites using partial least squares

    NARCIS (Netherlands)

    Henseler, Jörg; Ringle, Christian M.; Sarstedt, Marko

    2016-01-01

    Purpose – Research on international marketing usually involves comparing different groups of respondents. When using structural equation modeling (SEM), group comparisons can be misleading unless researchers establish the invariance of their measures. While methods have been proposed to analyze meas

  4. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications

    Science.gov (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray

    2012-01-01

    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program of the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting-up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system and a testing program with facility for testing system parts and subsystems at JPL. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis and JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost effective approach to achieving a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization, performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  5. Microcomputer based instrument for measuring a novel pulmonary function test

    Science.gov (United States)

    Craine, Brian L.; Craine, Eric R.

    1996-08-01

    The design of a prototype instrument for measuring the end-tidal concentration of carbon monoxide during human respiration is presented. The instrument automatically samples the final sixty cubic centimeters of exhaled breath, from successive breathing cycles, by coordinating a pump and the breathing cycle with a set of vacuum and pressure sensors. The concentration of carbon monoxide is measured using a nondispersive infrared spectrophotometer. The amount of carbon monoxide present is measured relative to the source air concentration eliminating the need for calibrating the instrument. The testing protocol and measurements can be controlled by a microcomputer connected to the instrument through a standard RS-232 serial interface. When at equilibrium, the end-tidal concentration of CO can be measured in a simple and reproducible fashion. This simplified technology allows for the construction of a small, portable, easy to use instrument that will allow the application of this new pulmonary function test at the point of contact with patients.

  6. A Cosmic Bell Test with Measurement Settings from Astronomical Sources

    CERN Document Server

    Handsteiner, Johannes; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David; Fink, Matthias; Leung, Calvin; Mark, Anthony; Nguyen, Hien T; Sanders, Isabella; Steinlechner, Fabian; Ursin, Rupert; Wengerowsky, Sören; Guth, Alan H; Kaiser, David I; Scheidl, Thomas; Zeilinger, Anton

    2016-01-01

    Bell's theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell's inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this "freedom of choice" was addressed by ensuring that selection of measurement settings via conventional "quantum random number generators" (QRNGs) was space-like separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell's inequality that, for the first time, uses distant astronomical sources as "cosmic setting generators." In our tests with polarization-entangled photons, measurement settings were chos...

  7. Performance testing of radiobioassay laboratories: In vivo measurements, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    MacLellan, J.A.; Traub, R.J.; Olsen, P.C.

    1990-04-01

    A study of two rounds of in vivo laboratory performance testing was undertaken by Pacific Northwest Laboratory (PNL) to determine the appropriateness of the in vivo performance criteria of draft American National Standards Institute (ANSI) standard ANSI N13.3, "Performance Criteria for Bioassay." The draft standard provides guidance to in vivo counting facilities regarding the sensitivity, precision, and accuracy of measurements for certain categories of commonly assayed radionuclides and critical regions of the body. This report concludes the testing program by presenting the results of the Round Two testing. Testing involved two types of measurements: chest counting for radionuclide detection in the lung, and whole body counting for detection of uniformly distributed material. Each type of measurement was further divided into radionuclide categories as defined in the draft standard. The appropriateness of the draft standard criteria was judged, on the basis of the results of both Round One and Round Two testing, by measuring laboratories' ability to attain them. The testing determined that the performance criteria are set at attainable levels, and the majority of in vivo monitoring facilities passed the criteria when complete results were submitted. 18 refs., 18 figs., 15 tabs.

  8. Modeling nonignorable missing data in speeded tests

    NARCIS (Netherlands)

    Glas, Cees A.W.; Pimentel, Jonald L.

    2008-01-01

    In tests with time limits, items at the end are often not reached. Usually, the pattern of missing responses depends on the ability level of the respondents; therefore, missing data are not ignorable in statistical inference. This study models data using a combination of two item response theory (IRT) models: one for the observed response data and…

  9. Mechanism test bed. Flexible body model report

    Science.gov (United States)

    Compton, Jimmy

    1991-01-01

    The Space Station Mechanism Test Bed is a six degree-of-freedom motion simulation facility used to evaluate docking and berthing hardware mechanisms. A generalized rigid body math model was developed which allowed the computation of vehicle relative motion in six DOF due to forces and moments from mechanism contact, attitude control systems, and gravity. No vehicle size limitations were imposed in the model. The equations of motion were based on Hill's equations for translational motion with respect to a nominal circular earth orbit and Newton-Euler equations for rotational motion. This rigid body model and supporting software were being refined.
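
    For reference, the translational part of such a model is usually built on Hill's (Clohessy-Wiltshire) equations; in the standard local orbital frame, with mean motion n and specific applied forces (f_x, f_y, f_z), they take the textbook form below (the report's exact notation may differ):

        \ddot{x} - 2n\,\dot{y} - 3n^{2}x = f_{x}, \qquad
        \ddot{y} + 2n\,\dot{x} = f_{y}, \qquad
        \ddot{z} + n^{2}z = f_{z},

    with the rotational motion propagated separately through the Newton-Euler equations \( \mathbf{I}\,\dot{\boldsymbol{\omega}} + \boldsymbol{\omega}\times(\mathbf{I}\,\boldsymbol{\omega}) = \mathbf{M} \).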

  10. Standard Model measurements with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    Hassani Samira

    2015-01-01

    Various Standard Model measurements have been performed in proton-proton collisions at centre-of-mass energies of √s = 7 and 8 TeV using the ATLAS detector at the Large Hadron Collider. A review of a selection of the latest results of electroweak measurements, W/Z production in association with jets, jet physics and soft QCD is given. Measurements are in general found to be well described by the Standard Model predictions.

  11. Testing mechanistic models of growth in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2015-11-22

    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg⁻¹) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes.

  12. Interpretation of test data with dynamic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Biba, P. [Southern California Edison, San Clemente, CA (United States). San Onofre Nuclear Generating Station

    1999-11-01

    The in-service testing of many Important-to-safety components, such as valves, pumps, etc. is often performed while the plant is either shut-down or the particular system is in a test mode. Thus the test conditions may be different from the actual operating conditions under which the components would be required to operate. In addition, the components must function under various postulated accident scenarios, which can not be duplicated during plant normal operation. This paper deals with the method of interpretation of the test data by a dynamic model, which allows the evaluation of the many factors affecting the system performance, in order to assure component and system operability.

  13. Objective test and performance measurement of automotive crash warning systems

    Science.gov (United States)

    Szabo, S.; Norcross, R. J.; Falco, J. A.

    2007-04-01

    The National Institute of Standards and Technology (NIST), under an interagency agreement with the United States Department of Transportation (DOT), is supporting development of objective test and measurement procedures for vehicle-based warning systems intended to warn an inattentive driver of imminent rear-end, road-departure and lane-change crash scenarios. The work includes development of track and on-road test procedures, and development of an independent measurement system, which together provide data for evaluating warning system performance. This paper will provide an overview of DOT's Integrated Vehicle-Based Safety System (IVBSS) program along with a review of the approach for objectively testing and measuring warning system performance.

  14. Position and orientation measurement during Lunar Rover movement test

    Science.gov (United States)

    Yang, Zaihua; Tang, Laiying; Yi, Wangmin; Wan, Bile; Liu, Tao

    2015-02-01

    During the development of the Lunar Rover, a posture tracking measurement scheme was designed to verify its movement control ability and path planning performance. The principle is based on an indoor GPS (iGPS) measurement system. Four iGPS transmitters were set up around the test site. By tracking the positions of four receivers installed on the rover, the position and orientation of the rover can be acquired in real time. The rotation matrix and translation vector from the Lunar Rover coordinate system to the test site coordinate system were calculated by the software. The measurement precision reached 0.25 mm over a 30 m2 area. The real-time position and posture data of the rover were overlaid onto a 3-D terrain map of the test site. The trajectory of the rover was displayed, and the time-displacement, time-velocity and time-acceleration curves were analyzed. The rover's performance was verified.
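
    One standard way to recover such a pose from a handful of tracked points is a least-squares rigid transform (Kabsch/SVD); the sketch below illustrates the general technique and is not the project's software.

        # Estimate rotation R and translation t so that site_pts ≈ R @ rover_pts + t,
        # given the same points expressed in the rover frame and in the test-site frame.
        import numpy as np

        def rigid_transform(rover_pts, site_pts):
            """Both inputs are 3xN arrays of corresponding points."""
            cr = rover_pts.mean(axis=1, keepdims=True)
            cs = site_pts.mean(axis=1, keepdims=True)
            H = (rover_pts - cr) @ (site_pts - cs).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
            R = Vt.T @ D @ U.T
            t = cs - R @ cr
            return R, t

        # quick self-check with a synthetic pose (hypothetical numbers)
        rng = np.random.default_rng(0)
        P = rng.random((3, 4))                      # four receiver positions in the rover frame
        theta = 0.3
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        Q = R_true @ P + np.array([[1.0], [2.0], [0.5]])
        R_est, t_est = rigid_transform(P, Q)
        print(np.allclose(R_est, R_true), t_est.ravel())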

  15. Computerized Classification Testing under the Generalized Graded Unfolding Model

    Science.gov (United States)

    Wang, Wen-Chung; Liu, Chen-Wei

    2011-01-01

    The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…

  16. Precision tests of quantum chromodynamics and the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, S.J.; Lu, H.J.

    1995-06-01

    The authors discuss three topics relevant to testing the Standard Model to high precision: commensurate scale relations, which relate observables to each other in perturbation theory without renormalization scale or scheme ambiguity; the relationship of compositeness to anomalous moments; and new methods for measuring the anomalous magnetic and quadrupole moments of the W and Z.

  17. Optical Test of Local Hidden-Variable Model

    Institute of Scientific and Technical Information of China (English)

    WU XiaoHua; ZONG HongShi; PANG HouRong

    2001-01-01

    An inequality is deduced from local realism and a supplementary assumption. This inequality defines an experiment that can actually be performed with present technology to test local hidden-variable models, and it is violated by quantum mechanics by a factor of 1.92, while it can be simplified into a form in which just two measurements are required.

  18. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full-scale docking and berthing systems utilizing a two-body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for non-mating applications such as sensor and instrument evaluations requiring proximity or short range motion operations. The motion base is a hydraulically powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for NASA Docking System testing and has also been used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  19. Measurable residual disease testing in acute myeloid leukaemia.

    Science.gov (United States)

    Hourigan, C S; Gale, R P; Gormley, N J; Ossenkoppele, G J; Walter, R B

    2017-07-01

    There is considerable interest in developing techniques to detect and/or quantify remaining leukaemia cells termed measurable or, less precisely, minimal residual disease (MRD) in persons with acute myeloid leukaemia (AML) in complete remission defined by cytomorphological criteria. An important reason for AML MRD-testing is the possibility of estimating the likelihood (and timing) of leukaemia relapse. A perfect MRD-test would precisely quantify leukaemia cells biologically able and likely to cause leukaemia relapse within a defined interval. AML is genetically diverse and there is currently no uniform approach to detecting such cells. Several technologies focused on immune phenotype or cytogenetic and/or molecular abnormalities have been developed, each with advantages and disadvantages. Many studies report a positive MRD-test at diverse time points during AML therapy identifies persons with a higher risk of leukaemia relapse compared with those with a negative MRD-test even after adjusting for other prognostic and predictive variables. No MRD-test in AML has perfect sensitivity and specificity for relapse prediction at the cohort- or subject levels and there are substantial rates of false-positive and -negative tests. Despite these limitations, correlations between MRD-test results and relapse risk have generated interest in MRD-test result-directed therapy interventions. However, convincing proof that a specific intervention will reduce relapse risk in persons with a positive MRD-test is lacking and needs testing in randomized trials. Routine clinical use of MRD-testing requires further refinements and standardization/harmonization of assay platforms and results reporting. Such data are needed to determine whether results of MRD-testing can be used as a surrogate end point in AML therapy trials. This could make drug-testing more efficient and accelerate regulatory approvals. Although MRD-testing in AML has advanced substantially, much remains to be done.

  20. Optical and mechanical nondestructive tests for measuring tomato fruit firmness

    Science.gov (United States)

    Manivel-Chávez, Ricardo A.; Garnica-Romo, M. G.; Arroyo-Correa, Gabriel; Aranda-Sánchez, Jorge I.

    2011-08-01

    Ripening is one of the most important processes occurring in fruits, involving changes in color, flavor, and texture. An important goal in fruit quality control is to substitute traditional sensory testing methods with reliable nondestructive tests (NDT). In this work we study the firmness of tomato fruits using optical and mechanical NDT. Optical and mechanical parameters, measured along the tomato shelf life, are shown.

  1. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e. m(

  3. Inversion of thicknesses of multi-layered structures from eddy current testing measurements

    Institute of Scientific and Technical Information of China (English)

    HUANG Ping-jie(黄平捷); WU Zhao-tong(吴昭同)

    2004-01-01

    Luquire et al.'s impedance change model of a rectangular cross-section probe coil above a structure with an arbitrary number of parallel layers was used to study the principle of measuring the thicknesses of multi-layered structures from eddy current testing voltage measurements. An experimental system for multi-layered thickness measurement was developed, and several fitting models formulating the relationships between the detected impedance/voltage measurements and thickness are put forward using the least squares method. The determination of multi-layered thicknesses was investigated by inverting the voltage outputs of the detecting system. The best fitting and inversion models are presented.
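
    A minimal sketch of the fitting-and-inversion idea described above, assuming made-up calibration data and a low-order polynomial as the fitting model; the paper's actual fitting models and coil/impedance details are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical calibration data: coil voltage reading (mV) versus known
    # top-layer thickness (mm) from reference specimens.
    thickness_mm = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
    voltage_mv = np.array([11.8, 10.1, 8.9, 8.0, 7.3, 6.8, 6.4])

    # Forward fitting model (least squares): voltage as a polynomial in thickness,
    # useful for checking the quality of the fit.
    fwd_coeffs = np.polyfit(thickness_mm, voltage_mv, deg=2)
    residuals = voltage_mv - np.polyval(fwd_coeffs, thickness_mm)

    # Inverse fitting model: thickness as a polynomial in voltage, so a new
    # voltage reading is "inverted" to a thickness by direct evaluation.
    inv_coeffs = np.polyfit(voltage_mv, thickness_mm, deg=2)
    new_reading_mv = 7.6
    estimated_thickness = np.polyval(inv_coeffs, new_reading_mv)
    print(f"estimated thickness: {estimated_thickness:.2f} mm "
          f"(forward-fit RMS residual {np.sqrt(np.mean(residuals**2)):.3f} mV)")
    ```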

  4. The Wave Dragon: tests on a modified model

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, Luca; Frigaard, Peter

    1999-09-01

    A modified floating model of the Wave Dragon was tested for movements, overtopping and forces at critical positions. The modifications and consequent testing of the model are part of an R&D programme. 18 tests (repetitions included) were carried out during May 1999. Forces at 7 different positions and movements for three degrees of freedom (heave, pitch and surge) were recorded for 7 wave situations. Total overtopping was measured for 5 different wave situations. Furthermore, the influence of crest freeboard was tested. Sensitivity to the energy spreading in multidirectional seas was investigated. A typical exponential equation describing overtopping was fitted to the data for the frequent wave conditions, and the formula is compared to the present tests.
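
    The "typical exponential equation" for mean overtopping is not spelled out in the record; a common form is q = A·exp(B·Rc/Hs), which can be fitted by ordinary least squares in log space as sketched below with invented measurements.

    ```python
    import numpy as np

    # Invented overtopping measurements: dimensionless discharge q versus
    # relative crest freeboard Rc/Hs.
    rc_over_hs = np.array([0.6, 0.8, 1.0, 1.2, 1.5])
    q = np.array([0.030, 0.015, 0.008, 0.004, 0.0015])

    # Fit q = A * exp(B * Rc/Hs) as a straight line in log space.
    B, lnA = np.polyfit(rc_over_hs, np.log(q), deg=1)
    A = np.exp(lnA)
    print(f"q = {A:.3f} * exp({B:.2f} * Rc/Hs)")
    ```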

  5. Parametric Testing of Launch Vehicle FDDR Models

    Science.gov (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
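
    A sketch of the kind of test-matrix generation the abstract alludes to: a full factorial over a few invented discrete fault-injection factors is crossed with Monte Carlo draws of continuous dispersions. Factor names and ranges are assumptions, and a true n-factor covering-array generator would replace the full product for larger factor sets.

    ```python
    import itertools
    import random

    # Invented discrete fault-injection factors (names and values are assumptions).
    fault_factors = {
        "engine_fail_time_s": [None, 20.0, 60.0],
        "sensor_bias": ["nominal", "high"],
        "actuator_stuck": [False, True],
    }

    random.seed(0)

    def draw_dispersions():
        # Invented continuous dispersion parameters sampled Monte Carlo style.
        return {
            "thrust_scale": random.gauss(1.0, 0.02),
            "wind_speed_mps": random.uniform(0.0, 15.0),
        }

    test_cases = []
    for combo in itertools.product(*fault_factors.values()):
        for _ in range(5):                      # 5 Monte Carlo repeats per combination
            case = dict(zip(fault_factors, combo))
            case.update(draw_dispersions())
            test_cases.append(case)

    print(f"{len(test_cases)} test cases generated")  # 3 * 2 * 2 * 5 = 60
    ```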

  6. Radio propagation measurement and channel modelling

    CERN Document Server

    Salous, Sana

    2013-01-01

    While there are numerous books describing modern wireless communication systems that contain overviews of radio propagation and radio channel modelling, there are none that contain detailed information on the design, implementation and calibration of radio channel measurement equipment, the planning of experiments and the in-depth analysis of measured data. The book begins with an explanation of the fundamentals of radio wave propagation and progresses through a series of topics, including the measurement of radio channel characteristics, radio channel sounders, and measurement strategies

  7. Effective UV radiation from model calculations and measurements

    Science.gov (United States)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer, as well as turbidity, were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  8. Inferential permutation tests for maximum entropy models in ecology.

    Science.gov (United States)

    Shipley, Bill

    2010-09-01

    Maximum entropy (maxent) models assign probabilities to states that (1) agree with measured macroscopic constraints on attributes of the states and (2) are otherwise maximally uninformative and are thus as close as possible to a specified prior distribution. Such models have recently become popular in ecology, but classical inferential statistical tests require assumptions of independence during the allocation of entities to states that are rarely fulfilled in ecology. This paper describes a new permutation test for such maxent models that is appropriate for very general prior distributions and for cases in which many states have zero abundance and that can be used to test for conditional relevance of subsets of constraints. Simulations show that the test gives correct probability estimates under the null hypothesis. Power under the alternative hypothesis depends primarily on the number and strength of the constraints and on the number of states in the model; the number of empty states has only a small effect on power. The test is illustrated using two empirical data sets to test the community assembly model of B. Shipley, D. Vile, and E. Garnier and the species abundance distribution models of S. Pueyo, F. He, and T. Zillio.
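
    Shipley's specific allocation scheme is not reproduced here, but the generic skeleton of a permutation test of a fit statistic looks like the sketch below; `stat_fn` and the simple permutation of abundances are placeholders to be replaced by the model-appropriate choices.

    ```python
    import numpy as np

    def permutation_pvalue(stat_fn, abundances, n_perm=9999, seed=0):
        """One-sided permutation p-value for a goodness-of-fit statistic.

        stat_fn    : maps a 1-D array of state abundances to a scalar statistic,
                     larger meaning a more extreme departure (placeholder).
        abundances : observed abundances per state.
        """
        rng = np.random.default_rng(seed)
        observed = stat_fn(abundances)
        null_stats = np.array([stat_fn(rng.permutation(abundances))
                               for _ in range(n_perm)])
        # The +1 correction keeps the p-value away from exactly zero.
        return (1 + np.sum(null_stats >= observed)) / (n_perm + 1)
    ```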

  9. Lectures on dynamical models for quantum measurements

    NARCIS (Netherlands)

    Nieuwenhuizen, T.M.; Perarnau-llobet, M.; Balian, R.

    2014-01-01

    In textbooks, ideal quantum measurements are described in terms of the tested system only by the collapse postulate and Born's rule. This level of description offers a rather flexible position for the interpretation of quantum mechanics. Here we analyse an ideal measurement as a process of interacti

  11. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  12. Teaching to the Test: A Controversial Issue in Quantitative Measurement

    Directory of Open Access Journals (Sweden)

    Jennifer L. Styron

    2012-10-01

    The pros and cons of focusing curricular and pedagogical decisions primarily on mastery of those skills and concepts measured by standardized tests are discussed. This paper presents scholarly discourse on testing systems and school accountability, along with a presentation of the advantages and disadvantages of what is commonly referred to as 'teaching to the test.' The authors found the research studies to be inconclusive, with no clear indication of whether the practice of teaching to the test is an advantage or a disadvantage. Most notably, the actual issue underlying this debate may be a lack of understanding of item-teaching versus curricular teaching: in the minds of many educators, item teaching, curriculum teaching and teaching to the test are synonymous.

  13. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)

    2012-01-15

    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behaviour of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  14. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and measurement uncertainty results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models for flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
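
    The paper's own parameterisation is not given in the record; as a simplified illustration of composing length errors by axis into a vector correction, the sketch below assumes a linear (offset plus scale) error per axis with invented coefficients.

    ```python
    import numpy as np

    # Assumed per-axis error model: offset + scale error (values invented).
    scale_err = np.array([12e-6, -8e-6, 5e-6])        # dimensionless scale error per axis
    offset_err = np.array([0.0008, -0.0005, 0.0003])  # mm offset per axis

    def compensate(point_mm):
        """Subtract the vectorially composed per-axis length error from a probed point."""
        p = np.asarray(point_mm, dtype=float)
        return p - (offset_err + scale_err * p)

    print(compensate([120.0, 85.0, 40.0]))   # compensated coordinates in mm
    ```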

  15. The Musical Ear Test, A New Reliable Test for Measuring Musical Competence

    Science.gov (United States)

    Wallentin, Mikkel; Nielsen, Andreas Hojlund; Friis-Olivarius, Morten; Vuust, Christian; Vuust, Peter

    2010-01-01

    This paper reports results from three experiments using the Musical Ear Test (MET), a new test designed for measuring musical abilities in both musicians and non-musicians in an objective way with a relatively short duration (less than 20 min.). In the first experiment we show how the MET is capable of clearly distinguishing between a group of…

  16. Testing the Correlated Random Coefficient Model*

    Science.gov (United States)

    Heckman, James J.; Schmierer, Daniel; Urzua, Sergio

    2010-01-01

    The recent literature on instrumental variables (IV) features models in which agents sort into treatment status on the basis of gains from treatment as well as on baseline (pretreatment) levels. Components of the gains known to the agents and acted on by them may not be known by the observing economist. Such models are called correlated random coefficient models. Sorting on unobserved components of gains complicates the interpretation of what IV estimates. This paper examines testable implications of the hypothesis that agents do not sort into treatment based on gains. In it, we develop new tests to gauge the empirical relevance of the correlated random coefficient model, to examine whether the additional complications associated with it are required. We examine the power of the proposed tests. We derive a new representation of the variance of the instrumental variable estimator for the correlated random coefficient model. We apply the methods in this paper to the prototypical empirical problem of estimating the return to schooling and find evidence of sorting into schooling based on unobserved components of gains. PMID:21057649

  18. Physical model tests for floating wind turbines

    DEFF Research Database (Denmark)

    Bredmose, Henrik; Mikkelsen, Robert Flemming; Borg, Michael

    Floating offshore wind turbines are relevant at sites where the water depth is too large for the installation of a bottom-fixed substructure. While 3200 bottom-fixed offshore turbines have been installed in Europe (EWEA 2016), only a handful of floating wind turbines exist worldwide, and it is still an open question which floater concept is the most economically feasible. The design of the floaters for floating turbines relies heavily on numerical modelling. While several coupled models exist, data sets for their validation are scarce. Validation, however, is important since the turbine behaviour is complex due to the combined actions of aero- and hydrodynamic loads, mooring loads and blade pitch control. The present talk outlines two recent test campaigns with a floating wind turbine in waves and wind. Two floaters were tested, a compact TLP floater designed at DTU (Bredmose et al 2015, Pegalajar

  19. Measurement and modeling of oil slick transport

    Science.gov (United States)

    Jones, Cathleen E.; Dagestad, Knut-Frode; Breivik, Øyvind; Holt, Benjamin; Röhrs, Johannes; Christensen, Kai Håkon; Espeseth, Martine; Brekke, Camilla; Skrunes, Stine

    2016-10-01

    Transport characteristics of oil slicks are reported from a controlled release experiment conducted in the North Sea in June 2015, during which mineral oil emulsions of different volumetric oil fractions and a look-alike biogenic oil were released and allowed to develop naturally. The experiment used the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) to track slick location, size, and shape for ˜8 h following release. Wind conditions during the exercise were at the high end of the range considered suitable for radar-based slick detection, but the slicks were easily detectable in all images acquired by the low noise, L-band imaging radar. The measurements are used to constrain the entrainment length and representative droplet radii for oil elements in simulations generated using the OpenOil advanced oil drift model. Simultaneously released drifters provide near-surface current estimates for the single biogenic release and one emulsion release, and are used to test model sensitivity to upper ocean currents and mixing. Results of the modeling reveal a distinct difference between the transport of the biogenic oil and the mineral oil emulsion, in particular in the vertical direction, with faster and deeper entrainment of significantly smaller droplets of the biogenic oil. The difference in depth profiles for the two types of oils is substantial, with most of the biogenic oil residing below depths of 10 m, compared to the majority of the emulsion remaining above 10 m depth. This difference was key to fitting the observed evolution of the two different types of slicks.

  20. Proficiency testing for sensory profile panels : measuring panel performance

    NARCIS (Netherlands)

    Mcewan, J.A.; Hunter, E.A.; Gemert, L.J. van; Lea, P.

    2002-01-01

    Proficiency testing in sensory analysis is an important step towards demonstrating that results from one sensory panel are consistent with the results of other sensory panels. The uniqueness of sensory analysis poses some specific problems for measuring the proficiency of the human instrument (panel

  1. Cluster bias: Testing measurement invariance in multilevel data

    NARCIS (Netherlands)

    Jak, S.

    2013-01-01

    In this thesis we presented methods and procedures to test and account for measurement bias in multilevel data. Multilevel data are data with a clustered structure, for instance data of children grouped in classrooms, or data of employees in teams. For example, with data of children in classes, we c

  2. Measurement of risk of sexual violence through phallometric testing.

    Science.gov (United States)

    Howes, Richard J

    2009-04-01

    The use of phallometric testing to determine risk of sexual violence is becoming more widely recognized throughout the world. This technique involves the precise measurement of circumferential change in the penis from flaccidity to erection in response to both 'normal' and deviant sexual stimuli. Phallometric testing is the only pure measure of sexual arousal, and unlike other physiological measures such as heart rate and GSR it is not influenced by arousal states such as fear and anger. The current published research compares the phallometric testing profiles of incarcerated sexual offenders with those of incarcerated nonsexual offenders. Specifically, the sexual arousal of 100 convicted rapists, pedophiles, and nonsexual offenders is examined. This research identifies what differentiates these groups and what best predicts risk of sexual aggression. Implications of these results include the possibility of using phallometric testing as a screening tool for those who work with vulnerable populations (e.g., child care workers, teachers). The principal benefit of phallometric testing, however, lies in the identification of those incarcerated men who are at greatest risk to sexually reoffend and who should thus be denied release from jail.

  3. Validation Testing for Automated Solubility Measurement Equipment Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Lachut, J. S. [Washington River Protection Solutions LLC, Richland, WA (United States)

    2016-01-11

    Laboratory tests have been completed to test the validity of automated solubility measurement equipment using sodium nitrate and sodium chloride solutions (see test plan WRPS-1404441, “Validation Testing for Automated Solubility Measurement Equipment”). The sodium nitrate solution results were within 2-3% of the reference values, so the experiment is considered successful using the turbidity meter. The sodium chloride test was done by sight, as the turbidity meter did not work well using sodium chloride. For example, the “clear” turbidity reading was 53 FNU at 80 °C, 107 FNU at 55 °C, and 151 FNU at 20 °C. The sodium chloride did not work because it is granular and large; as the solution was stirred, the granules stayed to the outside of the reactor and just above the stir bar level, having little impact on the turbidity meter readings as the meter was aimed at the center of the solution. Also, the turbidity meter depth has an impact. The salt tends to remain near the stir bar level. If the meter is deeper in the slurry, it will read higher turbidity, and if the meter is raised higher in the slurry, it will read lower turbidity (possibly near zero) because it reads the “clear” part of the slurry. The sodium chloride solution results, as measured by sight rather than by turbidity instrument readings, were within 5-6% of the reference values.

  4. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made such a design diagram available for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes, sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92… The focus was on the Dolos breakwater with a high superstructure, where there was almost no overtopping; this case is believed to be the most dangerous one. A test of the Dolos breakwater with a low superstructure was also performed. The objective of the last part of the experiment is to investigate the influence

  5. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.

    2016-01-01

    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed and constraint conditions quantified. The role of the notch shape is comprehensively examined from both triaxiality and notch fabrication perspectives. Afterwards, a methodology is presented to extract the micromechanical-based ductile damage parameters from the load-displacement curve of notched SPT samples

  6. Failing Tests: Commentary on "Adapting Educational Measurement to the Demands of Test-Based Accountability"

    Science.gov (United States)

    Thissen, David

    2015-01-01

    In "Adapting Educational Measurement to the Demands of Test-Based Accountability" Koretz takes the time-honored engineering approach to educational measurement, identifying specific problems with current practice and proposing minimal modifications of the system to alleviate those problems. In response to that article, David Thissen…

  7. Tests and models of nociception and pain in rodents.

    Science.gov (United States)

    Barrot, M

    2012-06-01

    Nociception and pain is a large field of both neuroscience and medical research. Over time, various tests and models were developed in rodents to provide tools for fundamental and translational research on the topic. Tests using thermal, mechanical, and chemical stimuli, measures of hyperalgesia and allodynia, and models of inflammatory or neuropathic pain constitute a toolbox available to researchers. These tests and models have allowed rapid progress on the anatomo-molecular basis of physiological and pathological pain, even though they have yet to translate into new analgesic drugs. More recently, a growing effort has been put into assessing pain itself in rats and mice, rather than nociceptive reflexes, and into studying complex states affected by chronic pain. This helps to further improve the translational value of preclinical research in a field with balanced research efforts between fundamental research, preclinical work, and human studies. This review describes classical tests and models of nociception and pain in rodents. It also presents some recent and ongoing developments in nociceptive tests and recent trends for pain evaluation, and raises the question of how well tests, models, and procedures are matched to each other.

  8. Testing and Validation of the Dynamic Inertia Measurement Method

    Science.gov (United States)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  9. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments; sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that result from unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.

  10. MEASURABILITY OF ORAL SPEECH SAMPLE AS A TEST QUALITY

    Directory of Open Access Journals (Sweden)

    Olena Petrashchuk

    2011-03-01

    The article deals with the problem of the measurability of an oral speech sample as a test quality. Provision of this quality is required for the reliable assessment of speaking skills. The main focus is on the specific nature of the speaking skill, including its mental, communication and social aspects. Assessment of speaking skills is analyzed through the prism of the descriptors of the rating scales proposed in ICAO documents. The oral proficiency interview method is applied to obtain an oral speech sample measurable against the scales. The measurability of the oral speech sample is considered a Speaking Test quality alongside other test qualities such as validity and reliability. Keywords: aviation English language proficiency, ICAO rating scale, measurability of oral speech performance, oral speech sample, speaking skill.

  11. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.

    2002-07-01

    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with the movement of a sand bank or with general subsidence. The scour protection in the tests is made of stone material. Two different fractions were used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy was selected because the scour hole often starts to develop immediately after the structure has been placed; it is therefore difficult to establish scour protection on the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure were used for the tests, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions emerged from the model tests with a flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on a sand bed was 1.6 times the cylinder diameter. 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees. 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed: in the present tests virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  12. Overview of the Standard Model Measurements with the ATLAS Detector

    CERN Document Server

    Liu, Yanwen; The ATLAS collaboration

    2017-01-01

    The ATLAS Collaboration is engaged in precision measurements of fundamental Standard Model parameters, such as the W boson mass, the weak mixing angle and the strong coupling constant. In addition, the production cross-sections of a large variety of final states involving high-energy jets, photons, and single and multiple vector bosons are measured multi-differentially at several center-of-mass energies. This allows perturbative QCD calculations to be tested to the highest precision. These measurements also allow tests of models beyond the SM, e.g. those leading to anomalous gauge couplings. In this talk, we give a broad overview of the Standard Model measurement campaign of the ATLAS Collaboration, with selected topics discussed in more detail.

  13. Model of ASTM Flammability Test in Microgravity: Iron Rods

    Science.gov (United States)

    Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)

    2000-01-01

    There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by conducting tests using the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test have been reduced to fundamental kinetic and dynamic system parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod and is detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations, which are solved using MATLAB. This analysis confirms that a rate that is first order in oxygen concentration is consistent for the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent with this model.
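
    The 17-equation MATLAB model itself is not available in the record; the sketch below only illustrates the reported first-order dependence on oxygen concentration, using an assumed rate constant and a single ODE in place of the full coupled system.

    ```python
    from scipy.integrate import solve_ivp

    # Single-equation stand-in for the well-mixed, semi-batch reactor idea:
    # d[O2]/dt = -k * [O2], i.e. a consumption rate first order in oxygen.
    k = 1.5                      # 1/s, effective rate constant (assumed)
    c0 = [8.4]                   # mol/m^3, initial oxygen concentration (assumed)

    sol = solve_ivp(lambda t, c: -k * c, (0.0, 2.0), c0, max_step=0.01)
    print(f"O2 remaining after the 2.0 s test: {sol.y[0, -1]:.2f} mol/m^3")
    ```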

  14. Precision electroweak tests of the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Renton, Peter B. [Denys Wilkinson Building, Oxford (United Kingdom)]. E-mail: p.renton1@physics.ox.ac.uk

    2002-09-01

    The present status of precision electroweak data is reviewed. These data include measurements of e⁺e⁻ → ff̄, taken at the Z resonance at LEP, which are used to determine the mass and width of the Z boson. In addition, measurements have also been made of the forward-backward asymmetries for leptons and heavy quarks, and also the final-state polarization of the τ lepton. At SLAC, where the electron beam was polarized, measurements were made of the left-right polarized asymmetry, A_LR, and the left-right forward-backward asymmetries for b- and c-quarks. The mass, m_W, and width, Γ_W, of the W boson have been measured at the Tevatron and at LEP, and the mass of the top quark, m_t, has been measured at the Tevatron. These data, plus other electroweak data, are used in global electroweak fits in which various Standard Model (SM) parameters are determined. A comparison is made between the results of the direct measurements of m_W and m_t and the indirect results coming from electroweak radiative corrections. Using all precision electroweak data, fits are also made to determine limits on the mass of the Higgs boson, m_H. The influence on these limits of specific measurements, particularly those which are somewhat inconsistent with the SM, is explored. The data are also analysed in terms of the quasi-model-independent ε variables. Finally, the impact on the electroweak fits of the improvements in the determination of the W-boson and top-quark masses expected from the Tevatron Run 2 is examined.

  15. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
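
    As a minimal illustration of the Thurstonian link, here only for the 2-AFC protocol where P(correct) = Φ(δ/√2), the sketch below recovers δ from an invented count of correct answers. The triangle, duo-trio and 3-AFC protocols have different psychometric functions, which is what the GLM formulation in the paper handles in a unified way.

    ```python
    from scipy.stats import norm

    correct, total = 34, 50                      # invented 2-AFC results
    p_correct = correct / total                  # must exceed 0.5 for a positive delta
    delta_hat = 2 ** 0.5 * norm.ppf(p_correct)   # invert P(correct) = Phi(delta / sqrt(2))
    print(f"estimated sensory difference d' = {delta_hat:.2f}")
    ```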

  16. 36Cl bomb peak: comparison of modeled and measured data

    Directory of Open Access Journals (Sweden)

    A. Eichler

    2009-06-01

    The extensive nuclear bomb testing of the fifties and sixties, and the final tests in the seventies, caused a strong 36Cl peak that has been observed in ice cores world-wide. The measured 36Cl deposition fluxes in eight ice cores (Dye3, Fiescherhorn, Grenzgletscher, Guliya, Huascarán, North GRIP, Inylchek (Tien Shan) and Berkner Island) were compared with an ECHAM5-HAM general circulation model simulation (1952–1972). We find a good agreement between the measured and the modeled 36Cl fluxes assuming that the bomb-test-produced global 36Cl input was ~80 kg. The model simulation indicates that the fallout of the bomb-test-produced 36Cl is largest in the subtropics and mid-latitudes due to the strong stratosphere-troposphere exchange. In Greenland the 36Cl bomb signal is quite large due to the relatively high precipitation rate. In Antarctica the 36Cl bomb peak is small but is visible even in the driest areas. The model suggests that the large bomb tests in the Northern Hemisphere are visible around the globe, but the later (end of the sixties and early seventies) smaller tests in the Southern Hemisphere are much less visible in the Northern Hemisphere. The question of how rapidly and to what extent the bomb-produced 36Cl is mixed between the hemispheres depends on the season of the bomb test. The model results give an estimate of the amplitude of the bomb peak around the globe.

  18. Test method to measure resistance towards fragmentation by studded tires

    Science.gov (United States)

    Viman, L.

    1995-01-01

    In the Nordic countries most cars are equipped with studded tires during the winter. The studs increase the wear on road surfaces, and a special test method has been developed to measure an aggregate's resistance to fragmentation by studded tires. The method has proven to correlate very well with the actual wear on the road surfaces. The Nordic countries Sweden, Norway and Finland have agreed to propose this method as a European Standard test method; one reason is to be able to set requirements on aggregates to be used in countries where studded tires are allowed. A cross-testing project on this Nordic abrasion test for studded tires was set up in order to determine the repeatability and reproducibility of the method proposed to become a European Standard.

  19. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…
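
    Once a thrust-versus-time log has been captured with the force probe, the headline engine figures follow from simple numerical integration; the sketch below uses invented data points.

    ```python
    import numpy as np

    # Invented force-probe log: time (s) and thrust (N).
    time_s = np.array([0.00, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 1.00, 1.20])
    thrust_n = np.array([0.0, 4.5, 9.8, 7.2, 6.0, 5.8, 5.5, 2.0, 0.0])

    # Trapezoidal-rule area under the thrust curve gives the total impulse (N*s).
    total_impulse = np.sum(0.5 * (thrust_n[1:] + thrust_n[:-1]) * np.diff(time_s))
    burn_time = time_s[-1] - time_s[0]
    print(f"total impulse  {total_impulse:.2f} N*s")
    print(f"peak thrust    {thrust_n.max():.2f} N")
    print(f"average thrust {total_impulse / burn_time:.2f} N")
    ```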

  1. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
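
    The mechanics of the comparison described above can be sketched with SciPy's two-sample Kolmogorov–Smirnov test; the arrays below are invented stand-ins for the 15 observed quantities and one candidate model's predictions.

    ```python
    import numpy as np
    from scipy import stats

    # Invented example values, purely to show the mechanics of the test.
    observed = np.array([0.80, 1.10, 0.90, 1.30, 0.70, 1.00, 1.20, 0.95,
                         1.05, 0.85, 1.15, 0.90, 1.00, 1.10, 0.75])
    predicted = np.array([1.40, 1.60, 1.30, 1.50, 1.70, 1.45, 1.55, 1.35,
                          1.65, 1.50, 1.40, 1.60, 1.30, 1.55, 1.45])

    res = stats.ks_2samp(observed, predicted)
    reject = res.pvalue < 0.05   # model inconsistent with the observations at alpha = 0.05
    print(f"KS statistic = {res.statistic:.3f}, p = {res.pvalue:.4f}, reject: {reject}")
    ```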

  2. Testing the Model of Oscillating Magnetic Traps

    Science.gov (United States)

    Szaforz, Ż.; Tomczak, M.

    2015-01-01

    The aim of this paper is to test the model of oscillating magnetic traps (the OMT model) proposed by Jakimiec and Tomczak (Solar Phys. 261, 233, 2010). This model describes the process of excitation of quasi-periodic pulsations (QPPs) observed during solar flares. In the OMT model energetic electrons are accelerated within a triangular, cusp-like structure situated between the reconnection point and the top of a flare loop as seen in soft X-rays. We analyzed QPPs in hard X-ray light curves for 23 flares as observed by Yohkoh. Three independent methods were used. We also used hard X-ray images to localize magnetic traps and soft X-ray images to diagnose thermal plasmas inside the traps. We found that the majority of the observed pulsation periods correlates with the diameters of oscillating magnetic traps, as predicted by the OMT model. We also found that the electron number density of the plasma inside the magnetic traps at the time of pulsation disappearance is strongly connected with the pulsation period. We conclude that the observations are consistent with the predictions of the OMT model for the analyzed set of flares.

  3. Model updating of nonlinear structures from measured FRFs

    Science.gov (United States)

    Canbaloğlu, Güvenç; Özgüven, H. Nevzat

    2016-12-01

    There are always certain discrepancies between the modal and response data of a structure obtained from its mathematical model and those measured experimentally. Therefore it is general practice to update the theoretical model using experimental measurements in order to have a more accurate model. Most of the model updating methods used in structural dynamics are for linear systems. However, in real-life applications most structures have nonlinearities, which restrict us from applying model updating techniques available for linear structures unless the structures work in their linear range. Well-established frequency response function (FRF) based model updating methods could easily be extended to a nonlinear system if the FRFs of the underlying linear system (linear FRFs) could be measured experimentally. When a frictional type of nonlinearity co-exists with other types of nonlinearities, it is not possible to obtain linear FRFs experimentally by using low-level forcing. In this study a method (named the Pseudo Receptance Difference (PRD) method) is presented to obtain the linear FRFs of a nonlinear structure having multiple nonlinearities, including friction-type nonlinearity. The PRD method calculates the linear FRFs of a nonlinear structure by using FRFs measured at various forcing levels, and simultaneously identifies all nonlinearities in the system. Then, any model updating method can be used to update the linear part of the mathematical model. In the present work, the PRD method is used to predict the linear FRFs from measured nonlinear FRFs, and the inverse eigensensitivity method is employed to update the linear finite element (FE) model of the nonlinear structure. The proposed method is validated with different case studies using a nonlinear lumped single-degree-of-freedom system, as well as a continuous system. Finally, a real nonlinear T-beam test structure is used to show the application and the accuracy of the proposed method. The accuracy of the updated nonlinear model of the

  4. Testing limits to airflow perturbation device (APD) measurements

    Directory of Open Access Journals (Sweden)

    Jamshidi Shaya

    2008-10-01

    Full Text Available Abstract Background The Airflow Perturbation Device (APD) is a lightweight, portable device that can be used to measure total respiratory resistance as well as inhalation and exhalation resistances. There is a need to determine limits to the accuracy of APD measurements for different conditions likely to occur: leaks around the mouthpiece, use of an oronasal mask, and the addition of resistance in the respiratory system. Also, there is a need for resistance measurements in patients who are ventilated. Method Ten subjects between the ages of 18 and 35 were tested for each station in the experiment. The first station involved testing the effects of leaks of known sizes on APD measurements. The second station tested the use of an oronasal mask used in conjunction with the APD during nose and mouth breathing. The third station tested the effects of two different resistances added in series with the APD mouthpiece. The fourth station tested the usage of a flexible ventilator tube in conjunction with the APD. Results All leaks reduced APD resistance measurement values. Leaks represented by two 3.2 mm diameter tubes reduced measured resistance by about 10% (4.2 cmH2O·sec/L for control and 3.9 cmH2O·sec/L for the leak). This was not statistically significant. Larger leaks given by 4.8 and 6.4 mm tubes reduced measurements significantly (3.4 and 3.0 cmH2O·sec/L, respectively). Mouth resistance measured with a cardboard mouthpiece gave an APD measurement of 4.2 cmH2O·sec/L and mouth resistance measured with an oronasal mask was 4.5 cmH2O·sec/L; the two were not significantly different. Nose resistance measured with the oronasal mask was 7.6 cmH2O·sec/L. Adding airflow resistances of 1.12 and 2.10 cmH2O·sec/L to the breathing circuit between the mouth and APD yielded respiratory resistance values higher than the control by 0.7 and 2.0 cmH2O·sec/L. Although breathing through a 52 cm length of flexible ventilator tubing reduced the APD

  5. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
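
    The abstract above rests on a Yule–Walker fit of an autoregressive (Gauss–Markov) position model; the sketch below estimates AR(p) coefficients from a one-dimensional toy displacement trace and predicts the next position. The trace, the order p = 3, and the noise parameters are assumptions for illustration and are unrelated to the paper's vehicle datasets.

      # Fit an AR(p) model via the Yule-Walker equations and predict the next
      # position from the past p displacements (1-D toy trace; illustrative).
      import numpy as np
      from scipy.linalg import toeplitz

      def yule_walker(x, p):
          """Estimate AR(p) coefficients of a zero-mean series via Yule-Walker."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
          return np.linalg.solve(toeplitz(r[:p]), r[1:p + 1])

      rng = np.random.default_rng(1)
      steps = np.empty(300)                      # per-epoch displacements (toy AR(1))
      steps[0] = 0.5
      for k in range(1, steps.size):
          steps[k] = 0.5 + 0.7 * (steps[k - 1] - 0.5) + rng.normal(0.0, 0.1)
      positions = np.cumsum(steps)

      p = 3
      mean_step = steps.mean()
      phi = yule_walker(steps - mean_step, p)    # AR coefficients of the increments
      recent = (steps - mean_step)[-p:][::-1]    # most recent increment first
      next_step = mean_step + float(np.dot(phi, recent))
      print(f"AR({p}) coefficients: {np.round(phi, 3)}")
      print(f"predicted next position: {positions[-1] + next_step:.2f}")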

  6. Discrete Spectral Local Measurement Method for Testing Solar Concentrators

    Directory of Open Access Journals (Sweden)

    Huifu Zhao

    2012-01-01

    Full Text Available In order to compensate for the inconvenience and instability of outdoor photovoltaic concentration test systems, which are caused by weather changes, we design an indoor concentration test system with a large aperture and a high degree of parallelism, and then verify its feasibility and scientific validity. Furthermore, we propose a new concentration test method: the discrete spectral local measurement method. A two-stage Fresnel concentration system is selected as the test object. The indoor and outdoor concentration experiments are compared. The results show that the outdoor concentration efficiency of the two-stage Fresnel concentration system is 85.56%, while the indoor value is 85.45%. The two experimental results are close enough to verify the scientific validity and feasibility of the indoor concentration test system. The light divergence angle of the indoor concentration test system is 0.267°, which also matches the divergence angle of sunlight. The indoor concentration test system, with its large diameter (145 mm), simple structure, and low cost, will have broad applications in the solar concentration field.

  7. Standard test method for measurement of fatigue crack growth rates

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 This test method covers the determination of fatigue crack growth rates from near-threshold to Kmax controlled instability. Results are expressed in terms of the crack-tip stress-intensity factor range (ΔK), defined by the theory of linear elasticity. 1.2 Several different test procedures are provided, the optimum test procedure being primarily dependent on the magnitude of the fatigue crack growth rate to be measured. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength so long as specimens are of sufficient thickness to preclude buckling and of sufficient planar size to remain predominantly elastic during testing. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size is variable to be adjusted for yield strength and applied force. Specimen thickness may be varied independent of planar size. 1.5 The details of the various specimens and test configurations are shown in Annex A1-Annex A3. Specimen configurations other than t...
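
    The growth rates produced by this kind of test are commonly summarized with a Paris-law fit, da/dN = C(ΔK)^m; the sketch below performs such a fit by linear regression in log-log space. The ΔK and da/dN values are invented for illustration, and the fitting step itself is not part of the standard.

      # Paris-law fit da/dN = C * (dK)^m from fatigue crack growth data,
      # by linear regression in log-log space (data are illustrative only).
      import numpy as np

      delta_K = np.array([8.0, 10.0, 12.0, 15.0, 20.0, 25.0])        # MPa*sqrt(m)
      da_dN = np.array([2e-9, 5e-9, 1.1e-8, 2.5e-8, 7e-8, 1.6e-7])   # m/cycle

      m, log_C = np.polyfit(np.log10(delta_K), np.log10(da_dN), 1)
      C = 10.0 ** log_C
      print(f"Paris exponent m = {m:.2f}, coefficient C = {C:.2e}")
      print(f"predicted da/dN at dK = 18 MPa*sqrt(m): {C * 18.0 ** m:.2e} m/cycle")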

  8. Establishing an infrared measurement and modelling capability

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-04-01

    Full Text Available is supplemented with a considerable body of self-study material, tutorial assignments and laboratory demonstrations. A series of six significant experiments was used to demonstrate, in a practical manner, some future test scenarios and to reinforce... The fourth experiment investigated the applicability of the three imaging cameras' spectral ranges for temperature measurement of objects in the open sunlight. A secondary objective was to determine the true temperature and emissivity of the test targets...

  9. Using measurements for evaluation of black carbon modeling

    Directory of Open Access Journals (Sweden)

    S. Gilardoni

    2010-04-01

    probability distribution (PD) curves. Simple monthly median comparisons, the Student's t-test, and the Mann-Whitney test are discussed as alternative statistical tools to evaluate the model performance. The agreement measured by the Student's t-test, when applied to the logarithm of EBC concentrations, overestimates the higher PD agreements and underestimates the lower PD agreements; the Mann-Whitney test can be employed to evaluate model performance on a relative scale when the shapes of the model and experimental distributions are similar.
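
    A minimal sketch of the two statistical comparisons named above, a Student's t-test on log-transformed concentrations and a Mann-Whitney test on the raw values, is given below; the modeled and measured concentration samples are randomly generated stand-ins, not the study's EBC data.

      # Compare modeled and measured EBC concentrations with the Student's
      # t-test (on log values) and the Mann-Whitney test (illustrative data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      measured = rng.lognormal(mean=0.0, sigma=0.6, size=90)   # ug/m3, hypothetical
      modeled = rng.lognormal(mean=0.2, sigma=0.5, size=90)    # ug/m3, hypothetical

      t_stat, t_p = stats.ttest_ind(np.log(measured), np.log(modeled), equal_var=False)
      u_stat, u_p = stats.mannwhitneyu(measured, modeled, alternative="two-sided")

      print(f"t-test on log concentrations: t = {t_stat:.2f}, p = {t_p:.3f}")
      print(f"Mann-Whitney: U = {u_stat:.0f}, p = {u_p:.3f}")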

  11. Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data

    Science.gov (United States)

    Lung, Shun-fat; Pak, Chan-gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  12. Experimental temperature measurements for the energy amplifier test

    Energy Technology Data Exchange (ETDEWEB)

    Calero, J. [Centro de Estudios y Experimentacion de Obras Publicas (CEDEX), Madrid (Spain); Cennini, P. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland); Gallego, E. [Universidad Politecnica de Madrid (UPM), E-28040 Madrid (Spain); Galvez, J. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland)]|[Universidad Autonoma de Madrid (UAM), E-28049 Madrid (Spain); Garcia Tabares, L. [Centro de Estudios y Experimentacion de Obras Publicas (CEDEX), Madrid (Spain); Gonzalez, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), E-28040 Madrid (Spain); Jaren, J. [Universidad Autonoma de Madrid (UAM), E-28049 Madrid (Spain); Lopez, C. [Universidad Autonoma de Madrid (UAM), E-28049 Madrid (Spain); Lorente, A. [Universidad Politecnica de Madrid (UPM), E-28040 Madrid (Spain); Martinez Val, J.M. [Universidad Politecnica de Madrid (UPM), E-28040 Madrid (Spain); Oropesa, J. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland); Rubbia, C. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland); Rubio, J.A. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland)]|[Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), E-28040 Madrid (Spain); Saldana, F. [European Laboratory for Particle Physics, CH-1211 Geneva 23 (Switzerland); Tamarit, J. [Centro de Estudios y Experimentacion de Obras Publicas (CEDEX), Madrid (Spain); Vieira, S. [Universidad Autonoma de Madrid (UAM), E-28049 Madrid (Spain)

    1996-06-21

    A uranium thermometer has been designed and built in order to make local power measurements in the first energy amplifier test (FEAT). Due to the experimental conditions power measurements of tens to hundreds of nW were required, implying a sensitivity in the temperature change measurements of the order of 1 mK. A uranium thermometer accurate enough to match that sensitivity has been built. The thermometer is able to determine the absolute energetic gain obtained in a tiny subcritical uranium assembly exposed to a proton beam of kinetic energies between 600 MeV and 2.75 GeV. In addition, the thermometer measurements have provided information about the spatial power distribution and the shape of the neutron spallation cascade. (orig.).

  13. Standards for measurements and testing of wind turbine power quality

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, P. [Risoe National Lab., Roskilde (Denmark); Gerdes, G.; Klosse, R.; Santjer, F. [DEWI, Wilhelmshaven (Germany); Robertson, N.; Davy, W. [NEL, Glasgow (United Kingdom); Koulouvari, M.; Morfiadakis, E. [CRES, Pikermi (Greece); Larsson, Aa. [Chalmers Univ. of Technology, Goeteborg (Sweden)

    1999-03-01

    The present paper describes the work done in power quality sub-task of the project `European Wind Turbine Testing Procedure Developments` funded by the EU SMT program. The objective of the power quality sub-task has been to make analyses and new recommendation(s) for the standardisation of measurement and verification of wind turbine power quality. The work has been organised in three major activities. The first activity has been to propose measurement procedures and to verify existing and new measurement procedures. This activity has also involved a comparison of the measurements and data processing of the participating partners. The second activity has been to investigate the influence of terrain, grid properties and wind farm summation on the power quality of wind turbines with constant rotor speed. The third activity has been to investigate the influence of terrain, grid properties and wind farm summation on the power quality of wind turbines with variable rotor speed. (au)

  14. On Geometric Parameters in Uncertainty Analysis of Measurement in Ship Model Test

    Institute of Scientific and Technical Information of China (English)

    吴宝山

    2007-01-01

    A series of ITTC recommended procedures for uncertainty analysis in experimental fluid dynamics measurements has been put into effect since 1999, but many details of their application are still to be discussed or revised. The topic of this paper is one such detail. Non-dimensional formulae are usually used to express the measured hydrodynamic forces and moments in ship model tests, so that various geometric parameters are involved in the uncertainty analysis. Several examples are given to illustrate some confusion in the uncertainty analysis concerning these geometric parameters, and it is recommended that the uncertainty analysis be based more on physical and engineering considerations than on mathematical formulae. Since 1992, when the ITTC-QS working group wrote to the ITTC technical committees asking member organizations to carry out measurement uncertainty assessment in accordance with ANSI/ASME PTC 19.1, uncertainty analysis has been a topic of continuing interest. AIAA published a guide for uncertainty analysis of wind tunnel tests in 1999, and from 1999 onwards the ITTC has issued a series of recommended procedures for uncertainty analysis of ship model tests. In these procedures, the uncertainty components of the hydrodynamic coefficients caused by model manufacturing errors are generally analyzed through the (non-dimensional) expressions of the coefficients. Such an analysis is mathematically reasonable, but judged physically and from engineering experience some of it is quite unreasonable and can even be a pitfall of the analysis. This paper illustrates this with several examples and proposes that, for the geometric parameters introduced by non-dimensionalizing the hydrodynamic forces, the corresponding uncertainty components should not simply be evaluated from the adopted non-dimensional expressions but should be assessed on the basis of a proper hydrodynamic analysis.
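
    As a concrete illustration of how a geometric parameter enters such an analysis, the sketch below propagates standard uncertainties through a generic non-dimensional force coefficient C = F / (0.5 ρ V^2 L^2) at first order. The coefficient form, the input values, and the uncertainties are generic assumptions, not taken from the paper or from any ITTC procedure.

      # First-order uncertainty propagation for a non-dimensional force
      # coefficient C = F / (0.5 * rho * V**2 * L**2), showing how the model
      # length L enters the combined uncertainty (illustrative numbers only).
      import math

      F, u_F = 150.0, 1.5      # measured force [N] and its standard uncertainty
      rho = 1000.0             # water density [kg/m3], treated as exact here
      V, u_V = 2.0, 0.01       # carriage speed [m/s] and its uncertainty
      L, u_L = 4.0, 0.002      # model length [m] and its manufacturing tolerance

      C = F / (0.5 * rho * V**2 * L**2)
      rel_u = math.sqrt((u_F / F)**2 + (2 * u_V / V)**2 + (2 * u_L / L)**2)
      print(f"C = {C:.4e}, combined relative uncertainty = {100 * rel_u:.2f} %")
      print(f"contribution of L alone: {100 * 2 * u_L / L:.2f} %")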

  15. Testing turbulent closure models with convection simulations

    CERN Document Server

    Snellman, J E; Mantere, M J; Rheinhardt, M; Dintrans, B

    2012-01-01

    Aims: To compare simple analytical closure models of turbulent Boussinesq convection for stellar applications with direct three-dimensional simulations both in homogeneous and inhomogeneous (bounded) setups. Methods: We use simple analytical closure models to compute the fluxes of angular momentum and heat as a function of rotation rate measured by the Taylor number. We also investigate cases with varying angles between the angular velocity and gravity vectors, corresponding to locating the computational domain at different latitudes ranging from the pole to the equator of the star. We perform three-dimensional numerical simulations in the same parameter regimes for comparison. The free parameters appearing in the closure models are calibrated by two fit methods using simulation data. Unique determination of the closure parameters is possible only in the non-rotating case and when the system is placed at the pole. In the other cases the fit procedures yield somewhat differing results. The quality of the closu...

  16. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and has become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine risk and return. The original mean-variance model is used as a benchmark, and fuzzy mean-variance models, in which the returns are modeled by specific types of fuzzy numbers, are used for comparison. The fuzzy approach gives better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
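
    For readers unfamiliar with the classical building block being extended, the sketch below computes global minimum-variance portfolio weights w = Σ^-1 1 / (1' Σ^-1 1) from a covariance matrix for three hypothetical assets; it is the crisp Markowitz ingredient only and does not implement the paper's fuzzy formulation.

      # Classical Markowitz ingredient: global minimum-variance weights
      # w = S^-1 * 1 / (1' * S^-1 * 1) for covariance matrix S
      # (three hypothetical assets; not the paper's fuzzy model).
      import numpy as np

      cov = np.array([[0.040, 0.006, 0.012],
                      [0.006, 0.025, 0.008],
                      [0.012, 0.008, 0.050]])
      mu = np.array([0.08, 0.05, 0.10])        # expected returns (hypothetical)

      ones = np.ones(len(mu))
      inv_cov = np.linalg.inv(cov)
      w = inv_cov @ ones / (ones @ inv_cov @ ones)   # weights sum to 1

      print("weights:", np.round(w, 3))
      print(f"expected return: {w @ mu:.3%}, volatility: {np.sqrt(w @ cov @ w):.3%}")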

  17. Cosmic Bell Test: Measurement Settings from Milky Way Stars

    Science.gov (United States)

    Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David; Fink, Matthias; Leung, Calvin; Mark, Anthony; Nguyen, Hien T.; Sanders, Isabella; Steinlechner, Fabian; Ursin, Rupert; Wengerowsky, Sören; Guth, Alan H.; Kaiser, David I.; Scheidl, Thomas; Zeilinger, Anton

    2017-02-01

    Bell's theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. That conflict is expressed by Bell's inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this "freedom of choice" was addressed by ensuring that selection of measurement settings via conventional "quantum random number generators" was spacelike separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell's inequality that, for the first time, uses distant astronomical sources as "cosmic setting generators." In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon's color was set at emission, we observe statistically significant ≳7.31 σ and ≳11.93 σ violations of Bell's inequality with estimated p values of ≲1.8 × 10^-13 and ≲4.0 × 10^-33, respectively, thereby pushing back by ~600 years the most recent time by which any local-realist influences could have engineered the observed Bell violation.
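
    The Bell violation reported above is conventionally quantified through the CHSH parameter S; the sketch below assembles S from four measured correlation values and expresses the violation of the local-realist bound |S| ≤ 2 in standard deviations. The correlation values and the uncertainty are invented placeholders, not the experiment's results.

      # CHSH parameter S = E(a,b) - E(a,b') + E(a',b) + E(a',b') and the
      # significance of violating the local-realist bound |S| <= 2
      # (correlations and uncertainty are illustrative placeholders).
      E_ab, E_abp, E_apb, E_apbp = 0.70, -0.69, 0.71, 0.68
      sigma_S = 0.05                      # combined 1-sigma uncertainty on S

      S = E_ab - E_abp + E_apb + E_apbp
      n_sigma = (abs(S) - 2.0) / sigma_S
      print(f"S = {S:.2f}; exceeds the classical bound by {n_sigma:.1f} sigma")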

  18. Model Tests of Pile Defect Detection

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The pile is an important foundation type widely used in engineering practice. Defects of different types and damages of different degrees easily occur during pile construction, so detecting defects in piles is very important. There are still some difficult problems in pile defect detection. Based on stress wave theory, some of these typical difficult problems were studied through model tests. The test results are analyzed and some significant results for the low-strain method are obtained: when a pile has a gradually decreasing cross-section, the amplitude of the reflected signal originating from the defect depends on the rate of cross-section reduction β. No apparent signal reflected from the necking appears on the velocity response curve when the value of β is less than about 3.5%.

  19. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou

    1994-01-01

    The rational design diagram for Dolos armour should incorporate both the hydraulic stability and the structural integrity. The previous tests performed by Aalborg University (AU) made available such a design diagram for the trunk of a Dolos breakwater without superstructures (Burcharth et al. 1992). To extend the design diagram to cover Dolos breakwaters with a superstructure, 2-D model tests of a Dolos breakwater with a wave wall are included in the project Rubble Mound Breakwater Failure Modes sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92-0042. Furthermore, Task IA will give the design diagram for Tetrapod breakwaters without a superstructure. The more complete research results on Dolosse can certainly give some insight into the behaviour of the Tetrapod armour layer of breakwaters with a superstructure. The main part of the experiment

  20. Measured Test-Driven Development: Using Measures to Monitor and Control the Unit Development

    Directory of Open Access Journals (Sweden)

    Y. Dubinsky

    2007-01-01

    Full Text Available We analyze Test Driven Development (TDD) from cognitive and social perspectives. Based on our analysis, we suggest a technique for controlling and monitoring the TDD process by examining measures that relate to the size and complexity of both code and tests. We call this approach Measured TDD. The motivation for TDD arose from practitioners' tendency to rush into code production, skipping the testing required to manufacture quality products. The motivation for Measured TDD is based on difficulties encountered by practitioners in applying TDD, specifically the need to frequently refactor the unit after every few test and code steps have been performed. We found that the suggested technique enables developers to gain better control over the development process.

  1. Infrared Thermography for Temperature Measurement and Non-Destructive Testing

    Science.gov (United States)

    Usamentiaga, Rubèn; Venegas, Pablo; Guerediaga, Jon; Vega, Laura; Molleda, Julio; Bulnes, Francisco G.

    2014-01-01

    The intensity of the infrared radiation emitted by objects is mainly a function of their temperature. In infrared thermography, this feature is used for multiple purposes: as a health indicator in medical applications, as a sign of malfunction in mechanical and electrical maintenance or as an indicator of heat loss in buildings. This paper presents a review of infrared thermography especially focused on two applications: temperature measurement and non-destructive testing, two of the main fields where infrared thermography-based sensors are used. A general introduction to infrared thermography and the common procedures for temperature measurement and non-destructive testing are presented. Furthermore, developments in these fields and recent advances are reviewed. PMID:25014096
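
    The statement above that the emitted infrared radiation is mainly a function of temperature is often reduced, for an opaque grey surface, to the Stefan-Boltzmann relation M = εσT^4; the sketch below inverts it to recover a radiometric temperature. The exitance value and the emissivities are assumed for illustration, and this is not the calibration procedure of any particular camera.

      # Radiometric temperature from total radiant exitance M = eps * sigma * T^4
      # for an opaque grey surface (exitance and emissivity values are assumed).
      SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]

      def radiometric_temperature(exitance_w_m2, emissivity):
          """Invert M = eps * sigma * T**4 for the surface temperature in kelvin."""
          return (exitance_w_m2 / (emissivity * SIGMA)) ** 0.25

      M = 450.0                   # measured radiant exitance [W/m^2], hypothetical
      for eps in (0.95, 0.80):
          print(f"emissivity {eps:.2f}: T = {radiometric_temperature(M, eps):.1f} K")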

  2. Friction measurement in MEMS using a new test structure

    Energy Technology Data Exchange (ETDEWEB)

    Crozier, B.T.; De Boer, M.P.; Redmond, J.M.; Bahr, D.F.; Michalske, T.A.

    1999-12-09

    A MEMS test structure capable of measuring friction between polysilicon surfaces under a variety of test conditions has been refined from previous designs. The device is applied here to measuring friction coefficients of polysilicon surfaces under different environmental, loading, and surface conditions. Two methods for qualitatively comparing friction coefficients ({mu}) using the device are presented. Samples that have been coated with a self-assembled monolayer of the lubricating film perfluorinated decyltrichlorosilane (PFTS) have a coefficient of friction that is approximately one-half that of samples dried using super-critical CO{sub 2} (SCCO{sub 2}) drying. Qualitative results indicate that {mu} is independent of normal pressure. Wear is shown to increase {mu} for both supercritically dried samples and PFTS-coated samples, though the mechanisms appear to be different. Supercritically dried surfaces appear to degrade continuously with increased wear cycles, while PFTS-coated samples reach a steady-state friction value after about 10{sup 4} cycles.

  3. Spectroscopy Measurements on Ablation Testing in High Enthalpy Plasma Flows

    Science.gov (United States)

    2010-11-01

    ... stagnation point, are located on the ablative material sample. (3.5 InfraRed Thermography) Surface temperature measurement is a topic of great concern ... high temperature material at two different narrow wavelengths. The temperature is calculated by building the ratio of the radiation intensities. ... The aim of this work is to develop the capability of testing and characterization of ablative materials exposed to high enthalpy plasma flows, including both

  4. Skin test reactivity among Danish children measured 15 years apart

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Porsbjerg, C;

    2006-01-01

    To investigate whether the prevalence of skin prick test (SPT) positivity in Danish children has changed from 1986 to 2001. METHODS: Serial cross-sectional studies of two different random population samples of children aged 7 to 17 years, living in urban Copenhagen, Denmark, were performed 15 years apart. The first cohort was investigated in 1986 (n = 527) and the second in 2001 (n = 480). Skin test reactivity to nine common aeroallergens was measured on both occasions. RESULTS: The prevalence of positive SPT to at least one allergen decreased from 24.1% in 1986 to 18.9% in 2001 (p = 0.05). We found a declining prevalence of sensitization to most allergens tested, statistically significant, however, only for mugwort and Alternaria iridis. Among subjects who were sensitized to only one allergen, we found significantly fewer individuals with reactions to D. pteronyssinus and mugwort. CONCLUSIONS: The prevalence of atopic sensitization...

  5. Model-independent tests of cosmic gravity.

    Science.gov (United States)

    Linder, Eric V

    2011-12-28

    Gravitation governs the expansion and fate of the universe, and the growth of large-scale structure within it, but has not been tested in detail on these cosmic scales. The observed acceleration of the expansion may provide signs of gravitational laws beyond general relativity (GR). Since the form of any such extension is not clear, from either theory or data, we adopt a model-independent approach to parametrizing deviations to the Einstein framework. We explore the phase space dynamics of two key post-GR functions and derive a classification scheme, and an absolute criterion on accuracy necessary for distinguishing classes of gravity models. Future surveys will be able to constrain the post-GR functions' amplitudes and forms to the required precision, and hence reveal new aspects of gravitation.

  6. AULA virtual reality test as an attention measure: convergent validity with Conners' Continuous Performance Test.

    Science.gov (United States)

    Díaz-Orueta, Unai; Garcia-López, Cristina; Crespo-Eguílaz, Nerea; Sánchez-Carpintero, Rocío; Climent, Gema; Narbona, Juan

    2014-01-01

    The majority of neuropsychological tests used to evaluate attention processes in children lack ecological validity. The AULA Nesplora (AULA) is a continuous performance test, developed in a virtual setting, very similar to a school classroom. The aim of the present study is to analyze the convergent validity between the AULA and the Continuous Performance Test (CPT) of Conners. The AULA and CPT were administered correlatively to 57 children, aged 6-16 years (26.3% female) with average cognitive ability (IQ mean = 100.56, SD = 10.38) who had a diagnosis of attention deficit/hyperactivity disorder (ADHD) according to DSM-IV-TR criteria. Spearman correlations analyses were conducted among the different variables. Significant correlations were observed between both tests in all the analyzed variables (omissions, commissions, reaction time, and variability of reaction time), including for those measures of the AULA based on different sensorial modalities, presentation of distractors, and task paradigms. Hence, convergent validity between both tests was confirmed. Moreover, the AULA showed differences by gender and correlation to Perceptual Reasoning and Working Memory indexes of the WISC-IV, supporting the relevance of IQ measures in the understanding of cognitive performance in ADHD. In addition, the AULA (but not Conners' CPT) was able to differentiate between ADHD children with and without pharmacological treatment for a wide range of measures related to inattention, impulsivity, processing speed, motor activity, and quality of attention focus. Additional measures and advantages of the AULA versus Conners' CPT are discussed.
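
    A minimal sketch of the convergent-validity analysis described above, a Spearman rank correlation between analogous variables of the two tests (here omission counts), is given below; the scores are randomly generated placeholders, not the study's data.

      # Spearman rank correlation between analogous variables of two
      # attention tests, e.g. omission counts (scores are invented).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      aula_omissions = rng.poisson(8, size=57)
      cpt_omissions = aula_omissions + rng.integers(-3, 4, size=57)

      rho, p = stats.spearmanr(aula_omissions, cpt_omissions)
      print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")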

  7. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deeper analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be described mathematically. It should then be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that this area has already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore it would be appropriate to add new measures of quality.

  8. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  9. Field Measurements and Pullout Tests of Reinforced Earth Retaining Wall

    Institute of Scientific and Technical Information of China (English)

    陈群; 何昌荣; 朱分清

    2004-01-01

    In this paper, field measurements and pullout tests of a new type of reinforced earth retaining wall, which is reinforced by trapezoid concrete blocks connected by steel bar, are described. Field measurements included settlements of the earth fill, tensile forces in the ties and earth pressures on the facing panels during the construction and at completion. Based on the measurements, the following statements can be made: ( 1 ) the tensile forces in the ties increased with the height of backfill above the tie and there is a tensile force crest in most ties; (2) at completion, the measured earth pressures along the wall face were between the values of the active earth pressures and the pressures at rest; (3) larger settlements occurred near the face of the wall where a zone of drainage sand and gravel was not compacted properly and smaller settlements occurred in the well-compacted backfill. The results of field pullout tests indicated that the magnitudes of pullout resistances as well as tensile forces induced in the ties were strongly influenced by the relative displacements between the ties and the backfill, and pullout resistances increased with the height of backfill above the ties and the length of ties.

  10. Evaluation of automated blood pressure measurements during exercise testing.

    Science.gov (United States)

    Hossack, K F; Gross, B W; Ritterman, J B; Kusumi, F; Bruce, R A

    1982-11-01

    Measurements of systolic (SBP) and diastolic (DBP) blood pressure were made at rest and during symptom-limited exercise with an automated blood pressure measuring device (EBPM). Comparisons were made between the EBPM readings and those made with mercury manometer. Correlations were high (SBP r = 0.92, DBP r = 0.80) when readings were made in the same arm, but were less satisfactory when the cuffs were on different arms (SBP r = 0.80, DBP r = 0.46). The correlation between two mercury manometer readings was SBP r = 0.90, and DBP r = 0.75. Comparison between EBPM and intra-arterial measurements were similar (SBP r = 0.74, DBP r = 0.79) to comparison between mercury manometer and intra-arterial measurements (SBP r = 0.81, DBP r = 0.61). The EBPM detected SBP at consistently higher levels than did physicians, which may be an advantage in the noisy environment of an exercise test. There was a definite tendency for physicians to record blood pressure to the nearest 10 mm Hg, whereas the frequency distribution curve for EBPM measurements was smoother. The EBPM operated satisfactorily at rest and during maximal exercise and gave as reliable measurements as a physician using a mercury manometer and, in the small number of available cases, detected exertional hypotension more often than the physician.

  11. Dogmatic behavior among students: testing a new measure of dogmatism.

    Science.gov (United States)

    Altemeyer, Bob

    2002-12-01

    The study tested the validity of a new measure of dogmatism by examining university students' evaluations of the Bible. Those who believed that every word in the Bible came directly from God and that the Bible is free of any error, contradiction, or inconsistency scored much higher on this dogmatism measure than students who thought otherwise. Such "true believers" then read the 4 highly varying Gospel accounts of the resurrection of Jesus. The most dogmatic of them still insisted there were no contradictions or inconsistencies in the Bible. The less dogmatic acknowledged that contradictions and inconsistencies exist. These results reinforce those of 4 earlier studies that indicated that the new measure of dogmatism has empirical validity.

  12. Testing THEMIS wave measurements against the cold plasma theory

    Science.gov (United States)

    Taubenschuss, Ulrich; Santolik, Ondrej; Le Contel, Olivier; Bonnell, John

    2016-04-01

    The THEMIS (Time History of Events and Macroscale Interactions during Substorms) mission records a multitude of electromagnetic waves inside Earth's magnetosphere and provides data in the form of high-resolution electric and magnetic waveforms. We use multi-component measurements of whistler mode waves and test them against the theory of wave propagation in a cold plasma. The measured ratio cB/E (c is speed of light in vacuum, B is magnetic wave amplitude, E is electric wave amplitude) is compared to the same quantity calculated from cold plasma theory over linearized Faraday's law. The aim of this study is to get estimates for measurement uncertainties, especially with regard to the electric field and the cold plasma density, as well as evaluating the validity of cold plasma theory inside Earth's radiation belts.
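
    For field-aligned whistler-mode waves, the cold plasma comparison described above reduces to checking that the measured cB/E equals the predicted refractive index n, with n^2 = 1 + ωpe^2 / (ω(ωce - ω)) for ω < ωce. The sketch below evaluates that prediction for an assumed plasma density, magnetic field strength, and wave frequency; the values are illustrative, not THEMIS measurements.

      # For field-aligned whistler-mode waves, cold plasma theory gives
      # cB/E = n with n^2 = 1 + wpe^2 / (w * (wce - w)) for w < wce.
      # Density, field strength, and wave frequency below are assumed values.
      import math

      E_CHARGE, E_MASS, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12

      n_e = 10.0e6            # electron density [m^-3] (10 cm^-3, assumed)
      B0 = 200e-9             # background magnetic field [T], assumed
      f_wave = 2000.0         # wave frequency [Hz], assumed

      wpe = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))   # plasma frequency [rad/s]
      wce = E_CHARGE * B0 / E_MASS                           # gyrofrequency [rad/s]
      w = 2 * math.pi * f_wave

      n_refr = math.sqrt(1 + wpe**2 / (w * (wce - w)))
      print(f"fce = {wce / (2 * math.pi):.0f} Hz, predicted cB/E = {n_refr:.1f}")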

  13. A measurement model of multiple intelligence profiles of management graduates

    Science.gov (United States)

    Krishnan, Heamalatha; Awang, Siti Rahmah

    2017-05-01

    In this study, the main interest is to develop a well-fitting measurement model and to identify the best-fitting items to represent Howard Gardner's nine intelligences, namely musical intelligence, bodily-kinaesthetic intelligence, mathematical/logical intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence, intrapersonal intelligence, naturalist intelligence and spiritual intelligence, in order to enhance the employability opportunities of management graduates. To develop the measurement model, Structural Equation Modeling (SEM) was applied. A psychometric test, the Ability Test in Employment (ATIEm), was used as the instrument to measure the existence of the nine types of intelligence in 137 University Teknikal Malaysia Melaka (UTeM) management graduates for job placement purposes. The initial measurement model contains nine unobserved variables, each measured by ten observed variables. Finally, a modified measurement model with improved fit (normed chi-square (NC) = 1.331, Incremental Fit Index (IFI) = 0.940 and Root Mean Square Error of Approximation (RMSEA) = 0.049) was developed. The findings showed that the UTeM management graduates possessed all nine intelligences to either a high or a low degree. Musical intelligence, mathematical/logical intelligence, naturalist intelligence and spiritual intelligence contributed the highest loadings on certain items. However, most of the intelligences, such as bodily-kinaesthetic intelligence, visual/spatial intelligence, verbal/linguistic intelligence, interpersonal intelligence and intrapersonal intelligence, were possessed by the UTeM management graduates only at a borderline level.
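
    The fit indices quoted above can be reproduced from a model's chi-square statistic, degrees of freedom, baseline-model chi-square, and sample size using standard formulas; the sketch below does so with invented chi-square and degrees-of-freedom inputs (only N = 137 matches the study), so the printed values are illustrative.

      # Standard SEM fit indices: normed chi-square, Bollen's IFI, and RMSEA
      # (chi-square values and degrees of freedom are invented inputs).
      import math

      chi2_model, df_model = 1140.0, 856     # hypothetical target model
      chi2_base = 9500.0                     # hypothetical baseline (independence) model
      N = 137                                # sample size

      normed_chi2 = chi2_model / df_model
      ifi = (chi2_base - chi2_model) / (chi2_base - df_model)
      rmsea = math.sqrt(max(chi2_model - df_model, 0.0) / (df_model * (N - 1)))

      print(f"chi2/df = {normed_chi2:.3f}, IFI = {ifi:.3f}, RMSEA = {rmsea:.3f}")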

  14. Measuring Student Course Evaluations: The Use of a Loglinear Model

    Science.gov (United States)

    Ting, Ding Hooi; Abella, Mireya Sosa

    2007-01-01

    In this paper, the researchers attempt to incorporate marketing theory (specifically the service quality model) into the education system. Service quality measurements have been employed to investigate their applicability in the education environment. Most previous studies employ regression-based analysis to test the effectiveness of…

  15. Experimental study of a low-thrust measurement system for thruster ground tests.

    Science.gov (United States)

    Gong, Jingsong; Hou, Lingyun; Zhao, Wenhua

    2014-03-01

    The development of thrusters used for the control of position and orbit of micro-satellites requires thrust stands that can measure low thrust. A new method to measure low thrust is presented, and the measuring device is described. The test results show that the thrust range is up to 1000 mN, the measurement error of the device is lower than ±1% of full scale, and the drift of the zero offset is less than ±1% of full scale. Its response rise time is less than 15 ms. It is employed to measure the working process of a model chemical thruster with repeatability.

  16. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson and the crown have been recorded. The maximum of the horizontal wave force and the related tilting moment together with the pressure distribution are documented for waves in the range of design conditions. The parameters and results in the report are given in full-scale values, if nothing else is stated.

  17. Using Radiocarbon to Test Models of Ecosystem Carbon Cycling

    Science.gov (United States)

    Trumbore, S.; Lin, H.; Randerson, J.

    2007-05-01

    The radiocarbon content of carbon stored in and respired by ecosystems provides a direct measure of ecosystem carbon dynamics that can be compared directly to model predictions. Because carbon cycles through ecosystems on a variety of timescales, the mean age of C in standing biomass and soil organic matter pools is older than the mean age of microbially respired carbon. In turn, each pathway for C transit through ecosystems may respond differently to edaphic conditions; for example, soil organic matter mean age is controlled by factors affecting stabilization of C on very long timescales, such as mineralogy, while a factor like litter quality that affects decomposition rates reflects vegetation and climate characteristics. We compare the radiocarbon signature of heterotrophically respired CO2 across a number of ecosystems with predictions from the CASA ecosystem model. The major controls on microbially respired CO2 from ecosystems include the residence time of C in living plant pools (i.e. the age of C in litter inputs to soil) and factors that control decomposition rates (litter quality and climate). Major differences between modeled and measured values at low latitudes are related to how woody debris pools are treated differently in models and measurements. The time lag between photosynthesis and respiration is a key ecosystem property that defines its potential to store or release carbon given variations in annual net primary production. Radiocarbon provides a rare case where models can be directly compared with measurements to provide a test of this parameter.

  18. Study of indoor radon distribution using measurements and CFD modeling.

    Science.gov (United States)

    Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K

    2014-10-01

    Measurement and/or prediction of indoor radon ((222)Rn) concentration are important due to the impact of radon on indoor air quality and consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become the cost effective replacement of experimental methods for the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD based modeling for studying indoor radon gas distribution. This study focuses on comparison of experimentally measured and CFD modeling predicted spatial distribution of radon concentration for a model test room. The key inputs for simulation viz. radon exhalation rate and ventilation rate were measured as a part of this study. Validation experiments were performed by measuring radon concentration at different locations of test room using active (continuous radon monitor) and passive (pin-hole dosimeters) techniques. Modeling predictions have been found to be reasonably matching with the measurement results. The validated model can be used to understand and study factors affecting indoor radon distribution for more realistic indoor environment.
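
    Alongside a full CFD solution, a well-mixed single-zone mass balance is a common sanity check for indoor radon levels: at steady state C = E·A / (V·(λ_Rn + λ_v)). The sketch below evaluates it for an assumed exhalation rate, room geometry, and ventilation rate; none of the numbers are from the study.

      # Well-mixed single-zone radon balance as a sanity check alongside CFD:
      # steady-state C = E*A / (V*(lambda_rn + lambda_v)); inputs are assumed.
      LAMBDA_RN = 7.56e-3    # radon-222 decay constant [1/h]

      def steady_state_radon(exhalation, area, volume, ach):
          """Steady-state radon concentration [Bq/m^3].

          exhalation: surface exhalation rate [Bq m^-2 h^-1]
          area: emitting surface area [m^2]; volume: room volume [m^3]
          ach: ventilation rate in air changes per hour [1/h]
          """
          return exhalation * area / (volume * (LAMBDA_RN + ach))

      c = steady_state_radon(exhalation=20.0, area=30.0, volume=45.0, ach=0.5)
      print(f"predicted indoor radon concentration: {c:.0f} Bq/m^3")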

  19. Thermal tests for laser Doppler perfusion measurements in Raynaud's syndrome

    Science.gov (United States)

    Kacprzak, Michal; Skora, A.; Obidzinska, J.; Zbiec, A.; Maniewski, Roman; Staszkiewicz, W.

    2004-07-01

    The laser Doppler method offers a non-invasive, real-time technique for monitoring blood perfusion in the microcirculation. In practical measurements the perfusion index is given only in relative values, so accurate and reproducible results can only be obtained when using a well-controlled stimulation test. The aim of this study was evaluation of the thermal stimulation test, which is frequently used to investigate the microcirculation in patients with Raynaud's syndrome. Three types of thermal tests were used, in which air or water at temperatures in the range 5°C-40°C was applied. Ten normal volunteers and fifteen patients with clinical symptoms of primary Raynaud's syndrome were enrolled in this study. To estimate skin microcirculation changes during the thermal tests, a multichannel laser Doppler system and a laser Doppler scanner were used. The results were analyzed with respect to the efficiency of these methods and of the thermal provocation tests in differentiating normal subjects from patients with Raynaud's syndrome.

  20. Applying Model Checking to Generate Model-Based Integration Tests from Choreography Models

    Science.gov (United States)

    Wieczorek, Sebastian; Kozyura, Vitaly; Roth, Andreas; Leuschel, Michael; Bendisposto, Jens; Plagge, Daniel; Schieferdecker, Ina

    Choreography models describe the communication protocols between services. Testing of service choreographies is an important task for the quality assurance of service-based systems as used e.g. in the context of service-oriented architectures (SOA). The formal modeling of service choreographies enables a model-based integration testing (MBIT) approach. We present MBIT methods for our service choreography modeling approach called Message Choreography Models (MCM). For the model-based testing of service choreographies, MCMs are translated into Event-B models and used as input for our test generator which uses the model checker ProB.

  1. Accuracy and reproducibility of measurements on plaster models and digital models created using an intraoral scanner.

    Science.gov (United States)

    Camardella, Leonardo Tavares; Breuning, Hero; de Vasconcellos Vilella, Oswaldo

    2017-05-01

    The purpose of the present study was to evaluate the accuracy and reproducibility of measurements made on digital models created using an intraoral color scanner compared to measurements on dental plaster models. This study included impressions of 28 volunteers. Alginate impressions were used to make plaster models, and each volunteers' dentition was scanned with a TRIOS Color intraoral scanner. Two examiners performed measurements on the plaster models using a digital caliper and measured the digital models using Ortho Analyzer software. The examiners measured 52 distances, including tooth diameter and height, overjet, overbite, intercanine and intermolar distances, and the sagittal relationship. The paired t test was used to assess intra-examiner performance and measurement accuracy of the two examiners for both plaster and digital models. The level of clinically relevant differences between the measurements according to the threshold used was evaluated and a formula was applied to calculate the chance of finding clinically relevant errors on measurements on plaster and digital models. For several parameters, statistically significant differences were found between the measurements on the two different models. However, most of these discrepancies were not considered clinically significant. The measurement of the crown height of upper central incisors had the highest measurement error for both examiners. Based on the interexaminer performance, reproducibility of the measurements was poor for some of the parameters. Overall, our findings showed that most of the measurements on digital models created using the TRIOS Color scanner and measured with Ortho Analyzer software had a clinically acceptable accuracy compared to the same measurements made with a caliper on plaster models, but the measuring method can affect the reproducibility of the measurements.
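
    The accuracy comparison described above is essentially a set of paired comparisons against a clinical-relevance threshold; the sketch below runs one such comparison for a single distance measured on both model types. The measurements, the offset, and the 0.5 mm threshold are assumptions for illustration, not the study's data.

      # Paired comparison of one distance measured on plaster vs. digital
      # models, with a clinical-relevance threshold (data are invented).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      plaster = rng.normal(35.0, 2.0, size=28)              # intercanine width [mm]
      digital = plaster + rng.normal(0.05, 0.15, size=28)   # small systematic offset

      t_stat, p = stats.ttest_rel(plaster, digital)
      mean_diff = float(np.mean(digital - plaster))
      threshold = 0.5                                       # assumed clinical threshold [mm]

      print(f"paired t = {t_stat:.2f}, p = {p:.3f}, mean difference = {mean_diff:.2f} mm")
      print("clinically relevant" if abs(mean_diff) > threshold
            else "below the assumed 0.5 mm clinical threshold")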

  2. Efficiency of Switch-Mode Power Audio Amplifiers - Test Signals and Measurement Techniques

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    Switch-mode technology is widely used for audio amplification, mainly due to the high efficiency this technology offers. Normally the efficiency of a switch-mode audio amplifier is measured using a sine wave input. However, this paper shows that sine waves represent real audio very poorly. An alternative signal is proposed for test purposes. The efficiency of a switch-mode power audio amplifier is modelled and measured with both a sine wave and the proposed test signal as inputs. The results show that the choice of switching devices with low on-resistances is unfairly favored when measuring

  3. An exploration of measures for comparing measurements with the results from meteorological models for Mexico City

    Energy Technology Data Exchange (ETDEWEB)

    Williams, M.D.; Brown, M.J.

    1995-12-31

    Los Alamos National Laboratory and Instituto Mexicano del Petroleo have completed a joint study of options for improving air quality in Mexico City. We used a three-dimensional, prognostic, higher-order turbulence model for atmospheric circulation (HOTMAC) to treat domains that include an urbanized area. We tested the model against routine measurements and those of a major field program. During the field program, measurements included: (1) lidar measurements of aerosol transport and dispersion, (2) aircraft measurements of winds, turbulence, and chemical species aloft, (3) aircraft measurements of skin temperatures, and (4) Tethersonde measurements of winds and ozone. We made both graphical and statistical comparisons and we have reported some of the comparisons to provide insight into the meaning of statistical parameters including the index of agreement.
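
    One of the comparison statistics mentioned above, Willmott's index of agreement, is easy to compute for paired model and observation series; the sketch below implements the standard formula with invented values standing in for the field data.

      # Willmott's index of agreement d for paired modeled/observed values
      # (the series below are invented stand-ins for the field data).
      import numpy as np

      def index_of_agreement(pred, obs):
          pred, obs = np.asarray(pred, float), np.asarray(obs, float)
          obar = obs.mean()
          return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
              (np.abs(pred - obar) + np.abs(obs - obar)) ** 2)

      observed = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 2.5])   # e.g. wind speed [m/s]
      modeled = np.array([2.4, 3.1, 3.0, 3.6, 3.3, 2.2])

      print(f"index of agreement d = {index_of_agreement(modeled, observed):.3f}")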

  4. Cognitive mechanisms of mindfulness: A test of current models.

    Science.gov (United States)

    Isbel, Ben; Mahar, Doug

    2015-12-15

    Existing models of mindfulness describe the self-regulation of attention as primary, leading to enhanced decentering and ability to access and override automatic cognitive processes. This study compared 23 experienced meditators and 21 non-meditators on tests of mindfulness, attention, decentering, and ability to override automatic cognitive processes to test the cognitive mechanisms proposed to underlie mindfulness practice. Experienced meditators had significantly higher mindfulness and decentering than non-meditators. No significant difference between groups was found on measures of attention or ability to override automatic processes. These findings support the prediction that mindfulness leads to enhanced decentering, but do not support the cognitive mechanisms proposed to underlie such enhancement. Since mindfulness practice primarily involves internally directed attention, it may be the case that cognitive tests requiring externally directed attention and timed responses do not accurately assess mindfulness-induced cognitive changes. Implications for the models of mindfulness and future research are discussed.

  5. Ablative Rocket Deflector Testing and Computational Modeling

    Science.gov (United States)

    Allgood, Daniel C.; Lott, Jeffrey W.; Raines, Nickey

    2010-01-01

    A deflector risk mitigation program was recently conducted at the NASA Stennis Space Center. The primary objective was to develop a database that characterizes the behavior of industry-grade refractory materials subjected to rocket plume impingement conditions commonly experienced on static test stands. The program consisted of short and long duration engine tests where the supersonic exhaust flow from the engine impinged on an ablative panel. Quasi time-dependent erosion depths and patterns generated by the plume impingement were recorded for a variety of different ablative materials. The erosion behavior was found to be highly dependent on the material's composition and corresponding thermal properties. For example, in the case of the HP CAST 93Z ablative material, the erosion rate actually decreased under continued thermal heating conditions due to the formation of a low thermal conductivity "crystallization" layer. The "crystallization" layer produced near the surface of the material provided an effective insulation from the hot rocket exhaust plume. To gain further insight into the complex interaction of the plume with the ablative deflector, computational fluid dynamic modeling was performed in parallel to the ablative panel testing. The results from the current study demonstrated that locally high heating occurred due to shock reflections. These localized regions of shock-induced heat flux resulted in non-uniform erosion of the ablative panels. In turn, it was observed that the non-uniform erosion exacerbated the localized shock heating causing eventual plume separation and reversed flow for long duration tests under certain conditions. Overall, the flow simulations compared very well with the available experimental data obtained during this project.

  6. What do tests of formal reasoning actually measure?

    Science.gov (United States)

    Lawson, Anton E.

    Tests of formal operational reasoning derived from Piagetian theory have been found to be effective predictors of academic achievement. Yet Piaget's theory regarding the underlying nature of formal operations and their employment in specific contexts has run into considerable empirical difficulty. The primary purpose of this study was to present the core of an alternative theory of the nature of advanced scientific reasoning. That theory, referred to as the multiple-hypothesis theory, argues that tests of formal operational reasoning actually measure the extent to which persons have acquired the ability to initiate reasoning with more than one specific antecedent condition, or if they are unable to imagine more than one antecedent condition, they are aware that more than one is possible; therefore conclusions that are drawn are tempered by this possibility. As a test of this multiple-hypothesis theory of advanced reasoning and the contrasting Piagetian theory of formal operations, a sample of 922 college students were first classified as concrete operational, transitional, or formal operational, based upon responses to standard Piagetian measures of formal operational reasoning. They were then administered seven logic tasks. Actual response patterns to the tasks were analyzed and found to be similar to predicted response patterns derived from the multiple-hypothesis theory and were different from those predicted by Piagetian theory. Therefore, support was obtained for the multiple-hypothesis theory. The terms intuitive and reflective were suggested to replace the terms concrete operational and formal operational to refer to persons at varying levels of intellectual development.

  7. Measurement and modeling of unsaturated hydraulic conductivity

    Science.gov (United States)

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely-used models of water flow and solute transport processes in the unsaturated zone. Measurement of unsaturated hydraulic conductivity of sediments is costly and time consuming, therefore use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K. The parameters that describe the K curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory measured or estimated properties for field scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
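
    One widely used parameterization of the nonlinear K-water content relationship discussed above is the van Genuchten-Mualem model; the sketch below evaluates it for a loamy-sand-like parameter set. The parameter values are generic textbook-style assumptions, not measurements from the chapter.

      # van Genuchten-Mualem unsaturated hydraulic conductivity as a function
      # of volumetric water content (parameter values are assumed).
      def k_unsat(theta, theta_r, theta_s, n, k_sat):
          """Return K(theta) in the units of k_sat, using the Mualem model (L = 0.5)."""
          m = 1.0 - 1.0 / n
          se = (theta - theta_r) / (theta_s - theta_r)    # effective saturation
          se = min(max(se, 1e-9), 1.0)
          return k_sat * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

      # Loamy-sand-like parameters (illustrative): residual and saturated water
      # content, van Genuchten n, and saturated conductivity in cm/day.
      params = dict(theta_r=0.057, theta_s=0.41, n=2.28, k_sat=350.0)
      for theta in (0.10, 0.20, 0.30, 0.41):
          print(f"theta = {theta:.2f}: K = {k_unsat(theta, **params):.3e} cm/day")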

  8. Standardization of Solar Mirror Reflectance Measurements - Round Robin Test: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Meyen, S.; Lupfert, E.; Fernandez-Garcia, A.; Kennedy, C.

    2010-10-01

    Within the SolarPaces Task III standardization activities, DLR, CIEMAT, and NREL have concentrated on optimizing the procedure to measure the reflectance of solar mirrors. From this work, the laboratories have developed a clear definition of the method and requirements needed of commercial instruments for reliable reflectance results. A round robin test was performed between the three laboratories with samples that represent all of the commercial solar mirrors currently available for concentrating solar power (CSP) applications. The results show surprisingly large differences between the laboratories of 0.007 in hemispherical reflectance and 0.004 in specular reflectance. These differences indicate the importance of minimum instrument requirements and standardized procedures. Based on these results, the optimal procedure will be formulated and validated with a new round robin test in which better accuracy is expected. Improved instruments and reference standards are needed to reach the accuracy necessary for cost and efficiency calculations.

  9. Skin test reactivity among Danish children measured 15 years apart

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Porsbjerg, C

    2006-01-01

    BACKGROUND: Knowledge of secular trends in the prevalence of allergy among children stems in large part from questionnaire surveys, whereas repeated cross-sectional studies using objective markers of atopic sensitization are sparse. OBJECTIVES: To investigate whether the prevalence of skin prick...... (n = 527) and the second in 2001 (n = 480). Skin test reactivity to nine common aeroallergens was measured at both occasions. RESULTS: The prevalence of positive SPT to at least one allergen decreased from 24.1% in 1986 to 18.9% in 2001, (p = 0.05). We found a declining prevalence of sensitization...... to most allergens tested, statistically significant; however, only for mugwort and Alternaria iridis. Among subjects, who were sensitized to only one allergen, we found significantly fewer individuals with reactions to D. pteronyssinus and mugwort. CONCLUSIONS: The prevalence of atopic sensitization...

  10. Psychometric Measurement Models and Artificial Neural Networks

    Science.gov (United States)

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  11. Measurements and Information in Spin Foam Models

    CERN Document Server

    Garcia-Islas, J Manuel

    2012-01-01

    We present a problem relating measurements and information theory in spin foam models. In the three dimensional case of quantum gravity we can compute probabilities of spin network graphs and study the behaviour of the Shannon entropy associated to the corresponding information. We present a general definition, compute the Shannon entropy of some examples, and find some interesting inequalities.
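
    The Shannon entropy referred to above is the standard information-theoretic quantity; a minimal sketch for an arbitrary, purely hypothetical probability assignment over spin network configurations:

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Shannon entropy H = -sum p_i log p_i (natural log), ignoring zero entries."""
        p = np.asarray(p, dtype=float)
        p = p / p.sum()          # normalize to a probability distribution
        p = p[p > 0]             # 0 * log(0) is taken as 0
        return -np.sum(p * np.log(p))

    # Hypothetical, unnormalized weights for a few spin network configurations
    weights = [0.5, 0.25, 0.15, 0.10]
    print(shannon_entropy(weights))   # ~1.2 nats; a uniform assignment would maximize H
    ```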

  12. Relevant Criteria for Testing the Quality of Turbulence Models

    DEFF Research Database (Denmark)

    Frandsen, Sten; Jørgensen, Hans E.; Sørensen, John Dalsgaard

    2007-01-01

    Seeking relevant criteria for testing the quality of turbulence models, the scale of turbulence and the gust factor have been estimated from data and compared with predictions from first-order models of these two quantities. It is found that the mean of the measured length scales is approx. 10% smaller than the IEC model, for wind turbine hub height levels. The mean is only marginally dependent on trends in time series. It is also found that the coefficient of variation of the measured length scales is about 50%. 3 sec and 10 sec pre-averaging of wind speed data are relevant for MW-size wind turbines when seeking wind characteristics that correspond to one blade and the entire rotor, respectively. For heights exceeding 50-60 m the gust factor increases with wind speed. For heights larger than 60-80 m, present assumptions on the value of the gust factor are significantly conservative, both for 3 ...

  13. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized forecast method of unascertained classifying, an unascertained measurement classifying (UMC) model to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition of layers in the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification center and the grade of each sample awaiting forecast were determined by the UM distance between the synthesis index of the sample and the index of every classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.
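
    The abstract does not reproduce the measure functions or factor weights, so the following is only a generic sketch of the unascertained-measurement classification scheme it describes: per-factor membership degrees are combined with weights into a synthesis index, and a grade is assigned with a confidence criterion. All matrices, weights and the confidence level below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical unascertained measure matrix: rows = influential factors,
    # columns = collapse-risk grades; entry (i, k) is the degree to which
    # factor i of the sample belongs to grade k (each row sums to 1).
    mu = np.array([
        [0.7, 0.3, 0.0, 0.0],
        [0.2, 0.6, 0.2, 0.0],
        [0.0, 0.5, 0.4, 0.1],
        [0.1, 0.4, 0.4, 0.1],
    ])
    w = np.array([0.35, 0.30, 0.20, 0.15])     # hypothetical factor weights (sum to 1)

    synthesis = w @ mu                         # synthesis index over the grades

    # Confidence-criterion classification: take the first grade whose cumulative
    # membership reaches the confidence level lambda (0.6 is a common choice).
    lam = 0.6
    grade = int(np.argmax(np.cumsum(synthesis) >= lam)) + 1
    print(synthesis, "-> grade", grade)
    ```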

  14. Solar system tests of brane world models

    CERN Document Server

    Boehmer, Christian G; Lobo, Francisco S N

    2008-01-01

    The classical tests of general relativity (perihelion precession, deflection of light, and the radar echo delay) are considered for the Dadhich, Maartens, Papadopoulos and Rezania (DMPR) solution of the spherically symmetric static vacuum field equations in brane world models. For this solution the metric in the vacuum exterior to a brane world star is similar to the Reissner-Nordstrom form of classical general relativity, with the role of the charge played by the tidal effects arising from projections of the fifth dimension. The existing observational solar system data on the perihelion shift of Mercury, on the light bending around the Sun (obtained using long-baseline radio interferometry), and ranging to Mars using the Viking lander, constrain the numerical values of the bulk tidal parameter and of the brane tension.

  15. Solar system tests of brane world models

    Energy Technology Data Exchange (ETDEWEB)

    Boehmer, Christian G [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Harko, Tiberiu [Department of Physics and Center for Theoretical and Computational Physics, University of Hong Kong, Pok Fu Lam Road (Hong Kong); Lobo, Francisco S N [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 2EG (United Kingdom)], E-mail: c.boehmer@ucl.ac.uk, E-mail: harko@hkucc.hku.hk, E-mail: francisco.lobo@port.ac.uk

    2008-02-21

    The classical tests of general relativity (perihelion precession, deflection of light and the radar echo delay) are considered for the Dadhich, Maartens, Papadopoulos and Rezania (DMPR) solution of the spherically symmetric static vacuum field equations in brane world models. For this solution the metric in the vacuum exterior to a brane world star is similar to the Reissner-Nordstroem form of classical general relativity, with the role of the charge played by the tidal effects arising from projections of the fifth dimension. The existing observational solar system data on the perihelion shift of Mercury, on the light bending around the Sun (obtained using long-baseline radio interferometry), and ranging to Mars using the Viking lander, constrain the numerical values of the bulk tidal parameter and of the brane tension.

  16. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed...... the thermodynamic model used in this Ph.D. project. A review of alternative activity coefficient models and earlier work on scale formation is provided. A guideline to the parameter estimation procedure and the number of parameters estimated in the present work are also described. The prediction of solid...

  17. Innovative testing and measurement solutions for smart grid

    CERN Document Server

    Huang, Qi; Yi, Jianbo; Zhen, Wei

    2015-01-01

    Focuses on sensor applications and smart meters in the newly developing interconnected smart grid. Presents the most up-to-date technological developments in the measurement and testing of power systems within the smart grid environment. Reflects the modernization of electric utility power systems with the extensive use of computer, sensor, and data communications technologies, providing benefits to energy consumers and utility companies alike. The leading author heads a group of researchers focusing on...

  18. Skin test reactivity among Danish children measured 15 years apart

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Porsbjerg, C

    2006-01-01

    BACKGROUND: Knowledge of secular trends in the prevalence of allergy among children stems in large part from questionnaire surveys, whereas repeated cross-sectional studies using objective markers of atopic sensitization are sparse. OBJECTIVES: To investigate whether the prevalence of skin prick...... (n = 527) and the second in 2001 (n = 480). Skin test reactivity to nine common aeroallergens was measured at both occasions. RESULTS: The prevalence of positive SPT to at least one allergen decreased from 24.1% in 1986 to 18.9% in 2001, (p = 0.05). We found a declining prevalence of sensitization...

  19. Emittance Measurements of the SSRL Gun Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, Michael; Clendenin, James; Fisher, Alan; Miller, Roger; Palmer, Dennis; Park, Sam; Schmerge, John; Weaver, Jim; Wiedemann, Helmut; Winick, Herman; Yeremian, Dian; /SLAC; Meyerhofer, David; Reis, David; /Rochester U.

    2011-09-01

    A photocathode RF gun test stand is under construction in the injector vault of the Stanford Synchrotron Radiation Laboratory at SLAC. The goal of this facility is to produce an electron beam with a normalized emittance of 1-3[mm-mr], a longitudinal bunch duration of the order of 10[ps] FWHM and approximately 1[nC] of charge per bunch. The beam will be generated from a laser driven copper photocathode RF gun developed in collaboration with BNL, LBL and UCLA. The 3-5[MeV] beam from the gun will be accelerated using a SLAC three meter S-band accelerator section. The emittance of the electron beam will be measured through the use of quadrupole scans with phosphor screens and also a wire scanner. The details of the experimental setup will be discussed, and first measurements will be presented and compared with results from PARMELA simulations.

  20. Vibrational measurement for commissioning SRF Accelerator Test Facility at Fermilab

    CERN Document Server

    McGee, M W; Martinez, A; Pischalnikov, Y; Schappert, W

    2012-01-01

    The commissioning of two cryomodule components is underway at Fermilab's Superconducting Radio Frequency (SRF) Accelerator Test Facility. The research at this facility supports the next generation high intensity linear accelerators such as the International Linear Collider (ILC), a new high intensity injector (Project X) and other future machines. These components, Cryomodule #1 (CM1) and Capture Cavity II (CC2), which contain 1.3 GHz cavities are connected in series in the beamline and through cryogenic plumbing. Studies regarding characterization of ground motion, technical and cultural noise continue. Mechanical transfer functions between the foundation and critical beamline components have been measured and overall system displacement characterized. Baseline motion measurements given initial operation of cryogenic, vacuum systems and other utilities are considered.

  1. Vibrational measurement for commissioning SRF Accelerator Test Facility at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    McGee, M.W.; Leibfritz, J.; Martinez, A.; Pischalnikov, Y.; Schappert, W.; /Fermilab

    2011-03-01

    The commissioning of two cryomodule components is underway at Fermilab's Superconducting Radio Frequency (SRF) Accelerator Test Facility. The research at this facility supports the next generation high intensity linear accelerators such as the International Linear Collider (ILC), a new high intensity injector (Project X) and other future machines. These components, Cryomodule No.1 (CM1) and Capture Cavity II (CC2), which contain 1.3 GHz cavities are connected in series in the beamline and through cryogenic plumbing. Studies regarding characterization of ground motion, technical and cultural noise continue. Mechanical transfer functions between the foundation and critical beamline components have been measured and overall system displacement characterized. Baseline motion measurements given initial operation of cryogenic, vacuum systems and other utilities are considered.

  2. SHEAR STRENGTH MEASURING EQUIPMENT EVALUATION AT THE COLD TEST FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    MEACHAM JE

    2009-09-09

    Retrievals under current criteria require that approximately 2,000,000 gallons of double-shell tank (DST) waste storage space not be used to prevent creating new tanks that might be susceptible to buoyant displacement gas release events (BDGRE). New criteria are being evaluated, based on actual sludge properties, to potentially show that sludge wastes do not exhibit the same BDGRE risk. Implementation of the new criteria requires measurement of in situ waste shear strength. Cone penetrometers were judged the best equipment for measuring in situ shear strength and an A.P. van den berg Hyson 100 kN Light Weight Cone Penetrometer (CPT) was selected for evaluation. The CPT was procured and then evaluated at the Hanford Site Cold Test Facility. Evaluation demonstrated that the equipment with minor modification was suitable for use in Tank Farms.

  3. [Using the Implicit Association Test (IAT) to measure implicit shyness].

    Science.gov (United States)

    Aikawa, Atsushi; Fujii, Tsutomu

    2011-04-01

    Previous research has shown that implicitly measured shyness predicted spontaneous shy behavior in social situations, while explicit self-ratings of shyness predicted controlled shy behavior (Asendorpf, Banse, & Mücke, 2002). The present study examined whether these same results would be replicated in Japan. In Study 1, college students (N=47) completed a shyness Implicit Association Test (IAT for shyness) and explicit self-ratings of shyness. In Study 2, friends (N=69) of the Study 1 participants rated those participants on various personality scales. Covariance structure analysis revealed that only the implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). Also, only the explicit self-concept predicted other-rated low praise seeking (controlled shy behavior). The results of this study are similar to the findings of the previous research.

  4. Performance Improvement of a Measurement Station for Superconducting Cable Test

    CERN Document Server

    Arpaia, P; Montenero, G; Le Naour, S

    2012-01-01

    A fully digital system, improving measurement flexibility, integrator drift, and current control of superconducting transformers for cable tests, is proposed. The system is based on a high-performance integration of the Rogowski coil signal and a flexible direct control of the current in the secondary windings. This allows state-of-the-art performance to be surpassed by means of off-the-shelf components: on a full scale of 32 kA, a current measurement resolution of 1 A, stability below 0.25 A/min, and controller ripple of less than 50 ppm. The system effectiveness has been demonstrated experimentally on the superconducting transformer of the Facility for the Research of Superconducting Cables at the European Organization for Nuclear Research (CERN).

  5. Two Bayesian tests of the GLOMOsys Model.

    Science.gov (United States)

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
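
    For readers unfamiliar with the Bayes factors quoted above, the sketch below computes a default (JZS) Bayes factor BF10 for a two-sample comparison from a t statistic, following the general Rouder et al. (2009) formulation; it is a generic illustration, not the authors' analysis code, and the t value and group sizes are invented.

    ```python
    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
        """Default (JZS) Bayes factor BF10 for a two-sample t statistic;
        r is the scale of the Cauchy prior on effect size."""
        N = n1 * n2 / (n1 + n2)          # effective sample size
        nu = n1 + n2 - 2                 # degrees of freedom

        def integrand(g):
            # Marginal likelihood under H1, integrating over the prior on g
            a = 1.0 + N * g * r ** 2
            return (a ** -0.5
                    * (1.0 + t ** 2 / (a * nu)) ** (-(nu + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1.0 / (2 * g)))

        marginal_h1, _ = integrate.quad(integrand, 0, np.inf)
        marginal_h0 = (1.0 + t ** 2 / nu) ** (-(nu + 1) / 2)
        return marginal_h1 / marginal_h0

    # Hypothetical numbers only: a small t with a large sample favours H0 (BF10 < 1)
    print(jzs_bf10(t=0.8, n1=454, n2=454))
    ```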

  6. Reducing the Cost of Model-Based Testing through Test Case Diversity

    Science.gov (United States)

    Hemmati, Hadi; Arcuri, Andrea; Briand, Lionel

    Model-based testing (MBT) suffers from two main problems which in many real world systems make MBT impractical: scalability and automatic oracle generation. When no automated oracle is available, or when testing must be performed on actual hardware or a restricted-access network, for example, only a small set of test cases can be executed and evaluated. However, MBT techniques usually generate large sets of test cases when applied to real systems, regardless of the coverage criteria. Therefore, one needs to select a small enough subset of these test cases that have the highest possible fault revealing power. In this paper, we investigate and compare various techniques for rewarding diversity in the selected test cases as a way to increase the likelihood of fault detection. We use a similarity measure defined on the representation of the test cases and use it in several algorithms that aim at maximizing the diversity of test cases. Using an industrial system with actual faults, we found that rewarding diversity leads to higher fault detection compared to the techniques commonly reported in the literature: coverage-based and random selection. Among the investigated algorithms, diversification using Genetic Algorithms is the most cost-effective technique.
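
    As a concrete, simplified illustration of diversity-based selection, the sketch below greedily picks test cases that are mutually dissimilar under a Jaccard similarity on their event sets; the paper's own similarity measure and algorithms (including the Genetic Algorithm variant) are more elaborate, and the test cases here are made up.

    ```python
    def jaccard_similarity(a, b):
        """Jaccard similarity between two test cases represented as sets of events."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def select_diverse(test_cases, budget):
        """Greedy max-min diversity: repeatedly add the candidate whose maximum
        similarity to the already-selected set is smallest."""
        selected = [0]                                  # seed with the first test case
        while len(selected) < min(budget, len(test_cases)):
            best, best_score = None, None
            for i in range(len(test_cases)):
                if i in selected:
                    continue
                score = max(jaccard_similarity(test_cases[i], test_cases[j]) for j in selected)
                if best_score is None or score < best_score:
                    best, best_score = i, score
            selected.append(best)
        return selected

    # Hypothetical test cases encoded as sets of triggered transitions
    cases = [{"a", "b"}, {"a", "b", "c"}, {"x", "y"}, {"b", "c"}, {"x", "z"}]
    print(select_diverse(cases, budget=3))   # picks mutually dissimilar cases
    ```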

  7. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  8. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the

  9. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals of a household face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...
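
    A small simulation can illustrate the attenuation effect alluded to above: when income enters a logit model with classical measurement error, its estimated coefficient is biased towards zero. The sketch below is not the paper's estimation; all parameter values are invented and statsmodels is used only for convenience.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 20000
    income = rng.normal(0.0, 1.0, n)                  # true (standardized) income
    beta0, beta1 = -0.5, 1.0                          # invented true parameters
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * income)))
    y = rng.binomial(1, p)                            # binary travel-mode choice

    income_obs = income + rng.normal(0.0, 1.0, n)     # classical measurement error

    for label, x in [("true income", income), ("mismeasured income", income_obs)]:
        X = sm.add_constant(x)
        fit = sm.Logit(y, X).fit(disp=False)
        print(f"{label}: estimated income effect = {fit.params[1]:.3f}")
    # The coefficient on mismeasured income is biased towards zero (attenuation).
    ```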

  10. Boron-10 ABUNCL Models of Fuel Testing

    Energy Technology Data Exchange (ETDEWEB)

    Siciliano, Edward R.; Lintereur, Azaree T.; Kouzes, Richard T.; Ely, James H.

    2013-10-01

    The Department of Energy Office of Nuclear Safeguards and Security (NA-241) is supporting the project Coincidence Counting With Boron-Based Alternative Neutron Detection Technology at Pacific Northwest National Laboratory (PNNL) for the development of an alternative to 3He proportional counter based neutron coincidence counters. The goal of this project is to design, build and demonstrate a system based upon 10B-lined proportional tubes in a configuration typical for 3He-based coincidence counter applications. This report provides results from MCNP simulations of the General Electric Reuter-Stokes Alternative Boron-Based Uranium Neutron Coincidence Collar (ABUNCL) active configuration model with fuel pins previously measured at Los Alamos National Laboratory. A comparison of the GE-ABUNCL simulations and simulations of the 3He-based UNCL-II active counter (the system that the GE-ABUNCL is targeted to replace) with the same fuel pin assemblies is also provided.

  11. A Coupled THMC model of FEBEX mock-up test

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Liange; Samper, Javier

    2008-09-15

    FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project for the engineered barrier system (EBS) of a radioactive waste repository in granite. It includes two full-scale heating and hydration tests: the in situ test performed at Grimsel (Switzerland) and a mock-up test operating at CIEMAT facilities in Madrid (Spain). The mock-up test provides valuable insight into the thermal, hydrodynamic, mechanical and chemical (THMC) behavior of the EBS because its hydration is controlled better than that of the in situ test, in which the buffer is saturated with water from the surrounding granitic rock. Here we present a coupled THMC model of the mock-up test which accounts for thermal and chemical osmosis and bentonite swelling with a state-surface approach. The THMC model reproduces measured temperature and cumulative water inflow data. It also fits relative humidity data at the outer part of the buffer, but underestimates relative humidity near the heater. Dilution due to hydration and evaporation near the heater are the main processes controlling the concentration of conservative species, while surface complexation, mineral dissolution/precipitation and cation exchange significantly affect reactive species as well. Results of sensitivity analyses to chemical processes show that pH is mostly controlled by surface complexation while dissolved cation concentrations are controlled by cation exchange reactions.

  12. The Rasch model for speed tests and some extensions with applications to incomplete designs

    NARCIS (Netherlands)

    Jansen, MGH

    1997-01-01

    In psychological measurement a distinction can be made between speed and power tests. Although most tests are partially speeded, the speed element is usually neglected. Here, the focus will be on latent trait models for pure speed tests. A particularly simple model has been developed by Rasch for th

  13. Development of multiple choice pictorial test for measuring the dimensions of knowledge

    Science.gov (United States)

    Nahadi, Siswaningsih, Wiwi; Erna

    2017-05-01

    This study aims to develop a multiple-choice pictorial test as a tool to measure the dimensions of knowledge on the topic of chemical equilibrium. The method used is Research and Development with validation, conducted through preliminary studies and model development. The product is a multiple-choice pictorial test consisting of 22 items, which was administered to 64 twelfth-grade high school students. The quality of the test was determined by its validity, reliability, difficulty index, discrimination power, and distractor effectiveness. Validity was determined by CVR calculation using 8 validators (4 university teachers and 4 high school teachers), with an average CVR value of 0.89. The reliability of the test falls in the very high category, with a value of 0.87. Discrimination power is very good for 32% of the items, good for 59%, and sufficient for 20%. The test has a varying level of difficulty: 23% of the items are in the difficult category, 50% in the medium category, and 27% in the easy category. Distractor effectiveness is very poor for 1% of the items, poor for 1%, medium for 4%, good for 39%, and very good for 55%. The dimensions of knowledge measured consist of factual knowledge, conceptual knowledge, and procedural knowledge. Based on the questionnaire, students responded quite well to the developed test, and most students preferred this kind of multiple-choice pictorial test, which includes pictures in the evaluation tool, to purely text-based items.
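
    The item statistics reported above (difficulty index, discrimination power, CVR) are standard classical-test-theory quantities; the sketch below shows one common way to compute them from a 0/1 score matrix. The score matrix, group fraction and validator counts are hypothetical, and the CVR follows Lawshe's formula.

    ```python
    import numpy as np

    def item_statistics(scores, top_frac=0.27):
        """Classical item analysis on a (students x items) 0/1 score matrix:
        difficulty index p and upper/lower-group discrimination power D."""
        scores = np.asarray(scores)
        totals = scores.sum(axis=1)
        order = np.argsort(totals)
        k = max(1, int(round(top_frac * len(totals))))
        lower, upper = order[:k], order[-k:]
        p = scores.mean(axis=0)                                      # difficulty index
        D = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)  # discrimination power
        return p, D

    def content_validity_ratio(n_essential, n_panel):
        """Lawshe's CVR = (n_e - N/2) / (N/2)."""
        return (n_essential - n_panel / 2) / (n_panel / 2)

    # Hypothetical responses of 6 students to 4 items
    scores = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
    p, D = item_statistics(scores)
    print("difficulty:", p, "discrimination:", D, "CVR:", content_validity_ratio(7, 8))
    ```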

  14. The role of observational uncertainties in testing model hypotheses

    Science.gov (United States)

    Westerberg, I. K.; Birkel, C.

    2012-12-01

    Knowledge about hydrological processes and the spatial and temporal distribution of water resources is needed as a basis for managing water for hydropower, agriculture and flood-protection. Conceptual hydrological models may be used to infer knowledge on catchment functioning but are affected by uncertainties in the model representation of reality as well as in the observational data used to drive the model and to evaluate model performance. Therefore, meaningful hypothesis testing of the hydrological functioning of a catchment requires such uncertainties to be carefully estimated and accounted for in model calibration and evaluation. The aim of this study was to investigate the role of observational uncertainties in hypothesis testing, in particular whether it was possible to detect model-structural representations that were wrong in an important way given the uncertainties in the observational data. We studied the relatively data-scarce tropical Sarapiqui catchment in Costa Rica, Central America, where water resources play a vital part for hydropower production and livelihood. We tested several model structures of varying complexity as hypotheses about catchment functioning, but also hypotheses about the nature of the modelling errors. The tests were made within a learning framework for uncertainty estimation which enabled insights into data uncertainties, suitable model-structural representations and appropriate likelihoods. The observational uncertainty in discharge data was estimated from a rating-curve analysis and precipitation measurement errors through scenarios relating the error to, for example, canopy interception, wind-driven rain and the elevation gradient. The hypotheses were evaluated in a posterior analysis of the simulations where the performance of each simulation was analysed relative to the observational uncertainties for the entire hydrograph as well as for different aspects of the hydrograph (e.g. peak flows, recession periods, and base flow

  15. Contact sponge water absorption test implemented for in situ measures

    Science.gov (United States)

    Gaggero, Laura; Scrivano, Simona

    2016-04-01

    The contact sponge method is a non-destructive in-situ methodology used to estimate a water uptake coefficient. Unlike other in-situ measurements, the procedure was proven to be directly comparable to laboratory water uptake measurements, and was registered as UNI 11432:2011. The UNI Normal procedure requires using a sponge of known density, soaked in water, weighed, placed on the material for 1 minute (UNI 11432, 2011; Pardini & Tiano, 2004), then weighed again. Difficulties arise when operating on test samples or on materials whose porosity has been altered by decay. While carrying out the test, fluctuations in the environmental parameters were negligible, but not the pressure applied to the surface, which caused different amounts of water to be released towards the material. For this reason we designed a metal piece of the same diameter as the plate carrying the sponge, to be screwed onto the tip of a pocket penetrometer. With this instrument the sponge was kept in contact with the surface for 1 minute while applying two different loads: first 0.3 kg/cm2, in order to press the sponge, but not its holder, against the surface; then 1.1 kg/cm2, still avoiding deviating the load to the sponge holder. We applied both the current and our implemented method to determine water absorption by contact sponge on 5 fresh rock types (4 limestones: fine- and coarse-grained Pietra di Vicenza, Rosso Verona, Breccia Aurora; and the siliciclastic Macigno sandstone). The results show that: 1) the current methodology requires manual skill and experience to produce a coherent set of data, the variables involved being not only the imposed pressure but also the compression mechanics; 2) controlling the applied pressure allowed reproducible measurements; 3) the use of a thicker sponge enabled the method to be applied even on rougher surfaces, as the device holding the sponge is not in contact with the tested object. Finally, 4) the
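
    The water uptake coefficient estimated by the contact sponge method is essentially the mass of water released by the sponge per unit contact area and time; a minimal sketch of that calculation, with invented readings and an assumed contact area, is given below.

    ```python
    def water_absorption(m_before_g, m_after_g, contact_area_cm2, contact_time_min):
        """Water absorbed per unit area and time by the stone surface, taken as the
        mass lost by the sponge during contact, in g / (cm^2 * min)."""
        return (m_before_g - m_after_g) / (contact_area_cm2 * contact_time_min)

    # Hypothetical reading: sponge weighed before and after 1 minute of contact
    print(water_absorption(m_before_g=10.84, m_after_g=10.61,
                           contact_area_cm2=23.76, contact_time_min=1.0))
    ```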

  16. Testing ocean tide models using GGP superconducting gravimeter observations

    Science.gov (United States)

    Baker, T.; Bos, M.

    2003-04-01

    Observations from the global network of superconducting gravimeters in the Global Geodynamics Project (GGP) are used to test 10 ocean tide models (SCHW; FES94.1, 95.2, 98, 99; CSR3.0, 4.0; TPXO.5; GOT99.2b; and NAO.99b). In addition, observations are used from selected sites with LaCoste and Romberg gravimeters with electrostatic feedback, where special attention has been given to achieving a calibration accuracy of 0.1%. In Europe, there are several superconducting gravimeter stations in a relatively small area and this can be used to advantage in testing the ocean (and body) tide models and in identifying sites with anomalous observations. At some of the superconducting gravimeter sites there are anomalies in the in-phase components of the main tidal harmonics, which are due to calibration errors of up to 0.3%. It is shown that the recent ocean tide models are in better agreement with the tidal gravity observations than were the earlier models of Schwiderski and FES94.1. However, no single ocean tide model gives completely satisfactory results in all areas of the world. For example, for M2 the TPXO.5 and NAO99b models give anomalous results in Europe, whereas the FES95.2, FES98 and FES99 models give anomalous results in China and Japan. It is shown that the observations from this improved set of tidal gravity stations will provide an important test of the new ocean tide models that will be developed in the next few years. For further details see Baker, T.F. and Bos, M.S. (2003). "Validating Earth and ocean tide models using tidal gravity measurements", Geophysical Journal International, 152.

  17. Mathematical model of radon activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paschuk, Sergei A.; Correa, Janine N.; Kappke, Jaqueline; Zambianchi, Pedro, E-mail: sergei@utfpr.edu.br, E-mail: janine_nicolosi@hotmail.com [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil); Denyak, Valeriy, E-mail: denyak@gmail.com [Instituto de Pesquisa Pele Pequeno Principe, Curitiba, PR (Brazil)

    2015-07-01

    The present work describes a mathematical model that quantifies the time-dependent amount of ²²²Rn and ²²⁰Rn together and their activities within an ionization chamber such as the AlphaGUARD, which is used to measure the activity concentration of Rn in soil gas. The differential equations take into account three main processes, namely: the injection of Rn into the cavity of the detector by the air pump, including the effect of the travel time Rn takes to reach the chamber; the release of Rn by the air exiting the chamber; and the radioactive decay of Rn within the chamber. The developed code quantifies the activity of the ²²²Rn and ²²⁰Rn isotopes separately. Following the standard methodology to measure Rn activity in soil gas, the air pump is usually turned off over a period of time in order to avoid the influx of Rn into the chamber. Since ²²⁰Rn has a short half-life, approximately 56 s, the model shows that after 7 minutes the activity concentration of this isotope is null. Consequently, the measured activity refers to ²²²Rn only. Furthermore, the model also addresses the activity of the ²²⁰Rn and ²²²Rn progeny, which, being metals, represent a potential risk of ionization chamber contamination that could increase the background of further measurements. Some preliminary comparison of experimental data and theoretical calculations is presented. The obtained transient and steady-state solutions could be used for the planning of Rn in soil gas measurements as well as for accuracy assessment of the obtained results, together with efficiency evaluation of the chosen measurement procedure.
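
    The paper's differential equations are not reproduced in the abstract, so the sketch below only illustrates the general balance it describes, namely pumped injection, removal by the exiting air and radioactive decay, integrated with scipy; the pump rate, chamber volume and soil-gas concentrations are invented.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    LAMBDA_RN222 = np.log(2) / (3.8235 * 24 * 3600)   # 222Rn decay constant [1/s]
    LAMBDA_RN220 = np.log(2) / 55.6                   # 220Rn decay constant [1/s]

    def chamber(t, n, q, vol, c222, c220, pump_off_at):
        """Atom balance in the chamber for (222Rn, 220Rn): injection by the pump,
        removal by the air leaving the chamber, and radioactive decay."""
        pump = q if t < pump_off_at else 0.0           # pump switched off later
        flow = pump / vol                              # air-exchange rate [1/s]
        d222 = pump * c222 - (flow + LAMBDA_RN222) * n[0]
        d220 = pump * c220 - (flow + LAMBDA_RN220) * n[1]
        return [d222, d220]

    # Invented values: 1 L/min pump, 0.56 L chamber, soil-gas atom concentrations,
    # pump switched off after 10 minutes, simulation run 7 minutes beyond that.
    args = (1.0 / 60.0, 0.56, 5.0e3, 2.0e3, 600.0)     # q [L/s], vol [L], c222, c220 [atoms/L], t_off [s]
    sol = solve_ivp(chamber, (0.0, 600.0 + 7 * 60.0), [0.0, 0.0], args=args, max_step=1.0)
    frac_220_left = sol.y[1, -1] / max(sol.y[1].max(), 1e-12)
    print(f"220Rn remaining 7 min after the pump stops: {frac_220_left:.4f} of its peak")
    ```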

  18. Computerized classification testing with the Rasch model

    NARCIS (Netherlands)

    Eggen, Theo J.H.M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the
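
    The record is truncated, but the sequential classification idea it refers to can be sketched as a Wald sequential probability ratio test (SPRT) on Rasch item responses around a cut-off ability; the items, cut-off, indifference region and error rates below are hypothetical.

    ```python
    import numpy as np

    def rasch_p(theta, b):
        """Rasch model probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def sprt_classify(responses, difficulties, theta_cut=0.0, delta=0.5,
                      alpha=0.05, beta=0.05):
        """Wald SPRT between theta_low = cut - delta and theta_high = cut + delta.
        Returns 'master', 'non-master', or 'undecided' after the given items."""
        lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
        llr = 0.0
        for x, b in zip(responses, difficulties):
            p1, p0 = rasch_p(theta_cut + delta, b), rasch_p(theta_cut - delta, b)
            llr += np.log(p1 / p0) if x == 1 else np.log((1 - p1) / (1 - p0))
            if llr >= hi:
                return "master"
            if llr <= lo:
                return "non-master"
        return "undecided"

    # Hypothetical response pattern and item difficulties
    responses = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1]
    difficulties = [-0.5, 0.2, 0.8, -1.0, 0.0, 0.4, 1.2, -0.2, 0.3, -0.8, 0.5, 0.1]
    print(sprt_classify(responses, difficulties))
    ```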

  19. Perceived game realism: a test of three alternative models.

    Science.gov (United States)

    Ribbens, Wannes

    2013-01-01

    Perceived realism is considered a key concept in explaining the mental processing of media messages and the societal impact of media. Despite its importance, little is known about its conceptualization and dimensional structure, especially with regard to digital games. The aim of this study was to test a six-factor model of perceived game realism comprised of simulational realism, freedom of choice, perceptual pervasiveness, social realism, authenticity, and character involvement and to assess it against an alternative single- and five-factor model. Data were collected from 380 male digital game users who judged the realism of the first-person shooter Half-Life 2 based upon their previous experience with the game. Confirmatory factor analysis was applied to investigate which model fits the data best. The results support the six-factor model over the single- and five-factor solutions. The study contributes to our knowledge of perceived game realism by further developing its conceptualization and measurement.

  20. Flavor release measurement from gum model system

    DEFF Research Database (Denmark)

    Ovejero-López, I.; Haahr, Anne-Mette; van den Berg, Frans W.J.

    2004-01-01

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectroscopy (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio...... composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The sweeteners' (sorbitol or xylitol) effect is less apparent. Sensory...

  1. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    Science.gov (United States)

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM.
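
    For reference, the conventional chi-square-difference step that the article critiques can be written in a few lines: the difference in chi-square values of two nested models is referred to a chi-square distribution with the difference in degrees of freedom. The fit statistics below are hypothetical.

    ```python
    from scipy.stats import chi2

    def chisq_difference_test(chi2_restricted, df_restricted, chi2_base, df_base):
        """Likelihood-ratio (chi-square-difference) test for nested models."""
        d_chi2 = chi2_restricted - chi2_base
        d_df = df_restricted - df_base
        return d_chi2, d_df, chi2.sf(d_chi2, d_df)

    # Hypothetical fit statistics: configural model vs. equal-loadings (metric) model
    d_chi2, d_df, p = chisq_difference_test(chi2_restricted=112.4, df_restricted=58,
                                            chi2_base=98.7, df_base=52)
    print(f"delta chi2 = {d_chi2:.1f}, delta df = {d_df}, p = {p:.3f}")
    # A non-significant p is conventionally read as "invariance holds", which is
    # precisely the inference the article argues cannot control error rates.
    ```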

  2. Testing competing measures of profitability for mobile resources.

    Science.gov (United States)

    Barrette, Maryse; Wu, Gi-Mick; Brodeur, Jacques; Giraldeau, Luc-Alain; Boivin, Guy

    2009-01-01

    Optimal diet theory often fails to predict a forager's diet choice when prey are mobile. Because they escape or defend themselves, mobile prey are likely to increase the forager's handling time, thereby decreasing its fitness gain rate. Many animals have been shown to select their prey so as to maximize either their fitness gain or their fitness gain rate. However, no study has yet compared directly these two measures of profitability by generating testable predictions about the choice of the forager. Under laboratory conditions, we compared these two measures of profitability, using the aphid parasitoid Aphidius colemani and its host, Myzus persicae. Fitness gain was calculated for parasitoids developing in each host instar by measuring life-history traits such as developmental time, sex ratio and fecundity. Fitness gain rate was estimated by dividing fitness gain by handling time, the time required to subdue the host. Fourth instar aphids provided the best fitness gain to parasitoids, whereas second instar aphids were the most profitable in terms of fitness gain rate. Host choice tests showed that A. colemani females preferred second instar hosts, suggesting that their decision maximizes fitness gain rate over fitness gain. Our results indicate that fitness gain rate is a reliable predictor of animal's choice for foragers exploiting resources that impose additional time cost due to their mobility.

  3. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  4. Magnetic field measurements of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Murakami, Haruyuki; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-01-15

    Highlights: • Magnetic fields of the JT-60SA CS model coil were measured. • While the coil current was held constant at 20 kA, magnetic fields varied slightly with several different long time constants. • We investigated coils consisting of CIC conductors and having long time constants. - Abstract: In a cold test of the JT-60SA CS model coil, which has a quad-pancake configuration consisting of a Nb₃Sn cable-in-conduit (CIC) conductor, magnetic fields were measured using Hall sensors. For a holding coil current of 20 kA, measured magnetic fields varied slightly with long time constants in the range 17–571 s, which was much longer than the time constant derived from a measurement using a short straight sample. To validate the measurements, the magnetic fields of the model coil were calculated using a computational model representing the positions of the Nb₃Sn strands inside the CIC conductor. The calculated results were in good agreement with the measurements. Consequently, the validity of the magnetic field measurements was confirmed. Next, we investigated other coils consisting of CIC conductors and having long time constants. The only commonality among the coils was the use of CIC conductors. At present, there is no obvious way to prevent generation of such magnetic-field variations with long time constants.

  5. Model year 2010 Ford Fusion Level-1 testing report.

    Energy Technology Data Exchange (ETDEWEB)

    Rask, E.; Bocci, D.; Duoba, M.; Lohse-Busch, H.; Energy Systems

    2010-11-23

    As a part of the US Department of Energy's Advanced Vehicle Testing Activity (AVTA), a model year 2010 Ford Fusion was procured by eTec (Phoenix, AZ) and sent to ANL's Advanced Powertrain Research Facility for the purposes of vehicle-level testing in support of the Advanced Vehicle Testing Activity. Data was acquired during testing using non-intrusive sensors, vehicle network information, and facilities equipment (emissions and dynamometer). Standard drive cycles, performance cycles, steady-state cycles, and A/C usage cycles were conducted. Much of this data is openly available for download in ANL's Downloadable Dynamometer Database. The major results are shown in this report. Given the benchmark nature of this assessment, the majority of the testing was done over standard regulatory cycles and sought to obtain a general overview of how the vehicle performs. These cycles include the US FTP cycle (Urban) and Highway Fuel Economy Test cycle as well as the US06, a more aggressive supplemental regulatory cycle. Data collection for this testing was kept at a fairly high level and includes emissions and fuel measurements from an exhaust emissions bench, high-voltage and accessory current/voltage from a DC power analyzer, and CAN bus data such as engine speed, engine load, and electric machine operation. The following sections will seek to explain some of the basic operating characteristics of the MY2010 Fusion and provide insight into unique features of its operation and design.

  6. A multivariate multilevel approach to the modeling of accuracy and speed of test takers

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, the model

  7. Operational Testing of Satellite based Hydrological Model (SHM)

    Science.gov (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.

    2017-04-01

    gauging sites as reference, viz., Muri, Jamshedpur and Ghatshila. Individual model set-ups have been prepared for these sub-basins, and calibration and validation using the Split-sample test, the first level of the operational testing scheme, are in progress. Subsequently, for geographic transposability, a Proxy-basin test will be done using Muri and Jamshedpur as proxy basins. Climatic transposability will be tested for dry and wet years using the Differential split-sample test. For incorporating both geographic and climatic transposability, the Proxy-basin differential split-sample test will be used. For quantitative evaluation of SHM during the Split-sample test, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²) and the percent bias (PBIAS) are being used. However, for transposability, a product of these performance measures, i.e. NSE × R² × PBIAS, will be used to decide the best values of the parameters. Keywords: SHM, credibility, operational testing, transposability.
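
    The three performance measures named above have standard definitions; the following is a minimal sketch of how they are typically computed from paired observed and simulated discharge series (the example values are invented, not SHM output).

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def r_squared(obs, sim):
        """Coefficient of determination (squared Pearson correlation)."""
        return np.corrcoef(obs, sim)[0, 1] ** 2

    def pbias(obs, sim):
        """Percent bias; positive values indicate underestimation by the model."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    # Hypothetical daily discharges (m3/s) at one gauging site
    obs = [12.0, 15.5, 30.2, 22.1, 18.4, 14.0]
    sim = [11.0, 16.8, 27.5, 24.0, 17.1, 13.2]
    print(nse(obs, sim), r_squared(obs, sim), pbias(obs, sim))
    ```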

  8. Measuring Visual Closeness of 3-D Models

    KAUST Repository

    Gollaz Morales, Jose Alejandro

    2012-09-01

    Measuring visual closeness of 3-D models is an important issue for different problems and there is still no standardized metric or algorithm to do it. The normal of a surface plays a vital role in the shading of a 3-D object. Motivated by this, we developed two applications to measure visual closeness, introducing the normal difference as a parameter in a weighted metric within Metro's sampling approach to obtain the maximum and mean distance between 3-D models using 3-D and 6-D correspondence search structures. A visual closeness metric should provide accurate information on what human observers would perceive as visually close objects. We performed a validation study with a group of people to evaluate the correlation of our metrics with subjective perception. The results were positive, since the metrics predicted the subjective rankings more accurately than the Hausdorff distance.
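
    The exact weighting used in the thesis is not given in the abstract, so the sketch below only illustrates the general idea of augmenting a Metro-style sampled distance with a normal-difference term, via a 6-D (position plus weighted normal) nearest-neighbour search; the weight and the sampled models are arbitrary.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def weighted_closeness(points_a, normals_a, points_b, normals_b, w=0.5):
        """Mean and max distance from samples of model A to model B in a 6-D
        space combining position and (weighted) unit normals."""
        feat_a = np.hstack([points_a, w * normals_a])
        feat_b = np.hstack([points_b, w * normals_b])
        d, _ = cKDTree(feat_b).query(feat_a)   # nearest sample on B for each sample of A
        return d.mean(), d.max()

    # Hypothetical surface samples (points and unit normals) of two 3-D models
    rng = np.random.default_rng(0)
    pa = rng.random((500, 3)); na = np.tile([0.0, 0.0, 1.0], (500, 1))
    pb = pa + 0.01 * rng.standard_normal(pa.shape); nb = na
    print(weighted_closeness(pa, na, pb, nb))
    ```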

  9. Transonic Cascade Measurements to Support Analytical Modeling

    Science.gov (United States)

    2007-11-02

    Final report for AFOSR grant F49260-02-1-0284, "Transonic Cascade Measurements to Support Analytical Modeling" (P. A. Durbin and J. K. Eaton, Stanford University), submitted to the Air Force Office of Scientific Research (Attn: Dr. John Schmisseur). The recoverable fragment of the report body describes the use of both spline and control points for subsequent wall shape definitions; an algebraic grid generator was used to generate the grid for the blade-wall

  10. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    Science.gov (United States)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  11. Combination of the H1 and ZEUS inclusive cross-section measurements at proton beam energies of 460 GeV and 575 GeV and tests of low Bjorken-x phenomenological models

    Energy Technology Data Exchange (ETDEWEB)

    Belov, Pavel

    2013-06-15

    A combination is presented of the inclusive neutral current e±p scattering cross section data collected by the H1 and ZEUS collaborations during the last months of the HERA II operation period with proton beam energies E_p of 460 and 575 GeV. The kinematic range of the cross section data covers low absolute four-momentum transfers squared, 1.5 GeV² ≤ Q² ≤ 110 GeV², small values of Bjorken-x, 2.8×10⁻⁵ ≤ x ≤ 1.5×10⁻², and high inelasticity y ≤ 0.85. The combination algorithm is based on the method of least squares and takes into account correlations of the systematic uncertainties. The combined data are used in the QCD fits to extract the parton distribution functions. The phenomenological low-x dipole models are tested and parameters of the models are obtained. A good description of the data by the dipole model taking into account the evolution of the gluon distribution is observed. The longitudinal structure function F_L is extracted from the combination of the currently used H1 and ZEUS reduced proton beam energy data with previously published H1 nominal proton beam energy data at 920 GeV. The precision of the obtained values of F_L is improved at medium Q² compared to the published results of the H1 collaboration.

  12. Item Construction Using Reflective, Formative, or Rasch Measurement Models: Implications for Group Work

    Science.gov (United States)

    Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew

    2017-01-01

    Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…

  13. The Wave Dragon: 3D overtopping tests on a floating model

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, Luca; Frigaard, Peter

    1999-05-01

    In order to investigate the overtopping likely to occur in the Wave Dragon (WD), some tests were carried out measuring the total overtopping volumes on a floating 1:50 model for 5 selected wave conditions. The model set-up and the tests are first described. The aim of the tests is to measure all the water overtopping the crest freeboard and, if possible, to obtain some elements to enhance the WD efficiency. A typical exponential equation describing overtopping has been fitted to the data. Test results are then given and compared to previous tests performed on a 2D non-floating model of the structure.
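
    The report's fitted coefficients are not reproduced here; as an illustration of the fitting step, the sketch below fits the typical dimensionless exponential overtopping form q* = A exp(-B Rc/Hs) to invented data points with scipy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def overtopping(rel_freeboard, A, B):
        """Typical exponential overtopping law: q* = A * exp(-B * Rc/Hs),
        with q* a dimensionless average overtopping discharge."""
        return A * np.exp(-B * rel_freeboard)

    # Invented measurements: relative crest freeboard Rc/Hs and dimensionless discharge
    rc_hs = np.array([0.5, 0.8, 1.0, 1.3, 1.6])
    q_star = np.array([0.045, 0.021, 0.012, 0.005, 0.002])

    (A, B), _ = curve_fit(overtopping, rc_hs, q_star, p0=(0.1, 2.0))
    print(f"fitted A = {A:.3f}, B = {B:.2f}")
    ```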

  14. Limited measurement dependence in multiple runs of a Bell test

    Science.gov (United States)

    Pope, James E.; Kay, Alastair

    2013-09-01

    The assumption of free will—the ability of an experimentalist to make random choices—is central to proving the indeterminism of quantum resources, the primary tool in quantum cryptography. Relaxing the assumption in a Bell test allows violation of the usual classical threshold by correlating the random number generators used to select measurements with the devices that perform them. In this paper, we examine not only these correlations, but those across multiple runs of the experiment. This enables an explicit exposition of the optimal cheating strategy and how the correlations manifest themselves within this strategy. Similar to other recent results, we prove that there remain Bell violations for a sufficiently high, yet nonmaximal degree of free will which cannot be simulated by a classical attack, regardless of how many runs of the experiment those choices are correlated over.
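
    As background for the "usual classical threshold" mentioned above, the sketch below evaluates the CHSH combination for singlet-state correlations E(a, b) = -cos(a - b) at the standard angle choices, giving the quantum value 2√2 against the classical bound of 2; it does not model the correlated random-number attacks analysed in the paper.

    ```python
    import numpy as np

    def chsh(a, a_prime, b, b_prime):
        """CHSH value S = |E(a,b) + E(a,b') + E(a',b) - E(a',b')| for
        singlet-state correlations E(x, y) = -cos(x - y)."""
        E = lambda x, y: -np.cos(x - y)
        return abs(E(a, b) + E(a, b_prime) + E(a_prime, b) - E(a_prime, b_prime))

    # Standard measurement angles giving the maximal quantum violation 2*sqrt(2)
    print(chsh(0.0, np.pi / 2, np.pi / 4, -np.pi / 4))   # ~2.828 > classical bound 2
    ```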

  15. Large-Scale Tests of the DGP Model

    CERN Document Server

    Song, Y S; Hu, W; Song, Yong-Seon; Sawicki, Ignacy; Hu, Wayne

    2006-01-01

    The self-accelerating braneworld model (DGP) can be tested from measurements of the expansion history of the universe and the formation of structure. Current constraints on the expansion history from supernova luminosity distances, the CMB, and the Hubble constant exclude the simplest flat DGP model at about 3σ. The best-fit open DGP model is, however, only a marginally poorer fit to the data than flat ΛCDM. Its substantially different expansion history raises structure formation challenges for the model. A dark-energy model with the same expansion history would predict a highly significant discrepancy with the baryon oscillation measurement due the high Hubble constant required and a large enhancement of CMB anisotropies at the lowest multipoles due to the ISW effect. For the DGP model to satisfy these constraints new gravitational phenomena would have to appear at the non-linear and cross-over scales respectively. A prediction of the DGP expansion history in a region where the phenomenology is well unde...
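
    The flat self-accelerating DGP expansion history referred to above has a simple closed form; the sketch below compares it with flat ΛCDM at the same Ω_m and H0 (the parameter values are illustrative only).

    ```python
    import numpy as np

    def hubble_dgp_flat(z, H0=70.0, Om=0.3):
        """Expansion rate H(z) for the flat self-accelerating DGP branch:
        H/H0 = sqrt(Om*(1+z)^3 + Orc) + sqrt(Orc), with Orc = (1 - Om)^2 / 4."""
        Orc = (1.0 - Om) ** 2 / 4.0
        return H0 * (np.sqrt(Om * (1 + z) ** 3 + Orc) + np.sqrt(Orc))

    def hubble_lcdm_flat(z, H0=70.0, Om=0.3):
        """Flat LCDM expansion rate for comparison."""
        return H0 * np.sqrt(Om * (1 + z) ** 3 + (1.0 - Om))

    for z in (0.0, 0.5, 1.0, 2.0):
        print(z, round(hubble_dgp_flat(z), 1), round(hubble_lcdm_flat(z), 1))
    ```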

  16. Measuring individuals' response quality in self-administered psychological tests: an introduction to Gendre's functional method.

    Science.gov (United States)

    Dupuis, Marc; Meier, Emanuele; Capel, Roland; Gendre, Francis

    2015-01-01

    The functional method is a new test theory using a new scoring method that assumes complexity in test structure, and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them by using simplistic sums of points. In order to proceed, the functional method requires the creation of hyperspherical measurement space, in which item responses are expressed by their correlation with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, factors are systematically orthogonal and without errors, which is optimal in order to predict other outcomes. Such predictions can be performed to estimate how one would answer to other tests, or even to model one's response strategy if it was perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Herein, we propose a standard procedure in order to identify whether test results are interpretable and to exclude invalid results caused by various response biases based on control indices.

  17. Theoretical Explanation and Improvement to the Flare Model of Lithography Based on the Kirk Test

    Institute of Scientific and Technical Information of China (English)

    CHEN De-Liang; CAO Yi-Ping; HUANG Zhen-Fen

    2011-01-01

    The Kirk test has good precision for measuring stray light in optical lithography and is the usual method of measuring stray light. However, Kirk did not provide a theoretical explanation for his simulation model. We attempt to give Kirk's model a theoretical explanation and a slight improvement based on the point spread function model of scattering and the theory of statistical optics. Simulations indicate that the improved model fits Kirk's measurement data better.

  18. Testing biomechanical models of human lumbar lordosis variability.

    Science.gov (United States)

    Castillo, Eric R; Hsu, Connie; Mair, Ross W; Lieberman, Daniel E

    2017-05-01

    Lumbar lordosis (LL) is a key adaptation for bipedalism, but factors underlying curvature variations remain unclear. This study tests three biomechanical models to explain LL variability. Thirty adults (15 male, 15 female) were scanned using magnetic resonance imaging (MRI), a standing posture analysis was conducted, and lumbar range of motion (ROM) was assessed. Three measures of LL were compared. The trunk's center of mass was estimated from external markers to calculate hip moments (M_hip) and lumbar flexion moments. Cross-sectional areas of lumbar vertebral bodies and trunk muscles were measured from scans. Regression models tested associations between LL and the M_hip moment arm, a beam bending model, and an interaction between relative trunk strength (RTS) and ROM. Hip moments were not associated with LL. Beam bending was moderately predictive of standing but not supine LL (R² = 0.25). Stronger backs and increased ROM were associated with greater LL, especially when standing (R² = 0.65). The strength-flexibility model demonstrates the differential influence of RTS depending on ROM: individuals with high ROM exhibited the most LL variation with RTS, while those with low ROM showed reduced LL regardless of RTS. Hip moments appear constrained suggesting the possibility of selection, and the beam model explains some LL variability due to variations in trunk geometry. The strength-flexibility interaction best predicted LL, suggesting a tradeoff in which ROM limits the effects of back strength on LL. The strength-flexibility model may have clinical relevance for spinal alignment and pathology. This model may also suggest that straight-backed Neanderthals had reduced lumbar mobility. © 2017 Wiley Periodicals, Inc.
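
    The strength-flexibility interaction model can be written as an ordinary regression with an RTS × ROM product term; the sketch below uses simulated stand-in data (not the study's measurements) to show the form of the fit and the R² computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for the study variables (n = 30 adults, hypothetical units).
n = 30
rts = rng.normal(1.0, 0.2, n)       # relative trunk strength
rom = rng.normal(40.0, 8.0, n)      # lumbar range of motion, degrees
ll = 20 + 5 * rts + 0.2 * rom + 0.4 * rts * rom + rng.normal(0, 2, n)  # lordosis angle

# Regression with an RTS x ROM interaction term, as in the strength-flexibility model.
X = np.column_stack([np.ones(n), rts, rom, rts * rom])
beta, *_ = np.linalg.lstsq(X, ll, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((ll - pred) ** 2) / np.sum((ll - ll.mean()) ** 2)
print("coefficients [intercept, RTS, ROM, RTS*ROM]:", np.round(beta, 2))
print(f"R^2 = {r2:.2f}")
```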

  19. Fabrication and Testing of Viscosity Measuring Instrument (Viscometer

    Directory of Open Access Journals (Sweden)

    A. B. HASSAN

    2006-01-01

    This paper presents the fabrication and testing of a simple and portable viscometer for the measurement of bulk viscosity of different Newtonian fluids. It is aimed at making the instrument available in local markets and consequently reducing or eliminating the prohibitive cost of importation. The method employed uses a D.C. motor to rotate a disc with holes through which infra-red light passes onto a photo-diode; the resulting signal is amplified and translated into a deflection on a moving-coil meter. The motor is driven at a nominally constant speed, but its actual speed varies with the viscosity of the fluid being stirred, which alters the signal read on the meter. The faster the disc revolves, the smaller the deflection on the meter, and vice versa. From the results of tests conducted on various sample fluids, using data on standard Newtonian fluids as a reliable guide, the efficiency of the viscometer was 76.5%.

  20. Complementary cosmological tests of RSII brane models

    CERN Document Server

    Holanda, R F L; Dahia, F

    2013-01-01

    In this paper we explore observational bounds on flat and non-flat cosmological models in Type II Randall-Sundrum (RSII) branes. In a first analysis, we consider current measurements of the expansion rate H(z) (with two priors on the local Hubble parameter) and 288 Type Ia supernovae from the Sloan Digital Sky Survey (within the framework of the mlcs2k2 light-curve fitting method). We find that the joint analysis involving these data is an interesting tool to impose limits on the brane tension density parameter ($\Omega_{\lambda}$) and that the spatial curvature has a negligible influence on $\Omega_{\lambda}$ estimates. In order to obtain stronger bounds for the contribution of $\Omega_{\lambda}$, we also add in our analysis the baryon oscillation peak (BAO) and cosmic microwave background radiation (CMB) observations by using the so-called CMB/BAO ratio. From this analysis we find that the $\Omega_{\lambda}$ contribution is less than $4\times 10^{-5}$ (1σ).

  1. Modeling motive activation in the Operant Motives Test

    DEFF Research Database (Denmark)

    Runge, J. Malte; Lang, Jonas W. B.; Engeser, Stefan

    2016-01-01

    The Operant Motive Test (OMT) is a picture-based procedure that asks respondents to generate imaginative verbal behavior that is later coded for the presence of affiliation, power, and achievement-related motive content by trained coders. The OMT uses a larger number of pictures and asks...... respondents to provide more brief answers than earlier and more traditional picture-based implicit motive measures and has therefore become a frequently used measurement instrument in both research and practice. This article focuses on the psychometric response mechanism in the OMT and builds on recent...... measures (Lang, 2014) and reports the first analysis of which we are aware that applies this model to OMT data (N = 633) and studies dynamic motive activation in the OMT. Results of this analysis yielded evidence for dynamic motive activation in the OMT and showed that simulated IRT reliabilities based...

  2. Reliability of brain volume measurements: a test-retest dataset.

    Science.gov (United States)

    Maclaren, Julian; Han, Zhaoying; Vos, Sjoerd B; Fischbein, Nancy; Bammer, Roland

    2014-01-01

    Evaluation of neurodegenerative disease progression may be assisted by quantification of the volume of structures in the human brain using magnetic resonance imaging (MRI). Automated segmentation software has improved the feasibility of this approach, but often the reliability of measurements is uncertain. We have established a unique dataset to assess the repeatability of brain segmentation and analysis methods. We acquired 120 T1-weighted volumes from 3 subjects (40 volumes/subject) in 20 sessions spanning 31 days, using the protocol recommended by the Alzheimer's Disease Neuroimaging Initiative (ADNI). Each subject was scanned twice within each session, with repositioning between the two scans, allowing determination of test-retest reliability both within a single session (intra-session) and from day to day (inter-session). To demonstrate the application of the dataset, all 3D volumes were processed using FreeSurfer v5.1. The coefficient of variation of volumetric measurements was between 1.6% (caudate) and 6.1% (thalamus). Inter-session variability exceeded intra-session variability for lateral ventricle volume (P<0.0001), indicating that ventricle volume in the subjects varied between days.
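
    A short sketch of the repeatability statistics described above, using hypothetical volumes for a single structure; the within-session (scan-rescan) and between-session spreads are estimated separately.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical caudate volumes (mm^3) for one subject: 20 sessions x 2 scans/session.
true_volume = 3700.0
session_effect = rng.normal(0, 40, size=(20, 1))    # day-to-day (inter-session) variation
scan_noise = rng.normal(0, 25, size=(20, 2))        # within-session (intra-session) variation
volumes = true_volume + session_effect + scan_noise

# Overall coefficient of variation, as reported per structure.
cv = volumes.std() / volumes.mean() * 100
print(f"coefficient of variation: {cv:.1f}%")

# Intra-session variability from the scan-rescan differences within each day.
d = volumes[:, 0] - volumes[:, 1]
intra_sd = np.sqrt(np.mean(d ** 2) / 2)              # within-session SD
# Inter-session variability from the spread of the session means across days.
inter_sd = volumes.mean(axis=1).std(ddof=1)
print(f"intra-session SD ~ {intra_sd:.1f} mm^3, inter-session SD ~ {inter_sd:.1f} mm^3")
```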

  3. Demonstration test of burner liner strain measurements using resistance strain gages

    Science.gov (United States)

    Grant, H. P.; Anderson, W. L.

    1984-01-01

    A demonstration test of burner liner strain measurements using resistance strain gages as well as a feasibility test of an optical speckle technique for strain measurement are presented. The strain gage results are reported. Ten Kanthal A-1 wire strain gages were used for low cycle fatigue strain measurements to 950 K and 0.002 apparent strain on a JT12D burner can in a high pressure (10 atmospheres) burner test. The procedure for use of the strain gages involved extensive precalibration and postcalibration to correct for cooling rate dependence, drift, and temperature effects. Results were repeatable within ±0.0002 to 0.0006 strain, with best results during fast decels from 950 K. The results agreed with analytical prediction based on an axisymmetric burner model, and results indicated a non-uniform circumferential distribution of axial strain, suggesting temperature streaking.

  4. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
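
    A minimal sketch of the surrogate-data idea under strongly simplified assumptions: the "true" building is a two-parameter model (base load plus a heating slope times degree days), synthetic bills are generated from it, a candidate calibration technique is applied, and the three figures of merit are evaluated against the known truth. All names and numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# "True" building used to generate surrogate utility bills (assumed toy model):
# monthly energy = base load + heating coefficient * heating degree days.
hdd = np.array([600, 520, 400, 250, 120, 40, 20, 30, 110, 280, 450, 580], float)
true_base, true_slope = 500.0, 1.8          # kWh/month, kWh/HDD (hypothetical)
bills = true_base + true_slope * hdd + rng.normal(0, 40, hdd.size)

# Candidate "calibration technique": least-squares fit of the same model form.
X = np.column_stack([np.ones_like(hdd), hdd])
(cal_base, cal_slope), *_ = np.linalg.lstsq(X, bills, rcond=None)

# Retrofit scenario: insulation assumed to cut the heating slope by 30%.
true_savings = 0.3 * true_slope * hdd.sum()
pred_savings = 0.3 * cal_slope * hdd.sum()

# The three figures of merit from the surrogate-data approach:
print(f"1) savings prediction error: {100*(pred_savings-true_savings)/true_savings:+.1f}%")
print(f"2) parameter closure: base {cal_base:.0f} vs {true_base:.0f}, "
      f"slope {cal_slope:.2f} vs {true_slope:.2f}")
cvrmse = np.sqrt(np.mean((X @ [cal_base, cal_slope] - bills) ** 2)) / bills.mean() * 100
print(f"3) goodness of fit to the bills: CV(RMSE) = {cvrmse:.1f}%")
```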

  5. Forced and natural gradient tracer tests in a highly heterogeneous porous aquifer: instrumentation and measurements

    Science.gov (United States)

    Ptak, T.; Teutsch, G.

    1994-07-01

    At the Horkheimer Insel experimental field site, several short to intermediate distance forced and natural gradient tracer tests with depth-integrated and multilevel sampling were conducted to characterize the aquifer transport properties. Compared with other test sites, the aquifer at the Horkheimer Insel is highly heterogeneous and highly conductive. Hence, new tracer measurement techniques had to be developed. This paper presents some of the instrumentation developed together with measurements and their initial interpretation. The results demonstrate that for contaminant transport predictions in highly heterogeneous and highly conductive aquifers, investigation techniques with a high resolution in time and space are needed. The aquifer heterogeneity is evident from the spatial variability of peak concentration, transport velocity and longitudinal macrodispersivity values obtained from the tracer tests. Furthermore, the tracer test results indicate that at the observation scale investigated, a complex numerical flow and transport model is needed to describe adequately mass transport within the heterogeneous aquifer.

  6. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna

    2017-01-01

    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (get the code to run on a computer), calibration (identify the parameters), evaluation (confronting output

  7. Hydraulic model tests on modified Wave Dragon. Phase 3

    Energy Technology Data Exchange (ETDEWEB)

    Hald, T.; Lynggaard, J.

    2002-11-01

    The purpose of this report is to describe the model tests conducted with a newly designed 2nd-generation WD model as well as the obtained model test results. Tests are conducted as sequential reconstruction followed by physical model tests. All details concerning the reconstruction are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: 'Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequential tests of changes to the model geometry and mass distribution parameters' sponsored by the Danish Energy Agency (DEA) wave energy programme. The tests will establish a well-documented basis for the development of a 1:4.5 scale prototype planned for testing in Nissum Bredning, a sea inlet on the Danish West Coast. (au)

  8. ATLAS Standard Model Measurements Using Jet Grooming and Substructure

    CERN Document Server

    Ucchielli, Giulia; The ATLAS collaboration

    2017-01-01

    Boosted topologies allow the exploration of Standard Model processes in kinematical regimes never tested before. In such challenging LHC environments, standard reconstruction techniques quickly hit the wall. Targeting hadronic final states means properly reconstructing the energy and multiplicity of the jets in the event. In order to identify the decay products of boosted objects, i.e. W bosons, $t\bar{t}$ pairs or Higgs bosons produced in association with $t\bar{t}$ pairs, the ATLAS experiment is currently exploiting several algorithms using jet grooming and jet substructure. This contribution will mainly cover the following ATLAS measurements: the $t\bar{t}$ differential production cross section and the jet mass using the soft drop procedure. Standard Model measurements offer the perfect field to test the performance of new jet tagging techniques, which will become even more important in the search for new physics in highly boosted topologies.

  9. Testing AGN feedback models in galaxy evolution

    Science.gov (United States)

    Shin, Min-Su

    Galaxy formation and evolution have been one of the most challenging problems in astrophysics. A single galaxy has various components (stars, atomic and molecular gas, a supermassive black hole, and dark matter) and has interacted with its cosmic environment throughout its history. A key issue in understanding galaxy evolution is to find the dominant physical processes in the interactions between the components of a galaxy and between a galaxy and its environment. AGN feedback has been proposed as a key process to suppress late star formation in massive elliptical galaxies and as a general consequence of galaxy mergers and interactions. In this thesis, I investigate feedback effects from active galactic nuclei (AGN) using a new simulation code and data from the Sloan Digital Sky Survey. In the first chapter, I test purely mechanical AGN feedback models via a nuclear wind around the central SMBH in elliptical galaxies by comparing simulation results to four well-defined observational constraints: the mass ratio between the SMBH and its host galaxy, the lifetime of the quasar phase, the X-ray luminosity from the hot interstellar medium, and the mass fraction of young stars. Even though purely mechanical AGN feedback is commonly assumed in cosmological simulations, I find that it is inadequate, and cannot reproduce all four observational constraints simultaneously. This result suggests that both mechanical and radiative feedback modes are important physical processes. In the second chapter, I simulate the coevolution of the SMBH and its host galaxy under different environments, represented by different amounts of gas stripping. Though the connection between environment and galaxy evolution has been well-studied, environmental effects on the growth of the SMBH have not been answered yet. I find that strong gas stripping, which satellite galaxies might experience, highly suppresses SMBH mass accretion and AGN activity. Moreover, the suppression of the SMBH growth is

  10. Measuring naphthenic acid corrosion potential with the Fe powder test

    Directory of Open Access Journals (Sweden)

    Hau, J. L.

    2003-12-01

    Results are presented of experiments performed using a new method to measure the naphthenic acid corrosion potential. The method consists of adding pure iron powder to a small autoclave containing the crude or oil sample. The test is then performed at a given temperature for one hour, after which the oil sample is filtered and the remaining liquid is sent for iron content determination (ppm). The test is run at seven different temperature levels, and three of these are repeated. A best-fitted curve is drawn through these 10 experimental points and the maximum point is thus determined. This becomes the main outcome of the test and is used to give a measure of the naphthenic acid corrosion potential. The same general trends as observed in the past using the neutralization number, or TAN (Total Acid Number), are obtained. However, this new test seems capable of detecting anomalous cases where oil samples having larger values of TAN exhibit less corrosivity than others having much lower values of TAN, or where they show completely different corrosivity despite having similar or the same TAN.

  11. Model calibration and validation of an impact test simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)

    2001-01-01

    This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.

  12. Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests

    Energy Technology Data Exchange (ETDEWEB)

    Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L

    2001-05-31

    The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200–540 µs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosives discs and a 0.60 kb peak shock in a LX-04 disk. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.

  13. Balancing model complexity and measurements in hydrology

    Science.gov (United States)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
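
    As a concrete example of complexity control of the kind mentioned above (a generic curve-fitting toy, not a hydrological model), the sketch below fits polynomials of increasing order to noisy synthetic data and computes the Akaike Information Criterion, which penalizes the extra parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: a smooth "true" response plus observation noise.
x = np.linspace(0, 1, 40)
y = 2.0 + 1.5 * x - 2.0 * x ** 2 + rng.normal(0, 0.2, x.size)

def aic_for_polynomial(order):
    """AIC for a least-squares polynomial fit under a Gaussian error assumption."""
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, order + 2               # polynomial coefficients + noise variance
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * log_lik

for order in range(1, 7):
    print(f"order {order}: AIC = {aic_for_polynomial(order):.1f}")
# Higher orders keep improving the fit to the calibration data, but the AIC
# penalty flags them as over-parameterized relative to the information content.
```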

  14. Test and Sensitivity Analysis of Hydrological Modeling in the Coupled WRF-Urban Modeling System

    Science.gov (United States)

    Wang, Z.; yang, J.

    2013-12-01

    Rapid urbanization has emerged as the source of many adverse effects that challenge the environmental sustainability of cities under changing climatic patterns. One essential key to address these challenges is to physically resolve the dynamics of urban-land-atmospheric interactions. To investigate the impact of urbanization on regional climate, physically-based single layer urban canopy model (SLUCM) has been developed and implemented into the Weather Research and Forecasting (WRF) platform. However, due to the lack of realistic representation of urban hydrological processes, simulation of urban climatology by current coupled WRF-SLUCM is inevitably inadequate. Aiming at improving the accuracy of simulations, recently we implemented urban hydrological processes into the model, including (1) anthropogenic latent heat, (2) urban irrigation, (3) evaporation over impervious surface, and (4) urban oasis effect. In addition, we couple the green roof system into the model to verify its capacity in alleviating urban heat island effect at regional scale. Driven by different meteorological forcings, offline tests show that the enhanced model is more accurate in predicting turbulent fluxes arising from built terrains. Though the coupled WRF-SLUCM has been extensively tested against various field measurement datasets, accurate input parameter space needs to be specified for good model performance. As realistic measurements of all input parameters to the modeling framework are rarely possible, understanding the model sensitivity to individual parameters is essential to determine the relative importance of parameter uncertainty to model performance. Thus we further use an advanced Monte Carlo approach to quantify relative sensitivity of input parameters of the hydrological model. In particular, performance of two widely used soil hydraulic models, namely the van Genuchten model (based on generic soil physics) and an empirical model (viz. the CHC model currently adopted in WRF

  15. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Image-based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the evaluation process. The authors tested the SfM (Structure from Motion) method to determine the stability of gabion walls. The results of photogrammetric measurements were compared to precise geodetic point measurements.

  16. Ares I-X Launch Vehicle Modal Test Measurements and Data Quality Assessments

    Science.gov (United States)

    Templeton, Justin D.; Buehrle, Ralph D.; Gaspar, James L.; Parks, Russell A.; Lazor, Daniel R.

    2010-01-01

    The Ares I-X modal test program consisted of three modal tests conducted at the Vehicle Assembly Building at NASA's Kennedy Space Center. The first test was performed on the 71-foot 53,000-pound top segment of the Ares I-X launch vehicle known as Super Stack 5 and the second test was performed on the 66-foot 146,000-pound middle segment known as Super Stack 1. For these tests, two 250 lb-peak electro-dynamic shakers were used to excite bending and shell modes with the test articles resting on the floor. The third modal test was performed on the 327-foot 1,800,000-pound Ares I-X launch vehicle mounted to the Mobile Launcher Platform. The excitation for this test consisted of four 1000+ lb-peak hydraulic shakers arranged to excite the vehicle's cantilevered bending modes. Because the frequencies of interest for these modal tests ranged from 0.02 to 30 Hz, high sensitivity capacitive accelerometers were used. Excitation techniques included impact, burst random, pure random, and force controlled sine sweep. This paper provides the test details for the companion papers covering the Ares I-X finite element model calibration process. Topics to be discussed include test setups, procedures, measurements, data quality assessments, and consistency of modal parameter estimates.

  17. The Internationalization of Testing and New Models of Test Delivery on the Internet

    Science.gov (United States)

    Bartram, Dave

    2006-01-01

    The Internet has opened up a whole new set of opportunities for advancing the science of psychometrics and the technology of testing. It has also created some new challenges for those of us involved in test design and testing. In particular, we are seeing impacts from internationalization of testing and new models for test delivery. These are…

  18. MINERvA neutrino detector response measured with test beam data

    CERN Document Server

    Aliaga, L; Del Castillo, C Araujo; Bagby, L; Bellantoni, L; Bergan, W F; Bodek, A; Bradford, R; Bravar, A; Budd, H; Butkevich, A; Caicedo, D A Martinez; Carneiro, M F; Christy, M E; Chvojka, J; da Motta, H; Devan, J; Diaz, G A; Dytman, S A; Eberly, B; Felix, J; Fields, L; Fine, R; Flight, R; Gago, A M; Golan, T; Gomez, A; Gran, R; Harris, D A; Higuera, A; Howley, I J; Hurtado, K; Kleykamp, J; Kordosky, M; Lanari, M; Le, T; Leister, A J; Lovlein, A; Maher, E; Mann, W A; Marshall, C M; McFarland, K S; McGivern, C L; McGowan, A M; Messerly, B; Miller, J; Miller, W; Mislivec, A; Morfin, J G; Mousseau, J; Muhlbeier, T; Naples, D; Nelson, J K; Norrick, A; Ochoa, N; OConnor, C D; Osmanov, B; Osta, J; Paolone, V; Patrick, C E; Patrick, L; Perdue, G N; Lara, C E Perez; Rakotondravohitra, L; Ramirez, M A; Ray, H; Ren, L; Rodrigues, P A; Rubinov, P; Rude, C R; Ruterbories, D; Schellman, H; Schmitz, D W; Salinas, C J Solano; Tagg, N; Tice, B G; Urrutia, Z; Valencia, E; Walton, T; Westerberg, A; Wolcott, J; Woodward, N; Wospakrik, M; Zavala, G; Zhang, D; Ziemer, B P

    2015-01-01

    The MINERvA collaboration operated a scaled-down replica of the solid scintillator tracking and sampling calorimeter regions of the MINERvA detector in a hadron test beam at the Fermilab Test Beam Facility. This article reports measurements with samples of protons, pions, and electrons from 0.35 to 2.0 GeV/c momentum. The calorimetric response to protons, pions, and electrons is obtained from these data. A measurement of the parameter in Birks' law and an estimate of the tracking efficiency are extracted from the proton sample. Overall the data are well described by a Geant4-based Monte Carlo simulation of the detector and particle interactions with agreement better than 4%, though some features of the data are not precisely modeled. These measurements are used to tune the MINERvA detector simulation and evaluate systematic uncertainties in support of the MINERvA neutrino cross section measurement program.
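
    Birks' law relates the scintillation light yield to the stopping power, dL/dx = S·(dE/dx)/(1 + kB·dE/dx). A hedged sketch of extracting the Birks constant kB from proton data by least squares, with entirely hypothetical numbers (not MINERvA's data or values), might look like this.

```python
import numpy as np
from scipy.optimize import curve_fit

# Birks' law for scintillator light yield: dL/dx = S * dEdx / (1 + kB * dEdx).
def birks(dedx, scale, kb):
    return scale * dedx / (1.0 + kb * dedx)

# Hypothetical proton points: stopping power (MeV/cm) vs observed light yield (a.u.).
dedx = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 150.0])
rng = np.random.default_rng(5)
light = birks(dedx, scale=1.0, kb=0.012) * rng.normal(1.0, 0.02, dedx.size)

popt, pcov = curve_fit(birks, dedx, light, p0=[1.0, 0.01])
kb, kb_err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted Birks constant kB = {kb:.4f} +/- {kb_err:.4f} cm/MeV")
```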

  19. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  20. TESTING FOR VARYING DISPERSION IN DISCRETE EXPONENTIAL FAMILY NONLINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    Lin Jinguan; Wei Bocheng; Zhang Nansong

    2003-01-01

    It is necessary to test for varying dispersion in generalized nonlinear models. Wei et al. (1998) developed a likelihood ratio test, a score test, and their adjustments to test for varying dispersion in continuous exponential family nonlinear models. This type of problem is discussed here in the framework of general discrete exponential family nonlinear models. Two types of varying dispersion, a random coefficients model and a random effects model, are proposed, and the corresponding score test statistics are constructed and expressed in simple, easy-to-use matrix formulas.

  1. Testing Geyser Models using Down-vent Data

    Science.gov (United States)

    Wang, C.; Munoz, C.; Ingebritsen, S.; King, E.

    2013-12-01

    Geysers are often studied as an analogue to magmatic volcanoes because both involve the transfer of mass and energy that leads to eruption. Several conceptual models have been proposed to explain geyser eruption, but no definitive test has been performed largely due to scarcity of down-vent data. In this study we compare simulated time histories of pressure and temperature against published data for the Old Faithful geyser in the Yellowstone National Park and new down-vent measurements from geysers in the El Tatio geyser field of northern Chile. We test two major types of geyser models by comparing simulated and field results. In the chamber model, the geyser system is approximated as a fissure-like conduit connected to a subsurface chamber of water and steam. Heat supplied to the chamber causes water to boil and drives geyser eruptions. Here the Navier-Stokes equation is used to simulate the flow of water and steam. In the fracture-zone model, the geyser system is approximated as a saturated fracture zone of high permeability and compressibility, surrounded by rock matrix of relatively low permeability and compressibility. Heat supply from below causes pore water to boil and drives geyser eruption. Here a two-phase form of Darcy's law is assumed to describe the flow of water and steam (Ingebritsen and Rojstaczer, 1993). Both models can produce P-T time histories qualitatively similar to field results, but the simulations are sensitive to assumed parameters. Results from the chamber model are sensitive to the heat supplied to the system and to the width of the conduit, while results from the fracture-zone model are most sensitive to the permeability of the fracture zone and the adjacent wall rocks. Detailed comparison between field and simulated results, such as the phase lag between changes of pressure and temperature, may help to resolve which model might be more realistic.

  2. Measuring and modeling twilight's purple light

    Science.gov (United States)

    Lee, Raymond L.; Hernández-Andrés, Javier

    2003-01-01

    During many clear twilights, much of the solar sky is dominated by pastel purples. This purple light's red component has long been ascribed to transmission through and scattering by stratospheric dust and other aerosols. Clearly the vivid purples of post-volcanic twilights are related to increased stratospheric aerosol loading. Yet our time-series measurements of purple-light spectra, combined with radiative transfer modeling and satellite soundings, indicate that background stratospheric aerosols by themselves do not redden sunlight enough to cause the purple light's reds. Furthermore, scattering and extinction in both the troposphere and the stratosphere are needed to explain most purple lights.

  3. Measuring College Students' Reading Comprehension Ability Using Cloze Tests

    Science.gov (United States)

    Williams, Rihana Shiri; Ari, Omer; Santamaria, Carmen Nicole

    2011-01-01

    Recent investigations challenge the construct validity of sustained silent reading tests. Performance of two groups of post-secondary students (e.g. struggling and non-struggling) on a sustained silent reading test and two types of cloze test (i.e. maze and open-ended) was compared in order to identify the test format that contributes greater…

  4. X-33 Metal Model Testing In Low Turbulence Pressure Tunnel

    Science.gov (United States)

    1997-01-01

    The country's next generation of space transportation, a reusable launch vehicle (RLV), continues to undergo wind tunnel testing at NASA Langley Research Center, Hampton, Va. All four photos show a metal model of the X-33 reusable launch vehicle (about 15 inches long by 15 inches wide) being tested for Lockheed Martin Skunk Works in the Low Turbulence Pressure Tunnel (LTPT) at NASA Langley Research Center. Tests are being conducted by members of the Aerothermodynamics Branch. According to Kelly Murphy of Langley's Aerothermodynamics Branch, the aluminum and stainless steel model of the X-33 underwent aerodynamic testing in the tunnel. "The subsonic tests were conducted at a speed of Mach 0.25," she said. "Force and moment testing and measurement in this tunnel lasted about one week." Future testing of the metal model is scheduled for Langley's 16-Foot Transonic Tunnel, from the end of March to mid-April 1997, and the Unitary Wind Tunnel, from mid-April to the beginning of May. Other tunnel testing of X-33 models is scheduled from the present through June in the hypersonic tunnels, and in the 14- by 22-Foot Tunnel from about mid-June to mid-July. Since 1991, Marshall Space Flight Center in Huntsville, Ala., has been the lead center for coordinating the Agency's X-33 Reusable Launch Vehicle (RLV) Program, an industry-led effort which NASA Administrator Daniel S. Goldin has declared the agency's highest-priority new program. The RLV Technology Program is a partnership among NASA, the United States Air Force, and private industry to develop world leadership in low-cost space transportation. The goal of the program is to develop technologies and new operational concepts that can radically reduce the cost of access to space. The RLV program also hopes to speed the commercialization of space and improve U.S. economic competitiveness by making access to space as routine and reliable as today's airline industry, while reducing costs and enhancing safety and reliability. The RLV

  5. Optimization models for flight test scheduling

    Science.gov (United States)

    Holian, Derreck

    As threats around the world increase with nations developing new generations of warfare technology, the United States is keen on maintaining its position on top of the defense technology curve. This in turn means that the U.S. military/government must research, develop, procure, and sustain new systems in the defense sector to safeguard this position. Currently, the Lockheed Martin F-35 Joint Strike Fighter (JSF) Lightning II is being developed, tested, and deployed to the U.S. military at Low Rate Initial Production (LRIP). The simultaneous act of testing and deployment is due to the contracted procurement process intended to provide a rapid Initial Operating Capability (IOC) release of the 5th Generation fighter. For this reason, many factors go into the determination of what is to be tested, in what order, and at which time, due to the military requirements. A certain system or envelope of the aircraft must be assessed prior to releasing that capability into service. The objective of this praxis is to aid in the determination of what testing can be achieved on an aircraft at a point in time. Furthermore, it will define the optimum allocation of test points to aircraft and determine a prioritization of restrictions to be mitigated so that the test program can be best supported. The system described in this praxis has been deployed across the F-35 test program and testing sites. It has discovered hundreds of available test points for an aircraft to fly when it was thought none existed, thus preventing an aircraft from being grounded. Additionally, it has saved hundreds of labor hours and greatly reduced the occurrence of test point reflight. Due to the proprietary nature of the JSF program, details regarding the actual test points, test plans, and all other program-specific information have not been presented. Generic, representative data is used for example and proof-of-concept purposes. Apart from the data correlation algorithms, the optimization associated

  6. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley, 1990; J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
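
    A small sketch of the logarithmic ("ignorance") score: the information deficit, in bits, of someone holding a probabilistic forecast when the outcome is revealed. Under Kelly-style betting at fair odds, each bit of ignorance avoided corresponds to roughly one additional expected doubling of the bankroll per bet. The forecast probabilities below are made up for illustration.

```python
import numpy as np

# Ignorance (logarithmic) score in bits: -log2 of the probability the forecast
# assigned to the outcome that actually occurred.
def ignorance(prob_assigned_to_outcome):
    return -np.log2(prob_assigned_to_outcome)

# Hypothetical probabilities two models gave to the observed category in 5 events.
model_a = np.array([0.7, 0.6, 0.8, 0.5, 0.9])
model_b = np.array([0.5, 0.5, 0.5, 0.5, 0.5])   # a "know-nothing" baseline

print(f"mean ignorance, model A: {ignorance(model_a).mean():.2f} bits")
print(f"mean ignorance, model B: {ignorance(model_b).mean():.2f} bits")
# Lower ignorance means the forecasts compress the observations better and, by
# Kelly's argument, yield a higher expected growth rate for a gambler using them.
```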

  7. Reliability of a new supination resistance measurement device and validation of the manual supination resistance test.

    Science.gov (United States)

    Griffiths, Ian B; McEwan, Islay M

    2012-01-01

    Kinematic observations are inconsistent in predicting lower-extremity injury risk, and research suggests that kinetic variables may be more important in this regard. Before kinetics can be prospectively investigated, we need reliable ways of measuring them clinically. A measurement instrument was manufactured that closely mirrors a manual test used to clinically estimate supination resistance force. The reliability of the instrument and the validity of the clinical test were investigated. The left feet of 26 healthy individuals (17 men and 9 women; mean ± SD age, 25.9 ± 9.2 years; mean ± SD weight, 77.7 ± 13.3 kg) were assessed. Foot Posture Index (FPI-6), manual supination resistance, and machine supination resistance were measured. Intrarater and interrater reliability of all of the measurements were calculated. Correlations of the supination resistance measured by the device with FPI-6, the manual supination resistance test, and body weight were investigated. Interrater reliability of all of the measurements was generally poor. The supination resistance machine correlated highly with the manual supination test for the rater experienced with its use. Supination resistance measurements correlated poorly with the FPI-6 and weakly with body weight. The supination resistance machine was shown to have sufficient limits of agreement for the study, but improvements need to be made for more meaningful research going forward. In this study, the force required to supinate a foot was independent of its posture, and approximately 12% of it was explained by body weight. Further work is required with a much larger sample size to build regression models that sufficiently predict supination resistance force and that will be of clinical use. The manual supination test is a valid clinical test for clinicians experienced in its use.

  8. Weather model verification using Sodankylä mast measurements

    Directory of Open Access Journals (Sweden)

    M. Kangas

    2015-12-01

    Sodankylä, in the heart of the Arctic Research Centre of the Finnish Meteorological Institute (FMI ARC) in northern Finland, is an ideal site for atmospheric and environmental research in the boreal and sub-arctic zone. With temperatures ranging from −50 to +30 °C, it provides a challenging testing ground for numerical weather prediction (NWP) models as well as weather forecasting in general. An extensive set of measurements has been carried out in Sodankylä for more than 100 years. In 2000, a 48 m high micrometeorological mast was erected in the area. In this article, the use of Sodankylä mast measurements in NWP model verification is described. Started in 2000 with the NWP model HIRLAM and Sodankylä measurements, the verification system has now been expanded to include comparisons between 12 NWP models and seven measurement masts. A case study, comparing forecasted and observed radiation fluxes, is also presented. It was found that three different radiation schemes, applicable in the NWP model HARMONIE-AROME, produced somewhat different downwelling long-wave radiation fluxes during cloudy days, which, however, did not change the overall cold bias of the predicted screen-level temperature.
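
    A minimal sketch of the kind of verification statistic involved (bias and RMSE of forecast screen-level temperature against mast observations); the numbers are hypothetical stand-ins, not Sodankylä data.

```python
import numpy as np

# Hypothetical paired samples: forecast vs observed screen-level temperature (deg C).
obs = np.array([-21.3, -19.8, -18.2, -17.5, -16.9, -15.4])
fcst = np.array([-23.1, -21.0, -19.5, -18.0, -17.8, -16.2])

bias = np.mean(fcst - obs)                     # negative bias = model too cold
rmse = np.sqrt(np.mean((fcst - obs) ** 2))
print(f"bias = {bias:+.1f} C, RMSE = {rmse:.1f} C")
```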

  9. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.
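
    A minimal sketch of an N-DOF lumped-parameter idealization (here N = 3: chassis, printed wiring board, component), with hypothetical masses and stiffnesses rather than values from the paper; the undamped natural frequencies follow from the generalized eigenvalue problem K v = ω² M v.

```python
import numpy as np
from scipy.linalg import eigh

# 3-DOF mass-spring chain: ground -> chassis -> PWB -> component (values hypothetical).
m = np.diag([2.0, 0.10, 0.01])                 # kg: chassis, PWB, component
k1, k2, k3 = 5e6, 2e5, 5e4                     # N/m: mount, board, component lead stiffness
K = np.array([[k1 + k2, -k2,      0.0],
              [-k2,      k2 + k3, -k3],
              [0.0,     -k3,       k3]])

# Undamped natural frequencies from the generalized eigenvalue problem K v = w^2 M v.
w2, modes = eigh(K, m)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
# The mode shapes indicate how strongly board response couples to chassis motion,
# which is the kind of inference the lumped model supports from a chassis-side measurement.
```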

  10. Model year 2010 Honda insight level-1 testing report.

    Energy Technology Data Exchange (ETDEWEB)

    Rask, E.; Bocci, D.; Duoba, M.; Lohse-Busch, H. (Energy Systems)

    2011-03-22

    As a part of the US Department of Energy's Advanced Vehicle Testing Activity (AVTA), a model year 2010 Honda Insight was procured by eTec (Phoenix, AZ) and sent to ANL's Advanced Powertrain Research Facility for vehicle-level testing in support of the AVTA. Data was acquired during testing using non-intrusive sensors, vehicle network information, and facilities equipment (emissions and dynamometer data). Standard drive cycles, performance cycles, steady-state cycles and A/C usage cycles were tested. Much of this data is openly available for download in ANL's Downloadable Dynamometer Database (D3). The major results are shown in this report. Given the preliminary nature of this assessment, the majority of the testing was done over standard regulatory cycles and seeks to obtain a general overview of how the vehicle performs. These cycles include the US FTP cycle (Urban) and Highway Fuel Economy Test cycle as well as the US06, a more aggressive supplemental regulatory cycle. Data collection for this testing was kept at a fairly high level and includes emissions and fuel measurements from an exhaust emissions bench, high-voltage and accessory current and voltage from a DC power analyzer, and CAN bus data such as engine speed, engine load, and electric machine operation when available. The following sections will seek to explain some of the basic operating characteristics of the MY2010 Insight and provide insight into unique features of its operation and design.

  11. Modeling dune response using measured and equilibrium bathymetric profiles

    Science.gov (United States)

    Fauver, Laura A.; Thompson, David M.; Sallenger, Asbury H.

    2007-01-01

    Coastal engineers typically use numerical models such as SBEACH to predict coastal change due to extreme storms. SBEACH model inputs include pre-storm profiles, wave heights and periods, and water levels. This study focuses on the sensitivity of SBEACH to the details of pre-storm bathymetry. The SBEACH model is tested with two initial conditions for bathymetry, including (1) measured bathymetry from lidar, and (2) calculated equilibrium profiles. Results show that longshore variability in the predicted erosion signal is greater over measured bathymetric profiles, due to longshore variations in initial surf zone bathymetry. Additionally, patterns in predicted erosion can be partially explained by the configuration of the inner surf zone from the shoreline to the trough, with surf zone slope accounting for 67% of the variability in predicted erosion volumes.
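
    One widely used equilibrium-profile form, assumed here purely for illustration (the study's exact formulation may differ), is Dean's profile h(x) = A·x^(2/3); the sketch below compares it with a hypothetical measured profile of the kind a lidar survey would supply, which is the contrast driving the sensitivity study.

```python
import numpy as np

# Dean-type equilibrium beach profile: depth h as a function of cross-shore distance x,
# with A a sediment-scale parameter (an assumed form, not necessarily the study's).
def dean_profile(x, A=0.1):
    return A * np.power(x, 2.0 / 3.0)

x = np.linspace(0, 500, 6)                              # m offshore of the shoreline
measured = np.array([0.0, 1.9, 3.4, 4.3, 5.2, 6.1])     # hypothetical lidar depths, m
equilibrium = dean_profile(x, A=0.1)

for xi, hm, he in zip(x, measured, equilibrium):
    print(f"x = {xi:5.0f} m   measured {hm:4.1f} m   equilibrium {he:4.1f} m")
# Feeding either profile to the storm-response model and differencing the predicted
# erosion isolates how much alongshore variability comes from the initial bathymetry.
```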

  12. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the previously carried out run-up tests described in Lykke Andersen & Frigaard, 2006......-shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests have been conducted at Aalborg University from 9 November 2006 to 17 November 2006....

  13. Testing for Causality in Variance Using Multivariate GARCH Models

    OpenAIRE

    Christian M. Hafner; Herwartz, Helmut

    2008-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...

  14. Testing for causality in variance using multivariate GARCH models

    OpenAIRE

    Hafner, Christian; Herwartz, H.

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...

  15. Analyzing the Measurement Equivalence of a Translated Test in a Statewide Assessment Program

    Directory of Open Access Journals (Sweden)

    Jorge Carvajal-Espinoza

    2016-09-01

    When tests are translated into one or more languages, the question of the equivalence of items across language forms arises. This equivalence can be assessed at the scale level by means of a multiple-group confirmatory factor analysis (CFA) in the context of structural equation modeling. This study examined the measurement equivalence of a Spanish-translated version of a statewide Mathematics test originally constructed in English by using a multi-group CFA approach. The study used samples of native speakers of the target language of the translation taking the test in both the source and target language, specifically Hispanics taking the test in English and Spanish. Test items were grouped in twelve facet-representative parcels. The parceling was accomplished by grouping items that corresponded to similar content and computing an average for each parcel. Four models were fitted to examine the equivalence of the test across groups. The multi-group CFA fixed the factor loadings across groups, and the results supported the equivalence of the two language versions (English and Spanish) of the test. The statistical techniques implemented in this study can also be used to examine test performance across groups defined by dichotomous or dichotomized variables such as gender, socioeconomic status, geographic location, and other variables of interest.
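
    A small sketch of the parceling step described above: items are grouped into facet-representative parcels and averaged before being used as CFA indicators. The data and the item-to-parcel grouping here are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical item responses: 100 examinees x 36 dichotomous math items.
responses = rng.integers(0, 2, size=(100, 36))

# Group items into 12 parcels of 3 items each and average within each parcel,
# mirroring the parceling step in the study (the grouping here is arbitrary;
# in practice items are grouped by similar content).
parcel_assignment = np.arange(36).reshape(12, 3)      # items 0-2 -> parcel 1, etc.
parcels = np.stack([responses[:, idx].mean(axis=1) for idx in parcel_assignment], axis=1)

print("parcel score matrix shape:", parcels.shape)    # (100 examinees, 12 parcels)
# These parcel scores are the indicators fed to the multi-group CFA in which factor
# loadings are constrained equal across the English and Spanish forms.
```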

  16. Testing Cosmological Models with Type Ic Super Luminous Supernovae

    CERN Document Server

    Wei, Jun-Jie; Melia, Fulvio

    2015-01-01

    The use of type Ic Super Luminous Supernovae (SLSN Ic) to examine the cosmological expansion introduces a new standard ruler with which to test theoretical models. The sample suitable for this kind of work now includes 11 SLSNe Ic, which have thus far been used solely in tests involving $\Lambda$CDM. In this paper, we broaden the base of support for this new, important cosmic probe by using these observations to carry out a one-on-one comparison between the $R_{\rm h}=ct$ and $\Lambda$CDM cosmologies. We individually optimize the parameters in each cosmological model by minimizing the $\chi^{2}$ statistic. We also carry out Monte Carlo simulations based on these current SLSN Ic measurements to estimate how large the sample would have to be in order to rule out either model at a $\sim 99.7\%$ confidence level. The currently available sample indicates a likelihood of $\sim 70$–$80\%$ that the $R_{\rm h}=ct$ Universe is the correct cosmology versus $\sim 20$–$30\%$ for the standard model. These results are suggest...
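
    A hedged sketch of the model comparison: luminosity distances in the $R_{\rm h}=ct$ universe, $D_L = (c/H_0)(1+z)\ln(1+z)$, and in flat $\Lambda$CDM are turned into distance moduli and scored with $\chi^{2}$ against hypothetical data. The real analysis minimizes $\chi^{2}$ over each model's free parameters and uses the SLSN Ic standard-ruler construction; everything numerical below is illustrative only.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light, km/s

def dl_lcdm(z, h0=70.0, om=0.3):
    """Luminosity distance (Mpc) in flat LambdaCDM."""
    integrand = lambda zp: 1.0 / np.sqrt(om * (1 + zp) ** 3 + 1 - om)
    return (C / h0) * (1 + z) * quad(integrand, 0, z)[0]

def dl_rhct(z, h0=70.0):
    """Luminosity distance (Mpc) in the R_h = ct universe."""
    return (C / h0) * (1 + z) * np.log(1 + z)

# Hypothetical distance moduli standing in for the SLSN Ic sample.
z_obs = np.array([0.3, 0.6, 1.0, 1.5, 2.0])
mu_obs = np.array([41.0, 42.7, 44.1, 45.2, 46.0])
mu_err = np.full_like(mu_obs, 0.25)

def chi2(dl_func, **params):
    mu_model = np.array([5 * np.log10(dl_func(z, **params)) + 25 for z in z_obs])
    return np.sum(((mu_obs - mu_model) / mu_err) ** 2)

print(f"chi2 (flat LCDM, fixed H0/Om): {chi2(dl_lcdm):.1f}")
print(f"chi2 (R_h = ct, fixed H0):     {chi2(dl_rhct):.1f}")
```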

  17. Multilevel Factor Analysis by Model Segregation: New Applications for Robust Test Statistics

    Science.gov (United States)

    Schweig, Jonathan

    2014-01-01

    Measures of classroom environments have become central to policy efforts that assess school and teacher quality. This has sparked a wide interest in using multilevel factor analysis to test measurement hypotheses about classroom-level variables. One approach partitions the total covariance matrix and tests models separately on the…

  18. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  19. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Catani, Paul; Teräsvirta, Timo; Yin, Meiqun

    A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which...

  20. Development, reliability, and validity testing of the Ethical Behavior Test: a measure for nurses' ethical behavior.

    Science.gov (United States)

    Dierckx de Casterlé, B; Grypdonck, M; Vuylsteke-Wauters, M

    1997-01-01

    The need for reliable and valid measures for ethical behavior of nurses has encouraged the authors to develop a new instrument to measure students' ethical behavior in daily nursing dilemmas. Characteristic of the instrument presented is the inclusion of two fundamental components of ethical behavior: (1) ethical reasoning (and the resulting decision), and (2) the actual implementation of the ethical decision. As for many instruments, Kohlberg's theory of moral development has been used as the conceptual framework. However, Kohlberg's abstract justice orientation was refined by a care perspective and representative nursing dilemmas were used to make the instrument conceptually more appropriate for measuring nurses' ethical behavior. The analysis of the psychometric properties of the instrument has provided several relevant indications for the reliability and validity of the ethical reasoning and implementation scores. The revealed inconsistencies in the Ethical Behavior Test could be satisfactorily interpreted in terms of Kohlberg's theory and related empirical research findings, supporting the reliability of the ethical behavior scores. The content validity rests upon the careful development of the instrument resulting in an optimal mix of dilemmas, arguments and care situations to reveal nurses' ethical behavior and in a substantial degree of correspondence between the concept and operationalization. The congruency between the patterns of ethical behavior and Kohlberg's theoretical insights about ethical reasoning and practice support the construct validity of the instrument.