Sample records for testing substellar models

  1. Brown Dwarf Binaries as Tests of Substellar Evolution (United States)

    Martin, Eduardo


    We propose to obtain STIS spectroscopy of two brown dwarf binaries for which dynamical masses are being obtained by monitoring the orbital motion using ground-based telescopes with adaptive optics. The HST/STIS spectra will allow us to study the Li I resonance line at 670.8 nm. The lithium depletion of the members of these binaries will be estimated with the aid of synthetic spectra. These observations will be compared to model predictions of lithium depletion as a function of age and mass, and hence will provide an observational test of the theory of substellar objects. Spin-offs will be the measurement of the strength of the Halpha emission, an indicator of chromospheric activity in cool atmospheres, and a comparison of the shape of the optical continuum with model spectra computed with different dust opacities.

  2. Towards high accuracy tests on the substellar IMF in young clusters. A survey in NGC 2024. (United States)

    Da Rio, Nicola


    Measuring the Initial Mass Function in young clusters, and testing its universality, is a fundamental benchmark to constrain the physical processes and theoretical models of star formation. The shape and universality of the stellar IMF are well known. Our observational characterization of the substellar IMF, on the other hand, remains more uncertain, along with its possible environmental variations. Because of this, the physical processes that play a role in the formation of brown dwarfs are not fully constrained. In Cycle 22 we were awarded HST time to carry out the deepest spectro-photometric census of BDs in a young cluster: the Orion Nebula Cluster. Through deep WFC3/IR narrow band imaging, we are able to obtain Teff and A_V down to 15 MJup. Preliminary analysis limited to a portion of the total field of view allows us to classify several hundred BDs, place them in the HRD and obtain, for an extinction limited sample, the complete and consistent IMF down to planetary masses. The substellar slope is consistent with the Galactic IMF but a rapid drop is found at the H-burning limit. We propose to carry out a nearly identical survey with HST in a younger, less massive nearby cluster: NGC 2024 in the Flame Nebula. This will allow us to derive a complete census of the young population down to planetary masses and the IMF, enabling a consistent comparison with the results in the ONC. We will specifically look for statistically significant IMF variations with environmental properties (cluster mass, density) and investigate primordial mass segregation in the substellar regime. These results will significantly help to constrain the mechanisms involved in BD formation.
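
    The star-to-brown-dwarf number ratio that such censuses measure follows directly from the assumed IMF. As a sketch only: the snippet below draws masses from a single power law dN/dM ∝ M^−α (the slope α = 0.5 and mass range are illustrative assumptions, not values from the abstract) and counts objects on each side of the hydrogen-burning limit.

```python
import random

def sample_imf(n, alpha=0.5, m_min=0.005, m_max=1.5, seed=1):
    """Draw n masses (solar masses) from a single power law
    dN/dM ~ M^-alpha via inverse-transform sampling.
    alpha=0.5 is an illustrative slope, not a fitted value."""
    random.seed(seed)
    k = 1.0 - alpha
    lo, hi = m_min ** k, m_max ** k
    return [(lo + random.random() * (hi - lo)) ** (1.0 / k) for _ in range(n)]

def star_to_bd_ratio(masses, h_burning=0.075):
    """Number ratio of stars (M >= 0.075 Msun) to brown dwarfs below it."""
    stars = sum(m >= h_burning for m in masses)
    return stars / (len(masses) - stars)

masses = sample_imf(100_000)
print(f"star/BD ratio for this toy IMF: {star_to_bd_ratio(masses):.1f}")
```

    Comparing such a predicted ratio between two clusters is, in essence, the environmental-variation test the proposal describes.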

  3. Testing the Formation Mechanism of Sub-Stellar Objects in Lupus (A SOLA Team Study) (United States)

    De Gregorio-Monsalvo, Itziar; Lopez, C.; Takahashi, S.; Santamaria-Miranda


    The international SOLA team (Soul of Lupus with ALMA) has identified a set of pre- and proto-stellar candidates of substellar nature in Lupus 1 and 3 using 1.1 mm ASTE/AzTEC maps and our optical to submillimeter database. We have observed with ALMA the most promising pre- and proto-brown dwarf candidates. Our aims are to provide insights on how substellar objects form and evolve, from the equivalent of the pre-stellar cores to the Class II stage in the low mass regime of star formation. Our sample comprises 33 pre-stellar objects, 7 Class 0 and I objects, and 22 Class II objects.

  4. Confirming a substellar companion candidate around a neutron star (United States)

    Posselt, Bettina; Luhman, Kevin


    In a search for substellar companions around young neutron stars, we found an indication of a very faint near-infrared source at the position of the isolated neutron star RXJ0806.4-4123. The suspected near-IR source cannot be the neutron star itself because the latter is much too faint to be detected. Recent Herschel 160 μm observations of the field point to an additional dusty belt around the neutron star. The outer location of the dusty belt could be explained by the presence of a substellar companion around the neutron star. We propose deeper near-infrared observations with FLAMINGOS-2 to confirm that the near-infrared source is real. The observation could provide the first direct detection of a substellar companion around a neutron star. However, even a non-detection would be interesting to constrain evolution models of the dusty belt around the neutron star.

  5. Measuring the Substellar Boundary (United States)

    Cancino, Adolfo Andrew; Dupuy, Trent


    Brown dwarfs are not massive enough to undergo hydrogen fusion and therefore cool continuously, changing in luminosity over their lifetimes. As a result, they do not follow the tight mass-luminosity relationship of main-sequence stars: brown dwarfs of similar luminosity can have widely differing masses, and vice versa. In principle, mass and luminosity measurements straddling the boundary between stars and brown dwarfs could be used to directly measure this dividing line in mass. We present a method for determining this boundary accurately given a limited sample size. We tested our method with Monte Carlo simulated samples of brown dwarfs and stars with randomly drawn masses and ages, using evolutionary models to infer luminosities. In our simulation designed to mimic the largest current sample of such mass measurements (37 objects; Dupuy & Liu 2017), we find that the uncertainty in the dividing line that can be inferred from the data is ± 4 MJup. This implies that distinguishing between competing evolutionary model predictions for the boundary (~70-80 MJup) will be difficult given the current sample size of mass measurements.
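
    A stripped-down version of this Monte Carlo idea can be sketched as follows. The boundary value, sample range, and midpoint estimator here are illustrative assumptions; the abstract's ±4 MJup comes from a fuller analysis with evolutionary models and measurement errors, so this toy recovers only the sample-size-driven part of the scatter.

```python
import random

MSUN_IN_MJUP = 1047.6   # Jupiter masses per solar mass
TRUE_BOUNDARY = 0.075   # Msun; an assumed hydrogen-burning limit

def one_trial(n=37, lo=0.05, hi=0.10):
    """Draw n masses uniformly; objects below TRUE_BOUNDARY are cooling
    brown dwarfs, those above are luminosity-stable stars. Estimate the
    boundary as the midpoint between the heaviest brown dwarf and the
    lightest star."""
    masses = sorted(random.uniform(lo, hi) for _ in range(n))
    bds = [m for m in masses if m < TRUE_BOUNDARY]
    stars = [m for m in masses if m >= TRUE_BOUNDARY]
    if not bds or not stars:
        return None  # sample fell entirely on one side of the boundary
    return 0.5 * (bds[-1] + stars[0])

random.seed(0)
estimates = [e for e in (one_trial() for _ in range(5000)) if e is not None]
mean = sum(estimates) / len(estimates)
scatter_mjup = MSUN_IN_MJUP * (
    sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(f"boundary recovered to ~{scatter_mjup:.1f} MJup with 37 objects")
```

    Even this idealized estimator shows how the recoverable precision scales with the number of objects bracketing the boundary.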

  6. Understanding sub-stellar populations using wide-field infrared surveys

    Directory of Open Access Journals (Sweden)

    Hewett P.C.


    Full Text Available This paper discusses benchmark brown dwarfs in various environments, and focuses on those in wide binary systems. We present a summary of the recently discovered T dwarf population from the UKIDSS Large Area Survey, and describe the constraints that it places on our knowledge of the sub-stellar initial mass function. We also present some exciting results from our ongoing search for wide companions to this sample, which has so far revealed an M4-T8.5 binary system at ∼12 parsecs and also the first ever T dwarf-white dwarf binary system. The T dwarfs in these binaries have their properties constrained by the primary object and are thus benchmark objects that are already testing the predictions of theoretical model atmospheres.

  7. Substellar objects in nearby young clusters (SONYC). VIII. Substellar population in Lupus 3

    Energy Technology Data Exchange (ETDEWEB)

    Mužić, Koraljka [European Southern Observatory, Alonso de Córdova 3107, Casilla 19, Santiago 19001 (Chile); Scholz, Alexander; Geers, Vincent C. [School of Cosmic Physics, Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2 (Ireland); Jayawardhana, Ray [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Martí, Belén López, E-mail: [Centro de Astrobiología (INTA-CSIC), Departamento de Astrofísica, P.O. Box 78, E-28261 Villanueva de la Cañada, Madrid (Spain)


    SONYC—Substellar Objects in Nearby Young Clusters—is a survey program to investigate the frequency and properties of substellar objects in nearby star-forming regions. We present a new imaging and spectroscopic survey conducted in the young (∼1 Myr), nearby (∼200 pc) star-forming region Lupus 3. Deep optical and near-infrared images were obtained with MOSAIC-II and NEWFIRM at the CTIO 4 m telescope, covering ∼1.4 deg² on the sky. The i-band completeness limit of 20.3 mag is equivalent to 0.009-0.02 M☉ for A_V ≤ 5. Photometry and 11-12 yr baseline proper motions were used to select candidate low-mass members of Lupus 3. We performed a spectroscopic follow-up of 123 candidates using VIMOS at the Very Large Telescope, and we identify 7 probable members, among which 4 have spectral types later than M6.0 and Teff ≤ 3000 K, i.e., are probably substellar in nature. Two of the new probable members of Lupus 3 appear underluminous for their spectral class and exhibit emission-line spectra with strong Hα or forbidden lines associated with active accretion. We derive a relation between spectral type and effective temperature: Teff = (4120 ± 175) − (172 ± 26) × SpT, where SpT refers to the M spectral subtype between 1 and 9. Combining our results with previous work on Lupus 3, we show that the spectral type distribution is consistent with that in other star-forming regions, as is the derived star-to-brown-dwarf ratio of 2.0-3.3. We compile a census of all spectroscopically confirmed low-mass members with spectral type M0 or later.
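
    The linear spectral-type-to-temperature conversion quoted in the abstract is easy to apply directly. A minimal implementation, using the central values of the relation (uncertainties on the coefficients are ±175 K and ±26 K per subtype):

```python
def teff_from_spt(spt):
    """Effective temperature (K) from M spectral subtype, using the
    SONYC Lupus 3 relation Teff = (4120 +/- 175) - (172 +/- 26) * SpT,
    calibrated for SpT = 1..9 (i.e. M1 to M9)."""
    if not 1 <= spt <= 9:
        raise ValueError("relation calibrated for M1-M9 only")
    return 4120.0 - 172.0 * spt

# An M6.0 member lands at 3088 K, just above the ~3000 K substellar cut
# the survey uses for types later than M6.0.
print(teff_from_spt(6.0))
```

    This makes the selection criterion in the abstract concrete: subtypes later than M6.0 fall at or below roughly 3000 K.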

  8. The Impact of Clouds and Hazes in Substellar Atmospheres (United States)

    Morley, Caroline; Fortney, Jonathan J.; Marley, Mark S.


    The formation of clouds significantly alters the spectra of cool substellar atmospheres from terrestrial planets to brown dwarfs. In cool planets like Earth and Jupiter, volatile species like water and ammonia condense to form ice clouds. In hot planets and brown dwarfs, iron and silicates instead condense, forming dusty clouds. Irradiated methane-rich planets may have substantial hydrocarbon hazes. During my thesis, I have studied the impact of clouds and hazes in a variety of substellar objects. First, I present results for cool brown dwarfs including clouds previously neglected in model atmospheres. Model spectra that include sulfide and salt clouds can match the spectra of T dwarf atmospheres; water ice clouds will alter the spectra of the newest and coldest brown dwarfs, the Y dwarfs. These sulfide/salt and ice clouds potentially drive spectroscopic variability in these cool objects, and this variability should be distinguishable from variability caused by hot spots. Next, I present results for small, cool exoplanets between the size of Earth and Neptune, so-called super Earths. They likely have sulfide and salt clouds and also have photochemical hazes caused by stellar irradiation. Vast resources have been dedicated to characterizing the handful of super Earths accessible to current telescopes, yet of the planets smaller than Neptune studied to date, all have near-infrared radii consistent with being constant in wavelength, likely showing that these small planets are consistently enshrouded in thick hazes and clouds. Very thick, lofted clouds of salts or sulfides in high metallicity (1000× solar) atmospheres create featureless transmission spectra in the near-infrared. Photochemical hazes with a range of particle sizes also create featureless transmission spectra at lower metallicities. I show that despite these challenges, there are promising avenues for understanding this class of small planets: by observing the thermal emission and reflectivity of

  9. Mid-infrared observations of Van Maanen 2: no substellar companion.

    Energy Technology Data Exchange (ETDEWEB)

    Farihi, J; Becklin, E; Macintosh, B


    The results of a comprehensive infrared imaging search for the putative 0.06 M☉ astrometric companion to the 4.4 pc white dwarf van Maanen 2 are reported. Adaptive optics images acquired at 3.8 μm reveal a diffraction-limited core of 0.09″ and no direct evidence of a secondary. Models predict that at 5 Gyr, a 50 MJ brown dwarf would be only 1 magnitude fainter than van Maanen 2 at this wavelength, and the astrometric analysis suggested a separation of 0.2″. In the case of a chance alignment along the line of sight, a 0.4 mag excess should be measured. An independent photometric observation at the same wavelength reveals no excess. In addition, there exist published ISO observations of van Maanen 2 at 6.8 μm and 15.0 μm which are consistent with the photospheric flux of a 6750 K white dwarf. If recent brown dwarf models are correct, there is no substellar companion with Teff ≳ 500 K.
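
    The quoted 0.4 mag excess can be checked with a one-line flux calculation: adding a secondary 1 mag fainter than the primary brightens the unresolved blend by 2.5 log10(1 + 10^(−0.4·Δm)) ≈ 0.36 mag, which rounds to the 0.4 mag quoted.

```python
import math

def blend_excess(delta_mag):
    """Brightening (mag) of an unresolved pair when a secondary
    delta_mag fainter than the primary is added to its flux."""
    flux_ratio = 10 ** (-0.4 * delta_mag)
    return 2.5 * math.log10(1.0 + flux_ratio)

# Companion 1 mag fainter at 3.8 um -> ~0.36 mag excess expected.
print(round(blend_excess(1.0), 2))
```

    The non-detection of any such excess is what rules out the chance-alignment scenario.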


    Energy Technology Data Exchange (ETDEWEB)

    Norris, Jackson M.; Wright, Jason T.; Mahadevan, Suvrath; Gettel, Sara [Also at the Center for Exoplanets and Habitable Worlds, 525 Davey Laboratory, Pennsylvania State University, University Park, PA 16802 (United States); Wade, Richard A. [Department of Astronomy and Astrophysics, 525 Davey Laboratory, Pennsylvania State University, University Park, PA 16802 (United States)


    It has been argued that a substellar companion may significantly influence the evolution of the progenitors of subdwarf B (sdB) stars. Recently, the bright sdB star HD 149382 has been claimed to host a substellar (possibly planetary) companion with a period of 2.391 days. This has important implications for the evolution of the progenitors of sdB stars as well as the source of the UV excess seen in elliptical galaxies. In order to verify this putative companion, we made 10 radial velocity measurements of HD 149382 over 17 days with the High Resolution Spectrograph at the Hobby-Eberly Telescope. Our data conclusively demonstrate that the putative substellar companion does not exist, and they exclude the presence of almost any substellar companion with P < 28 days and M sin i ≳ 1 MJup.

  11. Loglinear Rasch model tests

    NARCIS (Netherlands)

    Kelderman, Hendrikus


    Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch

  12. Acoustic Model Testing Chronology (United States)

    Nesman, Tom


    Scale models have been used for decades to replicate liftoff environments, and in particular acoustics, for launch vehicles. It is assumed, and analyses support, that the key characteristics of noise generation, propagation, and measurement can be scaled. Over time, significant insight was gained not just into the effects of thruster details, pad geometry, and sound mitigation but also into the physical processes involved. An overview of a selected set of scale model tests is compiled here to illustrate the variety of configurations that have been tested and the fundamental knowledge gained. The selected scale model tests are presented chronologically.

  13. The stellar and substellar mass function in central region of the old open cluster Praesepe from deep LBT observations

    Directory of Open Access Journals (Sweden)

    Goldman B.


    Full Text Available Studies of the mass function of open clusters of different ages allow us to study the efficiency with which brown dwarfs are evaporated from clusters to populate the field. Surveys in relatively old clusters (age ≳ 100 Myr) do not suffer from problems found in young clusters, such as intra-cluster extinction and large uncertainties in brown dwarf models. In this paper, we present the results of a photometric survey to study the mass function of the old open cluster Praesepe (age of ~590 Myr and distance of ~190 pc), down to the substellar regime. We have performed an optical (riz) and Y-band photometric survey of Praesepe with the Large Binocular Telescope Camera, with a spatial coverage of 0.61 deg², from ~90 MJ down to a 5σ detection limit at 40 MJ.

  14. Model-Based Testing

    NARCIS (Netherlands)

    Timmer, Mark; Brinksma, Hendrik; Stoelinga, Mariëlle Ida Antoinette; Broy, M.; Leuxner, C.; Hoare, C.A.R.

    This paper provides a comprehensive introduction to a framework for formal testing using labelled transition systems, based on an extension and reformulation of the ioco theory introduced by Tretmans. We introduce the underlying models needed to specify the requirements, and formalise the notion of

  15. Wave Reflection Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Larsen, Brian Juul

    The investigation concerns the design of a new internal breakwater in the main port of Ibiza. The objective of the model tests was in the first hand to optimize the cross section to make the wave reflection low enough to ensure that unacceptable wave agitation will not occur in the port. Secondly, wave overtopping was studied as well.

  16. Evidence of a substellar companion around a very young T Tauri star (United States)

    Almeida, P. Viana; Gameiro, J. F.; Petrov, P. P.; Melo, C.; Santos, N. C.; Figueira, P.; Alencar, S. H. P.


    We present results from a near-infrared multi-epoch spectroscopic campaign to detect a young low-mass companion to a T Tauri star. AS 205A is a late-type dwarf (≈K5) of 1 M⊙ that belongs to a triple system. Independent photometric surveys discovered that AS 205A has two distinct periods (P1 = 6.78 and P2 = 24.78 days) detected in the light curve that persist over several years. Period P1 seems to be linked to the axial rotation of the star and is caused by the presence of cool surface spots. Period P2 is correlated with the modulation in AS 205A brightness (V) and red color (V-R), consistent with a gravitating object within the accretion disk. Here we derive precise near-infrared radial velocities to investigate the origin of period P2, which is predicted to correspond to a cool source in a Keplerian orbit with a semi-major axis of 0.17 AU positioned close to the inner disk radius of 0.14 AU. The radial velocity variations of AS 205A were found to have a period of P ≈ 24.84 days and a semi-amplitude of 1.529 km s⁻¹. This result closely resembles the P2 period in past photometric observations (P ≈ 24.78 days). The analysis of the cross-correlation function bisector has shown no correlation with the radial velocity modulations, strongly suggesting that the period is not controlled by stellar rotation. Additional activity indicators should however be explored in future surveys. Taking this into account we found that the presence of a substellar companion is the explanation that best fits the results. We derived an orbital solution for AS 205A and found evidence of a m₂ sin i ≃ 19.25 MJup object in an orbit with moderate eccentricity of e ≃ 0.34. If confirmed with future observations, preferably using a multiwavelength survey approach, this companion could provide interesting constraints on brown dwarf and planetary formation models. Based on observations collected with the CRIRES spectrograph at the VLT/UT1 8.2-m Antu Telescope (ESO runs ID 385.C-0706(A) and
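
    The quoted minimum mass can be roughly reproduced from the orbital parameters in the abstract with the standard radial-velocity relation, in the approximation that the companion mass is small next to the 1 M⊙ primary: m sin i ≈ K √(1−e²) (P M*² / 2πG)^(1/3). This sketch uses that approximation, so it lands within about 10% of the quoted 19.25 MJup rather than matching it exactly.

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
MSUN = 1.989e30   # kg
MJUP = 1.898e27   # kg

def msini_mjup(K_ms, P_days, e, m_star_msun=1.0):
    """Minimum companion mass (MJup) from an RV orbit, assuming the
    companion mass is negligible next to the primary:
    m sin i = K sqrt(1 - e^2) * (P * M*^2 / (2 pi G))^(1/3)."""
    P = P_days * 86400.0
    m_star = m_star_msun * MSUN
    return (K_ms * math.sqrt(1.0 - e * e)
            * (P * m_star ** 2 / (2.0 * math.pi * G)) ** (1.0 / 3.0)) / MJUP

# AS 205A values from the abstract: K = 1.529 km/s, P = 24.84 d, e = 0.34
print(f"m sin i ~ {msini_mjup(1529.0, 24.84, 0.34):.1f} MJup")
```

    The ~20 MJup answer places the candidate firmly in the brown-dwarf mass range, consistent with the abstract's conclusion.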

  17. Testing and validating environmental models (United States)

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.


    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series
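
    The claim that a 10% false-positive rate erodes the persuasive power of a validation test can be made concrete with Bayes' rule. The prior belief and true-positive rate below are illustrative assumptions, not values from the abstract: the point is that passing a test with a 10% false-positive rate yields a likelihood ratio of at most ~10, which barely moves a skeptic who starts from a low prior.

```python
def posterior_valid(prior, tpr=0.95, fpr=0.10):
    """P(model is valid | model passed the test), by Bayes' rule.
    tpr: chance a valid model passes; fpr: chance an invalid one does."""
    p_pass = tpr * prior + fpr * (1.0 - prior)
    return tpr * prior / p_pass

# A skeptic starting at 10% belief ends up barely past even odds.
print(round(posterior_valid(0.10), 2))
```

    A stricter test (lower false-positive rate) raises the likelihood ratio and is what actually convinces the "rational skeptic" of the abstract.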

  18. Cloudless Atmospheres for Young Low-gravity Substellar Objects (United States)

    Tremblin, P.; Chabrier, G.; Baraffe, I.; Liu, Michael C.; Magnier, E. A.; Lagage, P.-O.; Alves de Oliveira, C.; Burgasser, A. J.; Amundsen, D. S.; Drummond, B.


    Atmospheric modeling of low-gravity (VL-G) young brown dwarfs remains challenging. Very thick clouds have been proposed to explain their extremely red near-infrared (NIR) spectra, but no cloud models provide a good fit to the data with a radius compatible with the evolutionary models for these objects. We show that cloudless atmospheres, assuming a temperature gradient reduction caused by fingering convection, provide a very good match to the observed VL-G NIR spectra. The sequence of extremely red colors in the NIR for atmospheres with effective temperatures from ∼2000 K down to ∼1200 K is very well reproduced with predicted radii typical of young low-gravity objects. Future observations with NIRSPEC and MIRI on the James Webb Space Telescope (JWST) will provide more constraints in the mid-infrared, helping to confirm or refute whether the NIR reddening is caused by fingering convection. We suggest that the presence or absence of clouds will be directly determined by the silicate absorption features that can be observed with MIRI. JWST will therefore be able to better characterize the atmosphere of these hot young brown dwarfs and their low-gravity exoplanet analogs.

  19. Transit detection limits for sub-stellar and terrestrial companions to white dwarfs (United States)

    Faedi, F.; West, R.; Burleigh, M. R.; Goad, M. R.; Hebb, L.


    The SuperWASP project is a ground-based ultra wide angle search for extra-solar planetary transits that has successfully detected 15 previously unknown planets in the last two years. We have used SuperWASP photometric data to investigate the transit characteristics of, and detection limits for, brown dwarfs, gas giants and terrestrial companions in orbit around white dwarfs. The relatively small size of a white dwarf host star (approximately 1 Earth radius) implies that any sub-stellar or gas giant companion will completely eclipse it, while terrestrial bodies smaller than the Moon will produce relatively large (> 1%) transits, detectable in good S/N light-curves. We performed extensive simulations using SuperWASP photometric data and found that for Gaussian random noise we are sensitive to companions as small as the Moon. Our sensitivity drops in the presence of covariant noise structure; nevertheless, Earth-size bodies remain readily detectable in relatively low S/N data. We searched for eclipses and transit signals in a sample of 174 WASP targets, resulting from a cross-correlation of the McCook & Sion catalogue and the SuperWASP data archive. This study found no evidence for sub-stellar or planetary companions in close orbits around our sample of white dwarfs.
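
    The geometry argument in the abstract is a one-line calculation: a central transit dims the star by the squared radius ratio, capped at a total eclipse when the transiting body is larger than the star. The white-dwarf radius below is a rough Earth-sized value assumed for illustration.

```python
def transit_depth(r_body_km, r_star_km):
    """Fractional flux drop for a central transit, capped at 1.0
    (total eclipse) when the body is larger than the star."""
    return min(1.0, (r_body_km / r_star_km) ** 2)

R_WD = 6400.0  # km; roughly Earth-sized white dwarf (assumption)

print(transit_depth(1737.0, R_WD))   # Moon-sized body: ~7%, well above 1%
print(transit_depth(71492.0, R_WD))  # Jupiter-sized body: total eclipse
```

    This is why even lunar-sized companions are in principle detectable against a white dwarf, while any gas giant or brown dwarf blots the star out entirely.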

  20. Constraining Substellar Magnetic Dynamos using Auroral Radio Emission (United States)

    Kao, Melodie; Hallinan, Gregg; Pineda, J. Sebastian; Escala, Ivanna; Burgasser, Adam J.; Stevenson, David J.


    An important outstanding problem in dynamo theory is understanding how magnetic fields are generated and sustained in fully convective stellar objects. A number of models for possible dynamo mechanisms in this regime have been proposed, but constraining data on magnetic field strengths and topologies across a wide range of mass, age, rotation rate, and temperature are sorely lacking, particularly in the brown dwarf regime. Detections of highly circularly polarized pulsed radio emission provide our only window into magnetic field measurements for objects in the ultracool brown dwarf regime. However, these detections are very rare; previous radio surveys encompassing ~60 L6-or-later targets have yielded only one detection. We have developed a selection strategy for biasing survey targets based on possible optical and infrared tracers of auroral activity. Using our selection strategy, we previously observed six late L and T dwarfs with the Jansky Very Large Array (VLA) and detected the presence of highly circularly polarized radio emission for five targets. Our initial detections at 4-8 GHz provided the most robust constraints on dynamo theory in this regime, confirming magnetic fields >2.5 kG. To further develop our understanding of magnetic fields in the ultracool brown dwarf mass regime bridging planets and stars, we present constraints on surface magnetic field strengths for two Y dwarfs as well as higher frequency observations of the previously detected L/T dwarfs, corresponding to ~3.6 kG fields. By carefully comparing magnetic field measurements derived from auroral radio emission to measurements derived from Zeeman broadening and Zeeman Doppler imaging, we provide tentative evidence that the dynamo operating in this mass regime may be inconsistent with values predicted by models currently in vogue. This suggests that parameters beyond convective flux may influence magnetic field generation in brown dwarfs.
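
    The field strengths quoted in the abstract follow from the electron-cyclotron maser relation: the emission emerges near the local cyclotron frequency, ν_c [MHz] ≈ 2.8 × B [G], so a detection at a given frequency implies at least ν/2.8 kG of field somewhere in the magnetosphere.

```python
def min_field_kG(freq_GHz):
    """Minimum magnetic field (kG) implied by electron-cyclotron maser
    emission detected at freq_GHz, using nu_c [MHz] = 2.8 * B [G]."""
    return freq_GHz / 2.8

print(round(min_field_kG(8.0), 1))   # 8 GHz detection -> ~2.9 kG
print(round(min_field_kG(10.0), 1))  # ~10 GHz detection -> ~3.6 kG
```

    The 4-8 GHz detections thus pin fields above ~2.5 kG, and pushing to ~10 GHz yields the ~3.6 kG values mentioned for the L/T dwarfs.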

  1. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker


    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, enabling guidance on test identification and specification as well as automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  2. A deep staring campaign in the σ Orionis cluster. Variability in substellar members (United States)

    Elliott, P.; Scholz, A.; Jayawardhana, R.; Eislöffel, J.; Hébrard, E. M.


    Context. The young star cluster near σ Orionis is one of the primary environments to study the properties of young brown dwarfs down to masses comparable to those of giant planets. Aims: Deep optical imaging is used to study time-domain properties of young brown dwarfs over typical rotational timescales and to search for new substellar and planetary-mass cluster members. Methods: We used the Visible Multi Object Spectrograph (VIMOS) at the Very Large Telescope (VLT) to monitor a 24'× 16' field in the I-band. We stared at the same area over a total integration time of 21 h, spanning three observing nights. Using the individual images from this run we investigated the photometric time series of nine substellar cluster members with masses from 10 to 60 MJup. The deep stacked image shows cluster members down to ≈5 MJup. We searched for new planetary-mass objects by combining our deep I-band photometry with public J-band magnitudes and by examining the nearby environment of known very low mass members for possible companions. Results: We find two brown dwarfs, with significantly variable, aperiodic light curves, both with masses around 50 MJup, one of which was previously unknown to be variable. The physical mechanism responsible for the observed variability is likely to be different for the two objects. The variability of the first object, a single-lined spectroscopic binary, is most likely linked to its accretion disc; the second may be caused by variable extinction by large grains. We find five new candidate members from the colour-magnitude diagram and three from a search for companions within 2000 au. We rule all eight sources out as potential members based on non-stellar shape and/or infrared colours. The I-band photometry is made available as a public dataset. Conclusions: We present two variable brown dwarfs. One is consistent with ongoing accretion, the other exhibits apparent transient variability without the presence of an accretion disc. Our analysis

  3. Formation Of Substellar Bodies In Cold Conditions : Gravitational Stability Of Fluids In A Phase Transition (United States)

    Füglistaler, Andreas


    This thesis shows that the physics of cold self-gravitating fluids such as dark molecular clouds is much richer than usually assumed. The segregation, in a gravitational field, of small grains into larger bodies such as comets and planetoids cannot be simulated with traditional hydrodynamical codes, but is possible with a super-molecular approach. Observations, linear and virial analyses, and computer simulations suggest the possibility of the formation of substellar H2 bodies through the combination of phase transition and gravity in cold regions, as fluids in a phase transition are gravitationally unstable, independent of the strength of the gravitational potential. The H2 phase transition is easily reached during plane-parallel collapses if the initial temperature is ≤ 15 K.
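
    The appeal to phase-transition instability makes sense against the classical alternative: the standard Jeans analysis at these temperatures still yields stellar-scale masses. As a sketch (the density n = 10⁴ cm⁻³ and mean molecular weight μ = 2.3 are assumed dense-core values, not figures from the abstract):

```python
import math

K_B = 1.381e-23   # J/K
G = 6.674e-11     # m^3 kg^-1 s^-2
M_H = 1.673e-27   # kg
MSUN = 1.989e30   # kg

def jeans_mass(T, n_cm3, mu=2.3):
    """Classical Jeans mass (kg) for gas at temperature T (K) and
    number density n_cm3 (cm^-3) with mean molecular weight mu:
    M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    rho = n_cm3 * 1e6 * mu * M_H            # kg/m^3
    return ((5.0 * K_B * T / (G * mu * M_H)) ** 1.5
            * (3.0 / (4.0 * math.pi * rho)) ** 0.5)

# At 15 K and 1e4 cm^-3 the Jeans mass is still ~10 Msun: isothermal
# collapse alone does not reach substellar masses, which is why a
# phase-transition instability is invoked.
print(jeans_mass(15.0, 1e4) / MSUN)
```

    Lower temperatures and much higher densities shrink the Jeans mass, but within ordinary cloud conditions it stays far above the substellar regime.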

  4. In-depth study of moderately young but extremely red, very dusty substellar companion HD 206893B (United States)

    Delorme, P.; Schmidt, T.; Bonnefoy, M.; Desidera, S.; Ginski, C.; Charnay, B.; Lazzoni, C.; Christiaens, V.; Messina, S.; D'Orazi, V.; Milli, J.; Schlieder, J. E.; Gratton, R.; Rodet, L.; Lagrange, A.-M.; Absil, O.; Vigan, A.; Galicher, R.; Hagelberg, J.; Bonavita, M.; Lavie, B.; Zurlo, A.; Olofsson, J.; Boccaletti, A.; Cantalloube, F.; Mouillet, D.; Chauvin, G.; Hambsch, F.-J.; Langlois, M.; Udry, S.; Henning, T.; Beuzit, J.-L.; Mordasini, C.; Lucas, P.; Marocco, F.; Biller, B.; Carson, J.; Cheetham, A.; Covino, E.; De Caprio, V.; Delboulbe, A.; Feldt, M.; Girard, J.; Hubin, N.; Maire, A.-L.; Pavlov, A.; Petit, C.; Rouan, D.; Roelfsema, R.; Wildi, F.


    Context. The substellar companion HD 206893b has recently been discovered by direct imaging of its disc-bearing host star with the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument. Aims: We investigate the atypical properties of the companion, which has the reddest near-infrared colours among all known substellar objects, either orbiting a star or isolated, and we provide a comprehensive characterisation of the host star-disc-companion system. Methods: We conducted a follow-up of the companion with adaptive optics imaging and spectro-imaging with SPHERE, and a multi-instrument follow-up of its host star. We obtain a R = 30 spectrum from 0.95 to 1.64 μm of the companion and additional photometry at 2.11 and 2.25 μm. We carried out extensive atmosphere model fitting for the companion and the host star in order to derive their age, mass, and metallicity. Results: We found no additional companion in the system in spite of exquisite observing conditions resulting in sensitivity to 6 MJup (2 MJup) at 0.5'' for an age of 300 Myr (50 Myr). We detect orbital motion over more than one year and characterise the possible Keplerian orbits. We constrain the age of the system to a minimum of 50 Myr and a maximum of 700 Myr, and determine that the host-star metallicity is nearly solar. The comparison of the companion spectrum and photometry to model atmospheres indicates that the companion is an extremely dusty late L dwarf, with an intermediate gravity (log g = 4.5-5.0) which is compatible with the independent age estimate of the system. Conclusions: Though our best fit corresponds to a brown dwarf of 15-30 MJup aged 100-300 Myr, our analysis is also compatible with a range of masses and ages, from a 12 MJup, 50 Myr planetary-mass object to a 50 MJup Hyades-age brown dwarf. Even though this companion is extremely red, we note that it is more probable that it has an intermediate gravity rather than the very low gravity that is often associated with

  5. Silo model tests with sand

    DEFF Research Database (Denmark)

    Munch-Andersen, Jørgen

    Tests have been carried out in a large silo model with Leighton Buzzard Sand. Normal pressures and shear stresses have been measured during tests carried out with different inlet and outlet geometries. The filling method is a very important parameter for the strength of the mass and thereby the pressures...


    Energy Technology Data Exchange (ETDEWEB)

    Daemgen, Sebastian [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5H 3H4 (Canada); Bonavita, Mariangela [The University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Jayawardhana, Ray [Physics and Astronomy, York University, Toronto, Ontario L3T 3R1 (Canada); Lafrenière, David [Department of Physics, University of Montréal, Montréal, QC (Canada); Janson, Markus, E-mail: [Department of Astronomy, Stockholm University, Stockholm (Sweden)


    We present results from a large, high-spatial-resolution near-infrared imaging search for stellar and sub-stellar companions in the Taurus-Auriga star-forming region. The sample covers 64 stars with masses between those of the most massive Taurus members at ∼3 M☉ and low-mass stars at ∼0.2 M☉. We detected 74 companion candidates, 34 of these reported for the first time. Twenty-five companions are likely physically bound, partly confirmed by follow-up observations. Four candidate companions are likely unrelated field stars. Assuming physical association with their host star, estimated companion masses are as low as ∼2 MJup. The inferred multiplicity frequency within our sensitivity limits between ∼10-1500 AU is 26.3 (+6.6/−4.9)%. Applying a completeness correction, 62% ± 14% of all Taurus stars between 0.7 and 1.4 M☉ appear to be multiple. Higher order multiples were found in 1.8 (+4.2/−1.5)% of the cases, in agreement with previous observations of the field. We estimate a sub-stellar companion frequency of ∼3.5%-8.8% within our sensitivity limits from the discovery of two likely bound and three other tentative very low-mass companions. This frequency appears to be in agreement with what is expected from the tail of the stellar companion mass ratio distribution, suggesting that stellar and brown dwarf companions share the same dominant formation mechanism. Further, we find evidence for possible evolution of binary parameters between two identified sub-populations in Taurus with ages of ∼2 Myr and ∼20 Myr, respectively.


    Energy Technology Data Exchange (ETDEWEB)

    Gettel, S.; Wolszczan, A. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Niedzielski, A.; Nowak, G.; Adamow, M.; Zielinski, P.; Maciejewski, G., E-mail:, E-mail:, E-mail: [Torun Center for Astronomy, Nicolaus Copernicus University, ul. Gagarina 11, 87-100 Torun (Poland)


    We present the discovery of substellar-mass companions to three giant stars by the ongoing Penn State-Torun Planet Search conducted with the 9.2 m Hobby-Eberly Telescope. The most massive of the three stars, the K2 giant HD 240237, has a 5.3 MJup minimum-mass companion orbiting the star with a 746 day period. The K0 giant BD +48 738 is orbited by a ≥0.91 MJup planet which has a period of 393 days and shows a nonlinear, long-term radial velocity (RV) trend that indicates the presence of another, more distant companion, which may have a substellar mass or be a low-mass star. The K2 giant HD 96127 has a ≥4.0 MJup companion in a 647 day orbit around the star. The two K2 giants exhibit significant RV noise that complicates the detection of low-amplitude, periodic variations in the data. If the noise component of the observed RV variations is due to solar-type oscillations, we show, using all the published data for the substellar companions to giants, that its amplitude is anti-correlated with stellar metallicity.

  8. Exploring the brown dwarf desert: new substellar companions from the SDSS-III MARVELS survey (United States)

    Grieves, Nolan; Ge, Jian; Thomas, Neil; Ma, Bo; Sithajan, Sirinrat; Ghezzi, Luan; Kimock, Ben; Willis, Kevin; De Lee, Nathan; Lee, Brian; Fleming, Scott W.; Agol, Eric; Troup, Nicholas; Paegert, Martin; Schneider, Donald P.; Stassun, Keivan; Varosi, Frank; Zhao, Bo; Jian, Liu; Li, Rui; Porto de Mello, Gustavo F.; Bizyaev, Dmitry; Pan, Kaike; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Santiago, Basílio X.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; del Peloso, E. F.


    Planet searches using the radial velocity technique show a paucity of companions to solar-type stars within ˜5 au in the mass range of ˜10-80 MJup. This deficit, known as the brown dwarf desert, currently has no conclusive explanation. New substellar companions in this region help assess the reality of the desert and provide insight into the formation and evolution of these objects. Here, we present 10 new brown dwarf and 2 low-mass stellar companion candidates around solar-type stars from the Multi-object APO Radial Velocity Exoplanet Large-Area Survey (MARVELS) of the Sloan Digital Sky Survey III. These companions were selected from processed MARVELS data using the latest University of Florida Two Dimensional pipeline, which shows significant improvement and reduction of systematic errors over previous pipelines. The 10 brown dwarf companions range in mass from ˜13 to 76 MJup and have orbital radii of less than 1 au. The two stellar companions have minimum masses of ˜98 and 100 MJup. The host stars of the MARVELS brown dwarf sample have a mean metallicity of [Fe/H] = 0.03 ± 0.08 dex. Given our stellar sample, we estimate the brown dwarf occurrence rate around solar-type stars with periods less than ˜300 d to be ˜0.56 per cent.
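The minimum masses quoted in RV surveys like this follow from the orbital solution via the binary mass function. A minimal sketch of the calculation, assuming the companion is much lighter than its host (adequate for planets and light brown dwarfs); the constants and the Jupiter sanity-check values are standard, not taken from the survey:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_JUP = 1.898e27       # Jupiter mass, kg

def min_mass_mjup(K, P_days, M_star_msun, e=0.0):
    """Minimum companion mass m*sin(i), in Jupiter masses, from the
    RV semi-amplitude K (m/s), orbital period, and host-star mass,
    assuming the companion is much lighter than the star."""
    P = P_days * 86400.0
    M = M_star_msun * M_SUN
    msini = (K * (P / (2.0 * math.pi * G)) ** (1.0 / 3.0)
             * M ** (2.0 / 3.0) * math.sqrt(1.0 - e * e))
    return msini / M_JUP

# sanity check: Jupiter induces K ~ 12.5 m/s with P ~ 4333 d on the Sun
jup = min_mass_mjup(12.5, 4332.6, 1.0)   # close to 1 MJup
```

Eccentric orbits at the same K and P imply a smaller minimum mass through the sqrt(1 − e²) factor.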

  9. Model-Based Testing of Probabilistic Systems

    NARCIS (Netherlands)

    Gerhold, Marcus; Stoelinga, Mariëlle Ida Antoinette; Stevens, Perdita; Wasowski, Andzej

    This paper presents a model-based testing framework for probabilistic systems. We provide algorithms to generate, execute and evaluate test cases from a probabilistic requirements model. In doing so, we connect ioco-theory for model-based testing and statistical hypothesis testing: our ioco-style
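The link between model-based test verdicts and statistical hypothesis testing can be illustrated with a toy frequency check: execute the implementation many times and compare observed output counts against the probabilities the requirements model prescribes. A minimal sketch; the model, counts, and critical value below are illustrative, not taken from the paper:

```python
from collections import Counter

def chi_square_stat(observed, expected_probs, n):
    """Pearson chi-square statistic comparing observed output counts
    against the probabilities a requirements model prescribes."""
    return sum((observed.get(a, 0) - p * n) ** 2 / (p * n)
               for a, p in expected_probs.items())

# hypothetical model: the SUT should output "heads"/"tails" equally often
obs = Counter({"heads": 5_100, "tails": 4_900})
stat = chi_square_stat(obs, {"heads": 0.5, "tails": 0.5}, 10_000)
# one degree of freedom: reject at the 5% level if stat > 3.841
verdict = "fail" if stat > 3.841 else "pass"   # stat = 4.0 -> "fail"
```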

  10. 46 CFR 154.431 - Model test. (United States)


    ...(c). (b) Analyzed data of a model test for the primary and secondary barrier of the membrane tank... Model test. (a) The primary and secondary barrier of a membrane tank, including the corners and joints...

  11. Testing linear models for ability parameters in item response models

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Hendrawan, I.


    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum
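Of the three tests outlined, the likelihood ratio test is the easiest to sketch: fit the full and the restricted model and compare fits. The simplified Gaussian-regression version below (not the marginal-maximum-likelihood IRT setting of the paper) uses the identity that the LR statistic reduces to n·log(RSS_reduced/RSS_full); all data are simulated:

```python
import numpy as np

def lr_stat(y, X_full, X_reduced):
    """Likelihood-ratio statistic for nested Gaussian linear models,
    n * log(RSS_reduced / RSS_full); asymptotically chi-square with
    as many degrees of freedom as restrictions."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    return len(y) * np.log(rss(X_reduced) / rss(X_full))

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)     # true slope is 2
X_full = np.column_stack([np.ones(n), x])
X_red = np.ones((n, 1))                    # restriction: slope = 0
stat = lr_stat(y, X_full, X_red)           # large -> reject the restriction
```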

  12. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir


    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  13. Vehicle rollover sensor test modeling

    NARCIS (Netherlands)

    McCoy, R.W.; Chou, C.C.; Velde, R. van de; Twisk, D.; Schie, C. van


    A computational model of a mid-size sport utility vehicle was developed using MADYMO. The model includes a detailed description of the suspension system and tire characteristics that incorporated the Delft-Tyre magic formula description. The model was correlated by simulating a vehicle suspension

  14. Model-Based Testing: The New Revolution in Software Testing

    Directory of Open Access Journals (Sweden)



    Full Text Available The efforts spent on testing are enormous due to the continuing quest for better software quality and the ever-growing complexity of software systems. The situation is aggravated by the fact that the complexity of testing tends to grow faster than the complexity of the systems being tested, in the worst case even exponentially. Whereas development and construction methods for software allow the building of ever larger and more complex systems, there is a real danger that testing methods cannot keep pace with construction, and hence these new systems cannot be tested sufficiently fast and thoroughly. This may seriously hamper the development of future generations of software systems. One of the new technologies to meet the challenges imposed on software testing is model-based testing. Models can be utilized in many ways throughout the product life-cycle, including: improved quality of specifications, code generation, reliability analysis, and test generation. This paper focuses on the testing benefits of MBT methods, reviews some of the historical challenges that held model-based testing back, and presents solutions that can overcome these challenges.


    Energy Technology Data Exchange (ETDEWEB)



    This Test Plan describes the testing and chemical analyses for release rate studies on tank residual samples collected following the retrieval of waste from the tank. This work will provide the data required to develop a contaminant release model for the tank residuals from both sludge and salt-cake single-shell tanks. The data are intended for use in long-term performance assessment and conceptual model development.

  16. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.


    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
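The item-selection step can be illustrated with greedy maximum-information selection under a 2PL response model; the shadow-test machinery the paper adds to enforce content constraints is omitted here, and the item pool is hypothetical:

```python
import math

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item (discrimination a,
    difficulty b) at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_item(theta_hat, pool, administered):
    """Greedy maximum-information item selection at the current
    ability estimate; content constraints are not modeled."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *pool[i]))

pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 0.2), (2.0, 1.5)]  # (a, b) pairs
first = pick_item(0.1, pool, administered=set())
```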

  17. Atomic Action Refinement in Model Based Testing

    NARCIS (Netherlands)

    van der Bijl, H.M.; Rensink, Arend; Tretmans, G.J.


    In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in 1) test cases that are not executable, because their actions are too abstract (the implementation

  18. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George


    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  19. Joint test for structural model specification


    Serkan YÜKSEL


    The aim of this thesis is to propose a test statistic that can test for the true structural model in time series. The main concern of the thesis is to suggest a test statistic which has a joint null of unit root and no structural break (the difference-stationary model). When the joint null hypothesis is rejected, the source of deviation from the null model may be a structural break or (and) stationarity. Sources of the deviation correspond to different structural...

  20. Used Fuel Testing Transportation Model

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Steven B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Best, Ralph E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Maheras, Steven J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jensen, Philip J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); England, Jeffery L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); LeDuc, Dan [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)


    This report identifies shipping packages/casks that might be used by the Used Nuclear Fuel Disposition Campaign Program (UFDC) to ship fuel rods and pieces of fuel rods taken from high-burnup used nuclear fuel (UNF) assemblies to and between research facilities for purposes of evaluation and testing. Also identified are the actions that would need to be taken, if any, to obtain U.S. Nuclear Regulatory (NRC) or other regulatory authority approval to use each of the packages and/or shipping casks for this purpose.

  1. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen


    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
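lmerTest itself is an R package, but the moment-matching idea behind Satterthwaite's approximation is easy to show in its simplest form: the Welch-Satterthwaite effective degrees of freedom for a two-sample t statistic with unequal variances. This is a sketch of the principle, not the package's mixed-model computation:

```python
def satterthwaite_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite effective degrees of freedom for a
    two-sample t statistic: match the first two moments of the
    variance estimate to a scaled chi-square distribution."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
```

With equal variances and group sizes the formula recovers the classical pooled df (n1 + n2 − 2); with very unequal variances it shrinks toward the smaller group's df.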

  2. Linear Logistic Test Modeling with R (United States)

    Baghaei, Purya; Kubinger, Klaus D.


    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
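The linear constraint the LLTM places on the Rasch model, item difficulty b_i = Σ_k q_ik η_k, can be sketched directly; the design matrix and basic parameters below are hypothetical:

```python
import math

def lltm_difficulties(Q, eta):
    """LLTM constraint: item difficulty b_i = sum_k q_ik * eta_k,
    where Q is the design matrix and eta the basic parameters."""
    return [sum(q * e for q, e in zip(row, eta)) for row in Q]

def rasch_prob(theta, b):
    """Rasch probability of a correct response to an item of
    difficulty b by a person of ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# hypothetical design: 3 items decomposed into 2 cognitive operations
Q = [[1, 0], [1, 1], [0, 2]]     # operation counts per item
eta = [0.5, -0.3]                # basic (operation) difficulties
b = lltm_difficulties(Q, eta)    # item difficulties [0.5, 0.2, -0.6]
```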

  3. Testing predictive performance of binary choice models

    NARCIS (Netherlands)

    A.C.D. Donkers (Bas); B. Melenberg (Bertrand)


    Binary choice models occur frequently in economic modeling. A measure of the predictive performance of binary choice models that is often reported is the hit rate of a model. This paper develops a test for the outperformance of a predictor for binary outcomes over a naive prediction
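The hit rate, and the naive benchmark a predictor must beat, can be sketched as follows; the paper's formal test additionally accounts for sampling error, which this toy comparison ignores, and the data are hypothetical:

```python
def hit_rate(y_true, y_pred):
    """Fraction of binary outcomes predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def outperforms_naive(y_true, y_pred):
    """Does the predictor beat always guessing the majority outcome?
    (No correction for sampling error in this sketch.)"""
    majority = int(2 * sum(y_true) >= len(y_true))
    return hit_rate(y_true, y_pred) > hit_rate(y_true, [majority] * len(y_true))

y = [1, 1, 1, 0, 1, 0, 1, 1]       # hypothetical outcomes
preds = [1, 1, 1, 0, 1, 1, 1, 1]   # model predictions (7 of 8 correct)
```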

  4. Testing models for structure formation. (United States)

    Kaiser, N.

    The author reviews a number of tests of theories for structure formation. Large-scale flows and IRAS galaxies indicate a high density parameter Ω ≅ 1, in accord with inflationary predictions, but it is not clear how this meshes with the uniformly low values obtained from virial analysis on scales ≲ 1 Mpc. Gravitational distortion of faint galaxies behind clusters allows one to construct maps of the mass surface density, and this should shed some light on the large vs. small-scale Ω discrepancy. Power spectrum analysis reveals too red a spectrum on scales λ ∼ 10-100 h⁻¹ Mpc, but the Gaussian fluctuation hypothesis appears to be in good shape. These results suggest that the problem for CDM lies not in the very early universe but in the assumed matter content. The power spectrum problem can be solved by invoking a cocktail of mixed dark matter. However, if gravitational lensing fails to reveal extended dark mass around clusters then we may be forced to explore more radical possibilities for the dark matter.

  5. Biglan Model Test Based on Institutional Diversity. (United States)

    Roskens, Ronald W.; Creswell, John W.

    The Biglan model, a theoretical framework for empirically examining the differences among subject areas, classifies according to three dimensions: adherence to common set of paradigms (hard or soft), application orientation (pure or applied), and emphasis on living systems (life or nonlife). Tests of the model are reviewed, and a further test is…

  6. Multivariate Model for Test Response Analysis

    NARCIS (Netherlands)

    Krishnan, Shaji; Krishnan, Shaji; Kerkhoff, Hans G.


    A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage
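One simple way to reduce test-points while retaining coverage of the response variance is to rank them by their loadings on the leading principal components of the measured responses. This is a crude stand-in for the paper's multivariate model, with hypothetical data:

```python
import numpy as np

def select_test_points(R, k):
    """Rank test-points by their squared loadings on the top-k
    principal components of the response matrix R
    (rows = devices, columns = test-points)."""
    Rc = R - R.mean(axis=0)                    # center each test-point
    _, _, Vt = np.linalg.svd(Rc, full_matrices=False)
    scores = (Vt[:k] ** 2).sum(axis=0)         # per-column importance
    return np.argsort(scores)[::-1]            # most informative first

# hypothetical data: point 0 varies strongly, point 1 is nearly constant
R = np.array([[0.0, 0.00], [1.0, 0.01], [2.0, 0.02], [3.0, 0.03]])
ranking = select_test_points(R, k=1)
```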


    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders


    We present novel theory for testing for reduction of GARCH-X type models with an exogenous (X) covariate to standard GARCH type models. To deal with the problems of potential nuisance parameters on the boundary of the parameter space as well as lack of identification under the null, we exploit...... a noticeable property of specific zero-entries in the inverse information of the GARCH-X type models. Specifically, we consider sequential testing based on two likelihood ratio tests and, as demonstrated, the structure of the inverse information implies that the proposed test neither depends on whether...

  8. Hydraulic Model Tests on Modified Wave Dragon

    DEFF Research Database (Denmark)

    Hald, Tue; Lynggaard, Jakob

    are found in Hald and Lynggaard (2001). Model tests and reconstruction are carried out during the phase 3 project: ”Wave Dragon. Reconstruction of an existing model in scale 1:50 and sequentiel tests of changes to the model geometry and mass distribution parameters” sponsored by the Danish Energy Agency...... (DEA) wave energy programme. The tests will establish a well documented basis for the development of a 1:4.5 scale prototype planned for testing in Nissum Bredning, a sea inlet on the Danish West Coast....

  9. Testing Expected Shortfall Models for Derivative Positions

    NARCIS (Netherlands)

    Kerkhof, F.L.J.; Melenberg, B.; Schumacher, J.M.


    In this paper we test several risk management models for computing expected shortfall for one-period hedge errors of hedged derivatives positions.Contrary to value-at-risk, expected shortfall cannot be tested using the standard binomial test, since we need information of the distribution in the
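The quantity being backtested can be sketched as the empirical expected shortfall: the average of the worst α-fraction of the loss sample. Unlike a VaR exceedance count, it depends on the magnitudes in the tail, which is why a standard binomial test does not apply. The data here are illustrative:

```python
def expected_shortfall(losses, alpha):
    """Empirical expected shortfall at level alpha: the average of
    the worst alpha-fraction of the loss sample (the losses at and
    beyond the empirical VaR)."""
    worst = sorted(losses, reverse=True)
    k = max(1, int(len(worst) * alpha))        # size of the tail
    return sum(worst[:k]) / k

es = expected_shortfall(list(range(1, 11)), 0.2)   # mean of {10, 9} = 9.5
```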

  10. The Couplex test cases: models and lessons

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeat, A. [Lyon-1 Univ., MCS, 69 - Villeurbanne (France); Kern, M. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Schumacher, S.; Talandier, J. [Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA), 92 - Chatenay Malabry (France)


    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  11. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan


    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases......, scenarios, behavior, architecture, etc. In this paper we present a method that utilizes the formalism of timed automatons with formal and statistical model checking techniques to apply TD-MBSE to the modeling of system architecture and behavior. The results obtained from applying it to an industrial case...

  12. Collider Tests of the Little Higgs Model

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, Aaron T


    The little Higgs model provides an alternative to traditional candidates for new physics at the TeV scale. The new heavy gauge bosons predicted by this model should be observable at the Large Hadron Collider (LHC). We discuss how the LHC experiments could test the little Higgs model by studying the production and decay of these particles.

  13. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J


    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  14. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing (United States)

    Nance, Donald K.; Liever, Peter A.


    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, its subsequent rarefaction, and its propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantify the effectiveness of the SLS IOP suppression system and improve the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  15. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling (United States)

    Xie, Qin; Andrews, Stephen


    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  16. Modeling Reliability Growth in Accelerated Stress Testing (United States)


    ...and E. Elsayed, "A general accelerated life model for step-stress testing," IIE Transactions, vol. 37, no. 11, pp. 1059-1069, 2005. [57] D. Nicholls... W. Nelson, "Accelerated life testing - step-stress models and data analyses," IEEE Transactions on Reliability, vol. 29, pp. 103-108, 1980. [19] G... "..." IEEE Transactions on Reliability, vols. R-26, no. 5, pp. 348-351, 1977. [35] D. E. Olsen, "Estimating reliability growth," IEEE Transactions on...

  17. Modelling ties in the sign test. (United States)

    Rayner, J C; Best, D J


    If ties occur in the sign test, the procedure recommended by Coakley and Heise (1996, Biometrics 52, 1242-1251) is the asymptotic uniformly most powerful nonrandomised test due to Putter (1955, Annals of Mathematical Statistics 26, 368-386). It may be shown that this is a consequence of how the probability of a tie is modelled. Other models with different optimal procedures can be constructed.
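The effect of the tie model can be sketched with an exact two-sided sign test in which ties are either discarded (Putter's procedure, recommended by Coakley and Heise) or split evenly between the two signs (one competing model); the counts are illustrative:

```python
from math import comb

def sign_test_p(n_pos, n_neg, n_ties, drop_ties=True):
    """Two-sided exact sign test under two models of ties:
    discard them (Putter's rule) or split them evenly between
    the two signs."""
    if drop_ties:
        n, k = n_pos + n_neg, n_pos
    else:
        n = n_pos + n_neg + n_ties
        k = n_pos + n_ties // 2
    k = min(k, n - k)                          # fold to the smaller tail
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

p_drop = sign_test_p(8, 2, 4)                    # ties discarded
p_split = sign_test_p(8, 2, 4, drop_ties=False)  # ties split half-and-half
```

Splitting ties pulls the split toward 50:50 and so weakens the evidence, giving the larger p value.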

  18. Torsion vehicle model test for automotive vehicle (United States)

    Nor, M. K. Mohd; Ho, C. S.; Ma'at, N.


    A torsion vehicle model test of a Simple Structural Surfaces (SSS) model for an automotive sedan is proposed in this paper to demonstrate the importance of providing a continuous load path within the vehicle structure. The proposed approach is relatively easy to understand compared to the Finite Element Method (FEM). The results prove that the proposed vehicle model test is capable of showing that satisfactory load paths give sufficient structural stiffness within the vehicle structure. It is clearly observed that the global torsion stiffness reduces significantly when only one panel is removed from the complete SSS model. The results also give a good agreement with the theoretical hypothesis, as the structure is less stiff in torsion in an open-section condition. The SSS model and the corresponding torsion test are obviously useful to give an overview of vehicle structural integrity. It can be potentially integrated with FEM to speed up the design process of automotive vehicles.

  19. Sample Size Determination for Rasch Model Tests (United States)

    Draxler, Clemens


    This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
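For a one-sided z test, the sample size that fixes both error probabilities follows from the textbook normal-approximation formula n = ((z_{1−α} + z_{1−β})/δ)² for a standardized effect δ. The paper works with exact Rasch-model tests; this sketch is only the classical analogue:

```python
import math
from statistics import NormalDist

def sample_size(delta, alpha=0.05, beta=0.05):
    """Smallest n giving Type I error alpha and Type II error beta
    for a one-sided z test of a standardized effect delta."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha) + z(1 - beta)) / delta) ** 2)
```

For example, detecting δ = 1.0 with α = 0.05 and 80% power gives the familiar n = 7 of the (1.645 + 0.84)² rule of thumb.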

  20. Modeling answer changes on test items

    NARCIS (Netherlands)

    van der Linden, Willem J.


    The probability of test takers changing answers upon review of their initial choices is modeled. The primary purpose of the model is to check erasures on answer sheets recorded by an optical scanner for numbers and patterns that may be indicative of irregular behavior, such as teachers or school

  1. Modelling and Testing of Friction in Forging

    DEFF Research Database (Denmark)

    Bay, Niels


    Knowledge about friction is still limited in forging. The theoretical models applied presently for process analysis are not satisfactory compared to the advanced and detailed studies possible to carry out by plastic FEM analyses and more refined models have to be based on experimental testing...

  2. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard


    . This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  3. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.


    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  4. Molecular Sieve Bench Testing and Computer Modeling (United States)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.


    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters that influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel is then applied to test data. A more complex model, a non-Darcian (two-dimensional) model, has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.

  5. A 'Turing' Test for Landscape Evolution Models (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.


    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answer could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing dynamic behaviour of an LEM over realistically long timescales. However challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  6. Drug progression model: a social control test. (United States)

    Marcos, A C; Bahr, S J


    A social control drug progression model was delineated and tested using a sample of 2,626 high school students from the southwestern United States. Along with the social control constructs of parental attachment, educational attachment, religious attachment, and conventional values, we incorporated alcohol, cigarette, and marijuana use into the model as intervening variables. The model explains 39% of the variation in self-reported amphetamine use and 24% of the variation in "hard drug" use (cocaine, heroin, LSD, and PCP). The findings suggest that the integration of social control theory and drug progression improves the predictive power of the model of adolescent drug use.

  7. Modeling and adapting production environmental stress testing


    Wilson, Simon


    This study describes the production sampling environmental stress test (PSEST) process and the offline analysis conducted. Some of the key characteristics and parameters of the test are outlined. The analytical process is based on two types of regression model, each of which links a dependent variable (the log of time to failure in each dwell, or the log of the number failed in each dwell) to independent variables such as temperature and age. These two types of regres...
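
    The regression structure described above (log of time to failure against stress variables such as temperature and age) can be sketched as an ordinary least-squares fit. The data and coefficients below are synthetic, chosen only to illustrate the model form, and are not the study's estimates.

```python
import numpy as np

# Sketch: OLS of log(time-to-failure) on dwell temperature and unit age.
# Synthetic data; the "true" coefficients (8.0, -0.05, -0.2) are illustrative.
rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(20.0, 80.0, n)      # dwell temperature, deg C
age = rng.uniform(0.0, 5.0, n)         # unit age, years
log_ttf = 8.0 - 0.05 * temp - 0.2 * age + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), temp, age])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, log_ttf, rcond=None)
```

    A log-number-failed model would have the same design matrix with a different response; the fitted coefficients then drive how dwell times are adapted.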

  8. Experimental Concepts for Testing Seismic Hazard Models (United States)

    Marzocchi, W.; Jordan, T. H.


    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.
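
    The notion of exchangeability invoked above can be illustrated with a minimal sketch: under a Bernoulli model whose success probability is mixed over a Beta distribution, the joint probability of a binary sequence depends only on its number of successes, so every reordering of the data has the same probability. The Beta parameters a and b here are illustrative, not part of any PSHA model.

```python
import math
from itertools import permutations

# Exchangeability sketch: p ~ Beta(a, b), outcomes Bernoulli(p) given p.
# The marginal joint probability of a sequence is B(a+k, b+n-k) / B(a, b),
# which depends only on the success count k, not on the ordering.

def seq_probability(seq, a=2.0, b=3.0):
    k = sum(seq)
    n = len(seq)
    beta = lambda x, y: math.gamma(x) * math.gamma(y) / math.gamma(x + y)
    return beta(a + k, b + n - k) / beta(a, b)

base = (1, 0, 0, 1, 0)
# All 5!/(2!3!) distinct orderings share one probability value.
probs = {seq_probability(p) for p in permutations(base)}
```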

  9. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang


    This book presents a scientific discussion of the state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for data conversion design, modeling and testing is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  10. Damage modeling in Small Punch Test specimens

    DEFF Research Database (Denmark)

    Martínez Pañeda, Emilio; Cuesta, I.I.; Peñuelas, I.


    Ductile damage modeling within the Small Punch Test (SPT) is extensively investigated. The capabilities of the SPT to reliably estimate fracture and damage properties are thoroughly discussed, and emphasis is placed on the use of notched specimens. First, different notch profiles are analyzed… Furthermore, Gurson-Tvergaard-Needleman model predictions from a top-down approach are employed to gain insight into the mechanisms governing crack initiation and subsequent propagation in small punch experiments. An accurate assessment of micromechanical toughness parameters from the SPT…

  11. TRIDENT: an Infrared Differential Imaging Camera Optimized for the Detection of Methanated Substellar Companions

    Energy Technology Data Exchange (ETDEWEB)

    Marois, C; Doyon, R; Nadeau, D; Racine, R; Riopel, M; Vallee, P; Lafreniere, D


    A near-infrared camera in use at the Canada-France-Hawaii Telescope (CFHT) and at the 1.6-m telescope of the Observatoire du Mont-Megantic is described. The camera is based on a Hawaii-1 1024 x 1024 HgCdTe array detector. Its main feature is to acquire three simultaneous images at three wavelengths across the methane absorption bandhead at 1.6 µm, enabling, in theory, an accurate subtraction of the stellar point spread function (PSF) and the detection of faint close methanated companions. The instrument has no coronagraph and features fast data acquisition, yielding high observing efficiency on bright stars. The performance of the instrument is described, and it is illustrated by laboratory tests and CFHT observations of the nearby stars GL526, ν And and χ And. TRIDENT can detect (6σ) a methanated companion with ΔH = 9.5 at 0.5'' separation from the star in one hour of observing time. Non-common path aberrations and amplitude modulation differences between the three optical paths are likely to be the limiting factors preventing further PSF attenuation. Instrument rotation and reference star subtraction improve the detection limit by factors of 2 and 4, respectively. A PSF noise attenuation model is presented to estimate the effect of non-common path wavefront differences on PSF subtraction performance.
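
    The simultaneous-difference principle behind this kind of instrument can be illustrated with a toy one-dimensional sketch using synthetic Gaussian PSFs (not instrument data): the stellar PSF is common to both bands and cancels in the difference, while a methanated companion, faint inside the absorption band, survives.

```python
import numpy as np

# Toy 1-D spectral difference imaging: star appears identically in both bands;
# the companion is suppressed in the CH4 absorption band. All values synthetic.
x = np.linspace(-5.0, 5.0, 1001)

def psf(center, amp):
    return amp * np.exp(-0.5 * ((x - center) / 0.5) ** 2)

star_in, star_out = psf(0.0, 1000.0), psf(0.0, 1000.0)  # star: same in both bands
comp_in, comp_out = psf(2.0, 0.1), psf(2.0, 1.0)        # companion: absorbed in-band

band_in = star_in + comp_in      # image inside the methane band
band_out = star_out + comp_out   # image outside the band
diff = band_out - band_in        # stellar PSF cancels; companion remains
```

    In a real instrument the cancellation is imperfect, which is exactly the non-common-path limitation the abstract describes.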

  12. Testing spatial heterogeneity with stock assessment models

    DEFF Research Database (Denmark)

    Jardim, Ernesto; Eero, Margit; Silva, Alexandra


    This paper describes a methodology that combines meta-population theory and stock assessment models to gain insights about spatial heterogeneity of the meta-population in an operational time frame. The methodology was tested with stochastic simulations for different degrees of connectivity between…

  13. Testing mechanistic models of growth in insects. (United States)

    Maino, James L; Kearney, Michael R


    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg(-1)) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. © 2015 The Author(s).

  14. Testing mechanistic models of growth in insects (United States)

    Maino, James L.; Kearney, Michael R.


    Insects are typified by their small size, large numbers, impressive reproductive output and rapid growth. However, insect growth is not simply rapid; rather, insects follow a qualitatively distinct trajectory to many other animals. Here we present a mechanistic growth model for insects and show that increasing specific assimilation during the growth phase can explain the near-exponential growth trajectory of insects. The presented model is tested against growth data on 50 insects, and compared against other mechanistic growth models. Unlike the other mechanistic models, our growth model predicts energy reserves per biomass to increase with age, which implies a higher production efficiency and energy density of biomass in later instars. These predictions are tested against data compiled from the literature whereby it is confirmed that insects increase their production efficiency (by 24 percentage points) and energy density (by 4 J mg−1) between hatching and the attainment of full size. The model suggests that insects achieve greater production efficiencies and enhanced growth rates by increasing specific assimilation and increasing energy reserves per biomass, which are less costly to maintain than structural biomass. Our findings illustrate how the explanatory and predictive power of mechanistic growth models comes from their grounding in underlying biological processes. PMID:26609084

  15. Parametric Testing of Launch Vehicle FDDR Models (United States)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar


    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
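
    The combination of n-factor combinatorial exploration with Monte Carlo sampling can be sketched as follows for the 2-factor (pairwise) case: every pair of parameter settings is pinned in at least one test case, and the remaining parameters of each case are filled in randomly. The parameter names and levels below are invented for illustration and are not those of the ERIS model.

```python
import itertools
import random

# Pairwise (2-factor) test-case generation with Monte Carlo fill-in.
# Parameter names/levels are hypothetical placeholders.
random.seed(0)
params = {
    "thrust_bias": [-1, 0, 1],
    "sensor_fault": [False, True],
    "wind_gust": ["none", "low", "high"],
}
names = list(params)

cases = []
for a, b in itertools.combinations(names, 2):
    for va, vb in itertools.product(params[a], params[b]):
        case = {n: random.choice(params[n]) for n in names}  # Monte Carlo fill
        case[a], case[b] = va, vb                            # pin the target pair
        cases.append(case)
```

    This naive scheme emits one case per pair (21 here); production tools additionally merge cases so that one run covers many pairs at once.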

  16. Temperature Buffer Test. Final THM modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)


    The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims to improve the understanding and modelling of the thermo-hydro-mechanical behaviour of buffers made of swelling clay submitted to high temperatures (over 100 °C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to

  17. Business model stress testing : A practical approach to test the robustness of a business model

    NARCIS (Netherlands)

    Haaker, T.I.; Bouwman, W.A.G.A.; Janssen, W; de Reuver, G.A.

    Business models and business model innovation are increasingly gaining attention in practice as well as in academic literature. However, the robustness of business models (BM) is seldom tested vis-à-vis the fast and unpredictable changes in digital technologies, regulation and markets. The

  18. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)



    Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but sometimes unpredictable, aerodynamically caused model behavior can cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion on aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists in a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that are the result of unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
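
    The fast/slow detector pairing described above can be sketched in software terms (the actual protection circuit is analog; the thresholds and settling window here are illustrative): a fast comparator trips on any excursion past the hard limit, and a slower stage confirms the stop only if the load is still above the limit after a settling window.

```python
# Two-stage overload logic sketch. Thresholds and window are hypothetical.

def overload_decision(samples, hard_limit, settle_window):
    """Return 'stop' for sustained overloads, 'transient' for decaying ones."""
    for i, load in enumerate(samples):
        if abs(load) > hard_limit:                      # fast detector trips
            tail = samples[i + settle_window:i + settle_window + 1]
            if tail and abs(tail[0]) > hard_limit:      # slow confirmation
                return "stop"
            return "transient"                          # load decayed in time
    return "ok"

decaying = [0.2, 1.5, 0.9, 0.4, 0.2, 0.1]   # startup transient, dies out
sustained = [0.2, 1.5, 1.6, 1.7, 1.6, 1.6]  # aeroelastic-like, persists
```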

  19. Mass Appraisal Modelling in Minsk: Testing different Models Location sensitive

    Directory of Open Access Journals (Sweden)

    Maurizio D’Amato


    Mass appraisal is the valuation of large quantities of properties. This automatic valuation procedure gives the opportunity to reach a single point estimate (The Appraisal of Real Estate, 13th Edition). The work tests different location-sensitive methodologies on a sample of 600 residential properties in the city of Minsk in Belarus. This is the first application of mass appraisal modelling in Belarus. The empirical application compares a location-blind model with two Location Value Response Surface models (O'Connor, 1982), the former based on the detection of value influence centers, the latter based on error surface modelling.

  20. Designing healthy communities: Testing the walkability model


    Zuniga-Teran, Adriana; Orr, Barron; Gimblett, Randy; Chalfoun, Nader; Marsh, Stuart; Guertin, David; Going, Scott


    Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environme...

  1. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen


    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed… as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d′ and its standard error become the "usual" output of the statistical analysis. The d′ for the monadic A-NOT A method is shown to appear as a standard… linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R-package sensR. This includes the basic power and sample size calculations for these four discrimination tests…
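
    As a concrete instance of the Thurstonian link, the 2-AFC test relates the probability of a correct response to the sensory difference via pc = Φ(d′/√2), so d′ = √2 · Φ⁻¹(pc). The sketch below implements only this standard 2-AFC relation (the triangle and duo-trio links differ), and is not the sensR implementation.

```python
from statistics import NormalDist

# Thurstonian link for the 2-AFC test: pc = Phi(d'/sqrt(2)).
# Inverting gives d' = sqrt(2) * Phi^{-1}(pc).

def dprime_2afc(p_correct):
    """Point estimate of d' from a 2-AFC proportion correct (0.5 < pc < 1)."""
    return 2 ** 0.5 * NormalDist().inv_cdf(p_correct)

# e.g. 75 of 100 correct responses:
d = dprime_2afc(0.75)
```

    Framed as a GLM, the same estimate arises from a probit regression on the binary responses; the GLM route additionally delivers the standard error of d′ directly.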

  2. Inverse hydrochemical models of aqueous extracts tests

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, L.; Samper, J.; Montenegro, L.


    The aqueous extract test is a laboratory technique commonly used to measure the amount of soluble salts of a soil sample after adding a known mass of distilled water. Measured aqueous extract data have to be re-interpreted in order to infer the porewater chemical composition of the sample, because porewater chemistry changes significantly due to dilution and the chemical reactions which take place during extraction. Here we present an inverse hydrochemical model to estimate porewater chemical composition from measured water content, aqueous extract, and mineralogical data. The model accounts for acid-base, redox, aqueous complexation, mineral dissolution/precipitation, gas dissolution/ex-solution, cation exchange and surface complexation reactions, all of which are assumed to take place at local equilibrium. It has been solved with INVERSE-CORE2D and tested with bentonite samples taken from the FEBEX (Full-scale Engineered Barrier EXperiment) in situ test. The inverse model reproduces most of the measured aqueous data except bicarbonate and provides an effective, flexible and comprehensive method to estimate the porewater chemical composition of clays. The main uncertainties are related to kinetic calcite dissolution and variations in CO2(g) pressure.

  3. An Astrometric Search for a Sub-stellar Companion of the M8.5 Dwarf TVLM 513-46546 Using Very Long Baseline Interferometry


    Forbrich, Jan; Berger, Edo; Reid, Mark J.


    We conducted multi-epoch VLBI observations to search for astrometric reflex motion caused by a sub-stellar companion of the M8.5 dwarf TVLM 513–46546. The observations yield an absolute parallax corresponding to a distance of 10.762 ± 0.027 pc and a proper motion of 78.09 ± 0.17 mas yr−1. From the absence of significant residual motion, we place an upper limit to any reflex motion caused by a companion, extending the parameter space covered by previous near-infrared direct-imaging searches. By c...

  4. Movable scour protection. Model test report

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, R.


    This report presents the results of a series of model tests with scour protection of marine structures. The objective of the model tests is to investigate the integrity of the scour protection during a general lowering of the surrounding seabed, for instance in connection with movement of a sand bank or with general subsidence. The scour protection in the tests is made out of stone material. Two different fractions have been used: 4 mm and 40 mm. Tests with current, with waves and with combined current and waves were carried out. The scour protection material was placed after an initial scour hole had evolved in the seabed around the structure. This design philosophy has been selected because the scour hole often starts to generate immediately after the structure has been placed. It is therefore difficult to establish a scour protection at the undisturbed seabed if the scour material is placed after the main structure. Further, placing the scour material in the scour hole increases the stability of the material. Two types of structure have been used for the test, a Monopile and a Tripod foundation. Tests with protection mats around the Monopile model were also carried out. The following main conclusions have emerged from the model tests with flat bed (i.e. no general seabed lowering): 1. The maximum scour depth found in steady current on sand bed was 1.6 times the cylinder diameter, 2. The minimum horizontal extension of the scour hole (upstream direction) was 2.8 times the cylinder diameter, corresponding to a slope of 30 degrees, 3. Concrete protection mats do not meet the criteria for a strongly erodible seabed. In the present test virtually no reduction in the scour depth was obtained. The main problem is the interface to the cylinder. If there is a void between the mats and the cylinder, scour will develop. Even with the protection mats that are tightly connected to the cylinder, scour is expected to develop as long as the mats allow for

  5. Statistical tests of simple earthquake cycle models (United States)

    DeVries, Phoebe M. R.; Evans, Eileen L.


    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
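
    The two-sample Kolmogorov-Smirnov statistic used in the study, D = sup over x of |F1(x) − F2(x)| between two empirical CDFs, can be sketched directly. The samples below are illustrative integers, not the geodetic or geologic slip-rate data sets.

```python
import bisect

# Two-sample KS statistic: maximum vertical distance between the empirical
# CDFs of two samples, evaluated over the pooled sample values.

def ks_statistic(sample1, sample2):
    s1, s2 = sorted(sample1), sorted(sample2)

    def ecdf(s, x):
        # Empirical CDF: fraction of sample s that is <= x.
        return bisect.bisect_right(s, x) / len(s)

    pooled = sorted(set(s1) | set(s2))
    return max(abs(ecdf(s1, x) - ecdf(s2, x)) for x in pooled)

d_same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])        # identical samples
d_shift = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14])   # fully separated
```

    In practice D is compared against the KS null distribution at the chosen significance level (α = 0.05 in the study) to decide whether the two distributions differ.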

  6. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon


    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  7. The Phases Differential Astrometry Data Archive. 5. Candidate Substellar Companions to Binary Systems (United States)



  8. 2-D Model Test of Dolosse Breakwater

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Liu, Zhou


    To extend the design diagram to cover Dolos breakwaters with superstructure, 2-D model tests of a Dolos breakwater with a wave wall were included in the project Rubble Mound Breakwater Failure Modes, sponsored by the Directorate General XII of the Commission of the European Communities under Contract MAS-CT92… The effect of the method of placing and packing the blocks on the hydraulic stability was also investigated: the Dolosse were more carefully put on the slope, and the hydraulic stability of this slope was compared with that of the more randomly packed slope. The whole experiment was carried out in the period of August - November 1993…

  9. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle


    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  10. Model-independent tests of cosmic gravity. (United States)

    Linder, Eric V


    Gravitation governs the expansion and fate of the universe, and the growth of large-scale structure within it, but has not been tested in detail on these cosmic scales. The observed acceleration of the expansion may provide signs of gravitational laws beyond general relativity (GR). Since the form of any such extension is not clear, from either theory or data, we adopt a model-independent approach to parametrizing deviations to the Einstein framework. We explore the phase space dynamics of two key post-GR functions and derive a classification scheme, and an absolute criterion on accuracy necessary for distinguishing classes of gravity models. Future surveys will be able to constrain the post-GR functions' amplitudes and forms to the required precision, and hence reveal new aspects of gravitation.

  11. Modeling and testing of ethernet transformers (United States)

    Bowen, David


    Twisted-pair Ethernet is now the standard home and office last-mile network technology. For decades, the IEEE standard that defines Ethernet has required electrical isolation between the twisted pair cable and the Ethernet device. So, for decades, every Ethernet interface has used magnetic core Ethernet transformers to isolate Ethernet devices and keep users safe in the event of a potentially dangerous fault on the network media. The current state-of-the-art Ethernet transformers are miniature; alternative structures are explored which are capable of exceptional miniaturization or on-chip fabrication. This dissertation thoroughly explores the performance of the current commercial Ethernet transformers to both increase understanding of the device's behavior and outline performance parameters for replacement devices. Lumped element and distributed circuit models are derived; testing schemes are developed and used to extract model parameters from commercial Ethernet devices. Transfer relation measurements of the commercial Ethernet transformers are compared against the models' behavior, and it is found that the tuned, distributed models produce the best transfer relation match to the measured data. Process descriptions and testing results on fabricated thin-film dielectric-core toroid transformers are presented. The best results were found for a 32-turn transformer loaded with 100 Ω, the impedance of twisted pair cable. This transformer gave a flat response from about 10 MHz to 40 MHz with a height of approximately 0.45. For the fabricated transformer structures, theoretical methods to determine resistance, capacitance and inductance are presented. A special analytical and numerical analysis of the fabricated transformer inductance is presented. Planar cuts of magnetic slope fields around the dielectric-core toroid are shown that describe the effect of core height and winding density on flux uniformity without a magnetic core.

  12. The C. elegans model in toxicity testing. (United States)

    Hunt, Piper Reid


Caenorhabditis elegans is a small nematode that can be maintained at low cost and handled using standard in vitro techniques. Unlike toxicity testing using cell cultures, C. elegans toxicity assays provide data from a whole animal with intact and metabolically active digestive, reproductive, endocrine, sensory and neuromuscular systems. Toxicity ranking screens in C. elegans have repeatedly been shown to be as predictive of rat LD50 rankings as mouse LD50 rankings are. Additionally, many instances of conservation of mode of toxic action have been noted between C. elegans and mammals. These consistent correlations make the case for inclusion of C. elegans assays in early safety testing and as one component in tiered or integrated toxicity testing strategies, but do not indicate that nematodes alone can replace data from mammals for hazard evaluation. As with cell cultures, good C. elegans culture practice (GCeCP) is essential for reliable results. This article reviews C. elegans use in various toxicity assays, the C. elegans model's strengths and limitations for use in predictive toxicology, and GCeCP. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. Journal of Applied Toxicology published by John Wiley & Sons Ltd.

  13. Experimentally testing the standard cosmological model

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (USA) Fermi National Accelerator Lab., Batavia, IL (USA))


The standard model of cosmology, the big bang, is now being tested and confirmed to remarkable accuracy. Recent high-precision measurements relate to the microwave background and big bang nucleosynthesis. This paper focuses on the latter since it relates more directly to high energy experiments. In particular, the recent LEP (and SLC) results on the number of neutrinos are discussed as a positive laboratory test of the standard cosmology scenario. Discussion is presented on the improved light-element observational data as well as the improved neutron lifetime data. Alternate nucleosynthesis scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, Ω_b, remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the standard-model conclusion that Ω_b ≈ 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming Ω_total = 1) and the need for dark baryonic matter, since Ω_visible < Ω_b. Recent accelerator constraints on non-baryonic matter are discussed, showing that any massive cold dark matter candidate must now have a mass M_x ≳ 20 GeV and an interaction weaker than the Z⁰ coupling to a neutrino. It is also noted that recent hints regarding the solar neutrino experiments coupled with the see-saw model for ν-masses may imply that the ν_τ is a good hot dark matter candidate. 73 refs., 5 figs.

  14. Ablative Rocket Deflector Testing and Computational Modeling (United States)

    Allgood, Daniel C.; Lott, Jeffrey W.; Raines, Nickey


A deflector risk mitigation program was recently conducted at the NASA Stennis Space Center. The primary objective was to develop a database that characterizes the behavior of industry-grade refractory materials subjected to rocket plume impingement conditions commonly experienced on static test stands. The program consisted of short and long duration engine tests where the supersonic exhaust flow from the engine impinged on an ablative panel. Quasi time-dependent erosion depths and patterns generated by the plume impingement were recorded for a variety of different ablative materials. The erosion behavior was found to be highly dependent on the material's composition and corresponding thermal properties. For example, in the case of the HP CAST 93Z ablative material, the erosion rate actually decreased under continued thermal heating conditions due to the formation of a low thermal conductivity "crystallization" layer. The "crystallization" layer produced near the surface of the material provided an effective insulation from the hot rocket exhaust plume. To gain further insight into the complex interaction of the plume with the ablative deflector, computational fluid dynamic modeling was performed in parallel to the ablative panel testing. The results from the current study demonstrated that locally high heating occurred due to shock reflections. These localized regions of shock-induced heat flux resulted in non-uniform erosion of the ablative panels. In turn, it was observed that the non-uniform erosion exacerbated the localized shock heating, causing eventual plume separation and reversed flow for long duration tests under certain conditions. Overall, the flow simulations compared very well with the available experimental data obtained during this project.

  15. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.


    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the

  16. Testing computational toxicology models with phytochemicals. (United States)

    Valerio, Luis G; Arvidson, Kirk B; Busta, Emily; Minnier, Barbara L; Kruhlak, Naomi L; Benz, R Daniel


Computational toxicology employing quantitative structure-activity relationship (QSAR) modeling is an evidence-based predictive method being evaluated by regulatory agencies for risk assessment and scientific decision support for toxicological endpoints of interest such as rodent carcinogenicity. Computational toxicology is being tested for its usefulness to support the safety assessment of drug-related substances (e.g. active pharmaceutical ingredients, metabolites, impurities), indirect food additives, and other applied uses of value for protecting public health, including safety assessment of environmental chemicals. The specific use of QSAR as a chemoinformatic tool for estimating the rodent carcinogenic potential of phytochemicals present in botanicals, herbs, and natural dietary sources is investigated here by an external validation study, which is the most stringent scientific method of measuring predictive performance. The external validation statistics for predicting rodent carcinogenicity of 43 phytochemicals, using two computational software programs evaluated at the FDA, are discussed. One software program showed very good performance for predicting non-carcinogens (high specificity), but both exhibited poor performance in predicting carcinogens (sensitivity), which is consistent with the design of the models. When predictions were considered in combination with each other rather than based on any one software program, the performance for sensitivity was enhanced. However, chi-square values indicated that the overall predictive performance decreases when using the two computational programs with this particular data set. This study suggests that multiple complementary computational toxicology software programs need to be carefully selected to improve global QSAR predictions for this complex toxicological endpoint.
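The external-validation figures of merit discussed above (sensitivity for carcinogens, specificity for non-carcinogens) reduce to simple confusion-matrix arithmetic. The counts below are hypothetical, not the paper's 43-chemical results.

```python
def validation_stats(tp, fn, tn, fp):
    """Standard external-validation figures of merit for a binary
    (carcinogen / non-carcinogen) QSAR prediction."""
    sensitivity = tp / (tp + fn)   # fraction of true carcinogens caught
    specificity = tn / (tn + fp)   # fraction of non-carcinogens cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts, chosen to mimic the high-specificity /
# low-sensitivity pattern the abstract describes:
sens, spec, acc = validation_stats(tp=5, fn=15, tn=20, fp=3)
```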

  17. Designing healthy communities: Testing the walkability model

    Directory of Open Access Journals (Sweden)

    Adriana A. Zuniga-Teran


Full Text Available Research from multiple domains has provided insights into how neighborhood design can be improved to have a more favorable effect on physical activity, a concept known as walkability. The relevant research findings/hypotheses have been integrated into a Walkability Framework, which organizes the design elements into nine walkability categories. The purpose of this study was to test whether this conceptual framework can be used as a model to measure the interactions between the built environment and physical activity. We explored correlations between the walkability categories and physical activity reported through a survey of residents of Tucson, Arizona (n = 486). The results include significant correlations between the walkability categories and physical activity, as well as between the walkability categories and the two motivations for walking (recreation and transportation). To our knowledge, this is the first study that reports links between walkability and walking for recreation. Additionally, the use of the Walkability Framework allowed us to identify the walkability categories most strongly correlated with the two motivations for walking. The results of this study support the use of the Walkability Framework as a model to measure the built environment in relation to its ability to promote physical activity.
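Correlations of the kind reported above are typically plain Pearson coefficients between a walkability-category score and reported physical activity. A self-contained sketch with toy data (not the Tucson survey's):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists,
    e.g. a walkability-category score vs. reported activity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

The coefficient ranges from -1 to 1; significance for a sample like n = 486 would then come from the usual t-approximation.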

  18. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.


    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  19. Successful intelligence: A model for testing intelligence beyond IQ tests


    Sternberg, Robert J.


    Standard conventional tests only assess a narrow sampling of the abilities required for success in school and in life. In contrast, the augmented theory of successful intelligence asserts that intelligence involves creative skills in producing new ideas, analytical skills in evaluating whether the ideas are good ones, practical skills in putting the ideas into practice and in convincing other people of the value of the ideas, and wisdom-based skills in confirming that one is using one's knowl...

  20. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B


The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "... a goldmine of knowledge on accelerated life testing principles and practices ... one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." - Dev G.
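As a flavor of the accelerated-life-testing statistical models this book covers, the Arrhenius temperature-acceleration factor is the canonical example: it converts a stress-test temperature into an equivalent use-condition time multiplier. The activation energy used below is an illustrative assumption, not a value from the book.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between a stress temperature and
    a use temperature (both in Celsius); ea_ev is the activation
    energy in eV, here an illustrative assumption."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))
```

One hour at the stress temperature then counts as `arrhenius_af(...)` hours at use conditions when fitting a life distribution.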

  1. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.


Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programming model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information
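The heuristic idea, assigning items so that the tests accumulate near-equal information, can be sketched greedily. Item information values at a fixed ability point are assumed given; this is a simplification for illustration, not the paper's mathematical-programming formulation.

```python
def split_weakly_parallel(item_infos, n_tests=2):
    """Greedy heuristic: take items in decreasing order of information
    (at a fixed ability point) and assign each to the test with the
    lowest running total, so the tests end up with nearly equal
    information. item_infos is a list of (information, item_id)."""
    tests = [[] for _ in range(n_tests)]
    totals = [0.0] * n_tests
    for info, item in sorted(item_infos, reverse=True):
        k = totals.index(min(totals))  # least-loaded test so far
        tests[k].append(item)
        totals[k] += info
    return tests, totals

# Toy item bank: six items with made-up information values.
tests, totals = split_weakly_parallel(
    [(5, 'a'), (4, 'b'), (3, 'c'), (2, 'd'), (1, 'e'), (1, 'f')])
```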

  2. Successful intelligence: A model for testing intelligence beyond IQ tests

    Directory of Open Access Journals (Sweden)

    Robert J. Sternberg


Full Text Available Standard conventional tests only assess a narrow sampling of the abilities required for success in school and in life. In contrast, the augmented theory of successful intelligence asserts that intelligence involves creative skills in producing new ideas, analytical skills in evaluating whether the ideas are good ones, practical skills in putting the ideas into practice and in convincing other people of the value of the ideas, and wisdom-based skills in confirming that one is using one's knowledge and skills to serve a common good. Three projects were created to evaluate the theory with regard to college admissions: First, the Rainbow Project demonstrated that prediction of first-year college academic performance could be increased while simultaneously decreasing differences between ethnic groups on a predictive assessment, in comparison with the Scholastic Aptitude Test (SAT). Second, the Kaleidoscope Project improved prediction of academic and extracurricular performance over SAT scores alone, while the ethnic-group differences usually obtained vanished. Third, the Panorama Project showed the success of similar techniques in a less selective population. The projects demonstrate the application of the augmented theory of successful intelligence in enhancing college and university admissions procedures.

  4. Engineering Abstractions in Model Checking and Testing

    DEFF Research Database (Denmark)

    Achenbach, Michael; Ostermann, Klaus


    Abstractions are used in model checking to tackle problems like state space explosion or modeling of IO. The application of these abstractions in real software development processes, however, lacks engineering support. This is one reason why model checking is not widely used in practice yet...... and implementing abstractions will improve the applicability of model checking in practice....

  5. Putting hydrological modelling practice to the test

    NARCIS (Netherlands)

    Melsen, Lieke Anna


    Six steps can be distinguished in the process of hydrological modelling: the perceptual model (deciding on the processes), the conceptual model (deciding on the equations), the procedural model (get the code to run on a computer), calibration (identify the parameters), evaluation (confronting output

  6. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  7. Reliable sequential testing for statistical model checking

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Haverkort, Boudewijn R.H.M.


    We introduce a framework for comparing statistical model checking (SMC) techniques and propose a new, more reliable, SMC technique. Statistical model checking has recently been implemented in tools like UPPAAL and PRISM to be able to handle models which are too complex for numerical analysis.
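A common sequential scheme in SMC tools is Wald's sequential probability ratio test on simulation outcomes. The sketch below is a bare-bones Bernoulli version; the thresholds p0, p1 and error rates are illustrative, and this is not the exact procedure implemented in UPPAAL or PRISM.

```python
import math

def sprt(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on Bernoulli samples:
    decide between H0 (success prob <= p0) and H1 (>= p1), stopping
    as soon as the log-likelihood ratio crosses a boundary."""
    lo = math.log(beta / (1 - alpha))        # accept-H0 boundary
    hi = math.log((1 - beta) / alpha)        # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= hi:
            return "accept H1", n
        if llr <= lo:
            return "accept H0", n
    return "undecided", len(samples)
```

The appeal for model checking is that clear-cut properties are decided after very few simulation runs, while borderline ones automatically draw more samples.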

  8. Test Driven Development of Scientific Models (United States)

    Clune, Thomas L.


Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software, as well as tools and implementation approaches that should address those challenges.
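The red-green rhythm described above, with a unit-testing framework doing the heavy lifting, looks like this in Python's built-in unittest. The function and its tests are invented here for illustration; in TDD the test class is written first and seen to fail before the function body exists.

```python
import unittest

def relative_error(approx, exact):
    """Code under test; in TDD this body is written only after the
    tests below have first been seen to fail."""
    if exact == 0:
        raise ValueError("exact value must be nonzero")
    return abs(approx - exact) / abs(exact)

class TestRelativeError(unittest.TestCase):
    """Short automated tests, written before the implementation."""

    def test_ten_percent_error(self):
        self.assertAlmostEqual(relative_error(1.1, 1.0), 0.1)

    def test_exact_match(self):
        self.assertEqual(relative_error(2.0, 2.0), 0.0)

    def test_zero_reference_rejected(self):
        with self.assertRaises(ValueError):
            relative_error(1.0, 0.0)

# Run with: python -m unittest <this_file>
```

For numerical software the same pattern applies, with tolerance-based assertions (assertAlmostEqual) standing in for exact equality.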

  9. State of the art hydraulic turbine model test (United States)

    Fabre, Violaine; Duparchy, Alexandre; Andre, Francois; Larroze, Pierre-Yves


Model tests are essential in hydraulic turbine development and related fields. The methods and technologies used to perform these tests show constant progress and provide access to further information. In addition, due to its contractual nature, the test demand evolves continuously in terms of quantity and accuracy. Keeping in mind that the principal aim of model testing is the transposition of the model measurements to the real machine, the measurements should be performed accurately, and a critical analysis of the model test results is required to distinguish the transposable hydraulic phenomena from the test rig interactions. Although the resonances' effects are known and described in the IEC standard, their identification is difficult. Drawing on extensive experience of model testing, we illustrate with a few examples how to identify the potential problems induced by the test rig. This paper contains some of our best practices for obtaining the most accurate, relevant, and test-rig-independent measurements.

  10. Design and Test of a Cognitive Model (United States)

    Cunningham, Michael A.; Gary, Harry J.


A presentation of arguments demonstrating Piaget's sensorimotor stages in Hebb's terms, and a suggestion for performing a computer test. This paper is an early progress report of an attempt to translate some plausible arguments into a rigorous demonstration. (Author)

  11. 2-D Model Test Study of the Suape Breakwater, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sopavicius, A.

    This report deals with a two-dimensional model test study of the extension of the breakwater in Suape, Brazil. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given...

  12. A Lagrange Multiplier Test for Testing the Adequacy of the Constant Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Catani, Paul; Teräsvirta, Timo; Yin, Meiqun

A Lagrange multiplier test for testing the parametric structure of a constant conditional correlation generalized autoregressive conditional heteroskedasticity (CCC-GARCH) model is proposed. The test is based on decomposing the CCC-GARCH model multiplicatively into two components, one of which...... represents the null model, whereas the other one describes the misspecification. A simulation study shows that the test has good finite sample properties. We compare the test with other tests for misspecification of multivariate GARCH models. The test has high power against alternatives where the misspecification is in the GARCH parameters and is superior to other tests. The test is not greatly affected by misspecification in the conditional correlations and is therefore well suited for considering misspecification of GARCH equations....

  13. "A regression error specification test (RESET) for generalized linear models".


    Sunil Sapra


    Generalized linear models (GLMs) are generalizations of linear regression models, which allow fitting regression models to response data that follow a general exponential family. GLMs are used widely in social sciences for fitting regression models to count data, qualitative response data and duration data. While a variety of specification tests have been developed for the linear regression model and are routinely applied for testing for misspecification of functional form, omitted variables,...
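The RESET idea, refit the model with powers of the fitted values added and test their joint significance, can be sketched for the ordinary linear case with NumPy; the paper's GLM version replaces this F-test with the appropriate statistic for the exponential-family likelihood. The toy data and names below are illustrative.

```python
import numpy as np

def reset_test(y, x, max_power=3):
    """Ramsey RESET for a linear model: augment the regressors with
    powers of the fitted values and return the F statistic for their
    joint significance, plus the degrees of freedom (q, df2)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    # Augmented design: original columns plus fitted^2, ..., fitted^p.
    Z = np.column_stack([X] + [fitted ** p for p in range(2, max_power + 1)])
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss0 = np.sum((y - fitted) ** 2)
    rss1 = np.sum((y - Z @ gamma) ** 2)
    q = Z.shape[1] - X.shape[1]
    df2 = len(y) - Z.shape[1]
    f = ((rss0 - rss1) / q) / (rss1 / df2)
    return f, (q, df2)

# Toy check: a quadratic signal fitted with a straight line should
# produce a large RESET statistic (functional-form misspecification).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1 + 2 * x + x ** 2 + rng.normal(scale=0.5, size=200)
f_stat, (q, df2) = reset_test(y, x)
```

The returned F statistic is compared against the F(q, df2) critical value; a large value signals omitted nonlinearity.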

  14. Simulation of a Model Tank Gunnery Test (United States)


the first and second rounds. Time data might also be measured from the sound track of gun camera tapes. This soundtrack can be used to record... RELIABILITY AND VALIDITY IN CRITERION-REFERENCED TESTING: Traditional methods of assessing reliability and validity have a long history and much

  15. Mountain Bike Wheel Endurance Testing and Modeling (United States)


Published by Elsevier Ltd. Keywords: mountain biking; wheels; failure testing. Mountain bike (MTB) wheels are subject to a wide range of... accumulates over the life of the wheel and leads to part failure. MTB wheels must be designed to withstand many miles of this loading before failure

  16. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test and the size distortion minimal. The test has reasonable power already in very short time series. It would thus serve as a test of constant variance in conditional mean...

  17. Testing Pearl Model In Three European Sites (United States)

    Bouraoui, F.; Bidoglio, G.

The Plant Protection Product Directive (91/414/EEC) stresses the need for validated models to calculate predicted environmental concentrations. The use of models has become an unavoidable step before pesticide registration. In this context, the European Commission, and in particular DG VI, set up a FOrum for the Co-ordination of pesticide fate models and their USe (FOCUS). In a complementary effort, DG Research supported the APECOP project, one of whose objectives was the validation and improvement of existing pesticide fate models. The main topic of research presented here is the validation of the PEARL model for different sites in Europe. The PEARL model, currently used in the Dutch pesticide registration procedure, was validated at three well-instrumented sites: Vredepeel (the Netherlands), Brimstone (UK), and Lanna (Sweden). A step-wise procedure was used for the validation of the PEARL model. First the water transport module was calibrated, and then the solute transport module, using tracer measurements while keeping the water transport parameters unchanged. The Vredepeel site is characterised by a sandy soil. Fourteen months of measurements were used for the calibration. Two pesticides were applied on the site: bentazone and ethoprophos. PEARL predictions were very satisfactory for both soil moisture content and pesticide concentration in the soil profile. The Brimstone site is characterised by a cracking clay soil. The calibration was conducted on a time series of 7 years of measurements. The validation consisted of comparing predictions and measurements of soil moisture at different soil depths, and of comparing the predicted and measured concentration of isoproturon in the drainage water. The results, even if in good agreement with the measurements, highlighted the limitations of the model when preferential flow becomes a dominant process. PEARL did not reproduce the soil moisture profile well during summer months, and also under-predicted the arrival of

  18. Testing Software Development Project Productivity Model (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow for practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  19. Precision tests of the standard electroweak model

    CERN Document Server


High precision measurements of weak neutral current and charged current processes and of the properties of the Z and W bosons have established the standard electroweak model as correct down to a distance scale of 10⁻¹⁶ cm, and are a sensitive probe of possible underlying physics. In this book, all aspects of the program are considered in detail, including the structure of the standard model, radiative corrections, high precision experiments, and their implications. The major classes of experiments are surveyed, covering the experiments themselves, the data analysis, results, and prospects.

  20. Port Adriano, 2D-Model tests

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Meinert, Palle; Andersen, Thomas Lykke

    the crown wall have been measured. The model has been subjected to irregular waves corresponding to typical conditions offshore from the intended prototype location. Characteristic situations have been video recorded. The stability of the toe has been investigated. The wave-generated forces on the caisson...

  1. Modal testing for model validation of structures with discrete nonlinearities

    National Research Council Canada - National Science Library

    Ewins, D J; Weekes, B; delli Carri, A


    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement...

  2. Active control rotor model testing at Princeton's Rotorcraft Dynamics Laboratory (United States)

    Mckillip, Robert M., Jr.


A description of the model helicopter rotor tests currently in progress at Princeton's Rotorcraft Dynamics Laboratory is presented. The tests are designed to provide data for rotor dynamic modeling for use with active control system design. The model rotor to be used incorporates the capability for Individual Blade Control (IBC) or Higher Harmonic Control through the use of a standard swashplate on a three bladed hub. Sample results from the first series of tests are presented, along with the methodology used for state and parameter identification. Finally, pending experiments and possible research directions using this model and test facility are outlined.

  3. Evaluation of Shelter Ventilation by Model Tests (United States)


ventilation is created by mechanical devices such as pedal ventilators and Kearny pumps. If natural ventilation in a shelter can be predicted with... past have centered around the design, performance analysis and deployment of mechanical ventilating units (Ref. 11-14). Other studies include one on... calculated as the sum of the air volume flow rates through all the windward openings. The variation of model ventilation throughput versus wind speed

  4. A test-tube model for rainfall (United States)

    Wilkinson, Michael


If the temperature of a cell containing two partially miscible liquids is changed very slowly, so that the miscibility is decreased, microscopic droplets nucleate, grow and migrate to the interface due to their buoyancy. The system may show an approximately periodic variation of the turbidity of the mixture, as the mean droplet size fluctuates. These precipitation events are analogous to rainfall. This paper considers a theoretical model for these experiments. After nucleation the initial growth is by Ostwald ripening, followed by a finite-time runaway growth of droplet sizes due to larger droplets sweeping up smaller ones. The model predicts that the period Δt and the temperature sweep rate ξ are related by Δt ∼ C ξ^(−3/7), and is in good agreement with experiments. The coefficient C has a power-law divergence approaching the critical point of the miscibility transition: C ∼ (T − T_c)^(−η), and the critical exponent η is determined. It is argued that while the mechanism does not provide a quantitative description of terrestrial rainfall, it may be a faithful model for precipitation on other planets.

  5. Mixed Portmanteau Test for Diagnostic Checking of Time Series Models

    Directory of Open Access Journals (Sweden)

    Sohail Chand


Full Text Available Model criticism is an important stage of model building, and thus goodness-of-fit tests provide a set of tools for diagnostic checking of the fitted model. Several tests are suggested in the literature for diagnostic checking. These tests use autocorrelation or partial autocorrelation in the residuals to criticize the adequacy of the fitted model. The main idea underlying these portmanteau tests is to identify whether there is any dependence structure which is yet unexplained by the fitted model. In this paper, we suggest mixed portmanteau tests based on the autocorrelation and partial autocorrelation functions of the residuals. We derive the asymptotic distribution of the mixed test and study its size and power using Monte Carlo simulations.
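The autocorrelation half of such a portmanteau test is the familiar Ljung-Box statistic; a dependency-free sketch is below (the paper's mixed test additionally folds in partial autocorrelations).

```python
def ljung_box(residuals, max_lag=10):
    """Ljung-Box portmanteau statistic from residual autocorrelations.
    Large values indicate dependence left unexplained by the fitted
    model; compare against a chi-square quantile with max_lag minus
    the number of fitted ARMA parameters degrees of freedom."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((r - mean) ** 2 for r in residuals)
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((residuals[t] - mean) * (residuals[t - k] - mean)
                 for t in range(k, n))
        rho = ck / c0  # lag-k sample autocorrelation
        q += rho * rho / (n - k)
    return n * (n + 2) * q
```

A strongly autocorrelated residual series (e.g. strictly alternating signs) yields a statistic far beyond any plausible chi-square quantile, while white-noise residuals stay near the degrees of freedom.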

  6. Inhibition in speed and concentration tests: The Poisson inhibition model

    NARCIS (Netherlands)

    Smit, J.C.; Ven, A.H.G.S. van der


    A new model is presented to account for the reaction time fluctuations in concentration tests. The model is a natural generalization of an earlier model, the so-called Poisson-Erlang model, published by Pieters & van der Ven (1982). First, a description is given of the type of tasks for which the

  7. General score tests for regression models incorporating 'robust' variance estimates


    David Clayton; Joanna Howson


    Stata incorporates commands for carrying out two of the three general approaches to asymptotic significance testing in regression models, namely likelihood ratio (lrtest) and Wald tests (testparms). However, the third approach, using "score" tests, has no such general implementation. This omission is particularly serious when dealing with "clustered" data using the Huber-White approach. Here the likelihood ratio test is lost, leaving only the Wald test. This has relatively poor asymptotic pro...
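
    The missing "score" option can be illustrated with a minimal sandwich-variance score test for adding one covariate to a linear model. This is a sketch of the general idea, not the authors' Stata implementation:

    ```python
    import numpy as np

    def robust_score_test(y, X_null, z, cluster=None):
        """Score test for adding covariate z to the linear model y ~ X_null,
        with a Huber-White ('sandwich') variance estimate of the score.
        cluster, if given, is a numpy array of cluster labels."""
        X = np.asarray(X_null, float)
        # fit the null model by OLS and keep residuals
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        # residualise z on X so the score for z is orthogonal to the null fit
        gz = np.linalg.lstsq(X, z, rcond=None)[0]
        z_perp = z - X @ gz
        u = z_perp * resid  # per-observation score contributions
        if cluster is not None:
            # sum contributions within clusters before estimating the variance
            ids = np.unique(cluster)
            u = np.array([u[cluster == i].sum() for i in ids])
        U = u.sum()
        V = np.sum(u ** 2)  # robust variance of the score
        return U ** 2 / V   # asymptotically chi-squared, 1 df, under H0
    ```

    Only the null model is fitted, which is what makes score tests attractive when the alternative is expensive or, as with robust variances, when the likelihood ratio test is unavailable.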

  8. Upgraded Analytical Model of the Cylinder Test

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P. Clark; Lauderbach, Lisa; Garza, Raul; Ferranti, Louis; Vitello, Peter


    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. The total detonation energy density was locked to the v=7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.
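
    The baseline that the upgraded model corrects is the original 1943 Gurney cylinder formula, which can be written down directly (the numbers in the usage note are illustrative, not the paper's calibrated values):

    ```python
    import math

    def gurney_wall_velocity(sqrt2E, M_over_C):
        """Original (1943) Gurney velocity for a metal cylinder driven by an
        explosive charge: v = sqrt(2E) * (M/C + 1/2)**(-1/2), where sqrt(2E)
        is the Gurney constant (e.g. km/s) and M/C the metal-to-charge mass
        ratio.  The paper's upgraded form adds corrections (wall thinning,
        tilt, shock attenuation in the wall, air-gap loss) on top of this."""
        return sqrt2E / math.sqrt(M_over_C + 0.5)
    ```

    For example, with an illustrative Gurney constant of 2.7 km/s and M/C = 2, the formula gives v = 2.7/sqrt(2.5) ≈ 1.71 km/s; lighter walls (smaller M/C) are driven faster.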


  10. Affective Robotics: Modelling and Testing Cultural Prototypes. (United States)

    A Wilson, Paul; Lewandowska-Tomaszczyk, Barbara


    If robots are to successfully interact with humans, they need to measure, quantify and respond to the emotions we produce. Similar to humans, the perceptual cue inputs to any modelling that allows this will be based on behavioural expression and body activity features that are prototypical of each emotion. However, the likely employment of such robots in different cultures necessitates the tuning of the emotion feature recognition system to the specific feature profiles present in these cultures. The amount of tuning depends on the relative convergence of the cross-cultural mappings between the emotion feature profiles of the cultures where the robots will be used. The GRID instrument and the cognitive corpus linguistics methodology were used in a contrastive study analysing a selection of behavioural expression and body activity features to compare the feature profiles of joy, sadness, fear and anger within and between Polish and British English. The intra-linguistic differences that were found in the profile of emotion features suggest that weightings based on this profile can be used in robotic modelling to create emotion-sensitive socially interacting robots. Our cross-cultural results further indicate that this profile of features needs to be tuned in robots to make them emotionally competent in different cultures.

  11. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimote, T.; Hessheimer, M.F.; Luk, V.K.


    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  12. Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation

    Directory of Open Access Journals (Sweden)

    Teemu Kanstrén


    Full Text Available We present a model-based testing approach to support automated test generation with domain-specific concepts. This includes a language expert who is an expert at building test models and domain experts who are experts in the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java programming language with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that can be used to further constrain and guide test generation from these models by a domain expert. This makes it possible to generate a large set of test cases covering the full model, chosen (constrained parts of the model, or manually define specific test cases on top of the model while using concepts familiar to the domain experts.
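
    The two-role workflow described here can be sketched with a toy guard/step model and a random-walk generator. All class and function names are hypothetical, not the actual toolset:

    ```python
    import random

    class TestModel:
        """Minimal sketch of a test model written in a full programming
        language: guard methods say when a step is enabled, step methods
        mutate model state and record the generated test trace."""
        def __init__(self):
            self.logged_in = False
            self.trace = []

        def guard_login(self):  return not self.logged_in
        def step_login(self):   self.logged_in = True;  self.trace.append("login")
        def guard_logout(self): return self.logged_in
        def step_logout(self):  self.logged_in = False; self.trace.append("logout")

    def generate_test(model, length, allowed=None, seed=0):
        """Random walk over enabled steps; 'allowed' restricts the step set,
        mimicking a domain expert constraining generation to part of the model."""
        rng = random.Random(seed)
        for _ in range(length):
            steps = [n[6:] for n in dir(model) if n.startswith("guard_")
                     and getattr(model, n)()
                     and (allowed is None or n[6:] in allowed)]
            if not steps:
                break
            getattr(model, "step_" + rng.choice(steps))()
        return model.trace
    ```

    The language expert writes the model class; the domain expert only chooses `allowed` sets and trace lengths, which is the role split the abstract describes.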

  13. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    DEFF Research Database (Denmark)

    H. Hjort, Ulrik; Rasmussen, Jacob Illum; Larsen, Kim Guldstrand


    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML Statemachine model and generates...

  14. A Test Characteristic Curve Linking Method for the Testlet Model (United States)

    Li, Yanmei; Bolt, Daniel M.; Fu, Jianbin


    When tests are made up of testlets, a testlet-based item response theory (IRT) model may be used to account for local dependence among items from a common testlet. This study presents a new test characteristic curve method to link calibrations based on the Bradlow, Wainer, and Wang (1999) testlet model. Procedures for calculating the test…

  15. Development of computer simulation models for pedestrian subsystem impact tests

    NARCIS (Netherlands)

    Kant, R.; Konosu, A.; Ishikawa, H.


    The European Enhanced Vehicle-safety Committee (EEVC/WG10 and WG17) proposed three component subsystem tests for cars to assess pedestrian protection. The objective of this study is to develop computer simulation models of the EEVC pedestrian subsystem tests. These models are available to develop a

  16. [Study of dental model testing tool based on robot theory]. (United States)

    Hu, B; Song, Y; Cheng, L


    A new three-dimensional testing and analysing system for dental models is discussed. It is designed based on the motion theory of robots. The system is capable of not only measuring the three-dimensional sizes of dental models, but also saving and outputting the tested data. The construction of the system is briefly introduced here.

  17. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias


    of such experiments is whether the observed redundancy gains can be explained by parallel processing of the two stimuli in a race-like fashion. To test the parallel processing model, Miller derived the well-known race model inequality which has become a routine test for behavioral data in experiments with redundant...... signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power...
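
    Miller's race model inequality, F_red(t) ≤ F_1(t) + F_2(t), can be checked on empirical CDFs as in the sketch below; the permutation test described in the abstract would then resample condition labels to calibrate this statistic while controlling the Type I error:

    ```python
    import numpy as np

    def ecdf(sample, t):
        # empirical CDF of 'sample' evaluated at each point of array t
        sample = np.asarray(sample, float)
        return np.mean(sample[:, None] <= t, axis=0)

    def race_model_violation(rt_redundant, rt_single1, rt_single2, grid=None):
        """Largest violation of Miller's race model inequality
        F_red(t) <= F_1(t) + F_2(t) over a grid of time points.
        Positive values suggest coactivation rather than a race."""
        if grid is None:
            pooled = np.concatenate([rt_redundant, rt_single1, rt_single2])
            grid = np.quantile(pooled, np.linspace(0.05, 0.95, 19))
        diff = ecdf(rt_redundant, grid) - (ecdf(rt_single1, grid)
                                           + ecdf(rt_single2, grid))
        return float(diff.max())
    ```

    A permutation test would repeatedly shuffle trials between the redundant and single-signal conditions, recompute this maximum, and compare the observed value against the resulting null distribution.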

  18. Collider Tests of the Renormalizable Coloron Model

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yang; Dobrescu, Bogdan A.


    The coloron, a massive version of the gluon present in gauge extensions of QCD, has been searched for at the LHC as a dijet or top quark pair resonance. We point out that in the Renormalizable Coloron Model (ReCoM) with a minimal field content to break the gauge symmetry, a color-octet scalar and a singlet scalar are naturally lighter than the coloron because they are pseudo Nambu-Goldstone bosons. Consequently, the coloron may predominantly decay into scalar pairs, leading to novel signatures at the LHC. When the color-octet scalar is lighter than the singlet, or when the singlet mass is above roughly 1 TeV, the signatures consist of multi-jet resonances of multiplicity up to 12, including topologies with multi-prong jet substructure, slightly displaced vertices, and sometimes a top quark pair. When the singlet is the lightest ReCoM boson and lighter than about 1 TeV, its main decays ($W^+W^-$, $\\gamma Z$, $ZZ$) arise at three loops. The LHC signatures then involve two or four boosted electroweak bosons, often originating from highly displaced vertices, plus one or two pairs of prompt jets or top quarks.

  19. Glide back booster wind tunnel model testing (United States)

    Pricop, M. V.; Cojocaru, M. G.; Stoica, C. I.; Niculescu, M. L.; Neculaescu, A. M.; Persinaru, A. G.; Boscoianu, M.


    Affordable space access requires partial or ideally full launch vehicle reuse, which is in line with clean environment requirements. Although the idea is old, the practical use is difficult, requiring very large technology investment for qualification. Rocket gliders like the Space Shuttle have been successfully operated, but the price and correspondingly the energy footprint were found not to be sustainable. For medium launchers, there is finally a very promising platform in Falcon 9. For very small launchers the situation is more complex, because the performance index (payload to start mass) is already small compared with medium and heavy launchers. For partially reusable micro launchers this index is even smaller. However, the challenge has to be taken on, because it is likely that in a multiyear effort, technology will enable the performance recovery needed to make such a system economically and environmentally feasible. The current paper is devoted to a small unitary glide back booster which is foreseen to be assembled in a number of possible configurations. Although the level of analysis is not deep, the solution is analyzed from the aerodynamic point of view. A wind tunnel model is designed, with an active canard, to enable a more efficient wind tunnel campaign, as a national-level premiere.

  20. An integrated service excellence model for military test and ...

    African Journals Online (AJOL)

    An integrated service excellence model for military test and evaluation facilities. ... and tested through an empirical study conducted amongst the various test and evaluation facilities' leadership core. Solutions to financial, human resource and environmental challenges as well as quality standards were built into the ISEM.

  1. Testing for spatial error dependence in probit models

    NARCIS (Netherlands)

    Amaral, P. V.; Anselin, L.; Arribas-Bel, D.


    In this note, we compare three test statistics that have been suggested to assess the presence of spatial error autocorrelation in probit models. We highlight the differences between the tests proposed by Pinkse and Slade (J Econom 85(1):125-254, 1998), Pinkse (Asymptotics of the Moran test and a

  2. Matrix diffusion model. In situ tests using natural analogues

    Energy Technology Data Exchange (ETDEWEB)

    Rasilainen, K. [VTT Energy, Espoo (Finland)


    Matrix diffusion is an important retarding and dispersing mechanism for substances carried by groundwater in fractured bedrock. Natural analogues provide, unlike laboratory or field experiments, a possibility to test the model of matrix diffusion in situ over long periods of time. This thesis documents quantitative model tests against in situ observations, done to support modelling of matrix diffusion in performance assessments of nuclear waste repositories. 98 refs. The thesis also includes eight previous publications by the author.
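
    The retardation that matrix diffusion produces is, in the simplest configuration, the classical semi-infinite diffusion solution: a constant concentration held in the fracture penetrates the rock matrix as a complementary error function profile. A minimal sketch, with illustrative parameter values:

    ```python
    import math

    def matrix_penetration(x_m, t_s, D_a):
        """Relative concentration C/C0 at depth x (m) into the rock matrix
        after time t (s), for a constant concentration in the fracture:
        C/C0 = erfc(x / (2*sqrt(D_a*t))).  D_a is an apparent diffusivity
        (m^2/s) lumping porosity and sorption retardation; the value used
        in the usage note is illustrative, not from the thesis."""
        return math.erfc(x_m / (2.0 * math.sqrt(D_a * t_s)))
    ```

    With an illustrative D_a of 1e-13 m²/s over 10,000 years (about 3.15e11 s), the penetration scale 2*sqrt(D_a*t) is a few tens of centimetres, which is why natural analogues over geological times can probe depths inaccessible to laboratory experiments.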

  3. DKIST enclosure modeling and verification during factory assembly and testing (United States)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka


    The Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique as, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of Final Design Review in January 2012, was supported by mathematical models and other analyses which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During the Enclosure Factory Assembly and Testing the compliance with the requirements has been verified using the real hardware and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.


    Directory of Open Access Journals (Sweden)

    Oleksandr M. Aleksieiev


    Full Text Available The principles of a simulation model for knowledge test control, specifically the refinement of the procedure for constructing a test, are developed in this paper. The authors suggest taking into account the difficulty of a question when deciding whether to include it in the test. They also suggest using iterative calculations to assemble a test with the help of optimization algorithms based on random search. The sum of the task complexity indices for such a test will meet the criteria of joint value and difficulty. A detailed explanation of the mathematical apparatus used for decision-making during the test-building process is given in the article, together with an example that demonstrates the main steps of the iterative calculations and the mechanism for achieving an optimal test structure based on the criteria of joint value and difficulty.
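
    The random-search assembly described here can be sketched as follows, with the paper's joint value-and-difficulty criterion simplified to a single target sum of difficulty indices:

    ```python
    import random

    def assemble_test(difficulties, n_items, target_sum, iters=2000, seed=0):
        """Random-search sketch of iterative test assembly: draw candidate
        tests of n_items questions and keep the one whose summed difficulty
        is closest to the target.  The paper's actual criterion combines
        value and difficulty; this single-sum objective is a simplification."""
        rng = random.Random(seed)
        pool = list(range(len(difficulties)))
        best, best_err = None, float("inf")
        for _ in range(iters):
            cand = rng.sample(pool, n_items)
            err = abs(sum(difficulties[i] for i in cand) - target_sum)
            if err < best_err:
                best, best_err = cand, err
        return best, best_err
    ```

    Each iteration draws a candidate test at random and keeps the best so far, which is exactly the kind of iterative optimization by random search the abstract refers to.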


    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yuxin; Suto, Yasushi, E-mail: [Department of Physics, The University of Tokyo, Tokyo 113-0033 (Japan)


    Among 100 transiting planets with a measured projected spin–orbit angle λ, several systems are suggested to be counter-orbiting. While these cases may be due to the projection effect, the mechanism that produces a counter-orbiting planet has not been established. A promising scenario for counter-orbiting planets is the extreme eccentricity evolution in near-coplanar hierarchical triple systems with eccentric inner and outer orbits. We examine this scenario in detail by performing a series of systematic numerical simulations, and consider the possibility of forming hot Jupiters (HJs), especially a counter-orbiting one under this mechanism with a distant sub-stellar perturber. We incorporate quadrupole and octupole secular gravitational interaction between the two orbits, and also short-range forces (correction for general relativity, star and inner planetary tide, and rotational distortion) simultaneously. We find that most systems are tidally disrupted and that a small fraction of the surviving planets turn out to be prograde. The formation of counter-orbiting HJs in this scenario is possible only in a very restricted parameter region, and thus is very unlikely in practice.

  6. A tutorial on testing the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias; Minakata, Katsumi


    effect. Several models have been proposed to explain this effect, including race models and coactivation models of information processing. In race models, the two stimulus components are processed in separate channels and the faster channel determines the processing time. This mechanism leads, on average......, to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments...... with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and current statistical procedures used to test the race model inequality during the period between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race...

  7. Quantitative consensus of bioaccumulation models for integrated testing strategies. (United States)

    Fernández, Alberto; Lombardo, Anna; Rallo, Robert; Roncaglioni, Alessandra; Giralt, Francesc; Benfenati, Emilio


    A quantitative consensus model based on bioconcentration factor (BCF) predictions obtained from five quantitative structure-activity relationship models was developed for bioaccumulation assessment as an integrated testing approach for waiving. Three categories were considered: non-bioaccumulative, bioaccumulative and very bioaccumulative. Five in silico BCF models were selected and included into a quantitative consensus model by means of the continuous formulation of Bayes' theorem. The discrete likelihoods commonly used in the qualitative Bayesian model were substituted by probability density functions to reduce the loss of information that occurred when continuous BCF values were distributed across the three bioaccumulation categories. Results showed that the continuous Bayesian model yielded the best classification predictions compared not only to the discrete Bayesian model, but also to the individual BCF models. The proposed quantitative consensus model proved to be a suitable approach for integrated testing strategies for continuous endpoints of environmental interest. Copyright © 2012 Elsevier Ltd. All rights reserved.
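
    The continuous formulation of Bayes' theorem used here replaces a discrete likelihood table with a probability density per class. The sketch below uses Gaussian densities over log-BCF predictions; the class means and spreads are placeholders, not the paper's fitted density functions:

    ```python
    import math

    def consensus_posterior(predictions, class_params, priors=None):
        """Continuous Bayesian consensus: each model's log-BCF prediction
        contributes a Gaussian log-density per bioaccumulation class.
        class_params maps class name -> (mean, sd); priors default to uniform."""
        classes = list(class_params)
        if priors is None:
            priors = {c: 1.0 / len(classes) for c in classes}
        post = {}
        for c in classes:
            mu, sd = class_params[c]
            loglik = sum(-0.5 * ((p - mu) / sd) ** 2 - math.log(sd)
                         for p in predictions)
            post[c] = math.log(priors[c]) + loglik
        # normalise in log space for numerical stability
        m = max(post.values())
        z = sum(math.exp(v - m) for v in post.values())
        return {c: math.exp(v - m) / z for c in post}
    ```

    Because the densities are evaluated at the continuous predicted values, no information is lost by binning predictions into categories first, which is the advantage the abstract reports over the discrete Bayesian model.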

  8. Scalable Power-Component Models for Concept Testing (United States)


    motor speed can be either positive or negative dependent upon the propelling or regenerative braking scenario. The simulation provides three...the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... SYSTEMS ENGINEERING AND TECHNOLOGY SYMPOSIUM MODELING & SIMULATION, TESTING AND VALIDATION (MSTV) MINI-SYMPOSIUM AUGUST 9-11 DEARBORN, MICHIGAN

  9. A test of 3 models of Kirtland's warbler habitat suitability (United States)

    Mark D. Nelson; Richard R. Buech


    We tested 3 models of Kirtland's warbler (Dendroica kirtlandii) habitat suitability during a period when we believe there was a surplus of good quality breeding habitat. A jack pine canopy-cover model was superior to 2 jack pine stem-density models in predicting Kirtland's warbler habitat use and non-use. Estimated density of birds in high...

  10. A Dutch test with the NewProd-model

    NARCIS (Netherlands)

    Bronnenberg, J.J.A.M.; van Engelen, M.L.


    The paper contains a report of a test of Cooper's NewProd model for predicting success and failure of product development projects. Based on Canadian data, the model has been shown to make predictions which are 84% correct. Having reservations on the reliability and validity of the model on

  11. Modal test and analysis: Multiple tests concept for improved validation of large space structure mathematical models (United States)

    Wada, B. K.; Kuo, C-P.; Glaser, R. J.


    For the structural dynamic analysis of large space structures, the technology in structural synthesis and the development of structural analysis software have increased the capability to predict the dynamic characteristics of the structural system. The various subsystems which comprise the system are represented by various displacement functions; the displacement functions are then combined to represent the total structure. Experience has indicated that even when subsystem mathematical models are verified by test, the mathematical representations of the total system are often in error because the mathematical model of the structural elements which are significant when loads are applied at the interconnection points are not adequately verified by test. A multiple test concept, based upon the Multiple Boundary Condition Test (MBCT), is presented which will increase the accuracy of the system mathematical model by improving the subsystem test and test/analysis correlation procedure.

  12. Conformance test development with the Java modeling language

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders P.


    ) profile specification. The Java Modeling Language (JML) is used to model conformance constraints for the profile. JML annotations define contracts for classes and interfaces. The annotations are translated by a tool into runtime assertion checks.Hereby the design and elaboration of the concrete test cases...... are simplified, because the expected results are derived from contracts, and thus do not need to be provided explicitly. Bottom-up testing is applied for testing methods of the SCJ classes, whereas top-down testing is applied for testing global properties, such as protocols, memory management and real......-time properties, including scheduling. The tests are executed using a simplified version of JUnit which makes the test suite executable on resource-constrained platforms....

  13. A new fit-for-purpose model testing framework: Decision Crash Tests (United States)

    Tolson, Bryan; Craig, James


    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing, the Klemeš Crash Tests (KCTs), the classic model validation procedures from Klemeš (1986) so named by Andréassian et al. (2009), has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building

  14. Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications (United States)

    Boghosian, Mary; Narvaez, Pablo; Herman, Ray


    The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program of the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system level environmental testing. The JUNO magnetic cleanliness program required setting-up a specialized magnetic test facility at Lockheed Martin Space Systems for testing the flight system and a testing program with facility for testing system parts and subsystems at JPL. The magnetic modeling, simulation and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of the magnetic parts, components, and subsystems prior to or in lieu of magnetic tests. Because of the sensitive nature of the fields and particles scientific measurements being conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interferences. With Aerospace's magnetic modeling, simulation and analysis and JPL's system modeling and testing approach, and LMSS's test support, the project achieved a cost effective approach to achieving a magnetically clean spacecraft. This paper presents lessons learned from the JUNO magnetic testing approach and Aerospace's modeling, simulation and analysis activities used to solve problems such as remnant magnetization, performance of hard and soft magnetic materials within the targeted space system in applied external magnetic fields.

  15. Formulation of consumables management models: Test plan for the mission planning processor working model (United States)

    Connelly, L. C.


    The test plan and test procedures to be used in the verification and validation of the software being implemented in the mission planning processor working model program are documented. The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. An overview of the working model is presented. Execution of the test plan will comprehensively exercise the working model software. An overview of the test plan, including a testing schedule, is presented along with the test plan for the unit, module, and system levels. The criteria used to validate the working model results for each consumables subsystem is discussed.


    Directory of Open Access Journals (Sweden)

    Diana MURESAN


    Full Text Available This paper reviews empirical research that applies the Mishkin test to examine the existence of the accruals anomaly using alternative approaches. The Mishkin test is a test used in macro-econometrics for the rational expectations hypothesis, which tests for market efficiency. Starting with Sloan (1996), the model has been applied to the accruals anomaly literature. Since Sloan (1996), the model has undergone various improvements and has been the subject of many debates in the literature regarding its efficacy. Nevertheless, the current evidence strengthens the pervasiveness of the model. The analysis of the extended studies on the Mishkin test highlights that adding additional variables enhances the results, providing insightful information about the occurrence of the accruals anomaly.

  17. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.


    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under...... the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...

  18. TASS Model Application for Testing the TDWAP Model (United States)

    Switzer, George F.


    One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully-periodic boundary conditions to remove the effects from any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but only presented analysis for the neutral stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in the analysis we discovered discrepancies with the vortex time to link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up-to-date version of TASS with some important defect fixes, we regenerated a larger turbulence domain, and verified the vortex time to link with a few cases before proceeding to regenerate the entire 25 case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains the description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.

  19. A general diagnostic model applied to language testing data. (United States)

    von Davier, Matthias


    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
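
    As a concrete instance of the special cases mentioned, the two-parameter logistic (2PL) item response function reduces to the Rasch model when the discrimination parameter is fixed at 1. The sketch below is illustrative only, not the paper's estimation code:

```python
import math

def irf_2pl(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic item response function: probability of a
    correct response for ability theta. With discrimination fixed at 1.0
    this reduces to the Rasch model; both are special cases of the GDM."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A more able test taker has a higher success probability on the same item.
for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.1f}: P(correct)={irf_2pl(theta, difficulty=0.0):.3f}")
```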

  20. Improved animal models for testing gene therapy for atherosclerosis. (United States)

    Du, Liang; Zhang, Jingwan; De Meyer, Guido R Y; Flynn, Rowan; Dichek, David A


    Gene therapy delivered to the blood vessel wall could augment current therapies for atherosclerosis, including systemic drug therapy and stenting. However, identification of clinically useful vectors and effective therapeutic transgenes remains at the preclinical stage. Identification of effective vectors and transgenes would be accelerated by availability of animal models that allow practical and expeditious testing of vessel-wall-directed gene therapy. Such models would include humanlike lesions that develop rapidly in vessels that are amenable to efficient gene delivery. Moreover, because human atherosclerosis develops in normal vessels, gene therapy that prevents atherosclerosis is most logically tested in relatively normal arteries. Similarly, gene therapy that causes atherosclerosis regression requires gene delivery to an existing lesion. Here we report development of three new rabbit models for testing vessel-wall-directed gene therapy that either prevents or reverses atherosclerosis. Carotid artery intimal lesions in these new models develop within 2-7 months after initiation of a high-fat diet and are 20-80 times larger than lesions in a model we described previously. Individual models allow generation of lesions that are relatively rich in either macrophages or smooth muscle cells, permitting testing of gene therapy strategies targeted at either cell type. Two of the models include gene delivery to essentially normal arteries and will be useful for identifying strategies that prevent lesion development. The third model generates lesions rapidly in vector-naïve animals and can be used for testing gene therapy that promotes lesion regression. These models are optimized for testing helper-dependent adenovirus (HDAd)-mediated gene therapy; however, they could be easily adapted for testing of other vectors or of different types of molecular therapies, delivered directly to the blood vessel wall. Our data also supports the promise of HDAd to deliver long

  1. Using Model Checking to Generate Test Cases for Android Applications

    Directory of Open Access Journals (Sweden)

    Ana Rosario Espada


    Full Text Available The behavior of mobile devices is highly non-deterministic and barely predictable because of the interaction of the user with their applications. Consequently, analyzing the correctness of applications running on a smartphone involves dealing with the complexity of its environment. In this paper, we propose the use of model-based testing to describe the potential behaviors of users interacting with mobile applications. These behaviors are modeled by composing specially designed state machines. The composed state machines can be exhaustively explored using a model checking tool to automatically generate all possible user interactions. Each trace generated by the model checker can be interpreted as a test case to drive a runtime analysis of actual applications. We have implemented a tool that follows the proposed methodology to analyze Android devices, using the model checker Spin as the exhaustive generator of test cases.
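
    The exhaustive-exploration idea can be illustrated with a small sketch, in plain Python rather than Spin and with hypothetical user/app state machines: every bounded trace through the synchronous product of the composed machines becomes one test case.

```python
# Minimal sketch: exhaustive enumeration of traces through two composed
# state machines (hypothetical user and app models, not the paper's).
# Each complete action sequence in the product machine is one test case.

user = {           # user behaviour machine: state -> {action: next_state}
    "idle":    {"tap": "waiting"},
    "waiting": {"response": "idle", "quit": "done"},
}
app = {            # application machine, composed synchronously on actions
    "ready":   {"tap": "busy"},
    "busy":    {"response": "ready", "quit": "ready"},
}

def traces(ustate, astate, path, limit=4):
    """Depth-first exploration of the synchronous product, yielding every
    action sequence (test case) up to a bounded length."""
    if ustate == "done" or len(path) == limit:
        yield tuple(path)
        return
    for action, unext in user[ustate].items():
        if action in app[astate]:              # both machines must accept
            yield from traces(unext, app[astate][action],
                              path + [action], limit)

test_cases = sorted(set(traces("idle", "ready", [])))
for tc in test_cases:
    print(tc)
```

    A real tool would replay each generated trace against the running application; here the traces are simply printed.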

  2. 1g Model Tests with Foundations in Sand

    DEFF Research Database (Denmark)

    Krabbenhøft, Sven; Damkilde, Lars; Clausen, Johan


    This paper presents the results of a series of 1g model tests with both a circular and a strip foundation on dense sand. The test results have been compared with results from finite element calculations based on a non-linear Mohr-Coulomb yield criterion taking into account the dependence...

  3. Testing for causality in variance using multivariate GARCH models

    NARCIS (Netherlands)

    C.M. Hafner (Christian); H. Herwartz


    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual

  4. Design Of Computer Based Test Using The Unified Modeling Language (United States)

    Tedyyana, Agus; Danuri; Lidyawati


    Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint selection admission test for state polytechnics (SB-UMPN), and the independent route (UM-Polbeng) was conducted using paper-based testing (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test-result data can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams, and sequence diagrams. During the design of the application, particular attention was paid to password-protecting the test questions before they are shown, through an encryption and decryption process using the RSA cryptography algorithm. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
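
    The Fisher-Yates shuffle named above is straightforward; a minimal sketch follows (hypothetical question bank, and Python rather than the system's implementation language):

```python
import random

def fisher_yates_shuffle(items, rng=None):
    """In-place Fisher-Yates shuffle: each of the n! orderings is equally
    likely, so every candidate sees an unbiased question order."""
    rng = rng or random.Random()
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)              # pick from the unshuffled prefix
        items[i], items[j] = items[j], items[i]
    return items

questions = list(range(1, 11))             # hypothetical 10-question bank
print(fisher_yates_shuffle(questions, random.Random(42)))
```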

  5. Generalized F test and generalized deviance test in two-way ANOVA models for randomized trials. (United States)

    Shen, Juan; He, Xuming


    We consider the problem of detecting treatment effects in a randomized trial in the presence of an additional covariate. By reexpressing a two-way analysis of variance (ANOVA) model in a logistic regression framework, we derive generalized F tests and generalized deviance tests, which provide better power in detecting common location-scale changes of treatment outcomes than the classical F test. The null distributions of the test statistics are independent of the nuisance parameters in the models, so the critical values can be easily determined by Monte Carlo methods. We use simulation studies to demonstrate how the proposed tests perform compared with the classical F test. We also use data from a clinical study to illustrate possible savings in sample sizes.
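
    The Monte Carlo determination of critical values mentioned above can be sketched generically. The statistic below is the classical one-way F, used as an illustrative stand-in for the paper's generalized F statistic; the recipe (simulate under the null, take an empirical quantile) is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_statistic(y, groups):
    """Classical one-way ANOVA F statistic (illustrative stand-in for the
    paper's generalized F; the Monte Carlo recipe is identical)."""
    uniq = np.unique(groups)
    grand = y.mean()
    means = np.array([y[groups == g].mean() for g in uniq])
    n_g = np.array([(groups == g).sum() for g in uniq])
    ss_between = (n_g * (means - grand) ** 2).sum()
    ss_within = sum(((y[groups == g] - m) ** 2).sum()
                    for g, m in zip(uniq, means))
    k, n = len(uniq), len(y)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = np.repeat([0, 1], 20)             # two treatment arms, n = 40
sims = [f_statistic(rng.standard_normal(40), groups) for _ in range(20000)]
crit = np.quantile(sims, 0.95)             # Monte Carlo 95% critical value
print(round(crit, 2))                      # theoretical F(1, 38) value is ~4.10
```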

  6. Model-Driven Test Generation of Distributed Systems (United States)

    Easwaran, Arvind; Hall, Brendan; Schweiker, Kevin


    This report describes a novel test generation technique for distributed systems. Utilizing formal models and formal verification tools, specifically the Symbolic Analysis Laboratory (SAL) tool-suite from SRI, we present techniques to generate concurrent test vectors for distributed systems. These are initially explored within an informal test validation context and later extended to achieve full MC/DC coverage of the TTEthernet protocol operating within a system-centric context.

  7. Modeling Student Test-Taking Motivation in the Context of an Adaptive Achievement Test (United States)

    Wise, Steven L.; Kingsbury, G. Gage


    This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…

  8. Testing and reference model analysis of FTTH system (United States)

    Feng, Xiancheng; Cui, Wanlong; Chen, Ying


    With the rapid development of the Internet and broadband access networks, technologies such as xDSL, FTTx+LAN, and WLAN are finding more applications, and new network services emerge continually, especially online gaming, conference TV, and video on demand. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication, data, and TV services as well as future digital TV and VOD. With its huge bandwidth, FTTH is widely regarded as the final solution for broadband access and the ultimate goal in the development of optical access networks. In accordance with the development trend of telecommunication services, to enhance the capacity of the integrated access network and to achieve triple play (voice, data, image), the optical fiber can be extended from the existing fiber-to-the-curb (FTTC), fiber-to-the-zone (FTTZ), and fiber-to-the-building (FTTB) optical cable networks to an FTTH system reaching the end user by using EPON technology. This article first introduces the basic components of an FTTH system, then explains the reference model and reference points for testing the FTTH system. Finally, using the test connection diagram, the testing process, and expected results, it analyzes SNI interface testing, PON interface testing, Ethernet performance testing, UNI interface testing, Ethernet functional testing, PON functional testing, equipment functional testing, telephone functional testing, operational support capability testing, and other tests of the FTTH system. ...

  9. Estrogen receptor testing and 10-year mortality from breast cancer: A model for determining testing strategy

    Directory of Open Access Journals (Sweden)

    Christopher Naugler


    Full Text Available Background: The use of adjuvant tamoxifen therapy in the treatment of estrogen receptor (ER)-expressing breast carcinomas represents a major advance in personalized cancer treatment. Because there is no benefit (and indeed there is increased morbidity and mortality) associated with the use of tamoxifen therapy in ER-negative breast cancer, its use is restricted to women with ER-expressing cancers. However, correctly classifying cancers as ER positive or negative has been challenging given the high reported false-negative test rates for ER expression in surgical specimens. In this paper I model practice recommendations using published information from clinical trials to address the question of whether there is a false-negative test rate above which it is more efficacious to forgo ER testing and instead treat all patients with tamoxifen regardless of ER test results. Methods: I used data from randomized clinical trials to model two different hypothetical treatment strategies: (1) the current strategy of treating only ER-positive women with tamoxifen and (2) an alternative strategy where all women are treated with tamoxifen regardless of ER test results. The variables used in the model are literature-derived survival rates for the different combinations of ER positivity and treatment with tamoxifen, varying true ER positivity rates, and varying false-negative ER testing rates. The outcome variable was hypothetical 10-year survival. Results: The model predicted that there is a range of true ER rates and false-negative test rates above which it would be more efficacious to treat all women with breast cancer with tamoxifen and forgo ER testing. This situation occurred with high true-positive ER rates and false-negative ER test rates in the range of 20-30%. Conclusions: It is hoped that this model will provide an example of the potential importance of diagnostic error on clinical outcomes and furthermore will give an example of how the effect of that
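
    The trade-off described above can be sketched as a small expected-survival calculation. The survival probabilities below are hypothetical placeholders, not the literature-derived values used in the paper:

```python
def expected_survival(p_pos, fn_rate, s):
    """Expected 10-year survival under two strategies.

    p_pos   : true fraction of ER-positive tumours
    fn_rate : false-negative rate of the ER assay
    s       : survival probabilities keyed by (er_status, treated) --
              hypothetical placeholders, not the paper's literature values.
    """
    # Strategy 1: treat only test-positives (false negatives go untreated).
    test_and_treat = (p_pos * (1 - fn_rate) * s[("pos", True)]
                      + p_pos * fn_rate * s[("pos", False)]
                      + (1 - p_pos) * s[("neg", False)])
    # Strategy 2: treat everyone, no testing (ER-negatives are harmed).
    treat_all = p_pos * s[("pos", True)] + (1 - p_pos) * s[("neg", True)]
    return test_and_treat, treat_all

s = {("pos", True): 0.80, ("pos", False): 0.65,   # tamoxifen helps ER+
     ("neg", False): 0.60, ("neg", True): 0.55}   # ...and harms ER-
for fn in (0.05, 0.30):
    t1, t2 = expected_survival(p_pos=0.75, fn_rate=fn, s=s)
    print(f"FN={fn:.0%}: test-and-treat {t1:.3f} vs treat-all {t2:.3f}")
```

    With these illustrative numbers, test-and-treat wins at a 5% false-negative rate but loses at 30%, mirroring the qualitative crossover the model predicts.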


    Directory of Open Access Journals (Sweden)

    V. K. Nedbalsky


    Full Text Available A design for a low-pressure turbine has been developed; it is covered by an invention patent and a useful-model patent. Testing of the hydraulic turbine model was carried out with the model installed on a vertical shaft. The efficiency was 76–78 %, which exceeds the efficiency of known low-pressure blade turbines.

  11. Modeling and Testing of EVs - Preliminary Study and Laboratory Development

    DEFF Research Database (Denmark)

    Yang, Guang-Ya; Marra, Francesco; Nielsen, Arne Hejde


    Electric vehicles (EVs) are expected to play a key role in the future energy management system to stabilize both supply and consumption with the presence of high penetration of renewable generation. A reasonably accurate model of battery is a key element for the study of EVs behavior and the grid...... tests, followed by the suggestions towards a feasible battery model for further studies....

  12. Data Modeling for Measurements in the Metrology and Testing Fields

    CERN Document Server

    Pavese, Franco


    Offers a comprehensive set of modeling methods for data and uncertainty analysis. This work develops methods and computational tools to address general models that arise in practice, allowing for a more valid treatment of calibration and test data and providing an understanding of complex situations in measurement science

  13. An Experimental Test of the Contingency Model of Leadership Effectiveness. (United States)

    Chemers, Martin M.; Skrzypek, George J.

    The present experiment provided a test of Fiedler's (1967) Contingency Model of Leadership Effectiveness, i.e., the relationship of leader style to group effectiveness is mediated by situational demands. Thirty-two 4-man task groups composed of military academy cadets were run in the experiment. In accordance with the Contingency Model, leaders…

  14. Academic Examinations and Anxiety: The Interaction Model Empirically Tested. (United States)

    Phillips, J. Bryan; Endler, Norman S.


    Tested the person-by-situation interaction model of anxiety. Male (N=28) and female (N=79) university students served as subjects. Results were interpreted as providing support for the multidimensionality of A-Trait and further validation of the interaction model of anxiety. (Author)

  15. Animal models for testing anti-prion drugs. (United States)

    Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín


    Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Over the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models, without promising results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require different animal models for testing anti-prion compounds. Wild-type mice, over-expressing transgenic mice, and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which had previously been tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models, and accurate chemical modification of the selected compounds before an effective therapy against human prion diseases is achieved. This review is intended to present the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures for handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.

  16. Testing static tradeoff theory against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  17. Micromechanical model of the single fiber fragmentation test

    DEFF Research Database (Denmark)

    Sørensen, Bent F.


    A shear-lag model is developed for the analysis of single fiber fragmentation tests for the characterization of the mechanical properties of the fiber/matrix interface in composite materials. The model utilizes the relation for the loss in potential energy of Budiansky, Hutchinson and Evans...

  18. Testing the minimal supersymmetric standard model with the mass ...

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics; Volume 69; Issue 5. Testing the minimal supersymmetric standard model with the mass ... We review the currently most accurate evaluation of the boson mass, , in the minimal supersymmetric standard model (MSSM). It consists of a full one-loop calculation, including the ...

  19. The Reformulated Model of Learned Helplessness: An Empirical Test. (United States)

    Rothblum, Esther D.; Green, Leon

    Abramson, Seligman and Teasdale's reformulated model of learned helplessness hypothesized that an attribution of causality intervenes between the perception of noncontingency and the expectation of future noncontingency. To test this model, relationships between attribution and performance under failure, success, and control conditions were…

  20. Modelling of wetting tests for a natural pyroclastic soil

    Directory of Open Access Journals (Sweden)

    Moscariello Mariagiovanna


    Full Text Available The so-called wetting-induced collapse is one of the most common problems associated with unsaturated soils. This paper applies the Modified Pastor-Zienkiewicz model (MPZ) to analyse the wetting behaviour of undisturbed specimens of an unsaturated air-fall volcanic (pyroclastic) soil originating from the explosive activity of the Somma-Vesuvius volcano (Southern Italy). Standard oedometric tests, suction-controlled oedometric tests, and suction-controlled isotropic tests are all considered. The results of the constitutive modelling show a satisfactory capability of the MPZ to simulate the variations of soil void ratio upon wetting, with negligible differences between measured and computed values.

  1. Testing and modelling autoregressive conditional heteroskedasticity of streamflow processes

    Directory of Open Access Journals (Sweden)

    W. Wang


    Full Text Available Conventional streamflow models operate under the assumption of constant variance or season-dependent variances (e.g. ARMA (AutoRegressive Moving Average) models for deseasonalized streamflow series and PARMA (Periodic AutoRegressive Moving Average) models for seasonal streamflow series). However, with the McLeod-Li test and Engle's Lagrange Multiplier test, clear evidence is found for the existence of autoregressive conditional heteroskedasticity (the ARCH (AutoRegressive Conditional Heteroskedasticity) effect), a nonlinear phenomenon of the variance behaviour, in the residual series from linear models fitted to daily and monthly streamflow processes of the upper Yellow River, China. It is shown that the major cause of the ARCH effect is the seasonal variation in variance of the residual series. However, while the seasonal variation in variance can fully explain the ARCH effect for monthly streamflow, it is only a partial explanation for daily flow. It is also shown that while the periodic autoregressive moving average model is adequate for modelling monthly flows, no model is adequate for modelling daily streamflow processes, because none of the conventional time series models takes into account the seasonal variation in variance, as well as the ARCH effect in the residuals. Therefore, an ARMA-GARCH (Generalized AutoRegressive Conditional Heteroskedasticity) error model is proposed to capture the ARCH effect present in daily streamflow series, as well as to preserve the seasonal variation in variance in the residuals. The ARMA-GARCH error model combines an ARMA model for the mean behaviour and a GARCH model for the variance behaviour of the residuals from the ARMA model. Since the GARCH model is not widely used in statistical hydrology, this work can be a useful addition to the statistical modelling of daily streamflow processes for the hydrological community.
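
    Engle's Lagrange Multiplier test named above is simple to implement: regress the squared residuals on their own lags and form LM = T·R², which is asymptotically chi-squared with one degree of freedom per lag under the null of no ARCH. A minimal NumPy sketch on simulated data (not the Yellow River series):

```python
import numpy as np

def engle_lm_arch(resid, lags=4):
    """Engle's LM test statistic for ARCH effects: regress squared
    residuals on their own lags; LM = T * R^2, asymptotically chi-squared
    with `lags` degrees of freedom under the null of no ARCH."""
    e2 = resid ** 2
    y = e2[lags:]
    X = np.column_stack([np.ones(len(y))] +
                        [e2[lags - k:-k] for k in range(1, lags + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return len(y) * r2

rng = np.random.default_rng(0)
T = 2000
white = rng.standard_normal(T)         # no ARCH effect: LM stays small
z = rng.standard_normal(T)
arch = np.empty(T)                     # ARCH(1): sigma_t^2 = 0.5 + 0.5*e_{t-1}^2
arch[0] = z[0]
for t in range(1, T):
    arch[t] = z[t] * np.sqrt(0.5 + 0.5 * arch[t - 1] ** 2)

print(engle_lm_arch(white), engle_lm_arch(arch))
```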

  2. Model of dynamic compression tests on hydraulic testing machines: Influence of dynamic phenomena (United States)

    Diot, S.; Gavrus, A.; Guines, D.; Ragneau, E.


    The forming process simulation requires models describing material behaviour at large strains and at strain rates up to hundreds of s^{-1}. The major difficulty encountered is that few experimental tests can satisfy both criteria. For some years, several studies have been carried out on hydraulic machines equipped with a dynamic jack. However, for higher strain-rate tests, the load measurement is disturbed by the response of the experimental set-up and oscillations appear. In this article, the experimental test is developed and a finite element model of the set-up is introduced.

  3. Software Testing and Verification in Climate Model Development (United States)

    Clune, Thomas L.; Rood, RIchard B.


    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively on some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects that can be detected, isolated, and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance of systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine the benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
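
    A fine-grained "unit" test of the kind discussed might look like the following sketch. The kernel is a hypothetical example (a Magnus-type saturation vapour pressure approximation), not code from any particular climate model:

```python
import math
import unittest

def saturation_vapor_pressure(t_celsius):
    """Magnus-type approximation for saturation vapour pressure (hPa):
    a small, isolated kernel of the kind a climate model might contain."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

class TestSaturationVaporPressure(unittest.TestCase):
    def test_reference_point(self):
        # By construction the formula returns 6.112 hPa at 0 degC.
        self.assertAlmostEqual(saturation_vapor_pressure(0.0), 6.112, places=3)

    def test_monotonic_in_temperature(self):
        # Physical invariant: warmer air holds more vapour.
        values = [saturation_vapor_pressure(t) for t in range(-40, 41, 5)]
        self.assertTrue(all(a < b for a, b in zip(values, values[1:])))

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```

    Tests like these run in milliseconds, so defects in an individual kernel can be isolated without a full climate simulation.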

  4. Reaction times to weak test lights. [psychophysics biological model (United States)

    Wandell, B. A.; Ahumada, P.; Welsh, D.


    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs there is some probability - increasing with the magnitude of the sampled response - that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper we test the hypothesis that reaction-time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.
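
    The first-detection-event account can be sketched as a small Monte Carlo simulation. All parameters (sampling rate, motor delay, the form of the detection probability) are hypothetical, chosen only to illustrate that stronger lights yield faster mean reaction times:

```python
import math
import random

def reaction_time(intensity, rng, rate=200.0, motor_delay=0.15):
    """One simulated trial of the first-detection-event account: the filter
    output is sampled at Poisson times (rate per second); each sample
    triggers a detection event with probability increasing in the channel
    response. RT = time of the first event plus a fixed motor delay.
    All parameter values are hypothetical, for illustration only."""
    p_event = 1 - math.exp(-intensity)     # grows with response magnitude
    t = 0.0
    while True:
        t += rng.expovariate(rate)         # next Poisson sample time
        if rng.random() < p_event:
            return motor_delay + t

rng = random.Random(7)
mean_rt = lambda i: sum(reaction_time(i, rng) for _ in range(2000)) / 2000
weak, strong = mean_rt(0.5), mean_rt(3.0)
print(f"weak: {weak:.3f} s, strong: {strong:.3f} s")  # stronger -> faster
```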

  5. Atmospheric Probe Model: Construction and Wind Tunnel Tests (United States)

    Vogel, Jerald M.


    The material contained in this document represents a summary of the results of a low speed wind tunnel test program to determine the performance of an atmospheric probe at low speed. The probe configuration tested consists of a 2/3 scale model constructed from a combination of hard maple wood and aluminum stock. The model design includes approximately 130 surface static pressure taps. Additional hardware incorporated in the baseline model provides a mechanism for simulating external and internal trailing edge split flaps for probe flow control. Test matrix parameters include probe side slip angle, external/internal split flap deflection angle, and trip strip applications. Test output database includes surface pressure distributions on both inner and outer annular wings and probe center line velocity distributions from forward probe to aft probe locations.

  6. Channel Modelling for Multiprobe Over-the-Air MIMO Testing


    Kyösti, Pekka; Jämsä, Tommi; Nuutinen, Jukka-Pekka


    This paper discusses over-the-air (OTA) test setup for multiple-input-multiple-output (MIMO) capable terminals with emphasis on channel modelling. The setup is composed of a fading emulator, an anechoic chamber, and multiple probes. Creation of a propagation environment inside an anechoic chamber requires unconventional radio channel modelling, namely, a specific mapping of the original models onto the probe antennas. We introduce two novel methods to generate fading emulator channel coeff...

  7. Testing a Model of Work Performance in an Academic Environment


    B. Charles Tatum


    In modern society, people both work and study. The intersection between organizational and educational research suggests that a common model should apply to both academic and job performance. The purpose of this study was to apply a model of work and job performance (based on general expectancy theory) to a classroom setting, and test the predicted relationships using a causal/path model methodology. The findings revea...

  8. Measuring organizational learning. Model testing in two Romanian universities


    Alexandra Luciana Guţă


    The scientific literature associates organizational learning with superior organization performance. If we refer to the academic environment, we appreciate that it can develop and reach better levels of performance through changes driven from the inside. Thus, through this paper we elaborate on a conceptual model of organizational learning and we test the model on a sample of employees (university teachers and researchers) from two Romanian universities. The model comprises the process of org...

  9. Tensegrity finite element models of mechanical tests of individual cells. (United States)

    Bursa, Jiri; Lebis, Radek; Holata, Jakub


    A three-dimensional finite element model of a vascular smooth muscle cell is based on models published recently; it comprehends elements representing cell membrane, cytoplasm and nucleus, and a complex tensegrity structure representing the cytoskeleton. In contrast to previous models of eucaryotic cells, this tensegrity structure consists of several parts. Its external and internal parts number 30 struts, 60 cables each, and their nodes are interconnected by 30 radial members; these parts represent cortical, nuclear and deep cytoskeletons, respectively. This arrangement enables us to simulate load transmission from the extracellular space to the nucleus or centrosome via membrane receptors (focal adhesions); the ability of the model was tested by simulation of some mechanical tests with isolated vascular smooth muscle cells. Although material properties of components defined on the basis of the mechanical tests are ambiguous, modelling of different types of tests has shown the ability of the model to simulate substantial global features of cell behaviour, e.g. "action at a distance effect" or the global load-deformation response of the cell under various types of loading. Based on computational simulations, the authors offer a hypothesis explaining the scatter of experimental results of indentation tests. © 2012 – IOS Press and the authors. All rights reserved

  10. Model of ASTM Flammability Test in Microgravity: Iron Rods (United States)

    Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)


    There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by tests conducted using the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test has been reduced to fundamental kinetic and dynamic system parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions, using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod and is detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations, which are solved using MATLAB. This analysis confirms that a first-order rate in oxygen concentration is consistent with the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent with this model.
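
    The first-order Arrhenius kinetics referred to above can be sketched directly. The activation energy is the abstract's 246.8 kJ/mol; the pre-exponential factor and conditions are hypothetical placeholders:

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
EA = 246.8e3       # J/mol, apparent activation energy from the abstract

def reaction_rate(conc_o2, temp_k, pre_exp=1.0e9):
    """First-order (in oxygen) Arrhenius rate: r = A * exp(-Ea/RT) * [O2].
    The pre-exponential factor A is a hypothetical placeholder."""
    return pre_exp * math.exp(-EA / (R * temp_k)) * conc_o2

r1 = reaction_rate(conc_o2=0.21, temp_k=1800.0)
r2 = reaction_rate(conc_o2=0.42, temp_k=1800.0)
print(r2 / r1)     # first-order in O2: doubling concentration doubles the rate
```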

  11. Animal models of toxicology testing: the role of pigs. (United States)

    Helke, Kristi L; Swindle, Marvin Michael


    In regulatory toxicological testing, both a rodent and a non-rodent species are required. Historically, dogs and non-human primates (NHP) have been the species of choice for the non-rodent portion of testing. The pig is an appropriate option for these tests based on the metabolic pathways utilized in xenobiotic biotransformation. This review focuses on the Phase I and Phase II biotransformation pathways in humans and pigs and highlights the similarities and differences between these models. This is a growing field and references are sparse. Numerous breeds of pigs are discussed, along with known breed-specific differences in these enzymes. While much of the available data is presented, it is grossly incomplete and sometimes contradictory depending on the methods used. There is no ideal species to use in toxicology. The use of dogs and NHP in xenobiotic testing continues to be the norm. Pigs present a viable and perhaps more reliable model for non-rodent testing.

  12. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan)] [and others]


    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  13. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Luk, V.K.; Hessheimer, M.F. [Sandia National Labs., Albuquerque, NM (United States); Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan); Costello, J.F. [Nuclear Regulatory Commission, Washington, DC (United States)


    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented.

  14. Penetration Testing Professional Ethics: a conceptual model and taxonomy

    Directory of Open Access Journals (Sweden)

    Justin Pierce


    In an environment where commercial software is continually patched to correct security flaws, penetration testing can provide organisations with a realistic assessment of their security posture. Penetration testing uses the same principles as criminal hackers to penetrate corporate networks and thereby verify the presence of software vulnerabilities. Network administrators can use the results of a penetration test to correct flaws and improve overall security. The use of hacking techniques, however, raises several ethical questions that centre on the integrity of the tester to maintain professional distance and uphold the profession. This paper discusses the ethics of penetration testing and presents our conceptual model and revised taxonomy.

  15. Ontological Analysis of Integrated Process Models: testing hypotheses

    Directory of Open Access Journals (Sweden)

    Michael Rosemann


    Integrated process modeling is achieving prominence in helping to document and manage business administration and IT processes in organizations. The ARIS framework is a popular example of a framework for integrated process modeling, not least because it underlies the 800 or more reference models embedded in the world's most popular ERP package, SAP R/3. This paper demonstrates the usefulness of the Bunge-Wand-Weber (BWW) representation model for evaluating modeling grammars such as those constituting ARIS. It reports some initial insights gained from pilot testing Green and Rosemann's (2000) evaluative propositions. Even when considering all five views of ARIS, modelers have problems representing business rules, the scope and boundary of systems, and decomposing models. However, even though it is completely ontologically redundant, users still find the function view useful in modeling.


    Energy Technology Data Exchange (ETDEWEB)

    Liu, Michael C.; Bowler, Brendan P.; Best, William M. J. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Dupuy, Trent J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Leggett, S. K. [Gemini Observatory, 670 North A'ohoku Place, Hilo, HI 96720 (United States)


    Using Keck laser guide star adaptive optics imaging, we have found that the T9 dwarf WISE J1217+1626 and T8 dwarf WISE J1711+3500 are exceptional binaries, with unusually wide separations (≈0.8″, 8-15 AU), large near-IR flux ratios (≈2-3 mag), and small mass ratios (≈0.5) compared to previously known field ultracool binaries. Keck/NIRSPEC H-band spectra give a spectral type of Y0 for WISE J1217+1626B, and photometric estimates suggest T9.5 for WISE J1711+3500B. The WISE J1217+1626AB system is very similar to the T9+Y0 binary CFBDSIR J1458+1013AB; these two systems are the coldest known substellar multiples, having secondary components of ≈400 K and being planetary-mass binaries if their ages are ≲1 Gyr. Both WISE J1217+1626B and CFBDSIR J1458+1013B have strikingly blue Y - J colors compared to previously known T dwarfs, including their T9 primaries. Combining all available data, we find that Y - J color drops precipitously between the very latest T dwarfs and the Y dwarfs. The fact that this is seen in (coeval, mono-metallicity) binaries demonstrates that the color drop arises from a change in temperature, not surface gravity or metallicity variations among the field population. Thus, the T/Y transition established by near-IR spectra coincides with a significant change in the ≈1 μm fluxes of ultracool photospheres. One explanation is the depletion of potassium, whose broad absorption wings dominate the far-red optical spectra of T dwarfs. This large color change suggests that far-red data may be valuable for classifying objects of ≲500 K.

  17. SPOTS: The Search for Planets Orbiting Two Stars. II. First constraints on the frequency of sub-stellar companions on wide circumbinary orbits (United States)

    Bonavita, M.; Desidera, S.; Thalmann, C.; Janson, M.; Vigan, A.; Chauvin, G.; Lannier, J.


    A large number of direct imaging surveys for exoplanets have been performed in recent years, yielding the first directly imaged planets and providing constraints on the prevalence and distribution of wide planetary systems. However, like most radial velocity surveys, these generally focus on single stars, so binaries and higher-order multiples have not been studied to the same level of scrutiny. This motivated the Search for Planets Orbiting Two Stars (SPOTS) survey, an ongoing direct imaging study of a large sample of close binaries, started with VLT/NACO and now continuing with VLT/SPHERE. To complement this survey, we have identified the close binary targets in 24 published direct imaging surveys. Here we present our statistical analysis of this combined body of data. We analysed a sample of 117 tight binary systems, using a combined Monte Carlo and Bayesian approach to derive the expected values of the frequency of companions for different values of the companion's semi-major axis. Our analysis suggests that the frequency of sub-stellar companions on wide orbits is moderately low (≲13% with a best value of 6% at the 95% confidence level) and not significantly different between single stars and tight binaries. One implication of this result is that the very high frequency of circumbinary planets on wide orbits around post-common envelope binaries, implied by eclipse timing, cannot be due solely to planets formed before the common-envelope phase (first-generation planets), supporting instead second-generation planet formation or a non-Keplerian origin of the timing variations.
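    The Bayesian step can be illustrated with a toy calculation. The sketch below assumes a binomial detection model with a uniform prior on the companion frequency f; the real analysis folds in per-target detection probabilities via Monte Carlo, which is why its upper limit is higher than this idealized one. The inputs (117 targets, zero detections) are illustrative of the method only.

```python
# Hedged sketch: grid-based posterior upper bound on a companion frequency,
# assuming posterior ~ f^d * (1-f)^(n-d) under a uniform prior. This is NOT
# the paper's completeness-corrected analysis, just the shape of the step.
def upper_credible_bound(n_targets, n_detections, level=0.95, grid=100000):
    """Smallest f such that the posterior mass below f reaches `level`."""
    post = []
    for i in range(grid + 1):
        f = i / grid
        post.append(f ** n_detections * (1.0 - f) ** (n_targets - n_detections))
    total = sum(post)
    acc = 0.0
    for i, p in enumerate(post):
        acc += p
        if acc >= level * total:
            return i / grid
    return 1.0

# With zero detections in 117 targets the posterior is Beta(1, 118):
bound = upper_credible_bound(117, 0)
```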

  18. Clinch River Breeder Reactor Plant Steam Generator Few Tube Test model post-test examination

    Energy Technology Data Exchange (ETDEWEB)

    Impellezzeri, J.R.; Camaret, T.L.; Friske, W.H.


    The Steam Generator Few Tube Test (FTT) was part of an extensive testing program carried out in support of the Clinch River Breeder Reactor Plant (CRBRP) steam generator design. The testing of full-length seven-tube evaporator and three-tube superheater models of the CRBRP design was conducted to provide steady-state thermal/hydraulic performance data up to full power per tube and to verify the absence of multi-year endurance problems. This paper describes the problems encountered with the mechanical features of the FTT model design, which led to premature test termination, and the results of the post-test examination. Tube bowing and significant gouging of the tubes and tube supports were observed. An interpretation of the visual and metallurgical observations is also presented. The CRBRP steam generator has undergone design evaluations to resolve the deficiencies observed in the FTT model.

  19. SPSS and SAS programming for the testing of mediation models. (United States)

    Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S


    Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. The aim is to illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and its SAS counterpart, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows users to complete mediation testing with their own data in a single step. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables, and programming and manuals for using this model are made available.
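    The Sobel statistic itself is a short computation once the two regression coefficients and their standard errors are in hand. A minimal sketch, assuming the usual first-order standard error; the coefficients below are made-up illustrative values, not output of the SPSS/SAS programs described in the article:

```python
import math

# Hedged sketch of the Sobel test for an indirect (mediated) effect a*b,
# where a is the path from intervention to mediator and b the path from
# mediator to outcome (controlling for the intervention).
def sobel_test(a, se_a, b, se_b):
    """Return (z statistic, two-sided p-value) for the indirect effect a*b."""
    se_ab = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    z = (a * b) / se_ab
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided normal p-value
    return z, p

# Illustrative coefficients (invented):
z, p = sobel_test(a=0.50, se_a=0.10, b=0.40, se_b=0.12)
```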

  20. Development of a fault test experimental facility model using Matlab

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez; Moraes, Davi Almeida, E-mail:, E-mail: [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)


    The Fault Test Experimental Facility was developed to simulate a PWR nuclear power plant and is instrumented with temperature, level and pressure sensors. The facility can be operated to generate both normal and fault data; faults can be introduced with initially small magnitudes that are then increased gradually. This work presents the Fault Test Experimental Facility model developed using the Matlab GUIDE (Graphical User Interface Development Environment) toolbox, a set of functions designed to create interfaces quickly and easily. The system model is based on the mass and energy inventory balance equations, and both physical and operational aspects are taken into consideration. The interface layout looks like a process flowchart, and the user can set the input variables. Besides normal operating conditions, there is the possibility of choosing a faulty variable from a list. The program also allows the user to set the noise level for the input variables. Using the model, data were generated for different operational conditions, both normal and faulty, with different noise levels added to the input variables. Data generated by the model will be compared with Fault Test Experimental Facility data, and the theoretical model results will be used for the development of a Monitoring and Fault Detection System. (author)
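    The combination of a mass-balance model, additive sensor noise, and a gradually growing fault can be sketched compactly. The one-tank model and all parameters below (flows, area, fault ramp, noise level) are invented stand-ins for the facility's larger equation set:

```python
import random

# Hedged sketch: a one-tank mass balance with a slowly ramping sensor bias,
# illustrating the kind of normal/faulty data the facility model generates.
def simulate_level(steps=600, dt=1.0, fault_start=300, fault_ramp=0.001,
                   noise=0.002, seed=42):
    """Return (true_level, measured_level) time series."""
    rng = random.Random(seed)
    q_in, q_out_coeff, area = 0.5, 0.05, 10.0   # assumed plant parameters
    level = 5.0
    truth, measured = [], []
    for t in range(steps):
        q_out = q_out_coeff * level             # outflow grows with head
        level += dt * (q_in - q_out) / area     # mass balance update
        bias = fault_ramp * max(0, t - fault_start)  # slowly growing fault
        truth.append(level)
        measured.append(level + bias + rng.gauss(0.0, noise))
    return truth, measured

truth, measured = simulate_level()
```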

  1. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require development of new (uniform) weak convergence results. These results are potentially useful in general for analysis … symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  2. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require development of new (uniform) weak convergence results. These results are potentially useful in general for analysis … symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  3. Target Soil Impact Verification: Experimental Testing and Kayenta Constitutive Modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Broome, Scott Thomas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Flint, Gregory Mark [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Dewers, Thomas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Newell, Pania [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)


    This report details experimental testing and constitutive modeling of sandy soil deformation under quasi-static conditions, driven by the need to understand the constitutive response of soil to target/component behavior upon impact. An experimental and constitutive modeling program was followed to determine elastic-plastic properties and a compressional failure envelope of dry soil. One hydrostatic, one unconfined compressive stress (UCS), nine axisymmetric compression (ACS), and one uniaxial strain (US) test were conducted at room temperature. Elastic moduli, assuming isotropy, are determined from unload/reload loops and final unloading for all tests pre-failure and increase monotonically with mean stress. Very little modulus degradation was discernible from the elastic results, even at mean stresses above 200 MPa. The failure envelope and initial yield surface were determined from peak stresses and the observed onset of plastic yielding in all test results. Soil elasto-plastic behavior is described using the Brannon et al. (2009) Kayenta constitutive model. As a validation exercise, the ACS-parameterized Kayenta model is used to predict the response of the soil material under uniaxial strain loading. The resulting parameterized and validated Kayenta model is of high quality and suitable for modeling sandy soil deformation under a range of conditions, including impact prediction.

  4. Asteroid modeling for testing spacecraft approach and landing. (United States)

    Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick


    Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
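    The faulting step of the terrain-generation pipeline has a well-known minimal form. The sketch below applies random straight-line faults to a flat grid, raising one side and lowering the other; the published models use two-phase Poisson faulting on a sphere plus Perlin noise, so this is a simplified, assumption-laden illustration of the idea only:

```python
import math
import random

# Hedged sketch of random-faulting terrain generation on a flat heightmap.
def fault_terrain(size=64, n_faults=200, step=1.0, seed=1):
    """Each fault picks a random line; one side is raised, the other lowered."""
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    for _ in range(n_faults):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        nx, ny = math.cos(theta), math.sin(theta)   # fault-line normal
        cx, cy = rng.uniform(0, size), rng.uniform(0, size)
        for y in range(size):
            for x in range(size):
                side = (x - cx) * nx + (y - cy) * ny
                h[y][x] += step if side > 0.0 else -step
    return h

heights = fault_terrain()
```

    After many faults the accumulated offsets approach a fractal-looking surface, which crater models and boulders can then be layered onto.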

  5. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: whether each predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.
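    The qualitative checks described amount to comparing the sign of each simulated change against a prior expectation. A toy harness, with a two-equation stand-in "model" and invented numbers (the actual CGE model is far larger):

```python
# Hedged sketch: perturb an exogenous variable, then assert the direction of
# change of endogenous variables against prior economic intuition.
def toy_model(productivity):
    output = 100.0 * productivity      # output rises with productivity
    price = 50.0 / productivity        # price falls as supply expands
    return {"output": output, "price": price}

def sign_check(base, shocked, expected_signs):
    """Compare shocked vs base run against expected directions (+1 / -1)."""
    results = {}
    for var, sign in expected_signs.items():
        delta = shocked[var] - base[var]
        results[var] = (delta > 0) == (sign > 0)
    return results

base = toy_model(1.0)
shocked = toy_model(1.1)               # 10% productivity shock
checks = sign_check(base, shocked, {"output": +1, "price": -1})
```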

  6. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.


    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for radionuclide particulate monitoring under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and, most importantly, the sensitivity requirements of Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve both as a reference document for the APS and as detailed instructions on how to operate the sampler. It provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  7. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda


    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  8. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu


    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.

  9. Modeling cross-hole slug tests in an unconfined aquifer

    CERN Document Server

    Malama, Bwalya; Brauchler, Ralf; Bayer, Peter


    A modified version of a published slug test model for unconfined aquifers is applied to cross-hole slug test data collected in field tests conducted at the Widen site in Switzerland. The model accounts for water-table effects using the linearised kinematic condition. The model also accounts for inertial effects in source and observation wells. The primary objective of this work is to demonstrate applicability of this semi-analytical model to multi-well and multi-level pneumatic slug tests. The pneumatic perturbation was applied at discrete intervals in a source well and monitored at discrete vertical intervals in observation wells. The source and observation well pairs were separated by distances of up to 4 m. The analysis yielded vertical profiles of hydraulic conductivity, specific storage, and specific yield at observation well locations. The hydraulic parameter estimates are compared to results from prior pumping and single-well slug tests conducted at the site, as well as to estimates from particle size ...

  10. Testing GNSS ionosphere models based on the position domain (United States)

    Orus-Perez, Raul; Rovira, Adria


    As is well known, the ionosphere is one of the main contributors to the navigation error of single-frequency users. Currently, many models are available for correcting the ionospheric delay. Each GNSS provides its own ionosphere corrections in the Signal-in-Space: for instance, NeQuick G for Galileo and Klobuchar for GPS. Other sources of ionosphere corrections are the Satellite Based Augmentation Systems (i.e. EGNOS or WAAS), Global Ionospheric Maps (e.g. those provided by IGS), regional maps and even climatological models, like NeQuick or IRI. With this large variety of models, much effort has gone into defining a suitable strategy to test their accuracy. Usually, this testing has been done by computing a "reference ionosphere" using various GNSS techniques, ionosonde data or altimeter data. These techniques are not bias-free, and they raise the question of what absolute accuracy they achieve. To complement these tests, a new methodology has been developed to test ionosphere models for GNSS. It is based on the position domain: the observables on each frequency are modeled with geodetic accuracy, and the resulting least-squares solutions are combined to determine the ionospheric error. Results of this testing for different GIMs from IGS and different Signal-in-Space models (GPS, Galileo, and EGNOS) will be presented for two years of the last solar maximum, using more than 40 receivers worldwide. The weaknesses and strengths of the new methodology will also be shown to give a comprehensive idea of its capabilities.
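    The "reference ionosphere" techniques being complemented here typically rest on the standard dual-frequency geometry-free relation. A minimal sketch of that relation (with invented pseudoranges; this is the conventional combination, not the paper's position-domain method itself):

```python
# Hedged sketch: first-order ionospheric delay on L1 from dual-frequency
# code pseudoranges, using the standard geometry-free combination.
F1 = 1575.42e6   # Hz, GPS L1 carrier frequency
F2 = 1227.60e6   # Hz, GPS L2 carrier frequency

def iono_delay_l1(p1, p2):
    """L1 ionospheric delay in meters from code pseudoranges p1, p2 (m)."""
    gamma = (F1 / F2) ** 2          # frequency-squared ratio
    return (p2 - p1) / (gamma - 1.0)

# Invented illustrative observation: L2 code delayed 3.5 m more than L1.
delay = iono_delay_l1(p1=22_000_000.0, p2=22_000_003.5)
```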

  11. SOFIA 2 model telescope wind tunnel test report (United States)

    Keas, Paul


    This document outlines the tests performed to make aerodynamic force and torque measurements on the SOFIA wind tunnel model telescope. These tests were performed during the SOFIA 2 wind tunnel test in the 14 ft wind tunnel during the months of June through August 1994. The test was designed to measure the dynamic cross-elevation moment acting on the SOFIA model telescope due to aerodynamic loading. The measurements were taken with the telescope mounted in an open cavity in the tail section of the SOFIA model 747. The purpose of the test was to obtain an estimate of the full-scale aerodynamic disturbance spectrum by scaling up the wind tunnel results (taking into account differences in sail area, air density, cavity dimension, etc.). An estimate of the full-scale cross-elevation moment spectrum was needed to help determine the impact this disturbance would have on the telescope positioning system requirements. A model of the telescope structure, made of a lightweight composite material, was mounted in the open cavity of the SOFIA wind tunnel model via a force balance attached to the cavity bulkhead. Despite efforts to use a 'stiff' balance and a lightweight model, the balance/telescope system had a very low resonant frequency (37 Hz) compared to the desired measurement bandwidth (1000 Hz). Because of this mechanical resonance, the balance alone could not provide an accurate measure of the applied aerodynamic force at the high frequencies desired, so a measurement method was developed that incorporates accelerometer signals in addition to the balance signal to calculate the aerodynamic force.
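    The compensation idea can be sketched in one line per sample: near and above the balance resonance the balance reads the applied force minus the inertial load, so adding back an accelerometer-derived inertial term recovers the aerodynamic force. All signals, the effective mass, and the frequencies below are invented for illustration; the actual data reduction was certainly more involved.

```python
import math

# Hedged sketch: F_aero(t) = F_balance(t) + m_eff * a(t), sample by sample.
def aero_force(balance_force, acceleration, effective_mass):
    return [f + effective_mass * a for f, a in zip(balance_force, acceleration)]

# Synthetic example: a 5 Hz aerodynamic input plus 37 Hz resonant ringing
# that contaminates the balance channel (m_eff = 2 kg assumed).
t = [i / 1000.0 for i in range(1000)]
true_force = [math.sin(2 * math.pi * 5.0 * ti) for ti in t]
accel = [10.0 * math.sin(2 * math.pi * 37.0 * ti) for ti in t]
balance = [f - 2.0 * a for f, a in zip(true_force, accel)]
recovered = aero_force(balance, accel, effective_mass=2.0)
```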

  12. Description of Model Tests Carried Out by Aalborg University

    DEFF Research Database (Denmark)

    Frigaard, Peter; Schlütter, F.; Andersen, H.


    As associated partner, Aalborg University (AU) has participated in different aspects of "the Zeebrugge project". AU has carried out an extensive number of small-scale model tests (1:65) with the Zeebrugge breakwater with the aim of investigating scale effects.

  13. Model-based automated testing of critical PLC programs.

    CERN Document Server

    Fernández Adiego, B; Tournier, J-C; González Suárez, V M; Bliudze, S


    Testing of critical PLC (Programmable Logic Controller) programs remains a challenging task for control system engineers, as it can rarely be automated. This paper proposes a model-based approach which uses the BIP (Behavior, Interactions and Priorities) framework to perform automated testing of PLC programs developed with the UNICOS (UNified Industrial COntrol System) framework. The paper defines the translation procedure and rules from UNICOS to BIP, which can be fully automated in order to hide the complexity of the underlying model from the control engineers. The approach is illustrated and validated through the study of a water treatment process.
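    Once a PLC program has a formal model, test sequences can be generated mechanically, for example by searching the model's state space. The sketch below runs breadth-first search over a tiny three-state valve automaton that is an invented stand-in; the actual approach translates UNICOS programs into BIP models, which are far richer than this:

```python
from collections import deque

# Hedged sketch: BFS input-sequence generation covering every transition of a
# small finite-state abstraction of a PLC object (an invented valve model).
TRANSITIONS = {                     # (state, input) -> next state
    ("closed", "open_cmd"): "opening",
    ("opening", "opened_fb"): "open",
    ("open", "close_cmd"): "closed",
}

def generate_tests(start, max_len=3):
    """Return input sequences that together exercise each transition once."""
    tests, seen = [], set()
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for (s, inp), nxt in TRANSITIONS.items():
            if s == state and (s, inp) not in seen and len(path) < max_len:
                seen.add((s, inp))
                tests.append(path + [inp])
                queue.append((nxt, path + [inp]))
    return tests

cases = generate_tests("closed")
```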

  14. Design verification and cold-flow modeling test report

    Energy Technology Data Exchange (ETDEWEB)


    This report presents a compilation of the following three test reports prepared by TRW for Alaska Industrial Development and Export Authority (AIDEA) as part of the Healy Clean Coal Project, Phase 1 Design of the TRW Combustor and Auxiliary Systems, which is co-sponsored by the Department of Energy under the Clean Coal Technology 3 Program: (1) Design Verification Test Report, dated April 1993, (2) Combustor Cold Flow Model Report, dated August 28, 1992, (3) Coal Feed System Cold Flow Model Report, October 28, 1992. In this compilation, these three reports are included in one volume consisting of three parts, and TRW proprietary information has been excluded.

  15. Model test of new floating offshore wind turbine platforms

    Directory of Open Access Journals (Sweden)

    Hyunkyoung Shin


    This paper presents the model test results for 3 new spar platforms which were developed based on the OC3-Hywind spar to support a 5-MW wind turbine. By changing the shape while keeping both the volume and mass of the OC3-Hywind spar platform, these platforms were expected to experience different hydrodynamic and hydrostatic loads. The scale models were built with a 1/128 scale ratio. The model tests were carried out in waves, including both the rotating-rotor effect and a mean wind speed. The characteristic motions of the 3 new models were measured; Response Amplitude Operators (RAOs) and significant motions were calculated and compared with those of OC3-Hywind.
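    Results from a 1/128 model are mapped to full scale by Froude similitude: times scale with the square root of the length ratio and forces with its cube (times the density ratio for salt water). A minimal sketch, with invented model-basin measurements:

```python
import math

# Hedged sketch of Froude scaling for a 1/128 floating-platform model test.
LAMBDA = 128.0                      # geometric scale ratio (full / model)

def to_full_scale(model_period_s, model_force_n, rho_ratio=1.025):
    """Froude similitude: time ~ sqrt(L), force ~ rho * L^3."""
    full_period = model_period_s * math.sqrt(LAMBDA)
    full_force = model_force_n * rho_ratio * LAMBDA ** 3
    return full_period, full_force

# Invented model measurements: 1.0 s wave period, 0.5 N mooring force.
period, force = to_full_scale(model_period_s=1.0, model_force_n=0.5)
```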

  16. Evaluating planetary digital terrain models-The HRSC DTM test (United States)

    Heipke, C.; Oberst, J.; Albertz, J.; Attwenger, M.; Dorninger, P.; Dorrer, E.; Ewe, M.; Gehrke, S.; Gwinner, K.; Hirschmuller, H.; Kim, J.R.; Kirk, R.L.; Mayer, H.; Muller, Jan-Peter; Rengarajan, R.; Rentsch, M.; Schmidt, R.; Scholten, F.; Shan, J.; Spiegel, M.; Wahlisch, M.; Neukum, G.


    The High Resolution Stereo Camera (HRSC) has been orbiting the planet Mars since January 2004 onboard the European Space Agency (ESA) Mars Express mission and delivers imagery which is being used for topographic mapping of the planet. The HRSC team has conducted a systematic inter-comparison of different alternatives for the production of high resolution digital terrain models (DTMs) from the multi-look HRSC push broom imagery. Based on carefully chosen test sites, the test participants have produced DTMs which have been subsequently analysed in a quantitative and a qualitative manner. This paper reports on the results obtained in this test. © 2007 Elsevier Ltd. All rights reserved.

  17. Exact Hypothesis Tests for Log-linear Models with exactLoglinTest

    Directory of Open Access Journals (Sweden)

    Brian Caffo


    This manuscript overviews exact testing of goodness of fit for log-linear models using the R package exactLoglinTest. This package evaluates model fit for Poisson log-linear models by conditioning on minimal sufficient statistics to remove nuisance parameters. A Monte Carlo algorithm is proposed to estimate P values from the resulting conditional distribution. In particular, this package implements a sequentially rounded normal approximation and importance sampling to approximate probabilities from the conditional distribution. Usually, this results in a high percentage of valid samples. However, in instances where this is not the case, a Metropolis Hastings algorithm can be implemented that makes more localized jumps within the reference set. The manuscript details how some conditional tests for binomial logit models can also be viewed as conditional Poisson log-linear models and hence can be performed via exactLoglinTest. A diverse battery of examples is considered to highlight use, features and extensions of the software. Notably, potential extensions to evaluating disclosure risk are also considered.

  18. Building and testing models with extended Higgs sectors (United States)

    Ivanov, Igor P.


    Models with non-minimal Higgs sectors represent a mainstream direction in theoretical exploration of physics opportunities beyond the Standard Model. Extended scalar sectors help alleviate difficulties of the Standard Model and lead to a rich spectrum of characteristic collider signatures and astroparticle consequences. In this review, we introduce the reader to the world of extended Higgs sectors. Not pretending to exhaustively cover the entire body of literature, we walk through a selection of the most popular examples: the two- and multi-Higgs-doublet models, as well as singlet and triplet extensions. We will show how one typically builds models with extended Higgs sectors, describe the main goals and the challenges which arise on the way, and mention some methods to overcome them. We will also describe how such models can be tested, what are the key observables one focuses on, and illustrate the general strategy with a subjective selection of results.

  19. Animal models for dengue vaccine development and testing. (United States)

    Na, Woonsung; Yeom, Minjoo; Choi, Il-Kyu; Yook, Heejun; Song, Daesub


Dengue fever is a tropical endemic disease; however, because of climate change, it may become a problem in South Korea in the near future. Research on vaccines for dengue fever and on outbreak preparedness is currently insufficient. In addition, because there are no appropriate animal models, controversial results from vaccine efficacy assessments and clinical trials have been reported. Therefore, to study the mechanism of dengue fever and test the immunogenicity of vaccines, an appropriate animal model is urgently needed. In addition to mouse models, more suitable models using animals that can be humanized will need to be constructed. In this report, we review the current status of animal model construction and discuss which models require further development.


    Directory of Open Access Journals (Sweden)

    Farhat Iqbal


In this paper the asymptotic distribution of the absolute residual autocorrelations from generalized autoregressive conditional heteroscedastic (GARCH) models is derived. The correct asymptotic standard errors for the absolute residual autocorrelations are also obtained, and based on these results a diagnostic test for checking the adequacy of GARCH-type models is developed. Our results do not depend on the existence of higher moments and are therefore robust under heavy-tailed distributions.
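
The ingredients of such a diagnostic can be sketched as follows. This is a generic Ljung-Box-style portmanteau statistic computed on absolute standardized residuals with the naive 1/n variance, not the corrected asymptotic standard errors derived in the paper:

```python
import numpy as np

def abs_residual_acf(resid, max_lag=10):
    """Sample autocorrelations of the absolute (standardized) residuals."""
    z = np.abs(np.asarray(resid, dtype=float))
    z = z - z.mean()
    denom = float((z ** 2).sum())
    return np.array(
        [(z[k:] * z[:-k]).sum() / denom for k in range(1, max_lag + 1)]
    )

def abs_residual_portmanteau(resid, max_lag=10):
    """Ljung-Box-type statistic on the absolute residual autocorrelations;
    under an adequate fit it is referred to a chi-square(max_lag) law."""
    n = len(resid)
    r = abs_residual_acf(resid, max_lag)
    lags = np.arange(1, max_lag + 1)
    return float(n * (n + 2) * np.sum(r ** 2 / (n - lags)))
```

In practice `resid` would be the standardized residuals from a fitted GARCH model; a large statistic flags structure left over in the absolute residuals, i.e. an inadequate volatility specification.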

  1. Drive Rig Mufflers for Model Scale Engine Acoustic Testing (United States)

    Stephens, David


    Testing of air breathing propulsion systems in the 9x15 foot wind tunnel at NASA Glenn Research Center depends on compressed air turbines for power. The drive rig turbines exhaust directly to the wind tunnel test section, and have been found to produce significant unwanted noise that reduces the quality of the acoustic measurements of the model being tested. In order to mitigate this acoustic contamination, a muffler can be attached downstream of the drive rig turbine. The modern engine designs currently being tested produce much less noise than traditional engines, and consequently a lower noise floor is required of the facility. An acoustic test of a muffler designed to mitigate this extraneous noise is presented, and a noise reduction of 8 dB between 700 Hz and 20 kHz was documented, significantly improving the quality of acoustic measurements in the facility.

  2. Operational Testing of Satellite based Hydrological Model (SHM) (United States)

    Gaur, Srishti; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghavendra P.


Incorporation of the concept of transposability in model testing is one of the prominent ways to check the credibility of a hydrological model. Successful testing ensures the ability of hydrological models to deal with changing conditions, along with their extrapolation capacity. For a newly developed model, a number of uncertainties arise regarding its applicability, so testing the model's credibility is essential to proficiently assess its strengths and limitations. This concept motivates the 'Hierarchical Operational Testing' of the Satellite based Hydrological Model (SHM), a newly developed surface water-groundwater coupled model, under the PRACRITI-2 program initiated by the Space Application Centre (SAC), Ahmedabad. SHM aims at sustainable water resources management using remote sensing data from Indian satellites. It consists of grid cells of 5km x 5km resolution and comprises five modules, namely: Surface Water (SW), Forest (F), Snow (S), Groundwater (GW) and Routing (ROU). The SW module (which functions in grid cells with land cover other than forest and snow) estimates surface runoff, soil moisture and evapotranspiration using the NRCS-CN method, a water balance and the Hargreaves method, respectively. The hydrology of the F module depends entirely on sub-surface processes, and its water balance is calculated accordingly. The GW module generates baseflow (depending on water table variation with the level of water in streams) using the Boussinesq equation. The ROU module is grounded on a cell-to-cell routing technique based on the principle of the Time Variant Spatially Distributed Direct Runoff Hydrograph (SDDH) to route the runoff and baseflow generated by the different modules up to the outlet. For this study the Subarnarekha river basin, a flood-prone zone of eastern India, has been chosen for the hierarchical operational testing scheme, which includes tests under stationary as well as transitory conditions. For this the basin has been divided into three sub-basins using three flow
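
The NRCS-CN step of the SW module follows the standard curve-number relation, which can be sketched as below. This is the textbook SI-unit form of the method, not SHM's actual code; the 0.2 initial-abstraction ratio is the conventional default:

```python
def scs_cn_runoff(rainfall_mm, curve_number, ia_ratio=0.2):
    """Event runoff depth (mm) via the NRCS curve-number method.

    S  = potential maximum retention (mm), derived from the curve number CN
    Ia = initial abstraction, conventionally 0.2 * S
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 25400.0 / curve_number - 254.0
    ia = ia_ratio * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)
```

For CN = 100 (impervious) all rainfall becomes runoff, and below the initial abstraction no runoff is generated, which are the two sanity limits of the method.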

  3. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations (United States)

    Loyselle, Patricia; Prokopius, Kevin


    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  4. Testing ocean tide models using GGP superconducting gravimeter observations (United States)

    Baker, T.; Bos, M.


    Observations from the global network of superconducting gravimeters in the Global Geodynamics Project (GGP) are used to test 10 ocean tide models (SCHW; FES94.1, 95.2, 98, 99; CSR3.0, 4.0; TPXO.5; GOT99.2b; and NAO.99b). In addition, observations are used from selected sites with LaCoste and Romberg gravimeters with electrostatic feedback, where special attention has been given to achieving a calibration accuracy of 0.1%. In Europe, there are several superconducting gravimeter stations in a relatively small area and this can be used to advantage in testing the ocean (and body) tide models and in identifying sites with anomalous observations. At some of the superconducting gravimeter sites there are anomalies in the in-phase components of the main tidal harmonics, which are due to calibration errors of up to 0.3%. It is shown that the recent ocean tide models are in better agreement with the tidal gravity observations than were the earlier models of Schwiderski and FES94.1. However, no single ocean tide model gives completely satisfactory results in all areas of the world. For example, for M2 the TPXO.5 and NAO99b models give anomalous results in Europe, whereas the FES95.2, FES98 and FES99 models give anomalous results in China and Japan. It is shown that the observations from this improved set of tidal gravity stations will provide an important test of the new ocean tide models that will be developed in the next few years. For further details see Baker, T.F. and Bos, M.S. (2003). "Validating Earth and ocean tide models using tidal gravity measurements", Geophysical Journal International, 152.
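
The per-harmonic comparison described above reduces to a residual phasor between observed and modelled tidal gravity. A small sketch (the helper name is ours; amplitudes in any common unit, phases in degrees):

```python
import cmath
import math

def tidal_residual(amp_obs, phase_obs_deg, amp_mod, phase_mod_deg):
    """Residual vector (observed minus modelled) for one tidal harmonic.

    Returns (magnitude, in-phase part, quadrature part). A pure gravimeter
    calibration error scales the observed amplitude only, so it shows up
    in the in-phase component, which is how calibration anomalies of up
    to 0.3% can be identified.
    """
    obs = cmath.rect(amp_obs, math.radians(phase_obs_deg))
    mod = cmath.rect(amp_mod, math.radians(phase_mod_deg))
    res = obs - mod
    return abs(res), res.real, res.imag
```

A 0.3% amplitude error at zero phase lag appears entirely in the in-phase component, with no quadrature signal.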

  5. A General Model for Testing Mediation and Moderation Effects (United States)

    MacKinnon, David P.


    This paper describes methods for testing mediation and moderation effects in a dataset, both together and separately. Investigations of this kind are especially valuable in prevention research to obtain information on the process by which a program achieves its effects and whether the program is effective for subgroups of individuals. A general model that simultaneously estimates mediation and moderation effects is presented, and the utility of combining the effects into a single model is described. Possible effects of interest in the model are explained, as are statistical methods to assess these effects. The methods are further illustrated in a hypothetical prevention program example. PMID:19003535
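
The core mediation part of such a model is the product-of-coefficients estimate: a from regressing the mediator on the program variable, b from regressing the outcome on both. A minimal sketch under simulated data, with no standard errors or significance testing (which the paper's methods provide):

```python
import numpy as np

def ols_coefs(y, X):
    """Least-squares coefficients, with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """Product-of-coefficients mediated effect a*b:
    a = effect of X on the mediator M (from M ~ X);
    b = effect of M on Y controlling for X (from Y ~ X + M)."""
    a = ols_coefs(m, x.reshape(-1, 1))[1]
    b = ols_coefs(y, np.column_stack([x, m]))[2]
    return a * b
```

With data generated as M = 0.5 X + noise and Y = 0.7 M + noise, the estimate recovers the true indirect effect 0.5 * 0.7 = 0.35; when the outcome depends on X alone, the estimate is zero.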

  6. Transient testing using EMTP modeling in an automatic playback mode

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, J. B. [Schweitzer Engineering Laboratories Inc., Pullman, WA (United States)


The need to test protection schemes under realistic power system conditions, as opposed to doing steady-state tests, was discussed. Transient testing is one of the methods that gives engineers the confidence they need to use newly developed protection schemes. Real-time digital simulators typically use a program like the Electromagnetic Transients Program (EMTP) to model power systems. The digitally generated output of EMTP is converted to analog signals in real-time mode via digital-to-analog converters and power amplifiers. COMTRADE is one of the standards that provides a format for playback of power system events. This paper describes the method for transient testing of protective relays using EMTP as the means of modeling the power system, and for replaying the modeled disturbances in an automatic batch-mode playback, for complete, cost-effective and thorough testing of the protection system. The system is most useful in situations where the protection system is reasonably predictable or where the number of cases is relatively small. 3 refs., 4 figs.

  7. Testing exact rational expectations in cointegrated vector autoregressive models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh


    This paper considers the testing of restrictions implied by rational expectations hypotheses in a cointegrated vector autoregressive model for I(1) variables. If the rational expectations involve one-step-ahead observations only and the coefficients are known, an explicit parameterization...

  8. Instruction manual model 600F, data transmission test set (United States)


    Information necessary for the operation and maintenance of the Model 600F Data Transmission Test Set is presented. A description is contained of the physical and functional characteristics; pertinent installation data; instructions for operating the equipment; general and detailed principles of operation; preventive and corrective maintenance procedures; and block, logic, and component layout diagrams of the equipment and its major component assemblies.

  9. Testing the minimal supersymmetric standard model with the mass ...

    Indian Academy of Sciences (India)

…particles (see e.g. refs [5,6]). Consequently, a precise theoretical prediction for M_W in terms of the model parameters is of utmost importance for present and future electroweak precision tests. A precise prediction for M_W in the MSSM is also needed as a part of the 'SPA Convention and Project' (see ref.

  10. JTorX: Exploring Model-Based Testing

    NARCIS (Netherlands)

    Belinfante, Axel


    The overall goal of the work described in this thesis is: ``To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context.'' From this goal, we derive functional and

  11. Stochastic Models for Strength of Wind Turbine Blades using Tests

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    The structural cost of wind turbine blades is dependent on the values of the partial safety factors which reflect the uncertainties in the design values, including statistical uncertainty from a limited number of tests. This paper presents a probabilistic model for ultimate and fatigue strength...

  12. Model Testing - Bringing the Ocean into the Laboratory

    DEFF Research Database (Denmark)

    Aage, Christian


    Hydrodynamic model testing, the principle of bringing the ocean into the laboratory to study the behaviour of the ocean itself and the response of man-made structures in the ocean in reduced scale, has been known for centuries. Due to an insufficient understanding of the physics involved, however...

  13. Testing macroeconomic models by indirect inference on unfiltered data


    Meenagh, David; Minford, Patrick; Wickens, Michael


    We extend the method of indirect inference testing to data that is not filtered and so may be non-stationary. We apply the method to an open economy real business cycle model on UK data. We review the method using a Monte Carlo experiment and find that it performs accurately and has good power.

  14. Measuring damage in physical model tests of rubble mounds

    NARCIS (Netherlands)

    Hofland, B.; Rosa-Santos, Paulo; Taveira-Pinto, Francisco; Lemos, Rute; Mendonça, A.; Juana Fortes, C


    This paper studies novel ways to evaluate armour damage in physical models of coastal structures. High-resolution damage data for reference rubble mound breakwaters obtained under the HYDRALAB+ joint-research project are analysed and discussed. These tests are used to analyse the way to describe

  15. Project Physics Tests 5, Models of the Atom. (United States)

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

Test items relating to Project Physics Unit 5 are presented in this booklet. Included are 70 multiple-choice and 23 problem-and-essay questions. Concepts of atomic models are examined, covering relativistic corrections, electron emission, the photoelectric effect, the Compton effect, quantum theories, electrolysis experiments, atomic number and mass,…

  16. Ares I Scale Model Acoustic Test Lift-Off Acoustics (United States)

    Counter, Douglas D.; Houston, Janie D.


    The lift-off acoustic (LOA) environment is an important design factor for any launch vehicle. For the Ares I vehicle, the LOA environments were derived by scaling flight data from other launch vehicles. The Ares I LOA predicted environments are compared to the Ares I Scale Model Acoustic Test (ASMAT) preliminary results.

  17. A Test Procedure for Determining Models of LV Equipment

    NARCIS (Netherlands)

    Cuk, Vladimir; Cobben, Joseph F.G.; Kling, Wil L.; Timens, R.B.; Leferink, Frank Bernardus Johannes


    An automated test technique for determining parameters of low voltage equipment is presented in the paper. The aim of this research is to obtain simple models of household, office and industrial equipment which could be used to predict power quality problems during the design of low voltage

  18. Xenopus laevis embryos and tadpoles as models for testing for ...

    African Journals Online (AJOL)

    Xenopus laevis embryos and tadpoles as models for testing for pollution by zinc, copper, lead and cadmium. ... Metals affected the growth of the tadpoles by reducing body length with increasing concentrations. An increase in the concentration of each metal resulted in an increase in the frequency and severity of ...

  19. Towards a pragmatic human migraine model for drug testing

    DEFF Research Database (Denmark)

    Hansen, Emma Katrine; Olesen, Jes


    BACKGROUND: A model for the testing of novel anti-migraine drugs should preferably use healthy volunteers for ease of recruiting. Isosorbide-5-mononitrate (5-ISMN) provokes headache in healthy volunteers with some migraine features such as pulsating pain quality and aggravation by physical activity...

  20. Toward a pragmatic migraine model for drug testing

    DEFF Research Database (Denmark)

    Hansen, Emma Katrine; Guo, Song; Ashina, Messoud


    BACKGROUND: A model for the testing of novel antimigraine drugs should ideally use healthy volunteers for ease of recruiting. Cilostazol provokes headache in healthy volunteers with some migraine features such as pulsating pain quality and aggravation by physical activity. Therefore, this headach...

  1. Testing one-body density functionals on a solvable model

    CERN Document Server

    Benavides-Riveros, Carlos L


    There are several physically motivated density matrix functionals in the literature, built from the knowledge of the natural orbitals and the occupation numbers of the one-body reduced density matrix. With the help of the equivalent phase-space formalism, we thoroughly test some of the most popular of those functionals on a completely solvable model.

  2. Testing one-body density functionals on a solvable model (United States)

    Benavides-Riveros, C. L.; Várilly, J. C.


    There are several physically motivated density matrix functionals in the literature, built from the knowledge of the natural orbitals and the occupation numbers of the one-body reduced density matrix. With the help of the equivalent phase-space formalism, we thoroughly test some of the most popular of those functionals on a completely solvable model.

  3. Modeling DIF in Complex Response Data Using Test Design Strategies (United States)

    Kahraman, Nilufer; De Boeck, Paul; Janssen, Rianne


    This study introduces an approach for modeling multidimensional response data with construct-relevant group and domain factors. The item level parameter estimation process is extended to incorporate the refined effects of test dimension and group factors. Differences in item performances over groups are evaluated, distinguishing two levels of…

  4. Parametric Thermal Models of the Transient Reactor Test Facility (TREAT)

    Energy Technology Data Exchange (ETDEWEB)

    Bradley K. Heath


This work supports the restart of transient testing in the United States using the Department of Energy's Transient Reactor Test Facility at the Idaho National Laboratory. It also supports the Global Threat Reduction Initiative by reducing the proliferation risk of high-enriched uranium fuel. The work involves the creation of a nuclear fuel assembly model using the fuel performance code known as BISON. The model simulates the thermal behavior of a nuclear fuel assembly during steady-state and transient operational modes. Additional models of the same geometry but differing material properties are created to perform parametric studies. The results show that fuel and cladding thermal conductivity have the greatest effect on fuel temperature under the steady-state operational mode. Fuel density and fuel specific heat have the greatest effect for the transient operational mode. When considering a new fuel type, it is recommended to use materials that decrease the specific heat of the fuel and the thermal conductivity of the fuel's cladding in order to deal with the higher-density fuels that accompany the LEU conversion process. Data on the latest operating conditions of TREAT need to be obtained in order to validate BISON's results. BISON's models for TREAT (material models, boundary convection models) are modest and need additional work to ensure accuracy and confidence in the results.
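
The sensitivity ranking reported above, with thermal conductivity and heat-transfer terms setting the steady state and specific heat setting the transient, is the behaviour of any conduction-limited thermal system. A lumped-capacitance sketch (not a BISON model; all parameter names are illustrative) makes this visible:

```python
import math

def lumped_fuel_temp(t_s, power_w, h_w_m2k, area_m2, mass_kg, cp_j_kgk,
                     t_coolant_k=300.0):
    """Analytic solution of m*cp*dT/dt = P - h*A*(T - Tc), starting at Tc.

    The steady-state temperature is set by P and h*A alone, while the
    specific heat cp only sets the time constant tau of the transient.
    """
    tau = mass_kg * cp_j_kgk / (h_w_m2k * area_m2)
    t_ss = t_coolant_k + power_w / (h_w_m2k * area_m2)
    return t_ss + (t_coolant_k - t_ss) * math.exp(-t_s / tau)
```

Doubling cp leaves the long-time temperature unchanged but slows the approach to it, mirroring the parametric-study result quoted above.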

  5. Testing Ocean Tide Models Using Superconducting Gravimeter Observations (United States)

    Baker, T. F.; Bos, M. S.


    Observations from the global network of superconducting gravimeters in the Global Geodynamics Project (GGP) are used to test 10 recent ocean tide models. In addition, observations are used from selected sites with LaCoste and Romberg gravimeters with electrostatic feedback, where special attention has been given to achieving a calibration accuracy of 0.1%. At some superconducting gravimeter sites there are anomalies in the in-phase components of the main tidal harmonics, which are due to calibration errors of up to 0.3%. It is shown that the recent ocean tide models are in better agreement with the tidal gravity observations than were the earlier models of Schwiderski and FES94.1. However, no single ocean tide model gives completely satisfactory results in all areas of the world. For example, for M2 the TPXO.5 and NAO99b models give anomalous results in Europe, whereas the FES95.2, FES98 and FES99 models give anomalous results in China and Japan. It is shown that the observations from this improved set of tidal gravity stations will provide an important test of the new ocean tide models that will be developed in the next few years.

  6. Testing Premixed Turbulent Combustion Models by Studying Flame Dynamics

    Directory of Open Access Journals (Sweden)

    Andrei N. Lipatnikov


First, the following universal feature of premixed turbulent flame dynamics is highlighted: during an early stage of flame development, the burning velocity grows much faster than the mean flame brush thickness, because the two processes are controlled by the small-scale and large-scale turbulent eddies, respectively. Second, this feature of developing flames is exploited in order to test a number of different models of premixed turbulent combustion by theoretically and numerically studying the interaction of an initially laminar, planar, one-dimensional flame with a statistically stationary, planar, one-dimensional, and spatially uniform turbulent flow not affected by combustion. To test as many models as possible in a simple and unified manner, the combustion models are divided into three generalized groups: (i) algebraic models, which invoke an algebraic expression for the mean rate of product creation; (ii) gradient models, which involve a gradient-type source term in a balance equation for the mean combustion progress variable; and (iii) two-equation models, which deal not only with a balance equation for the mean combustion progress variable but also with either a balance equation for the flame surface density or a balance equation for the mean scalar dissipation rate. Analytical and numerical results reported in the paper indicate that only the gradient models are able to yield substantially different growth rates of the turbulent burning velocity and the mean flame brush thickness.

  7. A Coupled THMC model of FEBEX mock-up test

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Liange; Samper, Javier


FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project for the engineered barrier system (EBS) of a radioactive waste repository in granite. It includes two full-scale heating and hydration tests: the in situ test performed at Grimsel (Switzerland) and a mock-up test operating at the CIEMAT facilities in Madrid (Spain). The mock-up test provides valuable insight into the thermal, hydrodynamic, mechanical and chemical (THMC) behavior of the EBS because its hydration is controlled better than that of the in situ test, in which the buffer is saturated with water from the surrounding granitic rock. Here we present a coupled THMC model of the mock-up test which accounts for thermal and chemical osmosis and bentonite swelling with a state-surface approach. The THMC model reproduces measured temperature and cumulative water inflow data. It also fits relative humidity data at the outer part of the buffer, but underestimates relative humidities near the heater. Dilution due to hydration and evaporation near the heater are the main processes controlling the concentration of conservative species, while surface complexation, mineral dissolution/precipitation and cation exchange significantly affect reactive species as well. Sensitivity analyses of the chemical processes show that pH is mostly controlled by surface complexation while dissolved cation concentrations are controlled by cation exchange reactions.

  8. ExEP yield modeling tool and validation test results (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul


EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
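
The unit-test layer of such a plan can be illustrated in miniature. The formula and numbers below are hypothetical stand-ins, not EXOSIMS code: a photon-noise integration time t solving SNR = F_p t / sqrt((F_p + F_b) t) is checked against an independently hand-computed reference value, in the spirit of the deterministic quadrature tests described above:

```python
import unittest

def snr_integration_time(flux_planet, flux_background, snr_target):
    """Time for a photon-noise-limited detection to reach snr_target.

    From SNR = F_p * t / sqrt((F_p + F_b) * t), solving for t gives
    t = SNR^2 * (F_p + F_b) / F_p^2.
    """
    return snr_target ** 2 * (flux_planet + flux_background) / flux_planet ** 2

class IntegrationTimeCrossValidation(unittest.TestCase):
    def test_against_hand_computed_reference(self):
        # F_p = 10, F_b = 90, SNR = 5  =>  t = 25 * 100 / 100 = 25
        self.assertAlmostEqual(snr_integration_time(10.0, 90.0, 5.0), 25.0)

    def test_time_scales_with_snr_squared(self):
        t1 = snr_integration_time(10.0, 90.0, 5.0)
        t2 = snr_integration_time(10.0, 90.0, 10.0)
        self.assertAlmostEqual(t2, 4.0 * t1)
```

Cross-validation with another yield tool follows the same pattern, with the reference value taken from the other tool's output instead of a hand calculation.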

  9. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model. (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten


Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests: it closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
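
The comparison underlying the Hausman test can be written down compactly. A generic sketch of the statistic, not the paper's item-parameter-specific construction (a pseudo-inverse is used in case the covariance difference is singular):

```python
import numpy as np

def hausman_statistic(b1, v1, b2, v2):
    """Hausman misspecification statistic H = d' (V1 - V2)^+ d, d = b1 - b2.

    b1/V1: consistent-but-inefficient estimate and its covariance;
    b2/V2: efficient-under-the-null estimate and its covariance.
    Under the null, H is approximately chi-square with rank(V1 - V2)
    degrees of freedom, so similar estimates yield a small H.
    """
    d = np.asarray(b1, float) - np.asarray(b2, float)
    v = np.asarray(v1, float) - np.asarray(v2, float)
    return float(d @ np.linalg.pinv(v) @ d)
```

Identical estimates give H = 0; diverging estimates inflate H relative to the chi-square reference, signalling misspecification.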

  10. Modified porosity rate frost heave model and tests verification (United States)

    Ji, Zhi-qiang; Xu, Xue-yan


To avoid the complexity of modeling frost heave at the microscopic scale, a porosity rate function has been used in the prediction of frost heave phenomena. The approach explored in this paper is based on frost heave tests and the concept of segregation potential, which has been widely accepted by researchers, in order to find the proper form of the porosity rate function. In the frozen fringe the porosity rate function was derived as ṅ = B·e^(−a·Pe)·(∇T)²·(1 − n) for Ts < T < Tf. Tests were carried out to verify the model, and the comparison between test results and simulated results shows that the modified model is efficient for the prediction of frost heave and can be used in engineering practice.
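
A numerical reading of that rate function (symbols as in the abstract: B and a empirical constants, Pe the pressure term, ∇T the temperature gradient across the frozen fringe; the parameter values below are purely illustrative) might look like:

```python
import math

def porosity_rate(n, grad_t, pe, b_coef, a_coef):
    """ndot = B * exp(-a * Pe) * (grad T)^2 * (1 - n).

    The rate vanishes as porosity approaches 1 and is damped
    exponentially by the pressure term Pe."""
    return b_coef * math.exp(-a_coef * pe) * grad_t ** 2 * (1.0 - n)

def integrate_porosity(n0, grad_t, pe, b_coef, a_coef, dt, steps):
    """Forward-Euler growth of porosity in the frozen fringe."""
    n = n0
    for _ in range(steps):
        n += dt * porosity_rate(n, grad_t, pe, b_coef, a_coef)
    return n
```

Because of the (1 − n) factor the porosity grows monotonically toward 1 but never exceeds it (for a stable step size), and larger Pe slows the growth.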

  11. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. … This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known…

  12. The Latent Class Model as a Measurement Model for Situational Judgment Tests

    Directory of Open Access Journals (Sweden)

    Frank Rijmen


In a situational judgment test, it is often debatable what constitutes a correct answer to a situation. There is currently a multitude of scoring procedures. Establishing a measurement model can guide the selection of a scoring rule. It is argued that the latent class model is a good candidate for a measurement model. Two latent class models are applied to the Managing Emotions subtest of the Mayer, Salovey, Caruso Emotional Intelligence Test: a plain-vanilla latent class model, and a second-order latent class model that takes into account the clustering of several possible reactions within each hypothetical scenario of the situational judgment test. The results for both models indicated that there were three subgroups characterised by the degree to which they differentiated between possible reactions in terms of perceived effectiveness. Furthermore, the results for the second-order model indicated a moderate cluster effect.
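
For a plain-vanilla latent class model with binary responses, the EM updates are short enough to sketch. This is a generic implementation, not the exact models fitted in the paper (the second-order clustering structure is omitted, and the items here are dichotomous rather than rated reactions):

```python
import numpy as np

def latent_class_em(X, n_classes=2, n_iter=300, seed=0):
    """EM for a latent class model with binary items.

    Parameters: class weights pi[k] and per-class endorsement
    probabilities p[k, j] = P(item j answered 1 | class k).
    Returns (pi, p, responsibilities).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    pi = np.full(n_classes, 1.0 / n_classes)
    p = rng.uniform(0.3, 0.7, size=(n_classes, X.shape[1]))
    for _ in range(n_iter):
        # E-step: posterior class membership for every respondent.
        log_post = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and endorsement probabilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        p = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1.0 - 1e-6)
    return pi, p, resp
```

On data simulated from two well-separated classes (endorsement rates 0.9 versus 0.1), the EM recovers one high-endorsement and one low-endorsement class.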


    Energy Technology Data Exchange (ETDEWEB)

    Street, R. A.; Tsapras, Y. [LCOGT, 6740 Cortona Drive, Suite 102, Goleta, CA 93117 (United States); Choi, J.-Y.; Han, C. [Department of Physics, Institute for Astrophysics, Chungbuk National University, Cheongju 361-763 (Korea, Republic of); Furusawa, K. [Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya 464-8601 (Japan); Hundertmark, M.; Horne, K.; Dominik, M.; Browne, P.; Bajek, D. [SUPA/St Andrews, Department of Physics and Astronomy, North Haugh, St. Andrews, Fife KY16 9SS (United Kingdom); Gould, A. [Department of Astronomy, Ohio State University, McPherson Laboratory, 140 West 18th Avenue, Columbus, OH 43210-1173 (United States); Sumi, T. [Department of Earth and Space Science, Graduate School of Science, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka 560-0043 (Japan); Bond, I. A. [Institute of Information and Mathematical Sciences, Massey University, Private Bag 102-904, North Shore Mail Centre, Auckland (New Zealand); Wouters, D. [UPMC-CNRS, UMR 7095, Institut d' Astrophysique de Paris, 98bis boulevard Arago, F-75014 Paris (France); Zellem, R. [Lunar and Planetary Laboratory, Department of Planetary Sciences, University of Arizona, 1629 East University Boulevard, Tucson, AZ 85721-0092 (United States); Udalski, A. [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa (Poland); Snodgrass, C. [Max Planck Institute for Solar System Research, Max-Planck-Str. 2, D-37191 Katlenburg-Lindau (Germany); Kains, N.; Bramich, D. M. [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei Muenchen (Germany); Steele, I. A., E-mail: [Astrophysics Research Institute, Liverpool John Moores University, Twelve Quays House, Egerton Wharf, Birkenhead, Wirral CH41 1LD (United Kingdom); Collaboration: RoboNet Collaboration; MOA Collaboration; OGLE Collaboration; muFUN Collaboration; PLANET Collaboration; MiNDSTEp Collaboration; and others


    We present an analysis of the anomalous microlensing event, MOA-2010-BLG-073, announced by the Microlensing Observations in Astrophysics survey on 2010 March 18. This event was remarkable because the source was previously known to be photometrically variable. Analyzing the pre-event source light curve, we demonstrate that it is an irregular variable over timescales >200 days. Its dereddened color, (V − I)_S,0, is 1.221 ± 0.051 mag, and from our lens model we derive a source radius of 14.7 ± 1.3 R_Sun, suggesting that it is a red giant star. We initially explored a number of purely microlensing models for the event but found a residual gradient in the data taken prior to and after the event. This is likely to be due to the variability of the source rather than part of the lensing event, so we incorporated a slope parameter in our model in order to derive the true parameters of the lensing system. We find that the lensing system has a mass ratio of q = 0.0654 ± 0.0006. The Einstein crossing time of the event, t_E = 44.3 ± 0.1 days, was sufficiently long that the light curve exhibited parallax effects. In addition, the source trajectory relative to the large caustic structure allowed the orbital motion of the lens system to be detected. Combining the parallax with the Einstein radius, we were able to derive the distance to the lens, D_L = 2.8 ± 0.4 kpc, and the masses of the lensing objects. The primary of the lens is an M-dwarf with M_L,1 = 0.16 ± 0.03 M_Sun, while the companion has M_L,2 = 11.0 ± 2.0 M_J, putting it in the boundary zone between planets and brown dwarfs.
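The abstract's step from the measured parallax and Einstein radius to a lens mass and distance follows the standard microlensing relations M = θ_E / (κ π_E) and D_L = AU / (π_rel + π_S). A minimal sketch with entirely hypothetical input numbers (the paper's θ_E and π_E are not quoted above):

```python
# Standard microlensing relations turning an Einstein radius theta_E and
# microlens parallax pi_E into a total lens mass and lens distance.
# The example inputs are invented for illustration, NOT values from the paper.
KAPPA = 8.144  # mas / M_sun; kappa = 4G / (c^2 * AU)

def lens_mass_and_distance(theta_E_mas, pi_E, D_S_kpc):
    """Return (total lens mass [M_sun], lens distance [kpc])."""
    M = theta_E_mas / (KAPPA * pi_E)   # total lens mass
    pi_rel = theta_E_mas * pi_E        # lens-source relative parallax [mas]
    pi_S = 1.0 / D_S_kpc               # source parallax [mas], source at D_S
    D_L = 1.0 / (pi_rel + pi_S)        # lens distance [kpc]
    return M, D_L

M, D_L = lens_mass_and_distance(theta_E_mas=0.8, pi_E=0.55, D_S_kpc=8.0)
```

With these assumed inputs the relations give a sub-solar lens mass and a lens distance of a couple of kiloparsecs, the same regime as the published result.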

  14. A Model of Shared Mycobacteriology Testing Services: Lessons Learned. (United States)

    Mitchell, Kara; Halse, Tanya; Kohlerschmidt, Donna; Bennett, Toby; Vanner, Cynthia; King, Ewa; Musser, Kimberlee; Escuyer, Vincent


    The need for public health laboratories (PHLs) to prioritize resources has led to increased interest in sharing diagnostic services. To address this concept for tuberculosis (TB) testing, the New York State Department of Health Wadsworth Center and the Rhode Island State Health Laboratories assessed the feasibility of shared services for the detection and characterization of Mycobacterium tuberculosis complex (MTBC). We assessed multiple aspects of shared services including shipping, testing, reporting, and cost. Rhode Island State Health Laboratories shipped MTBC-positive specimens and isolates to Wadsworth Center. Average turnaround times were calculated and a cost analysis was performed. Testing turnaround times were similar at both PHLs; however, the availability of conventional drug susceptibility testing (DST) results for Rhode Island primary specimens and isolates was extended by approximately four days of shipping time. An extended molecular testing panel was performed on every specimen submitted from Rhode Island State Health Laboratories to Wadsworth Center, and the total cost per specimen at Wadsworth Center was $177.12 less than at Rhode Island State Health Laboratories, plus shipping. Following a mid-study review, Wadsworth Center provided testing turnaround times for detection (same day), species determination of MTBC (same day), and molecular DST (2.5 days). The collaboration between Wadsworth Center and Rhode Island State Health Laboratories to assess shared services of TB testing highlighted a successful model that may serve as a guideline for other PHLs. The provision of additional rapid testing at a lower cost demonstrated in this study could potentially improve patient management and result in significant cost and resource savings if used in similar models across the country.

  15. Cyber-Physical Energy Systems Modeling, Test Specification, and Co-Simulation Based Testing

    DEFF Research Database (Denmark)

    van der Meer, A. A.; Palensky, P.; Heussen, Kai


    The gradual deployment of intelligent and coordinated devices in the electrical power system needs careful investigation of the interactions between the various domains involved. Especially due to the coupling between ICT and power systems, a holistic approach for testing and validation is required. Taking existing (quasi-)standardised smart grid system and test specification methods as a starting point, we are developing a holistic testing and validation approach that allows a very flexible way of assessing the system-level aspects by various types of experiments (including virtual and real experiments). The presented method addresses most modeling and specification challenges in cyber-physical energy systems and is extensible for future additions such as uncertainty quantification.

  16. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.


    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
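The loss-differential idea at the heart of the SPCT can be sketched in a few lines. This toy version uses synthetic fields and a naive t statistic; a real SPCT corrects the variance estimate for spatial correlation in the loss-differential field, which is omitted here:

```python
import numpy as np

# Toy sketch of the SPCT loss-differential idea: given two candidate slip
# models and a reference, form the pointwise difference of their loss fields
# and test whether its mean differs from zero. All fields are synthetic,
# not from the paper; the spatial-correlation variance correction of the
# full SPCT is deliberately omitted.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))                              # reference slip field
model_a = ref + 0.05 * rng.standard_normal(ref.shape)   # close to reference
model_b = ref + 0.50 * rng.standard_normal(ref.shape)   # far from reference

def loss(field, reference):
    return (field - reference) ** 2   # squared-error loss field

d = loss(model_a, ref) - loss(model_b, ref)             # loss differential
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))        # naive t statistic
```

A strongly negative t indicates model_a fits the reference better than model_b under the chosen loss.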

  17. Simulation of Acoustics for Ares I Scale Model Acoustic Tests (United States)

    Putnam, Gabriel; Strutzenberg, Louise L.


    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity acoustic measurements useful for validation, including data taken over a range of test conditions and containing phenomena like ignition over-pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held-down and elevated configurations, as well as with and without water suppression, have been compared to acoustic data collected from similar live-fire tests. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured, including the plume shock structure, the igniter pulse transient, and the ignition over-pressure.

  18. Review of the ATLAS B0 model coil test program

    CERN Document Server

    Dolgetta, N; Acerbi, E; Berriaud, C; Boxman, H; Broggi, F; Cataneo, F; Daël, A; Delruelle, N; Dudarev, A; Foussat, A; Haug, F; ten Kate, H H J; Mayri, C; Paccalini, A; Pengo, R; Rivoltella, G; Sbrissa, E


    The ATLAS B0 model coil has been extensively tested, reproducing the operational conditions of the final ATLAS Barrel Toroid coils. Two test campaigns have taken place on B0 at the CERN facility where the individual BT coils are about to be tested. The first campaign aimed to test the cool-down and warm-up phases and to commission the coil up to its nominal current of 20.5 kA, reproducing Lorentz forces similar to the ones on the BT coil. The second campaign aimed to evaluate the margins above the nominal conditions. The B0 was tested up to 24 kA, and specific tests were performed to assess: the coil temperature margin with respect to the design value, the performance of the double pancake internal joints, static and dynamic heat loads, and the behavior of the coil under quench conditions. The paper reviews the overall test program with emphasis on second-campaign results not covered before.

  19. Model year 2010 Ford Fusion Level-1 testing report.

    Energy Technology Data Exchange (ETDEWEB)

    Rask, E.; Bocci, D.; Duoba, M.; Lohse-Busch, H.; Energy Systems


    As a part of the US Department of Energy's Advanced Vehicle Testing Activity (AVTA), a model year 2010 Ford Fusion was procured by eTec (Phoenix, AZ) and sent to ANL's Advanced Powertrain Research Facility for the purposes of vehicle-level testing in support of the Advanced Vehicle Testing Activity. Data was acquired during testing using non-intrusive sensors, vehicle network information, and facilities equipment (emissions and dynamometer). Standard drive cycles, performance cycles, steady-state cycles, and A/C usage cycles were conducted. Much of this data is openly available for download in ANL's Downloadable Dynamometer Database. The major results are shown in this report. Given the benchmark nature of this assessment, the majority of the testing was done over standard regulatory cycles and sought to obtain a general overview of how the vehicle performs. These cycles include the US FTP cycle (Urban) and Highway Fuel Economy Test cycle as well as the US06, a more aggressive supplemental regulatory cycle. Data collection for this testing was kept at a fairly high level and includes emissions and fuel measurements from an exhaust emissions bench, high-voltage and accessory current/voltage from a DC power analyzer, and CAN bus data such as engine speed, engine load, and electric machine operation. The following sections will seek to explain some of the basic operating characteristics of the MY2010 Fusion and provide insight into unique features of its operation and design.

  20. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel


    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
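The surrogate-data idea can be sketched with a deliberately simple degree-day model standing in for a full building simulation: generate synthetic utility bills from known "true" parameters, then check whether a calibration routine recovers them (the paper's figure of merit 2, closure on the true inputs). The linear model, parameter names, and coarse grid search below are illustrative assumptions, not the BPI-2400 or ASHRAE-14 procedures:

```python
# Surrogate-data sketch: a toy degree-day energy model with known "true"
# parameters generates synthetic monthly bills; a grid-search "calibration"
# is then tested on whether it closes on those true parameters.
# All numbers and the model form are invented for illustration.
TRUE_UA, TRUE_BASE = 250, 400   # heat-loss coefficient, baseload (arbitrary units)
degree_days = [600, 500, 350, 150, 50, 10, 0, 0, 60, 250, 450, 580]

def bills(ua, base):
    """Monthly bills predicted by the toy model."""
    return [base + ua * dd / 100.0 for dd in degree_days]

surrogate = bills(TRUE_UA, TRUE_BASE)   # synthetic utility data (noise-free)

def calibrate(data):
    """Coarse grid search minimising squared error to the bill data."""
    best, best_err = None, float("inf")
    for ua in range(100, 401, 10):
        for base in range(200, 601, 20):
            err = sum((m - d) ** 2 for m, d in zip(bills(ua, base), data))
            if err < best_err:
                best, best_err = (ua, base), err
    return best

ua_hat, base_hat = calibrate(surrogate)
```

Because the surrogate data are noise-free and the true parameters lie on the grid, this calibration closes exactly; adding noise or model-form error to the surrogate generator is how the method probes whether a good bill fit actually implies good parameter recovery.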

  1. A test-bed modeling study for wave resource assessment (United States)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.


    Hindcasts from phase-averaged wave models are commonly used to estimate standard statistics used in wave energy resource assessments. However, the research community and wave energy converter (WEC) industry are lacking a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development, and at different spatial scales, e.g., from small-scale pilot study to large-scale commercial deployment. Therefore, it is necessary to evaluate current wave model codes, as well as limitations and knowledge gaps for predicting sea states, in order to establish best wave modeling practices, and to identify future research needs to improve wave prediction for resource assessment. This paper presents the first phase of an on-going modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the Central Oregon Coast using two of the most widely-used third-generation wave models - WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimension ranging from global to regional scales, was used to provide wave spectral boundary conditions to a local-scale model domain, which has a spatial dimension of around 60 km by 60 km and a grid resolution of 250 m - 300 m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters, including omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
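Two of the resource parameters named in the abstract, significant wave height H_s and energy period T_e, combine into the omnidirectional wave power via the common deep-water approximation J = ρ g² H_s² T_e / (64π). The sea-state numbers below are illustrative, not measurements from the study:

```python
import math

# Deep-water estimate of omnidirectional wave power flux per metre of
# wave crest: J = rho * g^2 * Hs^2 * Te / (64 * pi).
# Example sea state is invented, not a value from the Oregon test bed.
RHO = 1025.0   # seawater density [kg/m^3]
G = 9.81       # gravitational acceleration [m/s^2]

def wave_power_kw_per_m(hs_m, te_s):
    """Omnidirectional wave power [kW/m] for deep water."""
    return RHO * G**2 * hs_m**2 * te_s / (64.0 * math.pi) / 1000.0

J = wave_power_kw_per_m(hs_m=2.5, te_s=8.0)   # a plausible winter sea state
```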

  2. System model of a natural circulation integral test facility (United States)

    Galvin, Mark R.

    The Department of Nuclear Engineering and Radiation Health Physics (NE/RHP) at Oregon State University (OSU) has been developing an innovative modular reactor plant concept since being initiated with a Department of Energy (DoE) grant in 1999. This concept, the Multi-Application Small Light Water Reactor (MASLWR), is an integral pressurized water reactor (PWR) plant that utilizes natural circulation flow in the primary and employs advanced passive safety features. The OSU MASLWR test facility is an electrically heated integral effects facility, scaled from the MASLWR concept design, that has been previously used to assess the feasibility of the concept design safety approach. To assist in evaluating operational scenarios, a simulation tool that models the test facility and is based on both test facility experimental data and analytical methods has been developed. The tool models both the test facility electric core and a simulated nuclear core, allowing evaluation of a broad spectrum of operational scenarios to identify those scenarios that should be explored experimentally using the test facility or design-quality multi-physics tools. Using the simulation tool, the total cost of experimentation and analysis can be reduced by directing time and resources towards the operational scenarios of interest.

  3. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni


    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011), developed for testing instrument validity under treatment endogeneity, to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...

  4. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory


    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  5. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh


    This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance, and are usually represented by an easy-to-interpret tree-like structure. Four models were created using Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using Exhaustive CHAID (82.0%), compared to CHAID (81.9%), CRT (75.6%), and QUEST (74.0%). Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
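The split criterion underlying CART-style trees such as those in this study can be illustrated with a toy Gini-impurity search over one factor. The slope-angle values and labels below are invented, not the Penang Island samples:

```python
# Toy illustration of a CART-style split: pick the threshold on one factor
# (here a made-up "slope angle") that minimises the weighted Gini impurity
# of the two child nodes. Data are invented for illustration.
def gini(labels):
    """Gini impurity of a binary label list (1 = landslide)."""
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Return (weighted impurity, threshold) of the best binary split."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        thresh = (pairs[i - 1][0] + pairs[i][0]) / 2
        if score < best[0]:
            best = (score, thresh)
    return best

slope = [5, 8, 12, 25, 30, 35, 40, 45]   # hypothetical slope angles, degrees
slid = [0, 0, 0, 0, 1, 1, 1, 1]          # 1 = landslide observed
score, threshold = best_split(slope, slid)
```

On this perfectly separable toy data, the search finds a pure split (impurity 0) at the midpoint between the two classes.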

  6. Testing the gravity p-median model empirically

    Directory of Open Access Journals (Sweden)

    Kenneth Carling


    Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility by the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or it gives unstable solutions due to a non-concave objective function.
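The difference between the two objectives can be sketched in a few lines: instead of assigning each customer to the nearest open facility, the gravity p-median lets a customer patronise facility j with a Huff-style probability proportional to exp(-λ d_ij), and minimises the expected travel distance. The coordinates, decay parameter, and brute-force search below are illustrative assumptions, not the paper's data or algorithm:

```python
import itertools
import math

# Sketch of the gravity p-median objective with invented data: customers
# split their patronage across open facilities with probability
# proportional to exp(-LAM * distance), and we minimise expected distance.
LAM = 0.5
customers = [(0, 0), (1, 0), (4, 4), (5, 5)]
sites = [(0, 0), (5, 5), (2, 2)]   # candidate facility locations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def expected_cost(open_sites):
    total = 0.0
    for c in customers:
        w = [math.exp(-LAM * dist(c, s)) for s in open_sites]
        probs = [x / sum(w) for x in w]                 # Huff-style shares
        total += sum(p * dist(c, s) for p, s in zip(probs, open_sites))
    return total

p = 2   # open two facilities; brute force over all candidate pairs
best = min(itertools.combinations(sites, p), key=expected_cost)
```

A classical p-median would score an assignment only by nearest-facility distance; here even customers next to an open facility contribute some expected travel to the farther one, which is what makes the objective non-concave in general.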

  7. ITER CS Model Coil and CS Insert Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N; Michael, P; Minervina, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M


    The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.

  8. SCYNet. Testing supersymmetric models at the LHC with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip; Belkner, Sebastian; Hamer, Matthias [Universitaet Bonn, Bonn (Germany); Dercks, Daniel [Universitaet Hamburg, Hamburg (Germany); Keller, Tim; Kraemer, Michael; Sarrazin, Bjoern; Schuette-Engel, Jan; Tattersall, Jamie [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany)


    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. (orig.)

  9. SCYNet: testing supersymmetric models at the LHC with neural networks (United States)

    Bechtle, Philip; Belkner, Sebastian; Dercks, Daniel; Hamer, Matthias; Keller, Tim; Krämer, Michael; Sarrazin, Björn; Schütte-Engel, Jan; Tattersall, Jamie


    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model.
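The regression idea behind SCYNet (train a network once so that later evaluations of a likelihood surrogate are nearly instantaneous) can be sketched with a tiny NumPy network on a toy one-dimensional target. Nothing here reflects the actual pMSSM-11 inputs, architecture, or training data:

```python
import numpy as np

# Minimal neural-network regression sketch: one hidden tanh layer trained
# by plain gradient descent to map a toy 1-D "parameter" to a toy
# likelihood-like target. Architecture and data are invented.
rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 64)[:, None]
y = X**2                                   # toy stand-in for the target

W1, b1 = rng.standard_normal((1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.5, np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = ((pred0 - y) ** 2).mean()          # loss before training

lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    g = 2 * (pred - y) / len(X)            # dL/dpred
    gh = (g @ W2.T) * (1 - h**2)           # backprop through tanh
    W2 -= lr * h.T @ g
    b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh
    b1 -= lr * gh.sum(0)

_, pred1 = forward(X)
loss1 = ((pred1 - y) ** 2).mean()          # loss after training
```

Once trained, `forward` is a cheap closed-form evaluation, which is the property that makes such surrogates usable inside global fits.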

  10. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten


    Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed against the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show...

  11. In vitro Cell Culture Model for Toxic Inhaled Chemical Testing (United States)

    Ahmad, Shama; Ahmad, Aftab; Neeves, Keith B.; Hendry-Hofer, Tara; Loader, Joan E.; White, Carl W.; Veress, Livia


    Cell cultures are indispensable for developing and studying the efficacy of therapeutic agents prior to their use in animal models. We have the unique ability to model well-differentiated human airway epithelium and heart muscle cells. This could be an invaluable tool to study the deleterious effects of toxic inhaled chemicals, such as chlorine, which interact with cell surfaces and form various byproducts upon reacting with water, limiting their effects in submerged cultures. Our model, using well-differentiated human airway epithelial cell cultures at an air-liquid interface, circumvents this limitation and provides an opportunity to evaluate critical mechanisms of toxicity of potentially poisonous inhaled chemicals. We describe enhanced loss of membrane integrity, caspase release, and death upon exposure to a toxic inhaled chemical such as chlorine. In this article, we propose methods to model chlorine exposure in mammalian heart and airway epithelial cells in culture, and simple tests to evaluate its effect on these cell types. PMID:24837339

  12. Perceived game realism: a test of three alternative models. (United States)

    Ribbens, Wannes


    Perceived realism is considered a key concept in explaining the mental processing of media messages and the societal impact of media. Despite its importance, little is known about its conceptualization and dimensional structure, especially with regard to digital games. The aim of this study was to test a six-factor model of perceived game realism comprised of simulational realism, freedom of choice, perceptual pervasiveness, social realism, authenticity, and character involvement and to assess it against an alternative single- and five-factor model. Data were collected from 380 male digital game users who judged the realism of the first-person shooter Half-Life 2 based upon their previous experience with the game. Confirmatory factor analysis was applied to investigate which model fits the data best. The results support the six-factor model over the single- and five-factor solutions. The study contributes to our knowledge of perceived game realism by further developing its conceptualization and measurement.

  13. Correlation Results for a Mass Loaded Vehicle Panel Test Article Finite Element Models and Modal Survey Tests (United States)

    Maasha, Rumaasha; Towner, Robert L.


    High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic tests (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (orthogrid panel mounting fixture, between the reverb and anechoic chambers). Modal survey tests were used to test-validate the dynamic characteristics of FEMs used for acoustic test excitation. Modal survey testing and subsequent model correlation has validated the natural frequencies and mode shapes of the FEMs. The modal survey test results provide a basis for the analysis models used for acoustic loading response test and analysis comparisons.

  14. Testing Geyser Models using Down-vent Data (United States)

    Wang, C.; Munoz, C.; Ingebritsen, S.; King, E.


    Geysers are often studied as an analogue to magmatic volcanoes because both involve the transfer of mass and energy that leads to eruption. Several conceptual models have been proposed to explain geyser eruption, but no definitive test has been performed largely due to scarcity of down-vent data. In this study we compare simulated time histories of pressure and temperature against published data for the Old Faithful geyser in the Yellowstone National Park and new down-vent measurements from geysers in the El Tatio geyser field of northern Chile. We test two major types of geyser models by comparing simulated and field results. In the chamber model, the geyser system is approximated as a fissure-like conduit connected to a subsurface chamber of water and steam. Heat supplied to the chamber causes water to boil and drives geyser eruptions. Here the Navier-Stokes equation is used to simulate the flow of water and steam. In the fracture-zone model, the geyser system is approximated as a saturated fracture zone of high permeability and compressibility, surrounded by rock matrix of relatively low permeability and compressibility. Heat supply from below causes pore water to boil and drives geyser eruption. Here a two-phase form of Darcy's law is assumed to describe the flow of water and steam (Ingebritsen and Rojstaczer, 1993). Both models can produce P-T time histories qualitatively similar to field results, but the simulations are sensitive to assumed parameters. Results from the chamber model are sensitive to the heat supplied to the system and to the width of the conduit, while results from the fracture-zone model are most sensitive to the permeability of the fracture zone and the adjacent wall rocks. Detailed comparison between field and simulated results, such as the phase lag between changes of pressure and temperature, may help to resolve which model might be more realistic.

  15. Benchmark testing the flow and solidification modeling of Al castings (United States)

    Sirrell, B.; Holliday, M.; Campbell, J.


    Although the heat flow aspects of the simulation of castings now appear to be tolerably well advanced, a recent exercise has revealed that computed predictions can, in fact, be widely different from experimentally observed values. The modeling of flow, where turbulence is properly taken into account, appears to be good in its macroscopic ability. However, better resolution and the possible general incorporation of surface tension will be required to simulate the damaging effect of air entrainment common in most metal castings. It is envisaged that the results of this exercise will constitute a useful benchmark test for computer models of flow and solidification for the foreseeable future.

  16. Stationarity test with a direct test for heteroskedasticity in exchange rate forecasting models (United States)

    Khin, Aye Aye; Chau, Wong Hong; Seong, Lim Chee; Bin, Raymond Ling Leh; Teng, Kevin Low Lock


    The global economy has slowed in recent years, manifested in greater exchange-rate volatility on the international commodity market. This study analyzes several prominent exchange rate forecasting models for Malaysian commodity trading: univariate ARIMA, ARCH, and GARCH models, in conjunction with a stationarity test and direct testing for heteroskedasticity in the residual diagnosis. All forecasting models utilized monthly data from 1990 to 2015, a total of 312 observations, used to forecast both short-term and long-term exchange rates. The forecasting power statistics suggested that the ARIMA (1, 1, 1) model performs more efficiently than the ARCH (1) and GARCH (1, 1) models. For the ex-post forecast, the exchange rate increased from RM 3.50 per USD in January 2015 to RM 4.47 per USD in December 2015 based on the baseline data. For the short-term ex-ante forecast, the analysis indicates a decrease in the exchange rate to RM 4.27 per USD by June 2016 as compared with December 2015. A more appropriate method of exchange rate forecasting is vital to aid decision-making and planning for sustainable commodity production in the world economy.
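    The GARCH(1,1) conditional-variance recursion that the study benchmarks against ARIMA can be sketched in a few lines of Python; the parameters and return series below are made up for illustration and are not fitted to the Malaysian data:

```python
def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    Coefficients here are illustrative, not fitted values."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

rets = [0.5, -1.2, 0.3, 0.8, -0.4]   # hypothetical daily log-returns
vols = garch_variance(rets)
```

    In practice the coefficients would be estimated by maximum likelihood (e.g. with a dedicated volatility-modeling package) rather than fixed by hand.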

  17. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties including the mechanical properties and the electrical properties in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties...... in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze...... electrode force, and the time of stabilizing does not depend on the level of the force. An additional spring mounted in the welding head improves the machine touching behavior due to a soft electrode application, but this results in longer time of oscillation of the electrode force, especially when...

  18. Residual flexibility test method for verification of constrained structural models (United States)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.


    A method is described for deriving constrained modes and frequencies from a reduced model based on a subset of the free-free modes plus the residual effects of neglected modes. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow development of a verified constrained (fixed-base) structural model. Results for two spaceflight structures having translational boundary degrees of freedom show quick convergence of constrained modes using a measurable number of free-free modes plus the boundary partition of the residual flexibility matrix. This paper presents the free-free residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.
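    The core quantity of this approach can be sketched in standard component-mode notation (a textbook form, not necessarily the authors' exact partitioning; for a free-free structure the rigid-body modes require separate treatment, which the method addresses):

```latex
% Residual flexibility: what remains of the flexibility matrix after
% removing the contribution of the n retained elastic modes (\phi_k, \omega_k):
G_{\mathrm{res}} \;=\; K^{-1} \;-\; \sum_{k=1}^{n} \frac{\phi_k \phi_k^{\mathsf{T}}}{\omega_k^{2}}
% The boundary partition of G_res augments the truncated modal model
% used to derive the constrained (fixed-base) modes.
```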

  19. Modeling and testing of docking and berthing mechanisms (United States)

    Hall, Drew P.; Slone, B. Mark; Tobbe, Patrick A.


    The Contact Dynamics Simulation Laboratory (CDSL) of the Marshall Space Flight Center provides for refined hardware-in-the-loop real-time simulation of docking and berthing mechanisms and associated control systems. This facility is employed to verify the performance of docking/berthing mechanisms during Earth-orbit operations, determine the capture envelope of docking/berthing devices, measure contact loads at vehicle interfaces, and evaluate visual cues for man-in-the-loop operations. The CDSL has developed test verified analytical models of such systems as the International Space Station (ISS) Common Berthing Mechanism (CBM) and the Hubble Space Telescope (HST) Three Point Docking Mechanism. This paper will describe the modeling and test techniques employed at the CDSL and present results from recent programs.

  20. Rigorously testing multialternative decision field theory against random utility models. (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg


    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
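    The multinomial logit baseline the authors compare MDFT against assigns choice probabilities via a softmax over option utilities. A minimal Python sketch, with invented utility values:

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j).
    A minimal sketch of the random utility baseline; in practice
    the utilities are linear in option attributes with estimated
    coefficients."""
    m = max(utilities)                      # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

probs = logit_choice_probs([1.0, 0.5, 0.2])  # three hypothetical products
```

    Note what this form cannot do: the probabilities depend only on each option's own utility, so context effects of the kind found in Study 2 are outside its reach; that is precisely where a sequential sampling model such as MDFT can differ.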

  1. Music Therapy. History, Models, Techniques, Tests with Chinese Instruments


    Yang, Shu


    This diploma dissertation consists of twelve chapters focusing on the history, definitions, models, methods, forms, techniques, Chinese instruments, school tests and activities of Music Therapy. The first chapter gives general information about Music Therapy and covers the formal definition of the term. The second chapter covers the history of Music Therapy, from ancient times to the 20th century. The third chapter discusses the foundations of Music Therapy from physical, emotion...

  2. Potential complications to TB vaccine testing in animal models. (United States)

    Orme, Ian M


    Testing of new vaccines in animal models has certain advantages and disadvantages. As we better understand the complexity of the immune response to vaccines, new information may be complicating the assessment of the efficacy of new candidate vaccines. Four possible complications are discussed here: (i) induction of Foxp3+ T cells; (ii) induction of memory T cell subsets; (iii) location of extracellular organisms in lung necrosis; and (iv) protection against isolates of high/extreme immunopathology.

  3. Transfer as a Two-Way Process: Testing a Model (United States)

    Vermeulen, Rita; Admiraal, Wilfried


    Purpose: The purpose of this exploratory research is to test the model of training transfer as a two-way process. Design/methodology/approach: Based on self-report data gathered from 58 to 44 respondents in a field experiment, it is argued that there is not just learning in the context of training and not just application in the context of work.…

  4. Adaptive testing of Materials using Preisach Model Parameters Variations - Introductory Tests

    Directory of Open Access Journals (Sweden)

    Tomas Visnovec


    Full Text Available A new diagnostic method, MAT (Magnetic Adaptive Testing), for non-destructive testing of ferromagnetic (i.e., iron-based) construction materials under mechanical stress is under development [1]. The method is based on investigating the correlation between the mechanical load and the parameters of a Preisach-like model describing the magnetic properties of such materials, namely the differential permeability matrix. To obtain the required set of model parameters, a number of minor hysteresis loops are measured under a defined exciting magnetic field strength waveform H(t), in particular with a constant field change rate dH(t)/dt, which makes the induced voltage proportional to the differential permeability. The influence of the initial magnetic state of the investigated material, the demagnetisation algorithm, and the slope of the exciting field's time dependence on the signal-to-noise ratio and the stability of the measured signal is discussed.
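    The constant-rate trick makes the induced voltage track dB/dH; numerically, the differential permeability along a measured loop can be estimated by central finite differences. A hedged Python sketch with toy data (not actual MAT measurements):

```python
def differential_permeability(h, b):
    """Central-difference estimate of mu_diff = dB/dH along a
    measured minor loop; h and b are equal-length sample lists."""
    mu = []
    for i in range(1, len(h) - 1):
        mu.append((b[i + 1] - b[i - 1]) / (h[i + 1] - h[i - 1]))
    return mu

H = [0.0, 1.0, 2.0, 3.0, 4.0]
B = [0.0, 0.8, 1.4, 1.8, 2.0]   # saturating toy magnetization curve
mu_d = differential_permeability(H, B)
```

    On the toy curve the estimate decreases toward saturation, as expected; in MAT, such values taken over a family of minor loops populate the permeability matrix that is then correlated with mechanical load.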

  5. Model tests of a baseline 40 MW pilot plant. Volume B: Test data (United States)

    George, J. F.; Stadter, J. T.; Donnelly, H. L.; Richards, D.; Brewer, F. N.; Hutchison, B. L.


    A baseline design of an OTEC pilot plant, configured as a floating platform for large-scale, at-sea practical demonstration of OTEC system operation, was completed. Model tests at 1/30 scale were conducted in a model basin. Waves were produced to simulate a variety of ocean conditions, including 100-year storm seas in which hurricane waves with an equivalent maximum height of 65 ft were created. The platform survived all simulated conditions, although it was observed that a shaped bow, bilge keels, and additional hull length would improve seakeeping in the hurricane seas. Quantitative data were obtained on ship motions, cold water pipe loads and motions, mooring forces, and seawater system pressures. A compilation of the test data is presented.

  6. The Young Substellar Companion ROXs 12 B: Near-infrared Spectrum, System Architecture, and Spin-Orbit Misalignment (United States)

    Bowler, Brendan P.; Kraus, Adam L.; Bryan, Marta L.; Knutson, Heather A.; Brogi, Matteo; Rizzuto, Aaron C.; Mace, Gregory N.; Vanderburg, Andrew; Liu, Michael C.; Hillenbrand, Lynne A.; Cieza, Lucas A.


    ROXs 12 (2MASS J16262803-2526477) is a young star hosting a directly imaged companion near the deuterium-burning limit. We present a suite of spectroscopic, imaging, and time-series observations to characterize the physical and environmental properties of this system. Moderate-resolution near-infrared spectroscopy of ROXs 12 B from Gemini-North/NIFS and Keck/OSIRIS reveals signatures of low surface gravity including weak alkali absorption lines and a triangular H-band pseudocontinuum shape. No signs of Paβ emission are evident. As a population, however, we find that about half (46% ± 14%) of young (≲15 Myr) companions with masses ≲20 M_Jup possess actively accreting subdisks detected via Paβ line emission, which represents a lower limit on the prevalence of circumplanetary disks in general, as some are expected to be in a quiescent phase of accretion. The bolometric luminosity of the companion and the age of the host star (6 +4/-2 Myr) imply a mass of 17.5 ± 1.5 M_Jup for ROXs 12 B based on hot-start evolutionary models. We identify a wide (5100 au) tertiary companion to this system, 2MASS J16262774-2527247, that is heavily accreting and exhibits stochastic variability in its K2 light curve. By combining v sin i measurements with rotation periods from K2, we constrain the line-of-sight inclinations of ROXs 12 A and 2MASS J16262774-2527247 and find that they are misaligned by 60 +7/-11 degrees. In addition, the orbital axis of ROXs 12 B is likely misaligned from the spin axis of its host star, ROXs 12 A, suggesting that ROXs 12 B formed akin to fragmenting binary stars or in an equatorial disk that was torqued by the wide stellar tertiary. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The observatory was made possible by the generous financial support

  7. The Standard-Model Extension and Gravitational Tests

    CERN Document Server

    Tasson, Jay D


    The Standard-Model Extension (SME) provides a comprehensive effective field-theory framework for the study of CPT and Lorentz symmetry. This work reviews the structure and philosophy of the SME and provides some intuitive examples of symmetry violation. The results of recent gravitational tests performed within the SME are summarized, including analysis of results from the Laser Interferometer Gravitational-Wave Observatory (LIGO), sensitivities achieved in short-range gravity experiments, constraints from cosmic-ray data, and results achieved by studying planetary ephemerides. Some proposals and ongoing efforts will also be considered, including gravimeter tests, tests of the Weak Equivalence Principle, and antimatter experiments. Our review of the above topics is augmented by several original extensions of the relevant work. We present new examples of symmetry violation in the SME and use the cosmic-ray analysis to place first-ever constraints on 81 additional operators.

  8. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D


    This report is deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by end 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is a full commitment to finishing the project.

  9. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D


    This report is deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by end 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is a full commitment to finishing the project.

  10. The transition model test for serial dependence in mixed-effects models for binary data

    DEFF Research Database (Denmark)

    Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders


    Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...... elaboration. We therefore propose a targeted diagnostic test for serial dependence, called the transition model test (TMT), that is straightforward and computationally efficient to implement in standard software. The TMT is shown to have larger power than general misspecification tests. We also propose...

  11. Model year 2010 Honda insight level-1 testing report.

    Energy Technology Data Exchange (ETDEWEB)

    Rask, E.; Bocci, D.; Duoba, M.; Lohse-Busch, H. (Energy Systems)


    As a part of the US Department of Energy's Advanced Vehicle Testing Activity (AVTA), a model year 2010 Honda Insight was procured by eTec (Phoenix, AZ) and sent to ANL's Advanced Powertrain Research Facility for vehicle-level testing in support of the AVTA. Data were acquired during testing using non-intrusive sensors, vehicle network information, and facilities equipment (emissions and dynamometer data). Standard drive cycles, performance cycles, steady-state cycles and A/C usage cycles were tested. Much of this data is openly available for download in ANL's Downloadable Dynamometer Database (D3). The major results are presented in this report. Given the preliminary nature of this assessment, the majority of the testing was done over standard regulatory cycles and seeks to obtain a general overview of how the vehicle performs. These cycles include the US FTP cycle (Urban) and Highway Fuel Economy Test cycle as well as the US06, a more aggressive supplemental regulatory cycle. Data collection for this testing was kept at a fairly high level and includes emissions and fuel measurements from an exhaust emissions bench, high-voltage and accessory current and voltage from a DC power analyzer, and CAN bus data such as engine speed, engine load, and electric machine operation when available. The following sections will seek to explain some of the basic operating characteristics of the MY2010 Insight and provide insight into unique features of its operation and design.

  12. Testing biomechanical models of human lumbar lordosis variability. (United States)

    Castillo, Eric R; Hsu, Connie; Mair, Ross W; Lieberman, Daniel E


    Lumbar lordosis (LL) is a key adaptation for bipedalism, but factors underlying curvature variations remain unclear. This study tests three biomechanical models to explain LL variability. Thirty adults (15 male, 15 female) were scanned using magnetic resonance imaging (MRI), a standing posture analysis was conducted, and lumbar range of motion (ROM) was assessed. Three measures of LL were compared. The trunk's center of mass was estimated from external markers to calculate hip moments (M_hip) and lumbar flexion moments. Cross-sectional areas of lumbar vertebral bodies and trunk muscles were measured from scans. Regression models tested associations between LL and the M_hip moment arm, a beam bending model, and an interaction between relative trunk strength (RTS) and ROM. Hip moments were not associated with LL. Beam bending was moderately predictive of standing but not supine LL (R² = 0.25). Stronger backs and increased ROM were associated with greater LL, especially when standing (R² = 0.65). The strength-flexibility model demonstrates the differential influence of RTS depending on ROM: individuals with high ROM exhibited the most LL variation with RTS, while those with low ROM showed reduced LL regardless of RTS. Hip moments appear constrained suggesting the possibility of selection, and the beam model explains some LL variability due to variations in trunk geometry. The strength-flexibility interaction best predicted LL, suggesting a tradeoff in which ROM limits the effects of back strength on LL. The strength-flexibility model may have clinical relevance for spinal alignment and pathology. This model may also suggest that straight-backed Neanderthals had reduced lumbar mobility. © 2017 Wiley Periodicals, Inc.
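    The strength-flexibility interaction can be made concrete with a toy linear model in Python; the coefficients below are invented purely to illustrate how ROM modulates the RTS effect, and are not the study's fitted values:

```python
def predict_ll(rts, rom, b0=30.0, b1=2.0, b2=1.0, b3=4.0):
    """Illustrative interaction model: lordosis angle (degrees) as a
    function of standardized relative trunk strength (RTS) and range
    of motion (ROM). Coefficients are made up to show the
    strength-flexibility interaction, not fitted values."""
    return b0 + b1 * rts + b2 * rom + b3 * rts * rom

# The RTS effect (prediction change from low to high strength)
# is large at high ROM and small (here even reversed) at low ROM.
effect_high_rom = predict_ll(1.0, 1.0) - predict_ll(-1.0, 1.0)
effect_low_rom = predict_ll(1.0, -1.0) - predict_ll(-1.0, -1.0)
```

    This mirrors the abstract's finding: with high ROM, LL varies strongly with RTS; with low ROM, strength matters little.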

  13. Modal testing for model validation of structures with discrete nonlinearities. (United States)

    Ewins, D J; Weekes, B; delli Carri, A


    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or 'valid': i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. © 2015 The Authors.

  14. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling. (United States)

    Johnson, Shane D; Groff, Elizabeth R


    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
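    As a flavor of what an ABM test of a criminological theory looks like, here is a deliberately minimal Python sketch of routine activity theory (a crime requires a motivated offender, a suitable target, and the absence of a capable guardian); the agents, space, and parameters are all invented for illustration and do not come from the reviewed literature:

```python
import random

def routine_activity_abm(steps=500, size=20, n_off=3, n_tgt=5,
                         n_grd=2, seed=42):
    """Minimal agent-based sketch of routine activity theory:
    a crime is counted when an offender and a target share a cell
    with no guardian present. Targets are stationary; offenders
    and guardians random-walk on a ring of `size` cells."""
    rng = random.Random(seed)
    off = [rng.randrange(size) for _ in range(n_off)]
    tgt = [rng.randrange(size) for _ in range(n_tgt)]
    grd = [rng.randrange(size) for _ in range(n_grd)]
    crimes = 0
    for _ in range(steps):
        off = [(x + rng.choice((-1, 1))) % size for x in off]
        grd = [(x + rng.choice((-1, 1))) % size for x in grd]
        for o in off:
            if o in tgt and o not in grd:
                crimes += 1
    return crimes

n_crimes = routine_activity_abm()
```

    Theory testing with such a model consists of varying assumptions (e.g. guardian density) over many seeded runs and checking whether the emergent crime patterns match the theory's predictions.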

  15. Pescara benchmark: overview of modelling, testing and identification

    Energy Technology Data Exchange (ETDEWEB)

    Bellino, A; Garibaldi, L; Marchesiello, S [Dynamics/Identification Research Group, Department of Mechanics, Politecnico di Torino, Duca degli Abruzzi 24, 10129 Torino (Italy); Brancaleoni, F; Gabriele, S; Spina, D [Department of Structures, University 'Roma Tre' of Rome, Via C. Segre 4/6, 00146 Rome (Italy); Bregant, L [Department of Mechanical and Marine Engineering, University of Trieste, Via Valerio 8, 34127 Trieste (Italy); Carminelli, A; Catania, G; Sorrentino, S [Diem Department of Mechanical Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna (Italy); Di Evangelista, A; Valente, C; Zuccarino, L [Department of Engineering, University 'G. d'Annunzio' of Chieti-Pescara, Viale Pindaro 42, 65127 Pescara (Italy)]


    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis), supported by the Italian Ministero dell'Università e Ricerca. The project aims at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental database, followed by data processing. Special tests were devised to simulate train transit effects under actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, FE models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performance of classical modal-based damage indicators.

  16. Pescara benchmark: overview of modelling, testing and identification (United States)

    Bellino, A.; Brancaleoni, F.; Bregant, L.; Carminelli, A.; Catania, G.; Di Evangelista, A.; Gabriele, S.; Garibaldi, L.; Marchesiello, S.; Sorrentino, S.; Spina, D.; Valente, C.; Zuccarino, L.


    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis), supported by the Italian Ministero dell'Università e Ricerca. The project aims at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental database, followed by data processing. Special tests were devised to simulate train transit effects under actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, FE models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performance of classical modal-based damage indicators.

  17. Test-day models : breeding value estimation based on individual test-day records


    Pool, M.H.


    The studies described in this thesis were carried out within the graduate school Wageningen Institute of Animal Science (WIAS), at the Institute for Animal Science and Health (ID-Lelystad BV), department of Genetics and Reproduction, and were financially supported by the product division NRS of CR-DELTA.

    This thesis describes choices and decisions made to develop a random regression test-day model. Studies included were performed on Dutch dairy cattle data...

  18. Wind tunnel test of 1/30 scale heliostat field array model. Test report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, G. L.


    From 9 January through 20 January 1978, Honeywell conducted a wind tunnel test on a 1/30-scale partial heliostat field. The heliostats followed Honeywell's design developed under the 10-megawatt central receiver pilot electrical power plant subsystem research experiment contract, and the scaled section of the field geometry duplicated the proposed circular layout. Testing was conducted at the Georgia Institute of Technology's 9-foot subsonic tunnel. The objective of the test was to ascertain, from a qualitative standpoint, the field effects upon wind loading within a heliostat field. To accomplish this, numerous pressure tap measurements at different heights and field positions were taken with varying wind speeds, fence designs, and heliostat gimbal orientations. The boundary layer profile specified by the Department of Energy was also scaled by 1/30 in order to simulate the total wind effects as accurately as possible, taking into account the potentially severe scaling (Reynolds number) effects at 1/30 scale. After the model was set up in the tunnel and the scaled boundary layer generated, 91 separate runs were completed. The results demonstrate the high sensitivity of wind loading on the collector field to heliostat orientation and fence geometry. Vertical pressure gradients within the model field and flow reentry angles provide a good qualitative feel for the full-scale environment that might be expected, and point to the need for specific additional testing to further explore potentially dangerous conditions.
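    The scaling concern the report raises can be quantified: at matched wind speed, the Reynolds number of a 1/30-scale model is 1/30 of full scale, which is why results are treated as qualitative. A short Python check under assumed round-number conditions (a hypothetical 6 m full-scale characteristic length and 20 m/s wind, not values from the report):

```python
def reynolds(velocity, length, nu=1.5e-5):
    """Reynolds number Re = V * L / nu, with nu the kinematic
    viscosity of air (~1.5e-5 m^2/s at room temperature)."""
    return velocity * length / nu

full_scale = reynolds(20.0, 6.0)        # hypothetical full-size heliostat
model_scale = reynolds(20.0, 6.0 / 30)  # same wind, 1/30 geometry
ratio = full_scale / model_scale
```

    Matching Re exactly would require 30x the wind speed (or a pressurized/cryogenic tunnel), which is why the report emphasizes qualitative trends over absolute loads.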

  19. Optimum coagulant forecasting by modeling jar test experiments using ANNs

    Directory of Open Access Journals (Sweden)

    S. Haghiri


    Full Text Available Currently, the proper utilization of water treatment plants and the optimization of their use is of particular importance. Coagulation and flocculation are common water treatment steps in which coagulants destabilize particles and form larger, heavier flocs, improving the subsequent sedimentation and filtration processes. Determining the optimum coagulant dose is therefore of particular significance: an excessive dose adds cost and can leave residue in the filtrate, a dangerous condition according to the standards, while an inadequate dose reduces the quality and performance of the coagulation process. Although jar tests are used for evaluating coagulants, such experiments face many constraints in responding to sudden changes in input water because of their significant cost, long duration, and the complex relationships among the many factors (turbidity, temperature, pH, alkalinity, etc.) that can influence coagulant efficiency and test results. Modeling can be used to overcome these limitations; in this research study, an artificial neural network (ANN), specifically a multi-layer perceptron (MLP) with one hidden layer, has been used to model the jar test and determine the coagulant dosage in water treatment processes. The data used in this research were obtained from the drinking water treatment plant located in Ardabil province in Iran. To evaluate the performance of the model, the mean squared error (MSE) and the correlation coefficient (R²) were used. The obtained values are within an acceptable range, demonstrating the high accuracy of the models with respect to the estimation of water-quality characteristics and the optimal dosages of coagulants; using these models will allow operators to reduce both the costs and the time taken to perform
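    The architecture described, an MLP with one hidden layer, reduces to a short forward pass. This Python sketch uses arbitrary weights and inputs (hypothetical scaled turbidity, pH, and temperature), not the trained Ardabil model:

```python
import math

def mlp_predict(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer MLP with tanh activation:
    hidden = tanh(W1 x + b1), output = w2 . hidden + b2.
    Weights here are arbitrary illustration values."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Hypothetical scaled inputs: turbidity, pH, temperature
x = [0.6, 0.3, 0.8]
w1 = [[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]]   # 2 hidden units, 3 inputs
b1 = [0.1, -0.1]
w2 = [0.7, -0.3]
b2 = 0.05
dose = mlp_predict(x, w1, b1, w2, b2)       # scaled coagulant dose
```

    In the study the weights would be fitted to jar-test data by backpropagation, with MSE and R² computed on held-out samples.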

  20. Rare B decays as tests of the Standard Model (United States)

    Blake, Thomas; Lanfranchi, Gaia; Straub, David M.


    One of the most interesting puzzles in particle physics today is that new physics is expected at the TeV energy scale to solve the hierarchy problem and stabilise the Higgs mass, yet so far no unambiguous signal of new physics has been found. Strong constraints on the energy scale of new physics can be derived from precision tests of the electroweak theory and from flavour-changing or CP-violating processes in strange, charm and beauty hadron decays. Decays that proceed via flavour-changing-neutral-current processes are forbidden at the lowest perturbative order in the Standard Model and are, therefore, rare. Rare b-hadron decays play a central role in understanding the underlying patterns of Standard Model physics and in setting new directions in model building for new-physics contributions. In this article the status and prospects of this field are reviewed.

  1. Testing and Modeling of Contact Problems in Resistance Welding

    DEFF Research Database (Denmark)

    Song, Quanfeng

    As a part of the efforts towards a professional and reliable numerical tool for resistance welding engineers, this Ph.D. project is dedicated to refining the numerical models related to the interface behavior. An FE algorithm for the contact problems in resistance welding has been developed and is validated in some projection welding experiments, joining together two or three cylindrical parts as well as disc-ring pairs of dissimilar metals. The tests have demonstrated the effectiveness of the model. A theoretical and experimental study is performed on the contact resistance, aiming at a more reliable model for numerical simulation of resistance welding. The program is also applied to solve some resistance welding operations involving contact problems, showing that numerical simulation facilitates better understanding of resistance welding.

  2. Testing a Model of Work Performance in an Academic Environment

    Directory of Open Access Journals (Sweden)

    B. Charles Tatum


    Full Text Available In modern society, people both work and study. The intersection between organizational and educational research suggests that a common model should apply to both academic and job performance. The purpose of this study was to apply a model of work and job performance (based on general expectancy theory to a classroom setting, and test the predicted relationships using a causal/path model methodology. The findings revealed that motivation and ability predicted student expectations and self-efficacy, and that expectations and efficacy predicted class performance. Limitations, implications, and future research directions are discussed. This study showed how the research in industrial and organizational psychology is relevant to education. It was concluded that greater effort should be made to integrate knowledge across a wider set of domains.

  3. Force Limited Random Vibration Test of TESS Camera Mass Model (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.


    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
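As a hedged illustration of the semi-empirical method named in this abstract (Scharton's form; the numbers and break frequency below are invented, not TESS values), the force spectral density limit scales the input acceleration PSD by C2 times the total mass squared below a break frequency, rolling off above it:

```python
# Minimal sketch of a semi-empirical force limit: S_FF = C2 * M0^2 * S_AA
# below the break frequency f0, rolled off as (f0/f)^2 above it.
# C2 is the squared "fuzzy factor"; all values here are illustrative.

def force_limit(f, accel_psd, C2, M0, f0):
    """Force spectral density limit [N^2/Hz] at frequency f [Hz].

    accel_psd -- input acceleration PSD at f, in (m/s^2)^2/Hz
    C2        -- semi-empirical constant (C^2), often in the range 2-5
    M0        -- total mass of the test article [kg]
    f0        -- break frequency, usually the fundamental resonance [Hz]
    """
    base = C2 * M0 ** 2 * accel_psd
    return base if f <= f0 else base * (f0 / f) ** 2

# Example: 10 kg mass model, C2 = 4, 100 Hz fundamental, flat 0.04 g^2/Hz input
g = 9.81
psd = 0.04 * g ** 2          # convert g^2/Hz to (m/s^2)^2/Hz
print(force_limit(50.0, psd, 4.0, 10.0, 100.0))
print(force_limit(200.0, psd, 4.0, 10.0, 100.0))  # rolled off by (100/200)^2
```

In practice C2 is then validated against measured interface forces during the low-level vibration runs, as the abstract describes.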

  4. Yield surface investigation of alloys during model disk spin tests

    Directory of Open Access Journals (Sweden)

    E. P. Kuzmin


    Full Text Available Gas-turbine engines operate under heavy, predominantly static loading conditions. Disks of gas-turbine engines are highly loaded parts of irregular shape with intensive stress concentrators, wherein a 3D stress-strain state occurs. The loss of load-carrying capability or burst of a disk can lead to a severe accident or disaster. Therefore, the development of methods to assess deformations and to predict burst is one of the most important problems. Strength assessment approaches are used at all levels of engine creation. In recent years, due to actively developing numerical methods, particularly FEA, it has become possible to investigate the load-carrying capability of irregular-shape disks and to use 3D computing schemes including flow theory and different options of force and deformation failure criteria. In spite of wide progress and practical use of strength assessment approaches, there is a lack of detailed research data on the yield surface of disk alloys. The main purpose of this work is to validate the use of the basic hypotheses of flow theory and to investigate the yield surface of disk alloys during disk spin tests. The results of quasi-static numerical simulation of spin tests of a model disk made from a high-temperature forged alloy are presented. Finite element analysis is used to determine the stress-strain state of the disk during loading. Simulation of elastic-plastic strain fields was carried out using the incremental theory of plasticity with isotropic hardening. The hardening function was taken from tensile tests of specimens cut from the sinkhead of the model disk. The paper investigates the model sensitivity to the von Mises and Tresca yield criteria as well as the Hosford model. To identify the material model parameters, eddy current sensors were used in the experimental approach to measure rim radial displacements during the load-unload cycles of the spin test. The results of calculations made using the different material models were compared with the

  5. Computational model for simulation small testing launcher, technical solution

    Energy Technology Data Exchange (ETDEWEB)

    Chelaru, Teodor-Viorel, E-mail: [University POLITEHNICA of Bucharest - Research Center for Aeronautics and Space, Str. Ghe Polizu, nr. 1, Bucharest, Sector 1 (Romania); Cristian, Barbu, E-mail: [Military Technical Academy, Romania, B-dul. George Coşbuc, nr. 81-83, Bucharest, Sector 5 (Romania); Chelaru, Adrian, E-mail: [INCAS -National Institute for Aerospace Research Elie Carafoli, B-dul Iuliu Maniu 220, 061126, Bucharest, Sector 6 (Romania)


    The purpose of this paper is to present some aspects regarding the computational model and technical solutions for a multistage suborbital launcher for testing (SLT) used to test spatial equipment and scientific measurements. The computational model consists of numerical simulation of SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The analysed results are the flight parameters and ballistic performance. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Therefore, while classical suborbital sounding rockets are unguided, use solid-fuel motors for propulsion, and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing the creation of a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as shown in the title, has two major objectives: first, a short-term objective, which consists in obtaining a suborbital launching system able to go into service in a predictable period of time, and a long-term objective that consists in the development and testing of some unconventional sub-systems which will be integrated later in the satellite launcher as a part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital
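A deliberately reduced sketch of the kind of simulation described (1-DOF vertical ascent with variable mass, where the real SLT model is 6DOF; all vehicle parameters below are invented) illustrates the thrust/gravity/drag bookkeeping:

```python
# Simplified 1-DOF variable-mass ascent integration (forward Euler).
# The SLT computational model is 6DOF; this sketch only shows the
# variable-mass and force-balance idea with invented parameters.
import math

def simulate_ascent(m0, m_prop, thrust, isp, cd, area, dt=0.01):
    g0, rho0, H = 9.81, 1.225, 8500.0        # gravity, sea-level density, scale height
    mdot = thrust / (isp * g0)               # propellant mass flow from Isp
    t_burn = m_prop / mdot
    m, v, h, t = m0, 0.0, 0.0, 0.0
    while v >= 0.0 or t < t_burn:            # integrate until apogee
        rho = rho0 * math.exp(-max(h, 0.0) / H)
        drag = 0.5 * rho * cd * area * v * abs(v)
        f = (thrust if t < t_burn else 0.0) - m * g0 - drag
        v += (f / m) * dt
        h += v * dt
        if t < t_burn:
            m -= mdot * dt                   # variable mass during the burn
        t += dt
    return h, t

apogee, t_apogee = simulate_ascent(m0=500.0, m_prop=300.0, thrust=30e3,
                                   isp=220.0, cd=0.5, area=0.2)
print(f"apogee ~ {apogee / 1000:.1f} km after {t_apogee:.0f} s")
```

A full 6DOF model adds attitude dynamics, aerodynamic moments, and staging events on top of this force balance.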

  6. Atmospheric resuspension of radionuclides. Model testing using Chernobyl data

    Energy Technology Data Exchange (ETDEWEB)

    Garger, E.; Lev, T.; Talerko, N. [Inst. of Radioecology UAAS, Kiev (Ukraine); Galeriu, D. [Institute of Atomic Physics, Bucharest (Romania); Garland, J. [Consultant (United Kingdom); Hoffman, O.; Nair, S.; Thiessen, K. [SENES, Oak Ridge, TN (United States); Miller, C. [Centre for Disease Control, Atlanta, GA (United States); Mueller, H. [GSF - Inst. fuer Strahlenschutz, Neuherberg (Germany); Kryshev, A. [Moscow State Univ. (Russian Federation)


    Resuspension can be an important secondary source of contamination after a release has stopped, as well as a source of contamination for people and areas not exposed to the original release. The inhalation of resuspended radionuclides contributes to the overall dose received by exposed individuals. Based on measurements collected after the Chernobyl accident, Scenario R was developed to provide an opportunity to test existing mathematical models of contamination resuspension. In particular, this scenario provided the opportunity to examine data and test models for atmospheric resuspension of radionuclides at several different distances from the release, to investigate resuspension processes on both local and regional scales, and to investigate the importance of seasonal variations of these processes. Participants in the test exercise were provided with information for three different types of locations: (1) within the 30-km zone, where local resuspension processes are expected to dominate; (2) a large urban location (Kiev) 120 km from the release site, where vehicular traffic is expected to be the dominant mechanism for resuspension; and (3) an agricultural area 40-60 km from the release site, where highly contaminated upwind 'hot spots' are expected to be important. Input information included characteristics of the ground contamination around specific sites, climatological data for the sites, characteristics of the terrain and topography, and locations of the sampling sites. Participants were requested to predict the average (quarterly and yearly) concentrations of 137Cs in air at specified locations due to resuspension of Chernobyl fallout; predictions for 90Sr and 239+240Pu were also requested for one location and time point. Predictions for specified resuspension factors and rates were also requested. Most participants used empirical models for the resuspension factor as a function of time, K(t), as opposed to process-based models. While many of
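A hedged sketch of the empirical resuspension-factor approach most participants used: the air concentration over contaminated ground is modelled as C_air(t) = K(t) * sigma, where sigma is the ground deposition and K(t) an empirically decaying function of time since deposition. The functional form and constants below are illustrative, not the scenario's fitted values:

```python
# Toy empirical resuspension model: K(t) decays from an initial value K0
# toward a long-term plateau K_inf; air concentration is K(t) times the
# ground deposition.  All parameter values are invented for illustration.
import math

def resuspension_factor(t_days, K0=1e-6, half_life_days=365.0, K_inf=1e-9):
    """Empirical K(t) in 1/m: exponential decay toward a plateau."""
    lam = math.log(2.0) / half_life_days
    return K_inf + (K0 - K_inf) * math.exp(-lam * t_days)

def air_concentration(t_days, deposition_bq_m2):
    """Predicted air concentration (Bq/m^3) over contaminated ground."""
    return resuspension_factor(t_days) * deposition_bq_m2

sigma = 5e5                       # hypothetical 137Cs deposition, Bq/m^2
for t in (30, 365, 3650):
    print(t, air_concentration(t, sigma))
```

Process-based models would instead compute K(t) from wind stress, soil properties, and traffic, which is why the scenario distinguishes the two classes.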

  7. Laboratory tests of IEC DER object models for grid applications.

    Energy Technology Data Exchange (ETDEWEB)

    Blevins, John D. (PE Salt River Project, Phoenix, AZ); Menicucci, David F.; Byrd, Thomas, Jr. (,; .); Gonzalez, Sigifredo; Ginn, Jerry W.; Ortiz-Moyet, Juan (Primecore, Inc.)


    This report describes a Cooperative Research and Development Agreement (CRADA) between Salt River Project Agricultural Improvement and Power District (SRP) and Sandia National Laboratories to jointly develop advanced methods of controlling distributed energy resources (DERs) that may be located within SRP distribution systems. The controls must provide a standardized interface to allow plug-and-play capability and should allow utilities to take advantage of advanced capabilities of DERs to provide a value beyond offsetting load power. To do this, Sandia and SRP field-tested the IEC 61850-7-420 DER object model (OM) in a grid environment, with the goal of validating whether the model is robust enough to be used in common utility applications. The diesel generator OM tested was successfully used to accomplish basic genset control and monitoring. However, as presently constituted it does not enable plug-and-play functionality. Suggestions are made of aspects of the standard that need further development and testing. These problems are far from insurmountable and do not imply anything fundamentally unsound or unworkable in the standard.

  8. Numerical modeling of Thermal Response Tests in Energy Piles (United States)

    Franco, A.; Toledo, M.; Moffat, R.; Herrera, P. A.


    Nowadays, thermal response tests (TRT) are used as the main tools for the evaluation of low enthalpy geothermal systems such as heat exchangers. The results of TRT are used for estimating thermal conductivity and thermal resistance values of those systems. We present results of synthetic TRT simulations that model the behavior observed in an experimental energy pile system, which was installed at the new building of the Faculty of Engineering of Universidad de Chile. Moreover, we also present a parametric study to identify the most influent parameters in the performance of this type of tests. The modeling was developed using the finite element software COMSOL Multiphysics, which allows the incorporation of flow and heat transport processes. The modeled system consists on a concrete pile with 1 m diameter and 28 m deep, which contains a 28 mm diameter PEX pipe arranged in a closed circuit. Three configurations were analyzed: a U pipe, a triple U and a helicoid shape implemented at the experimental site. All simulations were run considering transient response in a three-dimensional domain. The simulation results provided the temperature distribution on the pile for a set of different geometry and physical properties of the materials. These results were compared with analytical solutions which are commonly used to interpret TRT data. This analysis demonstrated that there are several parameters that affect the system response in a synthetic TRT. For example, the diameter of the simulated pile affects the estimated effective thermal conductivity of the system. Moreover, the simulation results show that the estimated thermal conductivity for a 1 m diameter pile did not stabilize even after 100 hours since the beginning of the test, when it reached a value 30% below value used to set up the material properties in the simulation. Furthermore, we observed different behaviors depending on the thermal properties of concrete and soil. 
According to the simulations, the thermal
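The standard analytical interpretation the simulations are compared against can be sketched briefly: for an infinite line source, the mean fluid temperature grows linearly in ln(t) at late times, and the effective conductivity follows from the slope, k = Q / (4*pi*L*slope). The TRT record below is synthetic, not the Chilean field data:

```python
# Infinite-line-source slope method for interpreting a TRT:
# fit T against ln(t), then k = Q / (4*pi*L*slope).  Data are synthetic.
import math

def conductivity_from_slope(q_watts, length_m, slope_K_per_ln_t):
    return q_watts / (4.0 * math.pi * length_m * slope_K_per_ln_t)

# Synthetic TRT record obeying T = 0.9*ln(t) + 12.0, sampled 10 h .. 100 h
times = [3600.0 * h for h in range(10, 101, 10)]
temps = [0.9 * math.log(t) + 12.0 for t in times]

# least-squares slope of T against ln(t)
x = [math.log(t) for t in times]
n = len(x)
xbar, ybar = sum(x) / n, sum(temps) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, temps)) \
        / sum((xi - xbar) ** 2 for xi in x)

k = conductivity_from_slope(q_watts=3000.0, length_m=28.0, slope_K_per_ln_t=slope)
print(f"slope = {slope:.3f} K per ln(s), k = {k:.2f} W/(m K)")
```

The abstract's point is precisely that for a 1 m diameter pile this late-time slope has not yet converged after 100 hours, so the line-source estimate is biased.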

  9. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Ramanathan, Arvind [ORNL; Steed, Chad A [ORNL; Pullum, Laura L [ORNL


    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
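A minimal sketch of the metamorphic-testing idea applied to a compartmental model (this is a generic SIR example, not the paper's workflow; parameters are invented): simulate the system, then check a metamorphic relation such as population-scaling invariance of the epidemic fractions:

```python
# Basic SIR model integrated with forward Euler, plus one metamorphic
# relation: scaling the initial population by 10x (with frequency-dependent
# transmission) must leave the S/I/R fractions unchanged.

def sir(n, i0, beta, gamma, days, dt=0.1):
    s, i, r = n - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # frequency-dependent transmission
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s1 = sir(n=1000, i0=1, beta=0.3, gamma=0.1, days=160)
s2 = sir(n=10000, i0=10, beta=0.3, gamma=0.1, days=160)

# Metamorphic relation: final fractions agree under the population scaling
frac1 = [c / 1000 for c in s1]
frac2 = [c / 10000 for c in s2]
print(frac1, frac2)
```

Violations of such relations in an implementation flag coding errors without requiring a known-correct oracle, which is the core appeal of metamorphic testing.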

  10. Generalized Symbolic Execution for Model Checking and Testing (United States)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)


    Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic-execution-based approaches: one, we define a program instrumentation which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.

  11. Estimating the Strength of Superrotation with a Simplified Shallow-Water Model (United States)

    Wang, H.; Wordsworth, R. D.


    Synchronously rotating close-in exoplanets, based on three-dimensional general circulation models, are usually expected to exhibit strong eastward equatorial jets (equatorial superrotation). The strength of equatorial superrotation greatly influences important observables, such as the day-night temperature difference and hottest region phase shift from the substellar point. Yet the strength of equatorial jets cannot be quantitatively predicted by current theories. We try to estimate the strength of superrotation with a simplified analytical model, which is based on a one-and-a-half-layer shallow water model. In our model, an active layer is governed by the shallow water equation, and a quiescent layer exchanges mass and momentum with the active layer. This shallow water model, originally proposed by Shell and Held (2004) to study superrotation, allows us to test different approximations that aid our estimation of the jet speed. In addition, by varying the interaction between the active layer and the immobile layer, we study how the lower atmosphere influences the dynamics and day-night gradient in the upper atmosphere and investigate the possibility of gathering information on the lower atmosphere by analyzing the observables of the upper atmosphere. We also compare our shallow-water model with an idealized three-dimensional general circulation model to assess the limitations of our model and theory.

  12. Overview of the Ares I Scale Model Acoustic Test Program (United States)

    Counter, Douglas D.; Houston, Janice D.


    Launch environments, such as lift-off acoustic (LOA) and ignition overpressure (IOP), are important design factors for any vehicle and are dependent upon the design of both the vehicle and the ground systems. LOA environments are used directly in the development of vehicle vibro-acoustic environments and IOP is used in the loads assessment. The NASA Constellation Program had several risks to the development of the Ares I vehicle linked to LOA. The risks included cost, schedule and technical impacts for component qualification due to high predicted vibro-acoustic environments. One solution is to mitigate the environment at the component level. However, where the environment is too severe for component survivability, reduction of the environment itself is required. The Ares I Scale Model Acoustic Test (ASMAT) program was implemented to verify the Ares I LOA and IOP environments for the vehicle and ground systems including the Mobile Launcher (ML) and tower. An additional objective was to determine the acoustic reduction for the LOA environment with an above deck water sound suppression system. ASMAT was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116 (TS 116). The ASMAT program is described in this presentation.

  13. Verification and Uncertainty Reduction of Amchitka Underground Nuclear Testing Models

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed Hassan; Jenny Chapman


    The modeling of the Amchitka underground nuclear tests conducted in 2002 is verified, and uncertainty in model input parameters, as well as predictions, has been reduced using newly collected data obtained by the summer 2004 field expedition of CRESP. Newly collected data that pertain to the groundwater model include magnetotelluric (MT) surveys conducted on the island to determine the salinity and porosity structure of the subsurface, and bathymetric surveys to determine the bathymetric maps of the areas offshore from the Long Shot and Cannikin sites. Analysis and interpretation of the MT data yielded information on the location of the transition zone, and porosity profiles showing porosity values decaying with depth. These new data sets are used to verify the original model in terms of model parameters, model structure, and model output. In addition, by using the new data along with the existing data (chemistry and head data), the uncertainty in model input and output is decreased by conditioning on all the available data. A Markov Chain Monte Carlo (MCMC) approach is adapted for developing new input parameter distributions conditioned on prior knowledge and new data. The MCMC approach is a form of Bayesian conditioning that is constructed in such a way that it produces samples of the model parameters that eventually converge to a stationary posterior distribution. The Bayesian MCMC approach enhances probabilistic assessment. Instead of simply propagating uncertainty forward from input parameters into model predictions (i.e., the traditional Monte Carlo approach), MCMC propagates uncertainty backward from data onto parameters, and then forward from parameters into predictions. 
Comparisons between new data and the original model, and conditioning on all available data using MCMC method, yield the following results and conclusions: (1) Model structure is verified at Long Shot and Cannikin where the high-resolution bathymetric data collected by CRESP
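A toy Metropolis sampler illustrates the Bayesian MCMC conditioning described above: uncertainty flows backward from data onto a parameter (here a single "porosity" value with invented observations, nothing from the Amchitka study), yielding posterior samples rather than a forward-propagated prior:

```python
# Toy Metropolis MCMC: sample the posterior of one parameter ("porosity")
# given a handful of hypothetical observations with assumed Gaussian error
# and a uniform(0, 1) prior.  All numbers are illustrative.
import math, random

random.seed(1)
data = [0.18, 0.21, 0.17, 0.20, 0.19]      # hypothetical porosity observations
sigma = 0.02                                # assumed measurement error

def log_posterior(theta):
    if not 0.0 < theta < 1.0:               # uniform(0, 1) prior
        return -math.inf
    return -sum((d - theta) ** 2 for d in data) / (2 * sigma ** 2)

theta, samples = 0.5, []
for step in range(20000):
    prop = theta + random.gauss(0.0, 0.05)  # random-walk proposal
    if math.log(random.random()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    if step >= 5000:                        # discard burn-in
        samples.append(theta)

print(sum(samples) / len(samples))          # posterior mean, near the data mean
```

The chain's stationary distribution is the posterior, so summaries of the retained samples directly quantify the reduced parameter uncertainty.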

  14. Generative Temporal Modelling of Neuroimaging - Decomposition and Nonparametric Testing

    DEFF Research Database (Denmark)

    Hald, Ditte Høvenhoff

    … of the brain, and provides insight into neuronal coupling during mental processes or tasks. The decomposition method is a Gaussian process-based independent components analysis (GPICA), which incorporates a temporal dependency in the sources. A hierarchical model specification is used, featuring both instantaneous and convolutive mixing, and the inferred temporal patterns. Spatial maps are seen to capture smooth and localized stimuli-related components, and often identifiable noise components. The implementation is freely available as a GUI/SPM plugin, and we recommend using GPICA as an additional tool when performing ICA on fMRI data to investigate the effect of the temporal source prior. In fMRI, statistical tests are used to investigate the significance of activation in specific brain regions. By extending the non-parametric testing framework to incorporate functional prior knowledge, an increase…

  15. Boron-10 ABUNCL Prototype Models And Initial Active Testing

    Energy Technology Data Exchange (ETDEWEB)

    Kouzes, Richard T.; Ely, James H.; Lintereur, Azaree T.; Siciliano, Edward R.


    The Department of Energy Office of Nuclear Safeguards and Security (NA-241) is supporting the project Coincidence Counting With Boron-Based Alternative Neutron Detection Technology at Pacific Northwest National Laboratory (PNNL) for the development of an alternative to 3He-based neutron coincidence counters. The goal of this project is to design, build and demonstrate a system based upon 10B-lined proportional tubes in a configuration typical for 3He-based coincidence counter applications. This report provides results from MCNPX model simulations and initial testing of the active-mode variation of the Alternative Boron-Based Uranium Neutron Coincidence Collar (ABUNCL) design built by General Electric Reuter-Stokes. Initial experimental testing of the as-delivered passive ABUNCL was previously reported.

  16. Comparison between the Lactation Model and the Test-Day Model ...

    African Journals Online (AJOL)


    Genetic Evaluation, using a Lactation Model (LM). The other set was obtained in the 2004 South African. National Genetic Evaluation, using a Fixed Regression Test-day Model (TDM). This comparison is made for. Ayrshire, Guernsey, Holstein and Jersey cows participating in the South African Dairy Animal Improvement.

  17. Model Checking and Model-based Testing in the Railway Domain

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Peleska, Jan


    This chapter describes some approaches and emerging trends for verification and model-based testing of railway control systems. We describe state-of-the-art methods and associated tools for verifying interlocking systems and their configuration data, using bounded model checking and k-induction. ...

  18. Modeling Pacing Behavior and Test Speededness Using Latent Growth Curve Models (United States)

    Kahraman, Nilufer; Cuddy, Monica M.; Clauser, Brian E.


    This research explores the usefulness of latent growth curve modeling in the study of pacing behavior and test speededness. Examinee response times from a high-stakes, computerized examination, collected before and after the examination was subjected to a timing change, were analyzed using a series of latent growth curve models to detect…

  19. Potential Worst-case System for Testing EMI Filters Tested on Simple Filter Models

    Directory of Open Access Journals (Sweden)

    Z. Raida


    Full Text Available This paper deals with the approximate worst-case test method for testing the insertion loss of EMI filters. Systems with 0.1 Ω and 100 Ω impedances are usually used for this testing, as required by the international CISPR 17 standard. The main disadvantage of this setup is the use of two impedance transformers. The impedance transformer with 0.1 Ω output impedance, in particular, is not easy to produce, and these transformers usually have narrow bandwidth. This paper discusses an alternative system with 1 Ω and 100 Ω impedances. The performance of these systems was tested on several filter models and the obtained data are presented. A performance comparison of several filters in several systems is also included. The performance of the alternative worst-case system is discussed in the conclusion.
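To make the quantity under test concrete (this is a generic definition, not the paper's measurement setup), the insertion loss of a simple series element standing in for a filter can be evaluated in different source/load impedance systems; the inductor value and frequency are invented:

```python
# Insertion loss of a series inductor (a stand-in for a trivial EMI filter)
# between a source and load: IL = 20*log10(|V_load without| / |V_load with|),
# which for a series element reduces to the ratio below.  Values illustrative.
import math

def insertion_loss_db(freq_hz, L_henry, z_source, z_load):
    """IL in dB of a series inductor in a z_source / z_load system."""
    omega = 2.0 * math.pi * freq_hz
    without = z_source + z_load
    with_f = complex(z_source + z_load, omega * L_henry)
    return 20.0 * math.log10(abs(with_f) / abs(without))

for zs, zl in ((0.1, 100.0), (1.0, 100.0), (50.0, 50.0)):
    il = insertion_loss_db(1e6, 10e-6, zs, zl)
    print(f"{zs}/{zl} ohm system: {il:.1f} dB")
```

Real worst-case systems exercise full filter networks, where the measured insertion loss depends strongly on the terminating impedances, which is the point of the 0.1/100 Ω versus 1/100 Ω comparison.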

  20. The backpack run test: a model for a fair and occupationally relevant military fitness test. (United States)

    Vanderburgh, P M; Flanagan, S


    Our purpose in this investigation was to develop and validate a theoretical model for a backpack run test based on how fast one can run 2 miles while wearing a backpack. Using actual unloaded (no backpack) 2-mile-run test data from 59 male service academy cadets, we calculated the average oxygen cost during the run, the equivalent cost if wearing additional weight, and the corresponding estimated run time with the backpack. The correlations between body weight and loaded (backpack weight = 30 kg) run times (r = 0.55 [p 0.05], respectively) demonstrate that the bias against heavier runners is eliminated with the backpack run. Given that the backpack run test requires only standard-issue equipment, demonstrates clear occupational and health-related fitness relevance, predicts no apparent body-size bias, and measures work- and health-related components of fitness, we recommend that the military services consider the present data when developing or modifying tests of physical fitness.
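A hedged reconstruction of the core calculation idea (the exact equations are assumed, not taken from the paper): estimate the loaded 2-mile time by holding absolute oxygen uptake constant and applying the ACSM running-economy equation, VO2 [ml/kg/min] = 0.2*v [m/min] + 3.5, to total mass (body plus backpack):

```python
# Assumed reconstruction of the backpack-run estimate: fix the runner's
# absolute VO2 (ml/min) from the unloaded run, then solve the ACSM running
# equation for the speed sustainable at body-plus-pack mass.
# The cadet's numbers below are hypothetical.

def loaded_run_time(unloaded_time_min, body_kg, pack_kg, dist_m=3218.7):
    v_unloaded = dist_m / unloaded_time_min              # m/min over 2 miles
    vo2_abs = (0.2 * v_unloaded + 3.5) * body_kg         # ml/min, fixed capacity
    v_loaded = ((vo2_abs / (body_kg + pack_kg)) - 3.5) / 0.2
    return dist_m / v_loaded

# Hypothetical cadet: 13:00 unloaded 2-mile time, 75 kg body, 30 kg backpack
print(f"{loaded_run_time(13.0, 75.0, 30.0):.1f} min")
```

Because a fixed pack weight is a smaller fraction of a heavier runner's total mass, this scheme slows heavy runners proportionally less, which is the mechanism behind the claimed removal of body-size bias.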

  1. T-Craft Seabase Ramp Loads Model Test Data Report (United States)


    matrix of Table 1, the T-Craft model was tested in three different modes in the Tandem. These modes included 1) as a barge (with a Styrofoam block... plates of the T-Craft is presented in Figure 11. This figure details the airflow holes for each of the separate plates that make up the transverse...deck. (Figure 11, "Top Plate Details": dimension callouts not recoverable.)

  2. Model Borne Data Management System for Wind Tunnel Testing (United States)


    AD-A276 296. WL-TR-93-3125: Model Borne Data Management System for Wind Tunnel Testing, Phase II SBIR Final Report. Lee F. Webster, T.S. Paige, TeSCO, Inc. Directorate, Aeromechanics Division, WL/FIMH, Wright-Patterson AFB OH 45433-7936 (TeSCO-TR-93-1). INTRODUCTION: TeSCO, Inc., with support from Wright-Patterson Air Force Base, has developed and demonstrated new techniques and data acquisition hardware

  3. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech


    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way of validating finite element (FE) simulations and gaining a better understanding of HTS machines. 3D FE simulations of the machine are compared to measured current vs. voltage (IV) curves for the tape on its own. It is validated that this method can be used to predict the critical current of the HTS tape installed in the machine. The measured torque as a function of rotor position is also reproduced by the 3D FE model.

  4. A field test of a simple stochastic radiative transfer model

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, N. [Science Applications International Corp., San Diego, CA (United States)


    The problem of determining the effect of clouds on the radiative energy balance of the globe is of well-recognized importance. One can in principle solve the problem for any given configuration of clouds using numerical techniques. This knowledge is not useful however, because of the amount of input data and computer resources required. Besides, we need only the average of the resulting solution over the grid scale of a general circulation model (GCM). Therefore, we are interested in estimating the average of the solutions of such fine-grained problems using only coarse grained data, a science or art called stochastic radiation transfer. Results of the described field test indicate that the stochastic description is a somewhat better fit to the data than is a fractional cloud cover model, but more data are needed. 1 ref., 3 figs.

  5. Testing a model of depression among Thai adolescents. (United States)

    Vatanasin, Duangjai; Thapinta, Darawan; Thompson, Elaine Adams; Thungjaroenkul, Petsunee


    This predictive correlational study was designed to test a comprehensive model of depression for Thai adolescents. This sample included 800 high school students in Chiang Mai, Thailand. Data were collected using self-reported measures of depression, negative automatic thoughts, effective social problem solving, ineffective social problem solving, rumination, parental care, parental overprotection, and negative life events. Structural equation modeling revealed that negative automatic thoughts, effective and ineffective social problem solving mediated the effects of rumination, negative life events, and parental care and overprotection on adolescent depression. These findings provide new knowledge about identified factors and the mechanisms of their influence on depression among Thai adolescents, which are appropriate for targeting preventive interventions. © 2012 Wiley Periodicals, Inc.

  6. Testing Numerical Modeling of Phase Coarsening by Microgravity Experiments (United States)

    Wang, K. G.; Glicksman, M. E.


    Quantitative understanding of the morphological evolution that occurs during phase coarsening is crucial for optimization of processing procedures to control the final structure and properties of multiphase materials. Generally, ground-based experimental studies of phase coarsening in solids are limited to model alloy systems. Data from microgravity experiments on phase coarsening in Sn-Pb solid-liquid mixtures, executed on the International Space Station, are archived in NASA's Physical Sciences Informatics (PSI) system. In such microgravity experiments, it is expected that the rate of sedimentation will be greatly reduced compared with terrestrial conditions, allowing the kinetics of phase coarsening to be followed more carefully and accurately. In this work we tested existing numerical models of phase coarsening using NASA's PSI microgravity data. Specifically, we compared the microstructures derived from phase-field and multiparticle diffusion simulations with those observed in microgravity experiments.

  7. An innovation in physical modelling for testing marine renewables technology (United States)

    Todd, David; Whitehouse, Richard; Harris, John; Liddiard, Mark


    HR Wallingford has undertaken physical modelling of scour around structures since its creation as a government research laboratory in 1947. Since privatisation in 1982, HR Wallingford has carried out a large number of studies for offshore developments, including renewable energy developments and offshore wind in particular, looking at scour around offshore foundations and cables. To maintain our position as both a research and consultancy organisation delivering high-quality work, we have developed a new purpose-built physical modelling facility. The Fast Flow Facility is a dual-channel, racetrack-shaped flume and the only large-scale physical modelling facility of this kind offering wave, fast tidal current and recirculating sediment capabilities. The 75 m long, 8 m wide and 2.5 m deep Fast Flow Facility has two working channels of 4 m and 2.6 m width. Holding up to a million litres of water, the facility can generate waves with significant wave heights, Hs, of up to 0.5 m and maximum wave heights of up to 1 m in combination with flows of up to 2 m/s (~4 knots). This state-of-the-art facility combines fast, reversible currents with wave generation and sediment transport modelling in a single flume, allowing us to further develop our understanding of sediment transport within the marine environment and keeping us at the forefront of sediment transport research. The facility has been designed with the marine renewables sector in mind, with a 4 x 4 x 1 m deep sediment pit in the centre of the flume allowing investigations to provide improved understanding of the detailed processes which lead to scour, and enabling improvements in prediction capabilities for marine scour in different sediment seabed compositions (non-cohesive and cohesive) for a range of structure types (monopiles, jackets, gravity base foundations, jack-ups etc.). The facility also enables the testing of scour protection methodologies at relatively large scale (typically 1:10-1:20) and allows for

  8. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models (United States)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles


    Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, or from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not equally good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
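
As a toy illustration of the SUMO idea (not the GR4J equations themselves; the reservoir structure, parameter values, and nudging strength below are all hypothetical), two linear-reservoir models can exchange information on their internal storages through a nudging term at each time step:

```python
def sumo_step(states, ks, coupling, rain):
    """One time step of a toy 'super model': each member is a linear
    reservoir (discharge q = k * s), and each member's storage is nudged
    toward the storages of the other members."""
    n = len(states)
    new_states = []
    for i in range(n):
        s = states[i] + rain - ks[i] * states[i]  # water balance of member i
        # bilateral correction using the other members' internal variables
        s += coupling * sum(states[j] - states[i] for j in range(n) if j != i)
        new_states.append(s)
    return new_states

# two members with the same structure but different parameterizations
states = [10.0, 10.0]
ks = [0.2, 0.4]
for _ in range(50):
    states = sumo_step(states, ks, coupling=0.1, rain=1.0)
```

With coupling=0.1 the two storages settle closer together than their uncoupled steady states (5.0 and 2.5), which is the intended effect of the continuous exchange.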

  9. Test of a Power Transfer Model for Standardized Electrofishing (United States)

    Miranda, L.E.; Dolan, C.R.


    Standardization of electrofishing in waters with differing conductivities is critical when monitoring temporal and spatial differences in fish assemblages. We tested a model that can help improve the consistency of electrofishing by allowing control over the amount of power that is transferred to the fish. The primary objective was to verify, under controlled laboratory conditions, whether the model adequately described fish immobilization responses elicited with various electrical settings over a range of water conductivities. We found that the model accurately described empirical observations over conductivities ranging from 12 to 1,030 μS/cm for DC and various pulsed-DC settings. Because the model requires knowledge of a fish's effective conductivity, an attribute that is likely to vary according to species, size, temperature, and other variables, a second objective was to gather available estimates of the effective conductivity of fish to examine the magnitude of variation and to assess whether in practical applications a standard effective conductivity value for fish may be assumed. We found that applying a standard fish effective conductivity of 115 μS/cm introduced relatively little error into the estimation of the peak power density required to immobilize fish with electrofishing. However, this standard was derived from few estimates of fish effective conductivity and a limited number of species; more estimates are needed to validate our working standard.

  10. A network landscape model: stability analysis and numerical tests (United States)

    Bonacini, E.; Groppi, M.; Monaco, R.; Soares, A. J.; Soresina, C.


    A Network Landscape Model (NLM) for the evaluation of the ecological trend of an environmental system is here presented and investigated. The model consists of a network of dynamical systems, where each node represents a single Landscape Unit (LU), endowed with a system of ODEs for two variables relevant to the production of bio-energy and to the percentage of green areas, respectively. The main goal of the paper is to test the relevance of connectivity between the LUs. For this purpose we first consider the Single LU Model (SLM) and investigate its equilibria and their stability in terms of two bifurcation parameters. Then the network dynamics is theoretically investigated by means of a bifurcation analysis of a suitably simplified differential system, which shows how the coupling between different LUs modifies the asymptotic scenarios of the single LU model. Numerical simulations of the NLM are performed, with reference to an environmental system in Northern Italy, and results are discussed in connection with the SLM.
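
The network-of-ODEs structure can be sketched as follows. The per-LU dynamics below are hypothetical stand-ins (the abstract does not give the actual NLM equations): `x` plays the role of the bio-energy variable, `y` the green-area fraction, and neighbouring LUs are coupled diffusively through the adjacency matrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

def nlm_rhs(t, u, adj, d):
    """Right-hand side for a network of landscape units (LUs).
    x: bio-energy production, y: green-area fraction (illustrative dynamics)."""
    n = adj.shape[0]
    x, y = u[:n], u[n:]
    dx = x * (1 - x) * y - 0.1 * x            # production grows with green areas
    dy = 0.05 * (0.6 - y)                     # green fraction relaxes to a target
    dx += d * (adj @ x - adj.sum(axis=1) * x)  # diffusive coupling between LUs
    return np.concatenate([dx, dy])

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-LU chain
u0 = np.array([0.2, 0.5, 0.8, 0.6, 0.6, 0.6])
sol = solve_ivp(nlm_rhs, (0, 200), u0, args=(adj, 0.02))
```

Varying the coupling strength `d` and the two rate constants is the numerical analogue of the bifurcation study described above: for this symmetric choice all LUs relax to a common equilibrium.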

  11. Testing flow diversion in animal models: a systematic review. (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E


    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI 61.9-82.6%). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100%, 95% CI 65.5-100%). FD resulted in modest occlusion rates (15.4%, 95% CI 8.9-25.1%) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown various results. Occlusion of elastase-induced aneurysms was common after FD. The model is not challenging, but it is standardized in many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.

  12. Low Frequency Noise Contamination in Fan Model Testing (United States)

    Brown, Clifford A.; Schifer, Nicholas A.


    Aircraft engine noise research and development depends on the ability to study and predict the noise created by each engine component in isolation. The presence of a downstream pylon for a model fan test, however, may result in noise contamination through pylon interactions with the free stream and model exhaust airflows. Additionally, there is the problem of separating the fan and jet noise components generated by the model fan. A methodology was therefore developed to improve the data quality for the 9- by 15-Foot Low Speed Wind Tunnel (9x15 LSWT) at the NASA Glenn Research Center that identifies three noise sources: fan noise, jet noise, and rig noise. The jet noise and rig noise were then measured by mounting a scale model of the 9x15 LSWT model fan installation in a jet rig to simulate everything except the rotating machinery and in-duct components of fan noise. The data showed that the spectra measured in the LSWT have a strong rig noise component at frequencies as high as 3 kHz, depending on the fan exit velocity. The jet noise was determined to be significantly lower than the rig noise (i.e., noise generated by flow interaction with the downstream support pylon). A mathematical model for the rig noise was then developed using a multi-dimensional least squares fit to the rig noise data. This allows the rig noise to be subtracted or removed, depending on the amplitude of the rig noise relative to the fan noise, at any given frequency, observer angle, or nozzle pressure ratio. The impact of isolating the fan noise with this method on spectra, overall power level (OAPWL), and Effective Perceived Noise Level (EPNL) is studied.
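
The dB-domain bookkeeping behind such a subtraction can be sketched as follows. The key point is that sound pressure levels must be subtracted as mean-square pressures (linear power), never as dB values. The polynomial fit in log-frequency is a simplified stand-in for the multi-dimensional least-squares model described above (the actual model also spans observer angle and nozzle pressure ratio), and the spectrum numbers are illustrative.

```python
import numpy as np

def remove_rig_noise(freq, spl_total, spl_rig):
    """Subtract a rig-noise spectrum (dB) from a total measured spectrum (dB)
    by working on mean-square pressures, not on the dB values themselves."""
    p_total = 10.0 ** (spl_total / 10.0)
    p_rig = 10.0 ** (spl_rig / 10.0)
    p_fan = np.clip(p_total - p_rig, 1e-12, None)  # guard against negatives
    return 10.0 * np.log10(p_fan)

# smooth rig-noise model: least-squares polynomial in log10(frequency)
freq = np.logspace(2, 4, 50)  # 100 Hz - 10 kHz, illustrative
spl_rig_meas = (80 - 10 * np.log10(freq / 100)
                + np.random.default_rng(0).normal(0, 0.5, 50))
coeffs = np.polyfit(np.log10(freq), spl_rig_meas, deg=3)
spl_rig_model = np.polyval(coeffs, np.log10(freq))
```

For example, if the fan alone contributes 85 dB and the rig 80 dB at some frequency, the measured total is about 86.2 dB, and power-domain subtraction recovers the 85 dB fan level.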

  13. Estimating a DIF decomposition model using a random-weights linear logistic test model approach. (United States)

    Paek, Insu; Fukuhara, Hirotaka


    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.
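
The core idea of the decomposition can be written down in a few lines as a Rasch-type success probability in which the focal group's difficulty shift is split into an item-specific and a testlet-specific part. This is an illustrative sketch only, not the exact Beretvas-Walker or random-weights LLTM parameterization (which treats the DIF weights as random and estimates by marginal maximum likelihood).

```python
import math

def p_correct(theta, b, group, item_dif, testlet_dif):
    """Rasch-type probability of a correct response, with DIF decomposed
    into an item-specific and a testlet-specific shift that apply only
    to the focal group (illustrative parameterization)."""
    shift = (item_dif + testlet_dif) if group == "focal" else 0.0
    return 1.0 / (1.0 + math.exp(-(theta - (b + shift))))
```

A positive total shift makes the item harder for the focal group at the same ability level; the decomposition lets the two sources of that shift be reported separately.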

  14. Modeling transient streaming potentials in falling-head permeameter tests. (United States)

    Malama, Bwalya; Revil, André


    We present transient streaming potential data collected during falling-head permeameter tests performed on samples of two sands with different physical and chemical properties. The objective of the work is to estimate hydraulic conductivity (K) and the electrokinetic coupling coefficient (Cl) of the sand samples. A semi-empirical model based on the falling-head permeameter flow model and electrokinetic coupling is used to analyze the streaming potential data and to estimate K and Cl. The values of K estimated from head data are used to validate the streaming potential method. Estimates of K from streaming potential data closely match those obtained from the associated head data, with less than 10% deviation. The electrokinetic coupling coefficient was estimated from streaming potential vs. (1) time and (2) head data for both sands. The results indicate that, within limits of experimental error, the values of Cl estimated by the two methods are essentially the same. The results of this work demonstrate that a temporal record of the streaming potential response in falling-head permeameter tests can be used to estimate both K and Cl. They further indicate the potential for using transient streaming potential data as a proxy for hydraulic head in hydrogeology applications. © 2013, National Ground Water Association.
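
The estimation logic can be sketched with the standard falling-head relation ln h(t) = ln h0 - (K·A/(a·L))·t (a: standpipe cross-section, A: sample cross-section, L: sample length) together with a linear streaming-potential-versus-head regression. The regression form for Cl and all numbers below are illustrative, not the paper's semi-empirical model.

```python
import numpy as np

def estimate_K(t, h, a, A, L):
    """Falling-head permeameter: ln h(t) = ln h0 - (K*A/(a*L)) * t,
    so K follows from the slope of ln(h) regressed against time."""
    slope, _ = np.polyfit(t, np.log(h), 1)
    return -slope * a * L / A

def estimate_C(head, psi):
    """Coupling coefficient as the slope of the streaming potential (mV)
    vs. hydraulic head (m) regression (illustrative linear form)."""
    slope, _ = np.polyfit(head, psi, 1)
    return slope
```

Because ln(h) is linear in time, both parameters reduce to least-squares slopes, which is why a temporal record of either head or streaming potential suffices.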

  15. Very Low-mass Stellar and Substellar Companions to Solar-like Stars from MARVELS. VI. A Giant Planet and a Brown Dwarf Candidate in a Close Binary System HD 87646 (United States)

    Ma, Bo; Ge, Jian; Wolszczan, Alex; Muterspaugh, Matthew W.; Lee, Brian; Henry, Gregory W.; Schneider, Donald P.; Martín, Eduardo L.; Niedzielski, Andrzej; Xie, Jiwei; Fleming, Scott W.; Thomas, Neil; Williamson, Michael; Zhu, Zhaohuan; Agol, Eric; Bizyaev, Dmitry; Nicolaci da Costa, Luiz; Jiang, Peng; Martinez Fiorenzano, A. F.; González Hernández, Jonay I.; Guo, Pengcheng; Grieves, Nolan; Li, Rui; Liu, Jane; Mahadevan, Suvrath; Mazeh, Tsevi; Nguyen, Duy Cuong; Paegert, Martin; Sithajan, Sirinrat; Stassun, Keivan; Thirupathi, Sivarani; van Eyken, Julian C.; Wan, Xiaoke; Wang, Ji; Wisniewski, John P.; Zhao, Bo; Zucker, Shay


    We report the detections of a giant planet (MARVELS-7b) and a brown dwarf (BD) candidate (MARVELS-7c) around the primary star in the close binary system HD 87646. To the best of our knowledge, it is the first close binary system discovered with more than one substellar circumprimary companion. The detection of this giant planet was accomplished using the first multi-object Doppler instrument (KeckET) at the Sloan Digital Sky Survey (SDSS) telescope. Subsequent radial velocity observations using the Exoplanet Tracker at the Kitt Peak National Observatory, the High Resolution Spectrograph at the Hobby-Eberly Telescope, the “Classic” spectrograph at the Automatic Spectroscopic Telescope at the Fairborn Observatory, and MARVELS from SDSS-III confirmed this giant planet discovery and revealed the existence of a long-period BD in this binary. HD 87646 is a close binary with a separation of ~22 au between the two stars, estimated using the Hipparcos catalog and our newly acquired AO image from PALAO on the 200-inch Hale Telescope at Palomar. The primary star in the binary, HD 87646A, has T_eff = 5770 ± 80 K, log g = 4.1 ± 0.1, and [Fe/H] = -0.17 ± 0.08. The derived minimum masses of the two substellar companions of HD 87646A are 12.4 ± 0.7 M_Jup and 57.0 ± 3.7 M_Jup. The periods are 13.481 ± 0.001 days and 674 ± 4 days, and the measured eccentricities are 0.05 ± 0.02 and 0.50 ± 0.02, respectively. Our dynamical simulations show that the system is stable if the binary orbit has a large semimajor axis and a low eccentricity, which can be verified with future astrometry observations.

  16. Testing calibration routines for LISFLOOD, a distributed hydrological model (United States)

    Pannemans, B.


    Traditionally, hydrological models are considered difficult to calibrate: their high non-linearity results in rugged and rough response surfaces where calibration algorithms easily get stuck in local minima. For the calibration of distributed hydrological models two extra factors play an important role: on the one hand they are often computationally costly, thus restricting the feasible number of model runs; on the other hand their distributed nature smooths the response surface, thus facilitating the search for a global minimum. Lisflood is a distributed hydrological model currently used for the European Flood Alert System - EFAS (Van der Knijff et al., 2008). Its upcoming recalibration over more than 200 catchments, each with an average runtime of 2-3 minutes, proved a perfect occasion to put several existing calibration algorithms to the test. The tested routines are Downhill Simplex (DHS, Nelder and Mead, 1965), SCE-UA (Duan et al., 1993), SCEM (Vrugt et al., 2003) and AMALGAM (Vrugt et al., 2008), and they were evaluated on their capability to converge efficiently onto the global minimum and on the spread in the solutions found in repeated runs. The routines were let loose on a simple hyperbolic function, on a Lisflood catchment using model output as observation, and on two Lisflood catchments using real observations (one on the river Inn in the Alps, the other along the downstream stretch of the Elbe). On the mathematical problem and on the catchment with synthetic observations DHS proved to be the fastest and the most efficient in finding a solution. SCE-UA and AMALGAM are slower, but while SCE-UA keeps converging on the exact solution, AMALGAM slows down after about 600 runs. For the Lisflood models with real observations AMALGAM (a hybrid algorithm that combines several other algorithms; we used CMA, PSO and GA) came out of the tests as the fastest, giving comparable results in consecutive runs. However, some more work is needed to tweak the stopping
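
SciPy does not ship SCE-UA or AMALGAM, but the local-versus-global contrast these tests explore can be sketched with Nelder-Mead (the downhill simplex used as DHS above) against differential evolution as a generic global evolutionary stand-in, on a standard benchmark objective in place of the hydrological one:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rosenbrock(x):
    # a curved-valley benchmark standing in for the model-calibration objective
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# local downhill simplex (Nelder-Mead), analogous to the DHS runs above
local = minimize(rosenbrock, x0=[-1.5, 2.0], method="Nelder-Mead",
                 options={"xatol": 1e-8, "fatol": 1e-8})

# global population-based search, loosely analogous to SCE-UA/AMALGAM
glob = differential_evolution(rosenbrock, bounds=[(-5, 5), (-5, 5)], seed=1)
```

On a smooth surface both reach the optimum; the trade-off the study measures is that the local method needs far fewer function evaluations (model runs), while the population-based methods are more robust on rugged surfaces and give a spread estimate across repeated runs.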

  17. Organotypic liver culture models: meeting current challenges in toxicity testing. (United States)

    LeCluyse, Edward L; Witek, Rafal P; Andersen, Melvin E; Powers, Mark J


    Prediction of chemical-induced hepatotoxicity in humans from in vitro data continues to be a significant challenge for the pharmaceutical and chemical industries. Generally, conventional in vitro hepatic model systems (i.e. 2-D static monocultures of primary or immortalized hepatocytes) are limited by their inability to maintain histotypic and phenotypic characteristics over time in culture, including stable expression of clearance and bioactivation pathways, as well as complex adaptive responses to chemical exposure. These systems are less than ideal for longer-term toxicity evaluations and elucidation of key cellular and molecular events involved in primary and secondary adaptation to chemical exposure, or for identification of important mediators of inflammation, proliferation and apoptosis. Progress in implementing a more effective strategy for in vitro-in vivo extrapolation and human risk assessment depends on significant advances in tissue culture technology and increasing their level of biological complexity. This article describes the current and ongoing need for more relevant, organotypic in vitro surrogate systems of human liver and recent efforts to recreate the multicellular architecture and hemodynamic properties of the liver using novel culture platforms. As these systems become more widely used for chemical and drug toxicity testing, there will be a corresponding need to establish standardized testing conditions, endpoint analyses and acceptance criteria. In the future, a balanced approach between sample throughput and biological relevance should provide better in vitro tools that are complementary with animal testing and assist in conducting more predictive human risk assessment.

  18. Model Checking Vector Addition Systems with one zero-test

    CERN Document Server

    Bonet, Rémi; Leroux, Jérôme; Zeitoun, Marc


    We design a variation of the Karp-Miller algorithm to compute, in a forward manner, a finite representation of the cover (i.e., the downward closure of the reachability set) of a vector addition system with one zero-test. This algorithm yields decision procedures for several problems for these systems, open until now, such as place-boundedness or LTL model-checking. The proof techniques to handle the zero-test are based on two new notions of cover: the refined and the filtered cover. The refined cover is a hybrid between the reachability set and the classical cover. It inherits properties of the reachability set: equality of two refined covers is undecidable, even for usual Vector Addition Systems (with no zero-test), but the refined cover of a Vector Addition System is a recursive set. The second notion of cover, called the filtered cover, is the central tool of our algorithms. It inherits properties of the classical cover, and in particular, one can effectively compute a finite representation of this set, e...

  19. Design and testing of a model CELSS chamber robot (United States)

    Davis, Mark; Dezego, Shawn; Jones, Kinzy; Kewley, Christopher; Langlais, Mike; Mccarthy, John; Penny, Damon; Bonner, Tom; Funderburke, C. Ashley; Hailey, Ruth


    A robot system for use in an enclosed environment was designed and tested. The conceptual design will be used to assist in research performed by the Controlled Ecological Life Support System (CELSS) project. Design specifications include maximum load capacity, operation at specified environmental conditions, low maintenance, and safety. The robot system must not be hazardous to the sealed environment, and be capable of stowing and deploying within a minimum area of the CELSS chamber facility. This design consists of a telescoping robot arm that slides vertically on a shaft positioned in the center of the CELSS chamber. The telescoping robot arm consists of a series of links which can be fully extended to a length equal to the radius of the working envelope of the CELSS chamber. The vertical motion of the robot arm is achieved through the use of a combination ball screw/ball spline actuator system. The robot arm rotates cylindrically about the vertical axis through use of a turntable bearing attached to a central mounting structure fitted to the actuator shaft. The shaft is installed in an overhead rail system allowing the entire structure to be stowed and deployed within the CELSS chamber. The overhead rail system is located above the chamber's upper lamps and extends to the center of the CELSS chamber. The mounting interface of the actuator shaft and rail system allows the entire actuator shaft to be detached and removed from the CELSS chamber. When the actuator shaft is deployed, it is held fixed at the bottom of the chamber by placing a square knob on the bottom of the shaft into a recessed square fitting in the bottom of the chamber floor. A support boot ensures the rigidity of the shaft. Three student teams combined into one group designed a model of the CELSS chamber robot that they could build. They investigated materials, availability, and strength in their design. 
After the model arm and stand were built, the class performed pre-tests on the entire system

  20. Testing the FOCUS model PEARL in an Italian site (United States)

    Bouraoui, F.; Bidoglio, G.


    Pesticides are an integral part of the modern agricultural production system. The use of pesticides soared during the post-war period, although consumption has since been decreasing in Europe. The reduction is difficult to attribute to one specific factor, however, since pesticide application is highly variable and linked to climate, disease outbreaks, etc. Furthermore, new molecules are being produced which are more efficient and require a lower dosage. In the EU, the placing on the market of Plant Protection Products (PPP) is regulated at the Community level by Council Directive 91/414/EEC, which stresses the need for validated models to calculate predicted environmental concentrations. In this context, the European Commission set up a FOrum for the Co-ordination of pesticide fate models and their USe (FOCUS). In a complementary effort, DG Research supported the APECOP project, one major objective of which was the validation and improvement of existing pesticide fate models. The research presented here focuses on the validation of the PEARL model at an Italian site. The PEARL model, which is one of the FOCUS models, is currently used in the Dutch pesticide registration procedure. The test site is located near Bologna (Italy). The 35-month experiment was conducted on a 107 m by 28 m plot with a loamy soil. The experiment involved the application of KBr as a tracer, two applications of ethoprophos and three applications of aclonifen. A sequential approach was used for the Bologna site. During this exercise only the measured soil physical parameters were used. The simulation with the PEARL model yielded negative results for both the soil moisture profile and the pesticide content. In a second step, the water transport module was calibrated using the measured soil moisture profile. This greatly improved the prediction of the soil water balance. Information relative to pesticide degradation and sorption was then included. This allowed a good

  1. Generalized Chaplygin Gas Models Tested with Type Ia Supernovae (United States)

    Biesiada, Marek; Godłowski, Włodzimierz; Szydłowski, Marek


    The generalized Chaplygin gas (GCG), with the equation of state p = -A/ρ^α, was recently proposed as a candidate for dark energy in the universe. In this paper we confront the GCG with Type Ia supernova (SN Ia) data using available samples. Specifically, we have tested the GCG cosmology in three different classes of models with (1) Ωm=0.3 and ΩCh=0.7, (2) Ωm=0.05 and ΩCh=0.95, and (3) Ωm=0 and ΩCh=1, as well as a model without prior assumptions on Ωm. The best-fit models are obtained by minimizing the χ2 function. We supplement our analysis with confidence intervals in the (A0, α)-plane by marginalizing the probability density functions (pdf's) over the remaining parameters assuming uniform priors. We have also derived one-dimensional pdf's for ΩCh obtained from joint marginalization over α and A0. The maximum value of such a pdf provides the most probable value of ΩCh within the full class of GCG models. The general conclusion is that SN Ia data give support to the Chaplygin gas (with α=1). However, a noticeable preference for A0-values close to 1 means that the α dependence becomes insignificant. This is reflected in one-dimensional pdf's for α that turned out to be flat, meaning that the power of the present supernova data to discriminate between various GCG models (differing by α) is weak. Extending our analysis by relaxing the prior assumption of the flatness of the universe leads to the result that even though the best-fit values of Ωk are formally nonzero, they are still close to the flat case. Our results show clearly that in GCG cosmology, distant (i.e., z>1) supernovae should be brighter than in the ΛCDM model. Therefore, one can expect that future supernova experiments (e.g., SNAP) having access to higher redshifts will eventually resolve the issue of whether the dark energy content of the universe could be described as a Chaplygin gas. Moreover, it would be possible to differentiate between models with various values of the
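
A minimal sketch of such a χ2 fit, assuming a flat universe with the usual GCG expansion rate E²(z) = Ωm(1+z)³ + ΩCh[A0 + (1-A0)(1+z)^{3(1+α)}]^{1/(1+α)} (the marginalization over nuisance parameters and the full SN samples are omitted; function names and the fixed H0 value are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def E(z, om, a0, alpha):
    """Dimensionless Hubble rate for a flat GCG cosmology."""
    chap = (a0 + (1 - a0) * (1 + z) ** (3 * (1 + alpha))) ** (1 / (1 + alpha))
    return np.sqrt(om * (1 + z) ** 3 + (1 - om) * chap)

def distance_modulus(z, om, a0, alpha, h0=70.0):
    """mu(z) = 5 log10(d_L/Mpc) + 25 for a flat universe."""
    c = 299792.458  # km/s
    dc, _ = quad(lambda zp: 1.0 / E(zp, om, a0, alpha), 0.0, z)
    dl = (c / h0) * (1 + z) * dc  # luminosity distance in Mpc
    return 5 * np.log10(dl) + 25

def chi2(params, z_obs, mu_obs, sigma):
    om, a0, alpha = params
    mu = np.array([distance_modulus(z, om, a0, alpha) for z in z_obs])
    return np.sum(((mu - mu_obs) / sigma) ** 2)
```

Note that A0 = 1 makes the Chaplygin term constant, so the model reduces to ΛCDM regardless of α, which is why a preference for A0 near 1 washes out the α dependence.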

  2. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN


    In this study, some systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been analyzed in order to determine their uncertainties. Experiments, measurements, and calculations conducted within the framework of mathematical and physical rules to solve engineering problems all involve uncertainty. To question the reliability of the obtained values, the existing uncertainties should be expressed as quantities. If the uncertainty of a measurement system is not known, its results do not carry a universal value. Resistance, moreover, is one of the most important parameters that should be considered in the process of ship design. Ship resistance cannot be determined precisely and reliably during the design phase because of the uncertainty sources involved in determining the resistance value, which may make it difficult to meet the required specifications in later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement-type ship and for high-speed marine vehicles according to the ITTC 2002 and ITTC 2014 regulations on uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.
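
The propagation underlying both ITTC procedures can be illustrated in simplified form with the GUM root-sum-square rule applied to the total resistance coefficient C_T = R_T/(½ρSV²), assuming uncorrelated inputs (the full ITTC procedures additionally separate bias and precision limits and handle correlated terms; the numbers below are illustrative):

```python
import math

def ct_uncertainty(R, uR, rho, urho, S, uS, V, uV):
    """Combined standard uncertainty of C_T = R / (0.5*rho*S*V^2),
    assuming uncorrelated inputs (GUM root-sum-square). The factor 2 on
    the speed term comes from the V^2 dependence."""
    ct = R / (0.5 * rho * S * V ** 2)
    rel = math.sqrt((uR / R) ** 2 + (urho / rho) ** 2
                    + (uS / S) ** 2 + (2 * uV / V) ** 2)
    return ct, ct * rel
```

Because the speed enters squared, a 0.5% speed uncertainty contributes as much to the combined uncertainty as a 1% resistance uncertainty, which is why carriage-speed calibration dominates many resistance-test uncertainty budgets.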

  3. Large animal models for vaccine development and testing. (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A


    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email:

  4. Meso-scale modeling of irradiated concrete in test reactor

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, A. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Vaitová, M. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic); Le Pape, Y., E-mail: [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Štemberk, P. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic)


    Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Confrontation with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimates based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.

  5. Model test and CFD calculation of a cavitating bulb turbine

    Energy Technology Data Exchange (ETDEWEB)

    Necker, J; Aschenbrenner, T, E-mail: [Voith Hydro Holding GmbH and Co. KG Alexanderstrasse 11, 89522 Heidenheim (Germany)


    The flow in a horizontal shaft bulb turbine is calculated as a two-phase flow with a commercial Computational Fluid Dynamics (CFD) code including a cavitation model. The results are compared with experimental results achieved at a closed-loop test rig for model turbines. On the model test rig, for a certain operating point (i.e. volume flow, net head, blade angle, guide vane opening) the pressure behind the turbine is lowered (i.e. the Thoma coefficient {sigma} is lowered) and the efficiency of the turbine is recorded. The measured values can be depicted in a so-called {sigma}-break curve or {eta}-{sigma} diagram. Usually, the efficiency is independent of the Thoma coefficient up to a certain value; when the Thoma coefficient is lowered below this value, the efficiency drops rapidly. Visual observations of the different cavitation conditions complete the experiment. In analogy, several calculations are done for different Thoma coefficients {sigma}, and the corresponding hydraulic losses of the runner are evaluated quantitatively. For a low {sigma}-value that showed a significant efficiency loss in the experiment, the change of volume flow observed in the experiment was also simulated. Besides, the fraction of water vapour, as an indication of the size of the cavitation cavity, is analyzed qualitatively. The experimentally and numerically obtained results show good agreement. In particular, the drop in efficiency can be calculated with satisfying accuracy. This drop in efficiency is of high practical importance since it is one criterion used to determine the admissible cavitation in a bulb turbine. The visual impression of the cavitation in the CFD analysis is well in accordance with the observed cavitation bubbles recorded in sketches and/or photographs.

  6. Induction Heating Model of Cermet Fuel Element Environmental Test (CFEET) (United States)

    Gomez, Carlos F.; Bradley, D. E.; Cavender, D. P.; Mireles, O. R.; Hickman, R. R.; Trent, D.; Stewart, E.


    Deep space missions with large payloads require high specific impulse and relatively high thrust to achieve mission goals in reasonable time frames. Nuclear Thermal Rockets (NTR) are capable of producing a high specific impulse by employing heat produced by a fission reactor to heat and therefore accelerate hydrogen through a rocket nozzle providing thrust. Fuel element temperatures are very high (up to 3000 K) and hydrogen is highly reactive with most materials at high temperatures. Data covering the effects of high-temperature hydrogen exposure on fuel elements are limited. The primary concern is the mechanical failure of fuel elements due to large thermal gradients; therefore, high-melting-point ceramic-metallic matrix composites (cermets) are one of the fuels under consideration as part of the Nuclear Cryogenic Propulsion Stage (NCPS) Advance Exploration System (AES) technology project at the Marshall Space Flight Center. The purpose of testing and analytical modeling is to determine their ability to survive and maintain thermal performance in a prototypical NTR reactor environment of exposure to hydrogen at very high temperatures and to obtain data to assess the properties of the non-nuclear support materials. The fission process and the resulting heating performance are well known and do not require active fissile material to be integrated into this testing. A small-scale test bed, the Compact Fuel Element Environmental Tester (CFEET), designed to heat fuel element samples via induction heating and expose them to hydrogen, is being developed at MSFC to assist in optimal material and manufacturing process selection without utilizing fissile material. This paper details the analytical approach to help design and optimize the test bed using COMSOL Multiphysics for predicting thermal gradients induced by electromagnetic (induction) heating and Thermal Desktop for radiation calculations.

  7. Stress-testing the Standard Model at the LHC

    CERN Document Server


    With the high-energy run of the LHC now underway, and clear manifestations of beyond-Standard-Model physics not yet seen in data from the previous run, the search for new physics at the LHC may be a quest for small deviations with big consequences. If clear signals are present, precise predictions and measurements will again be crucial for extracting the maximum information from the data, as in the case of the Higgs boson. Precision will therefore remain a key theme for particle physics research in the coming years. The conference will provide a forum for experimentalists and theorists to identify the challenges and refine the tools for high-precision tests of the Standard Model and searches for signals of new physics at Run II of the LHC. Topics to be discussed include: pinning down Standard Model corrections to key LHC processes; combining fixed-order QCD calculations with all-order resummations and parton showers; new developments in jet physics concerning jet substructure, associated jets and boosted je...

  8. Aggressiveness and size: a model and two tests. (United States)

    Logue, David M; Takahashi, April D; Cade, William H


    Individual variation in aggressive behavior in animals might be caused by adaptive covariation with body size. We developed a model that predicts the benefits of aggressiveness as a function of body size. The model indicated that individuals of intermediate sizes would derive the greatest benefits from being aggressive. If we assume that the cost of aggression is approximately uniform with respect to body size, selection should favor higher aggression in intermediate-sized individuals than in large or small individuals. This prediction was tested by stimulating male Madagascar hissing cockroaches, Gromphadorhina portentosa, with disembodied antennae and recording the males' aggressive responses. Antennae from larger males evoked weaker responses in subjects, suggesting that males obtained information about their opponents' size from the opponents' antennae alone. After accounting for this effect, we found support for the key prediction of our model: aggressiveness peaked at intermediate sizes. Data from actual male-male interactions validated that the antenna assay accurately measured aggressiveness. Analysis of an independent data set generated by staging male-male interactions also supported the prediction that intermediate-sized males were most aggressive. We conclude that adaptive covariation between body size and aggressiveness explains some interindividual variation in aggressiveness.
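The model's qualitative prediction can be illustrated with a toy benefit function of our own construction (not the authors' actual model): suppose the probability of winning an escalated fight rises with body size while the marginal gain from fighting falls with size, because the largest individuals prevail without escalating. Their product then peaks at intermediate sizes:

```python
def win_prob(size):
    """Probability of winning an escalated contest; rises with size."""
    return size / (size + 1.0)

def marginal_gain(size):
    """Extra payoff from fighting over not fighting; falls with size,
    since the largest individuals prevail without escalating."""
    return 1.0 / (1.0 + size)

def benefit(size):
    """Toy net benefit of aggression as a function of body size."""
    return win_prob(size) * marginal_gain(size)

# Scan a grid of (arbitrary-unit) body sizes for the most-favored size
sizes = [i / 10.0 for i in range(1, 51)]
best_size = max(sizes, key=benefit)
print(best_size)  # → 1.0: the benefit of aggression peaks at mid size
```

With uniform costs across sizes, selection in this sketch favors the most aggression at the intermediate optimum, matching the paper's key prediction.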


    Directory of Open Access Journals (Sweden)

    I. V. Novash


    Full Text Available Methods for modelling power system modes and for testing relay protection devices are considered, both with real-time simulation complexes and with computer software systems that simulate a virtual time scale. When modelling in virtual model time, the input signals for the protection are obtained in a computational experiment, whereas tests of the protective devices are carried out with hardware and software test systems using the computed input signals. Studying power system stability under changing modes of generating and consuming electrical equipment, and under changing conditions of relay protection devices, requires testing with digital simulators in a closed-loop mode; the feedbacks between a model of the power system operating in real time and the external devices, or their models, must then be determined (modelled). Real-time modelling and international experience in the use of digital real-time simulators (RTDS) for power systems are analysed. Examples are given of the use of RTDS systems by foreign energy companies to test relay protection and control systems, to test equipment and automatic control devices, to analyse cyber security, and to evaluate the operation of energy systems under different emergency scenarios. Some quantitative data on the distribution of RTDS in different countries and in Russia are presented. It is noted that the leading energy universities of Russia use real-time simulation not only to solve scientific and technical problems, but also to conduct training and laboratory classes with students on the modelling of electric networks and anti-emergency automatic equipment. To check the serviceability of relay protection devices without taking into account the reaction of the power system, tests can be performed in an open-loop mode with the

  10. Assessing Statistical Aspects of Test Fairness with Structural Equation Modelling (United States)

    Kline, Rex B.


    Test fairness and test bias are not synonymous concepts. Test bias refers to statistical evidence that the psychometrics or interpretation of test scores depend on group membership, such as gender or race, when such differences are not expected. A test that is grossly biased may be judged to be unfair, but test fairness concerns the broader, more…

  11. The Thermal Phase Curve Offset on Tidally and Nontidally Locked Exoplanets: A Shallow Water Model (United States)

    Penn, James; Vallis, Geoffrey K.


    Using a shallow water model with time-dependent forcing, we show that the peak of an exoplanet thermal phase curve is, in general, offset from the secondary eclipse when the planet is rotating. That is, the planetary hot spot is offset from the point of maximal heating (the substellar point) and may lead or lag the forcing; the extent and sign of the offset are functions of both the rotation rate and orbital period of the planet. We also find that the system reaches a steady state in the reference frame of the moving forcing. The model is an extension of the well-studied Matsuno-Gill model into a full spherical geometry and with a planetary-scale translating forcing representing the insolation received on an exoplanet from a host star. The speed of the gravity waves in the model is shown to be a key metric in evaluating the phase curve offset. If the velocity of the substellar point (relative to the planet’s surface) exceeds that of the gravity waves, then the hot spot will lag the substellar point, as might be expected by consideration of forced gravity wave dynamics. However, when the substellar point is moving slower than the internal wave speed of the system, the hottest point may lead the passage of the forcing. We provide an interpretation of this result by consideration of the Rossby and Kelvin wave dynamics, as well as, in the very slowly rotating case, a one-dimensional model that yields an analytic solution. Finally, we consider the inverse problem of constraining planetary rotation rate from an observed phase curve.
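The key comparison in the paper, between the internal gravity wave speed of the shallow-water system and the speed of the substellar point over the surface, can be sketched numerically; the gravity, equivalent depth, radius, and periods below are illustrative placeholders, not values from the paper:

```python
import math

g_surf = 9.8     # assumed surface gravity [m s^-2]
h_eq = 4.0e4     # assumed equivalent depth of the shallow-water layer [m]
radius = 6.4e6   # assumed planetary radius [m]

# Internal gravity wave speed of the shallow-water system
c_grav = math.sqrt(g_surf * h_eq)

def substellar_speed(p_rot, p_orb):
    """Speed of the substellar point relative to the surface [m/s], for
    rotation and orbital periods in seconds. Equal periods (tidal
    locking) give zero: the substellar point is then fixed."""
    return abs(2.0 * math.pi * radius * (1.0 / p_rot - 1.0 / p_orb))

v_ss = substellar_speed(p_rot=10 * 3600, p_orb=100 * 3600)
print(v_ss > c_grav)  # True: forcing outruns the waves, so the hot spot lags
```

Reversing the inequality (a slowly moving substellar point) corresponds to the regime in which the paper finds the hot spot can lead the forcing.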

  12. PICASSO VISION instrument design, engineering model test results, and flight model development status (United States)

    Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Akujärvi, Altti; Saari, Heikki; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe


    PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT Technical Research Centre of Finland Ltd, Clyde Space Ltd. (UK) and Centre Spatial de Liège (BE). The test campaign for the engineering model of the PICASSO VISION instrument, a miniaturized nanosatellite spectral imager, has been successfully completed. The test results look very promising. The proto-flight model of VISION has also been successfully integrated and it is waiting for the final integration to the satellite platform.

  13. Design and testing of a model CELSS chamber robot (United States)

    Davis, Mark; Dezego, Shawn; Jones, Kinzy; Kewley, Christopher; Langlais, Mike; McCarthy, John; Penny, Damon; Bonner, Tom; Funderburke, C. Ashley; Hailey, Ruth


    A robot system for use in an enclosed environment was designed and tested. The conceptual design will be used to assist in research performed by the Controlled Ecological Life Support System (CELSS) project. Design specifications include maximum load capacity, operation at specified environmental conditions, low maintenance, and safety. The robot system must not be hazardous to the sealed environment, and must be capable of stowing and deploying within a minimum area of the CELSS chamber facility. This design consists of a telescoping robot arm that slides vertically on a shaft positioned in the center of the CELSS chamber. The telescoping robot arm consists of a series of links which can be fully extended to a length equal to the radius of the working envelope of the CELSS chamber. The vertical motion of the robot arm is achieved through the use of a combination ball screw/ball spline actuator system. The robot arm rotates cylindrically about the vertical axis through use of a turntable bearing attached to a central mounting structure fitted to the actuator shaft. The shaft is installed in an overhead rail system allowing the entire structure to be stowed and deployed within the CELSS chamber. The overhead rail system is located above the chamber's upper lamps and extends to the center of the CELSS chamber. The mounting interface of the actuator shaft and rail system allows the entire actuator shaft to be detached and removed from the CELSS chamber. When the actuator shaft is deployed, it is held fixed at the bottom of the chamber by placing a square knob on the bottom of the shaft into a recessed square fitting in the bottom of the chamber floor. A support boot ensures the rigidity of the shaft. Three student teams, combined into one group, designed a model of the CELSS chamber robot that they could build. They investigated material availability and strength in their design. After the model arm and stand were built, the class performed pre-tests on the entire system

  14. Testing Hardy-Weinberg disequilibrium using the generalized linear model. (United States)

    Xu, Shizhong


    Current methods for detecting Hardy-Weinberg disequilibrium (HWD) only deal with one locus at a time. We developed a method that can jointly detect HWD for multiple loci. The method was developed under the generalized linear model (GLM) using the probit link function. When applied to a single locus, the new method is more powerful than the exact test. When applied to two or more loci, the method can reduce false positives caused by linkage disequilibrium (LD). We applied the method to 24 single nucleotide polymorphism (SNP) markers of a single human gene and eliminated several false positive HWDs due to LD. We developed an R package 'hwdglm' for joint HWD detection, which can be downloaded from our personal website (

  15. Modeling Gravitational Waves to Test GR Dispersion and Polarization (United States)

    Tso, Rhondale; Chen, Yanbei; Isi, Maximilliano


    Given continued observation runs from the Laser Interferometer Gravitational-Wave Observatory Scientific Collaboration, further gravitational wave (GW) events will provide added constraints on beyond-general relativity (b-GR) theories. One approach, independent of the GW generation mechanism at the source, is to look at modification to the GW dispersion and propagation, which can accumulate over vast distances. Generic modification of GW propagation can also, in certain b-GR theories, impact the polarization content of GWs. To this end, a comprehensive approach to testing the dispersion and polarization content is developed by modeling anisotropic deformations to the waveforms' phase, along with birefringence effects and corollary consequences for b-GR polarizations, i.e., breathing, vector, and longitudinal modes. Such an approach can be mapped to specific theories like Lorentz violation, amplitude birefringence in Chern-Simons, and provide hints at additional theories to be included. An overview of data analysis routines to be implemented will also be discussed.

  16. Sound preference test in animal models of addicts and phobias. (United States)

    Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu


    Biased or excessively strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have routinely been used, but such preference tests are far from the daily situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate sound preference in rodents. In the assay, several sounds were presented according to the position of freely moving rats, and sound preference was quantified from their behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behavior of the rats was logged over six days of classical conditioning. Several behavioral indices suggested that the rats searched for the conditioned sound. Thus, our data demonstrate that quantitative evaluation of preference in this behavioral assay is feasible.

  17. Pulsar Braking Index: A Test of Emission Models? (United States)

    Xu, R. X.; Qiao, G. J.


    Pulsar braking torques due to magnetodipole radiation and the unipolar generator are considered, which results in a braking index n of less than 3 and could be employed to test the emission models. Improved equations for the pulsar braking index and magnetic field are presented, which are true if the rotation energy-loss rate equals the sum of the energy-loss rate of dipole radiation and of relativistic particles powered by a unipolar generator. The magnetic field calculated conventionally could be good enough, but only if it were modified by a factor of at most ~0.6. Both inner and outer gaps may coexist in the magnetosphere of the Vela pulsar.
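The braking index itself is n = ν ν̈ / ν̇², built from the spin frequency and its first two time derivatives. A quick sanity check (with illustrative numbers) confirms that pure magnetodipole spin-down, ν̇ = −Kν³, gives exactly n = 3, so any additional unipolar-generator (particle wind) torque pushes n below 3:

```python
def braking_index(nu, nu_dot, nu_ddot):
    """Braking index n = nu * nu_ddot / nu_dot**2 from the spin
    frequency and its first two time derivatives."""
    return nu * nu_ddot / nu_dot ** 2

# Pure magnetodipole spin-down, nu_dot = -K * nu**3 (illustrative K and nu):
K = 1.0e-15
nu = 10.0
nu_dot = -K * nu ** 3
nu_ddot = -3.0 * K * nu ** 2 * nu_dot  # chain rule applied to nu_dot(nu)
print(braking_index(nu, nu_dot, nu_ddot))
```

The printed value is 3 (up to floating-point rounding), independent of K and ν; measured indices below 3, as for the Vela pulsar, therefore point to an extra energy-loss channel.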

  18. Ares I Scale Model Acoustic Test Overpressure Results (United States)

    Casiano, M. J.; Alvord, D. A.; McDaniels, D. M.


    A summary of the overpressure environment from the 5% Ares I Scale Model Acoustic Test (ASMAT) and the implications to the full-scale Ares I are presented in this Technical Memorandum. These include the scaled environment that would be used for assessing the full-scale Ares I configuration, observations, and team recommendations. The ignition transient is first characterized and described, the overpressure suppression system configuration is then examined, and the final environment characteristics are detailed. The recommendation for Ares I is to keep the space shuttle heritage ignition overpressure (IOP) suppression system (below-deck IOP water in the launch mount and mobile launcher and also the crest water on the main flame deflector) and the water bags.

  19. Model of a nuclear thermal test pipe using ATHENA

    Energy Technology Data Exchange (ETDEWEB)

    Dibben, Mark J. [Air Force Inst. of Technology, Wright-Patterson AFB (United States)


    Nuclear thermal propulsion offers significant improvements in rocket engine specific impulse over rockets employing chemical propulsion. The computer code ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) was used in a parametric analysis of a fuelpipe. The fuelpipe is an annular particle bed fuel element of the reactor with radially inward flow of hydrogen through it. The outlet temperature of the hydrogen is parametrically related to key effects, including the effect of reactor power at two different pressure drops, the effect of the power coupling factor of the Annular Core Research Reactor, and the effect of hydrogen flow. Results show that the outlet temperature is linearly related to the reactor power and nonlinearly to the change in pressure drop. The linear relationship at higher temperatures is probably not valid due to dissociation of hydrogen. Once thermal properties of hydrogen become available, the ATHENA model for this study could easily be modified to test this conjecture.

  20. Setup of IN VIVO Breast Cancer Models for Nanodrug Testing

    DEFF Research Database (Denmark)

    Schifter, Søren


    RNA/aptamer conjugates, or carriers such as liposome/chitosan/micelle spheres. As a first step towards testing of the efficacy of siRNA delivery in vivo via different conjugates and complexes, we aimed at developing a standardized breast cancer model system in mice. In this conception, a reporter gene is used...... for detection of the primary tumor and metastasis and the efficacy of siRNA delivery is measured by reporter gene-targeting siRNAs and in vivo imaging. The use of a uniform siRNA not affecting cellular processes would allow for standardized assessment of siRNA delivery to cancer cells without interferences via...... differential knockdown efficacies and the readout can directly be performed by quantitative imaging using a Caliper IVIS system. In one line of experiments, we engineered non-metastatic MCF-7 breast cancer cells to express the luminescent reporter firefly luciferase (Luc2) along with a pro-metastatic micro...

  1. Modelling and Testing of Blast Effect On the Structures (United States)

    Figuli, Lucia; Jangl, Štefan; Papán, Daniel


    As a blasting agent in blasting and mining engineering, one of the so-called new generation of explosives, which offer greater flexibility in range and application, has been in use: ANFO. It is a type of explosive consisting of an oxidiser and a fuel (ammonium nitrate and fuel oil). One such ANFO explosive produced industrially in Slovakia is POLONIT, a mixture of ammonium nitrate, methyl esters of higher fatty acids, vegetable oil and red dye. The paper deals with the analysis of a structure subjected to the blast load created by the explosion of a POLONIT charge. The first part of the paper describes the behaviour and characteristics of the blast wave generated by the explosion (detonation characteristics, physical characteristics, time-history diagram, etc.); the second part presents the behaviour of the loaded structures, since the analysis of such a dynamically loaded structure requires knowledge of the blast wave parameters, its effect on the structure, and the tools for dynamic analysis. Real field tests with three different charge weights and two different structures were carried out. The explosive POLONIT was used together with 25 g of the ignition explosive PLNp10. Analytical and numerical models of the blast-loaded structure are compared with the results obtained from the field tests (i.e., with the corresponding experimental accelerations). For the modelling, the structures were approximated as single-degree-of-freedom (SDOF) systems, and the blast wave was estimated with linear decay and with exponential decay using the positive and negative phases of the blast wave. A numerical solution of the steel beam dynamic response was performed via the Finite Element Method (FEM) using the standard software Visual FEA.
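A blast overpressure history with both a positive and a negative (suction) phase, of the kind used to load the SDOF models here, is commonly written as a modified Friedlander wave. The sketch below uses illustrative parameters, not the POLONIT field-test values:

```python
import math

def friedlander(t, p_peak, t_pos, decay_b):
    """Modified Friedlander overpressure at time t after wave arrival.
    p_peak: peak side-on overpressure, t_pos: positive-phase duration,
    decay_b: dimensionless decay coefficient. Times past t_pos give the
    negative (suction) phase automatically."""
    if t < 0:
        return 0.0
    return p_peak * (1.0 - t / t_pos) * math.exp(-decay_b * t / t_pos)

# Illustrative 100 kPa peak, 10 ms positive phase, sampled each millisecond:
history = [friedlander(ms / 1000.0, p_peak=100.0, t_pos=0.010, decay_b=1.5)
           for ms in range(0, 31)]
print(history[0], min(history) < 0.0)  # 100.0 True (negative phase present)
```

The linear-decay idealization mentioned in the abstract corresponds to dropping the exponential factor and truncating at the end of the positive phase.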



    Blank, Steven C.


    The single index model (SIM), developed for analysis of financial assets, is assessed as a tool for evaluating the risk-return tradeoff faced in agricultural enterprise selection. This study tests whether some of the hypotheses underlying the SIM are valid when the SIM is used in agricultural cropping decisions. Empirical evidence from county level data does not support SIM hypotheses, indicating that more robust results might come from multiple index models.
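The SIM regression at the heart of the study, r_i = α + β r_I + ε, reduces for β to a covariance ratio. A minimal sketch follows; the returns are illustrative, not data from the county-level analysis:

```python
def sim_beta(asset_returns, index_returns):
    """OLS slope of the single index model: cov(r_i, r_I) / var(r_I)."""
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_i = sum(index_returns) / n
    cov = sum((a - mean_a) * (i - mean_i)
              for a, i in zip(asset_returns, index_returns)) / n
    var = sum((i - mean_i) ** 2 for i in index_returns) / n
    return cov / var

# Illustrative enterprise returns, constructed as exactly twice the index:
index_r = [0.02, -0.01, 0.03, 0.00, 0.01]
crop_r = [0.04, -0.02, 0.06, 0.00, 0.02]
print(round(sim_beta(crop_r, index_r), 6))  # → 2.0
```

The hypotheses the study tests amount to asking whether a single β against one index adequately summarizes each enterprise's systematic risk; the county-level evidence suggests it does not, motivating multiple index models.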

  3. Combining test statistics and models in bootstrapped model rejection: it is a balancing act. (United States)

    Johansson, Rikard; Strålfors, Peter; Cedersund, Gunnar


    Model rejections lie at the heart of systems biology, since they provide conclusive statements: that the corresponding mechanistic assumptions do not serve as valid explanations for the experimental data. Rejections are usually done using e.g. the chi-square test (χ2) or the Durbin-Watson test (DW). Analytical formulas for the corresponding distributions rely on assumptions that typically are not fulfilled. This problem is partly alleviated by the usage of bootstrapping, a computationally heavy approach to calculate an empirical distribution. Bootstrapping also allows for a natural extension to estimation of joint distributions, but this feature has so far been little exploited. We herein show that simplistic combinations of bootstrapped tests, like the max or min of the individual p-values, give inconsistent, i.e. overly conservative or liberal, results. A new two-dimensional (2D) approach based on parametric bootstrapping, on the other hand, is found both consistent and with a higher power than the individual tests, when tested on static and dynamic examples where the truth is known. In the same examples, the most superior test is a 2D χ2 vs χ2, where the second χ2-value comes from an additional help model, and its ability to describe bootstraps from the tested model. This superiority is lost if the help model is too simple, or too flexible. If a useful help model is found, the most powerful approach is the bootstrapped log-likelihood ratio (LHR). We show that this is because the LHR is one-dimensional, because the second dimension comes at a cost, and because LHR has retained most of the crucial information in the 2D distribution. These approaches statistically resolve a previously published rejection example for the first time. We have shown how to, and how not to, combine tests in a bootstrap setting, when the combination is advantageous, and when it is advantageous to include a second model. These results also provide a deeper insight into the original
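The parametric bootstrap underlying these tests can be sketched in miniature: simulate datasets from the assumed model, recompute the test statistic on each, and read the p-value off the empirical distribution. The toy statistic below is a simple sum of squared standardized residuals against a fixed model, not the authors' full χ2/DW machinery:

```python
import random

def chi2_like(data, mu, sigma):
    """Toy test statistic: sum of squared standardized residuals."""
    return sum(((x - mu) / sigma) ** 2 for x in data)

def bootstrap_p_value(data, mu, sigma, n_boot=2000, seed=1):
    """Parametric bootstrap: draw n_boot datasets from the assumed model
    (here Gaussian with known mu, sigma), build the empirical
    distribution of the statistic, and return the fraction of bootstrap
    statistics at least as extreme as the observed one."""
    rng = random.Random(seed)
    t_obs = chi2_like(data, mu, sigma)
    t_boot = (chi2_like([rng.gauss(mu, sigma) for _ in data], mu, sigma)
              for _ in range(n_boot))
    return sum(t >= t_obs for t in t_boot) / n_boot

data = [0.1, -0.2, 0.05, 0.3, -0.1]
p_val = bootstrap_p_value(data, mu=0.0, sigma=0.2)
print(0.0 <= p_val <= 1.0)  # True
```

The paper's 2D extension amounts to bootstrapping two such statistics jointly (e.g. a χ2 for the tested model and one for a help model) and rejecting based on the joint empirical distribution rather than naively combining the two marginal p-values.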

  4. Testing the tidal alignment model of galaxy intrinsic alignment

    Energy Technology Data Exchange (ETDEWEB)

    Blazek, Jonathan; Seljak, Uroš [Department of Physics and Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720 (United States); McQuinn, Matthew, E-mail:, E-mail:, E-mail: [Department of Astronomy, University of California, Berkeley, CA 94720 (United States)


    Weak gravitational lensing has become a powerful probe of large-scale structure and cosmological parameters. Precision weak lensing measurements require an understanding of the intrinsic alignment of galaxy ellipticities, which can in turn inform models of galaxy formation. It is hypothesized that elliptical galaxies align with the background tidal field and that this alignment mechanism dominates the correlation between ellipticities on cosmological scales (in the absence of lensing). We use recent large-scale structure measurements from the Sloan Digital Sky Survey to test this picture with several statistics: (1) the correlation between ellipticity and galaxy overdensity, w{sub g+}; (2) the intrinsic alignment auto-correlation functions; (3) the correlation functions of curl-free, E, and divergence-free, B, modes, the latter of which is zero in the linear tidal alignment theory; (4) the alignment correlation function, w{sub g}(r{sub p},θ), a recently developed statistic that generalizes the galaxy correlation function to account for the angle between the galaxy separation vector and the principal axis of ellipticity. We show that recent measurements are largely consistent with the tidal alignment model and discuss dependence on galaxy luminosity. In addition, we show that at linear order the tidal alignment model predicts that the angular dependence of w{sub g}(r{sub p},θ) is simply w{sub g+}(r{sub p})cos(2θ) and that this dependence is consistent with recent measurements. We also study how stochastic nonlinear contributions to galaxy ellipticity impact these statistics. We find that a significant fraction of the observed LRG ellipticity can be explained by alignment with the tidal field on scales ≳ 10 h{sup -1} Mpc. These considerations are relevant to galaxy formation and evolution.
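The quoted linear-order angular dependence can be sketched directly; the amplitude below is an illustrative placeholder, not a measured w_g+:

```python
import math

def angular_modulation(w_gplus, theta):
    """Linear-order angular dependence of the alignment correlation
    function: w_gplus(r_p) * cos(2 * theta)."""
    return w_gplus * math.cos(2.0 * theta)

# Pairs separated along the ellipticity principal axis (theta = 0) show the
# maximal excess; perpendicular pairs (theta = pi/2) show the full deficit.
print(angular_modulation(1.0, 0.0), angular_modulation(1.0, math.pi / 2.0))
```

The cos(2θ) form reflects the spin-2 symmetry of ellipticity: the statistic is unchanged under a rotation of the separation vector by 180 degrees.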

  5. Testing Multidimensional Models of Youth Civic Engagement: Model Comparisons, Measurement Invariance, and Age Differences (United States)

    Wray-Lake, Laura; Metzger, Aaron; Syvertsen, Amy K.


    Despite recognition that youth civic engagement is multidimensional, different modeling approaches are rarely compared or tested for measurement invariance. Using a diverse sample of 2,467 elementary, middle, and high school-aged youth, we measured eight dimensions of civic engagement: social responsibility values, informal helping, political…

  6. A framework for testing the ability of models to project climate change and its impacts

    DEFF Research Database (Denmark)

    Refsgaard, J. C.; Madsen, H.; Andréassian, V.


    a validation framework and guiding principles applicable across earth science disciplines for testing the capability of models to project future climate change and its impacts. Model test schemes comprising split-sample tests, differential split-sample tests and proxy site tests are discussed in relation...... in order to build further confidence in model projections.......Models used for climate change impact projections are typically not tested for simulation beyond current climate conditions. Since we have no data truly reflecting future conditions, a key challenge in this respect is to rigorously test models using proxies of future conditions. This paper presents...

  7. Modeling Change in Effort across a Low-Stakes Testing Session: A Latent Growth Curve Modeling Approach (United States)

    Barry, Carol L.; Finney, Sara J.


    We examined change in test-taking effort over the course of a three-hour, five test, low-stakes testing session. Latent growth modeling results indicated that change in test-taking effort was well-represented by a piecewise growth form, wherein effort increased from test 1 to test 4 and then decreased from test 4 to test 5. There was significant…

  8. Chinese College Test Takers' Individual Differences and Reading Test Performance: A Structural Equation Modeling Approach. (United States)

    Zhang, Limei


    This study reports on the relationships between test takers' individual differences and their performance on a reading comprehension test. A total of 518 Chinese college students (252 women and 256 men; M age = 19.26 years, SD = 0.98) answered a questionnaire and sat for a reading comprehension test. The study found that test takers' L2 language proficiency was closely linked to their test performance. Test takers' employment of strategies was significantly and positively associated with their performance on the test. Test takers' motivation was found to be significantly associated with reading test performance. Test anxiety was negatively related to their use of reading strategies and test performance. The results of the study lent support to the threshold hypothesis of language proficiency. Implications for classroom teaching are provided. © The Author(s) 2016.

  9. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence (United States)

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.


    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions…

  10. Testing process predictions of models of risky choice: a quantitative model comparison approach. (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard


    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
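
    The specific heuristic model tested, the priority heuristic, can be sketched for simple two-outcome gain gambles. The fixed reason order and the aspiration levels (one tenth of the maximal gain; 0.1 on the probability scale) follow the published description of Brandstätter et al. (2006), but this is a simplified illustration, not the authors' implementation:

```python
def priority_heuristic(gamble_a, gamble_b):
    """Priority heuristic for two-outcome gain gambles, each given as
    (min_gain, p_min, max_gain). Reasons are inspected in a fixed order
    (minimum gain, probability of the minimum, maximum gain) and search
    stops at the first reason exceeding the aspiration level.
    Returns 'A' or 'B'."""
    min_a, p_a, max_a = gamble_a
    min_b, p_b, max_b = gamble_b
    aspiration = 0.1 * max(max_a, max_b)   # one tenth of the maximal gain
    if abs(min_a - min_b) >= aspiration:   # reason 1: minimum gains
        return 'A' if min_a > min_b else 'B'
    if abs(p_a - p_b) >= 0.1:              # reason 2: probability of minimum
        return 'A' if p_a < p_b else 'B'
    return 'A' if max_a > max_b else 'B'   # reason 3: maximum gains

# A: 4000 with p=.8 else 0;  B: 3000 for sure.
# Minima differ by 3000 >= 400, so search stops at reason 1 and B is chosen.
print(priority_heuristic((0, 0.2, 4000), (3000, 1.0, 3000)))  # -> B
```

    Because search stops at the first decisive reason, the model predicts limited, reason-wise information acquisition, which is exactly the process signature the experiments above examine.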

  11. Biopsychosocial determinants of men's sexual desire: testing an integrative model. (United States)

    Carvalho, Joana; Nobre, Pedro


    There is a severe lack of studies on male sexual desire and its biopsychosocial determinants. Most of the studies are focused on female sexual interest and are based on the contribution of single dimensions instead of the interaction between them. The aim of the present study was to test a conceptual model considering the interrelated role of biopsychosocial factors on male sexual desire. This model allowed us to test not only the unique impact of predictors that are traditionally related to sexual desire, but also how their interaction affects sexual desire in men. Two hundred and thirty-seven men from the general population were assessed according to age (mean age = 35, standard deviation = 11), medical problems, psychopathology, dyadic adjustment, and cognitive-emotional factors. Psychopathology was measured by the Brief Symptom Inventory, dysfunctional sexual beliefs by the Sexual Dysfunctional Beliefs Questionnaire, thoughts and emotions in sexual context by the Sexual Modes Questionnaire, dyadic adjustment by the Dyadic Adjustment Scale, medical condition by the Medical History Formulation, and sexual desire by the Sexual Desire subscale of the International Index of Erectile Function. Results showed that cognitive factors (sexual beliefs and automatic thoughts during sexual activity) were the best predictors of sexual desire in men. Specifically, beliefs related to restrictive attitudes toward sexuality, erection concerns, and lack of erotic thoughts in sexual context had a significant direct effect on reduced sexual desire. Moreover, this set of cognitive-emotional factors also mediated the relationship between medical problems, age, and sexual desire. Results from this integrative approach highlighted the role of cognitive factors related to cultural values (dysfunctional sexual beliefs) and distraction mechanisms during sexual context (automatic thoughts) in male sexual interest. Findings support the need to

  12. Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions (United States)

    Sessoms, John; Finney, Sara J.


    Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…

  13. The Testing of Geomagnetic Reversal Models: Recent Developments (United States)

    Hoffman, K. A.


    The number of available palaeomagnetic records displaying detailed transitional field behaviour has increased significantly over the past few years. The expanded data set of transitions now includes records of sequential reversals from the same site locality as well as multiple recordings of a particular reversal from widely separated sites. Such data are most useful with regard to the testing of geomagnetic reversal models. First, records of successive reversals may make it possible to distinguish whether transitional fields originate primarily with the configurational characteristics of the reversing geodynamo or whether a non-reversing portion of the field is responsible. Such an analysis does not yet provide conclusive evidence in support of either hypothesis. Second, multiple records from distant sites furnish the best possible data with regard to the determination of the harmonic content of particular transitional fields. In this regard, two quite independent, testable models have been applied to the several recordings of the Matuyama-Brunhes transition. Findings support the hypothesis that the intermediate field geometry during this particular polarity transition was indeed controlled by non-dipole zonal harmonic terms. Moreover, analysis of the paths of the virtual geomagnetic pole associated with the available records strongly suggests that the most extreme dominance of transitional fields by axisymmetric components occurs during the onset of the reversal, a finding that now has support on purely theoretical grounds. Finally, field behaviour associated with existing igneous-recorded palaeomagnetic excursions is not unlike that observed at the onset of field reversals. Hence, there is growing evidence in support of the hypothesis that attempts by the geodynamo to reverse are not always successful. These recordings of apparent abortive reversals may be of considerable value with regard to our understanding of transitional fields and geomagnetic reversal.

  14. Using ISOS consensus test protocols for development of quantitative life test models in ageing of organic solar cells

    DEFF Research Database (Denmark)

    Kettle, J.; Stoichkov, V.; Kumar, D.


    As Organic Photovoltaic (OPV) development matures, the demand grows for rapid characterisation of degradation and application of Quantitative Accelerated Life Test (QALT) models to predict and improve reliability. To date, most accelerated testing on OPVs has been conducted using ISOS consensus standards. This paper identifies some of the problems in using and interpreting the results for predicting ageing based upon ISOS consensus standard test data. Design of Experiments (DOE) in conjunction with data from ISOS consensus standards are used as the basis for developing life test models for OPV modules. This is used to study their temperature-humidity and light-induced degradation, which enables failure rates during accelerated testing to be assessed against the typical outdoor operational conditions. The life test models are used to assess the relative severity of the ISOS standards…

  15. Estimating the parameters of a structural model for the latent traits in Rasch's model for speed tests

    NARCIS (Netherlands)

    Jansen, MGH; Jansen, G.G.H.

    This article considers the multiplicative gamma model for reading speed originally proposed by Rasch (1960/1980) in his monograph Probabilistic Models for Some Intelligence and Attainment Tests. The model can be viewed as a latent trait model for a set of pure speed tests with known length,

  16. Experiments and Modeling to Support Field Test Design

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    Disposition of heat-generating nuclear waste (HGNW) remains a continuing technical and sociopolitical challenge. We define HGNW as the combination of both heat generating defense high level waste (DHLW) and civilian spent nuclear fuel (SNF). Numerous concepts for HGNW management have been proposed and examined internationally, including an extensive focus on geologic disposal (c.f. Brunnengräber et al., 2013). One type of proposed geologic material is salt, so chosen because of its viscoplastic deformation that causes self-repair of damage or deformation induced in the salt by waste emplacement activities (Hansen and Leigh, 2011). Salt as a repository material has been tested at several sites around the world, notably the Morsleben facility in Germany (c.f. Fahland and Heusermann, 2013; Wollrath et al., 2014; Fahland et al., 2015) and at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, NM. Evaluating the technical feasibility of a HGNW repository in salt is an ongoing process involving experiments and numerical modeling of many processes at many facilities.

  17. Can atom-surface potential measurements test atomic structure models? (United States)

    Lonij, Vincent P A; Klauss, Catherine E; Holmgren, William F; Cronin, Alexander D


    van der Waals (vdW) atom-surface potentials can be excellent benchmarks for atomic structure calculations. This is especially true if measurements are made with two different types of atoms interacting with the same surface sample. Here we show theoretically how ratios of vdW potential strengths (e.g., C₃(K)/C₃(Na)) depend sensitively on the properties of each atom, yet these ratios are relatively insensitive to properties of the surface. We discuss how C₃ ratios depend on atomic core electrons by using a two-oscillator model to represent the contribution from atomic valence electrons and core electrons separately. We explain why certain pairs of atoms are preferable to study for future experimental tests of atomic structure calculations. A well chosen pair of atoms (e.g., K and Na) will have a C₃ ratio that is insensitive to the permittivity of the surface, whereas a poorly chosen pair (e.g., K and He) will have a ratio of C₃ values that depends more strongly on the permittivity of the surface.

  18. Caries risk assessment in school children using a reduced Cariogram model without saliva tests

    DEFF Research Database (Denmark)

    Petersson, Gunnel Hänsel; Isberg, Per-Erik; Twetman, Svante


    To investigate the caries predictive ability of a reduced Cariogram model without salivary tests in schoolchildren.

  19. Testing for parameter instability across different modeling frameworks

    NARCIS (Netherlands)

    Calvori, Francesco; Creal, Drew; Koopman, Siem Jan; Lucas, André


    We develop a new parameter instability test that generalizes the seminal ARCH Lagrange Multiplier test of Engle (1982) for a constant variance against the alternative of autoregressive conditional heteroskedasticity to settings with nonlinear time-varying parameters and non-Gaussian distributions. We…

  20. Testing hypotheses involving Cronbach's alpha using marginal models

    NARCIS (Netherlands)

    Kuijpers, R.E.; van der Ark, L.A.; Croon, M.A.


    We discuss the statistical testing of three relevant hypotheses involving Cronbach's alpha: one where alpha equals a particular criterion; a second testing the equality of two alpha coefficients for independent samples; and a third testing the equality of two alpha coefficients for dependent
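
    For reference, the coefficient whose hypotheses are being tested can be computed directly from an (n persons × k items) score matrix; the sketch below shows alpha itself, not the marginal-model test machinery of the paper:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score), computed from an (n_persons x k_items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data: three positively correlated items, five respondents
scores = [[2, 3, 2], [4, 4, 5], [3, 3, 3], [5, 4, 4], [1, 2, 1]]
alpha = cronbach_alpha(scores)
print(round(alpha, 4))  # -> 0.9375
```

    The first hypothesis in the abstract (alpha equals a criterion) would then compare a value like this against, say, 0.7, with the marginal-model framework supplying the sampling distribution.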

  1. A Human Capital Model of Educational Test Scores

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    measure of pure cognitive ability. We find that variables which are not closely associated with traditional notions of intelligence explain a significant proportion of the variation in test scores. This adds to the complexity of interpreting test scores and suggests that school culture, attitudes......, and possible incentive problems make it more difficult to elicit true values of what the tests measure....

  2. Testing the Conditional Mean Function of Autoregressive Conditional Duration Models

    DEFF Research Database (Denmark)

    Hautsch, Nikolaus

    This paper proposes a dynamic proportional hazard (PH) model with non-specified baseline hazard for the modelling of autoregressive duration processes. A categorization of the durations allows us to reformulate the PH model as an ordered response model based on extreme value distributed errors. I...

  3. Ares I Scale Model Acoustic Test Liftoff Acoustic Results and Comparisons (United States)

    Counter, Doug; Houston, Janice


    Conclusions: Ares I-X flight data validated the ASMAT LOA results. Ares I liftoff acoustic environments were verified with scale model test results. Results showed that data book environments were under-conservative for the frustum (Zone 5). Recommendations: Data book environments can be updated with scale model test and flight data. Subscale acoustic model testing is useful for future vehicle environment assessments.

  4. Test of the classic model for predicting endurance running performance. (United States)

    McLaughlin, James E; Howley, Edward T; Bassett, David R; Thompson, Dixie L; Fitzhugh, Eugene C


    To compare the classic physiological variables linked to endurance performance (VO2max, %VO2max at lactate threshold (LT), and running economy (RE)) with peak treadmill velocity (PTV) as predictors of performance in a 16-km time trial. Seventeen healthy, well-trained distance runners (10 males and 7 females) underwent laboratory testing to determine maximal oxygen uptake (VO2max), RE, percentage of maximal oxygen uptake at the LT (%VO2max at LT), running velocity at LT, and PTV. Velocity at VO2max (vVO2max) was calculated from RE and VO2max. Three stepwise regression models were used to determine the best predictors (classic vs treadmill performance protocols) for the 16-km running time trial. Simple Pearson correlations of the variables with 16-km performance showed vVO2max to have the highest correlation (r = -0.972) and %VO2max at the LT the lowest (r = 0.136). The correlation coefficients for LT, VO2max, and PTV were very similar in magnitude (r = -0.903 to r = -0.892). When VO2max, %VO2max at LT, RE, and PTV were entered into SPSS stepwise analysis, VO2max explained 81.3% of the total variance, and RE accounted for an additional 10.7%. vVO2max was shown to be the best predictor of the 16-km performance, accounting for 94.4% of the total variance. The measured velocity at VO2max (PTV) was highly correlated with the estimated velocity at vVO2max (r = 0.8867). Among well-trained subjects heterogeneous in VO2max and running performance, vVO2max is the best predictor of running performance because it integrates both maximal aerobic power and the economy of running. The PTV is linked to the same physiological variables that determine vVO2max.
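
    The derived quantity vVO2max combines the two measurements named in the abstract: dividing maximal oxygen uptake by the oxygen cost of running gives the speed at which aerobic demand equals VO2max. The unit convention below is a common one and an assumption here; the study's exact calculation may differ (e.g. it may subtract a resting-metabolism term):

```python
def v_vo2max_kmh(vo2max_ml_kg_min, re_ml_kg_km):
    """Velocity at VO2max: the running speed at which the oxygen demand
    implied by running economy equals maximal oxygen uptake.
    vo2max in ml/kg/min, running economy as oxygen cost in ml/kg/km.
    Returns speed in km/h."""
    km_per_min = vo2max_ml_kg_min / re_ml_kg_km
    return km_per_min * 60.0

# e.g. VO2max = 60 ml/kg/min and an economy of 200 ml/kg/km -> 18 km/h
print(round(v_vo2max_kmh(60.0, 200.0), 3))  # -> 18.0
```

    This is why vVO2max predicts performance so well in the study: it folds both maximal aerobic power and running economy into a single velocity.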

  5. Development of wheelchair caster testing equipment and preliminary testing of caster models (United States)

    Mhatre, Anand; Ott, Joseph


    Background Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives To develop and demonstrate the feasibility of a caster system test method. Method Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion The new caster test system has the capability to incorporate necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that results in failure modes common during wheelchair use in LRE. PMID:29062762

  6. Development of wheelchair caster testing equipment and preliminary testing of caster models

    Directory of Open Access Journals (Sweden)

    Anand Mhatre


    Background: Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives: To develop and demonstrate the feasibility of a caster system test method. Method: Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results: The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion: The new caster test system has the capability to incorporate necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that results in failure modes common during wheelchair use in LRE.

  7. Rational GARCH model: An empirical test for stock returns (United States)

    Takaishi, Tetsuya


    We propose a new ARCH-type model that uses a rational function to capture the asymmetric response of volatility to returns, known as the "leverage effect". Using 10 individual stocks on the Tokyo Stock Exchange and two stock indices, we compare the new model with several other asymmetric ARCH-type models. We find that according to the deviance information criterion, the new model ranks first for several stocks. Results show that the proposed new model can be used as an alternative asymmetric ARCH-type model in empirical applications.
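
    The paper's rational functional form is not reproduced in the abstract, so as a generic comparator the sketch below implements one standard asymmetric ARCH-type recursion (GJR-GARCH), in which negative lagged returns raise volatility more than positive ones; all parameter values are arbitrary:

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """One common asymmetric ARCH-type recursion (GJR-GARCH), shown only
    as a comparator for the leverage effect; the Rational GARCH model of
    the paper replaces the news-impact term with a rational function.
    sigma2_t = omega + (alpha + gamma*1[r_{t-1}<0])*r_{t-1}^2 + beta*sigma2_{t-1}
    """
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # simple initialisation with sample variance
    for t in range(1, len(returns)):
        leverage = gamma if returns[t - 1] < 0.0 else 0.0
        sigma2[t] = (omega + (alpha + leverage) * returns[t - 1] ** 2
                     + beta * sigma2[t - 1])
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)
s2 = gjr_garch_variance(r, omega=0.05, alpha=0.05, gamma=0.1, beta=0.85)
print(bool(np.all(s2 > 0)))  # positive parameters keep the variance positive
```

    Model comparisons like the one in the abstract typically fit several such recursions to the same return series and rank them by an information criterion.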

  8. The GAPS Programme with HARPS-N at TNG. XV. A substellar companion around a K giant star identified with quasi-simultaneous HARPS-N and GIANO measurements (United States)

    González-Álvarez, E.; Affer, L.; Micela, G.; Maldonado, J.; Carleo, I.; Damasso, M.; D'Orazi, V.; Lanza, A. F.; Biazzo, K.; Poretti, E.; Gratton, R.; Sozzetti, A.; Desidera, S.; Sanna, N.; Harutyunyan, A.; Massi, F.; Oliva, E.; Claudi, R.; Cosentino, R.; Covino, E.; Maggio, A.; Masiero, S.; Molinari, E.; Pagano, I.; Piotto, G.; Smareglia, R.; Benatti, S.; Bonomo, A. S.; Borsa, F.; Esposito, M.; Giacobbe, P.; Malavolta, L.; Martinez-Fiorenzano, A.; Nascimbeni, V.; Pedani, M.; Rainer, M.; Scandariato, G.


    Context. Identification of planetary companions of giant stars is made difficult by astrophysical noise, which may produce radial velocity variations similar to those induced by a companion. On the other hand, any stellar signal is wavelength dependent, while signals due to a companion are achromatic. Aims: Our goal is to determine the origin of the periodic Doppler variations observed in the thick-disk K giant star TYC 4282-605-1 by HARPS-N at the Telescopio Nazionale Galileo (TNG) and verify whether they can be due to the presence of a substellar companion. Methods: Several methods were used to exclude a stellar origin of the observed signal, including detailed analysis of activity indicators and the line bisector, and analysis of the photometric light curve. Finally, we conducted an observational campaign to monitor the near-infrared (NIR) radial velocity with GIANO at the TNG in order to verify whether the NIR amplitude variations are comparable with those observed in the visible. Results: Both optical and NIR radial velocities show consistent variations with a period of 101 days and similar amplitude, pointing to the presence of a companion orbiting the target. The main orbital properties obtained for our giant star, with a derived mass of M = 0.97 ± 0.03M⊙, are MPsini = 10.78 ± 0.12MJ; P = 101.54 ± 0.05 days; e = 0.28 ± 0.01 and a = 0.422 ± 0.009 AU. The chemical analysis shows a significant enrichment in the abundance of Na I, Mg I, Al I and Si I, while the remaining analyzed elements are consistent with solar values, demonstrating that the chemical composition corresponds to an old K giant (age = 10.1 Gyr) belonging to the local thick disk. Conclusions: We conclude that the substellar companion hypothesis for this K giant is the best explanation for the observed periodic radial velocity variation. This study also shows the high potential of multi-wavelength radial velocity observations for the validation of planet candidates. Based on

  9. Wave Star - Scale 1:40 model test, test report 2; Wave Star - Skala 1:40 modelforsoeg, forsoegsrapport 2

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, M.; Lykke Andersen, Thomas


    This report describes model tests with the wave energy converter Wave Star carried out at Aalborg University. This report follows earlier reports presenting numerical calculations. The objective of the tests presented in this report is to determine and optimize the Wave Star concept's power uptake for different physical configurations of the converter. (BA)

  10. Vibratory gyroscopes : identification of mathematical model from test data

    CSIR Research Space (South Africa)

    Shatalov, MY


    A simple mathematical model of vibratory gyroscope imperfections is formulated, which includes anisotropic damping and variation of mass-stiffness parameters and their harmonics. The method of identification of the parameters of the mathematical model...

  11. Models of little Higgs and electroweak precision tests

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Mu-Chun; /Fermilab


    The little Higgs idea is an alternative to supersymmetry as a solution to the gauge hierarchy problem. In this note, the author reviews various little Higgs models and their phenomenology with emphasis on the precision electroweak constraints in these models.

  12. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne


    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls...

  13. Computational Modeling in Support of High Altitude Testing Facilities Project (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in rocket engine test facility design and development by assessing risks, identifying failure modes and predicting...

  14. Computational Modeling in Support of High Altitude Testing Facilities Project (United States)

    National Aeronautics and Space Administration — Simulation technology plays an important role in propulsion test facility design and development by assessing risks, identifying failure modes and predicting...

  15. Supersymmetric $\\sigma$-models on toric varieties a test case

    CERN Document Server

    Bourdeau, M


    In this letter we study supersymmetric sigma models on toric varieties. These manifolds are generalizations of CP^n manifolds. We examine here sigma models, viewed as gauged linear sigma models, on one of the simplest such manifolds, the blow-up of P^2_(2,1,1), and determine their properties using the techniques of topological-antitopological fusion. We find that the model contains solitons which become massless at the singular point of the theory where a gauge symmetry remains unbroken.

  16. A generic testing framework for agent-based simulation models


    Gürcan, Önder; Dikenelli, Oguz; Bernon, Carole


    Agent-based modelling and simulation (ABMS) has received increasing attention during the last decade. However, the weak validation and verification of agent-based simulation models makes ABMS hard to trust. There is no comprehensive tool set for verification and validation of agent-based simulation models that demonstrates that inaccuracies exist and/or reveals the existing errors in the model. Moreover, on the practical side, many ABMS frameworks are in use. In this sense, we designed and develo...

  17. Independent test of a model to predict severe acute esophagitis

    Directory of Open Access Journals (Sweden)

    Ellen X. Huang, PhD


    Conclusions: The previously published model was validated on an independent data set and determined to be nearly as predictive as the best possible two-parameter logistic model even though it overpredicted risk systematically. A novel, machine learning-based model using a bootstrapping approach showed reasonable predictive power.

  18. A Validation Process for the Groundwater Flow and Transport Model of the Faultless Nuclear Test at Central Nevada Test Area

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed Hassan


    Many sites of groundwater contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This has created a need for tools and approaches that can be used to build confidence in model predictions and make it apparent to regulators, policy makers, and the public that these models are sufficient for decision making. This confidence building is a long-term iterative process and it is this process that should be termed "model validation." Model validation is a process, not an end result. That is, the process of model validation cannot always assure acceptable prediction or quality of the model. Rather, it provides a safeguard against faulty models or inadequately developed and tested models. Therefore, development of a systematic approach for evaluating and validating subsurface predictive models and guiding field activities for data collection and long-term monitoring is strongly needed. This report presents a review of model validation studies that pertain to groundwater flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general in nature, but the focus of the discussion is on site-specific, predictive groundwater models that are used for making decisions regarding remediation activities and site closure. An attempt is made to compile most of the published studies on groundwater model validation and assemble what has been proposed or used for validating subsurface models. The aim is to provide a reasonable starting point to aid the development of the validation plan for the groundwater flow and transport model of the Faultless nuclear test conducted at the Central Nevada Test Area (CNTA). The review of previous studies on model validation shows that there does not exist a set of specific procedures and tests that can be easily adapted and

  19. Addressing Standardized Testing through a Novel Assessment Model (United States)

    Schifter, Catherine C.; Carey, Martha


    The No Child Left Behind (NCLB) legislation spawned a plethora of standardized testing services for all the high-stakes testing required by the law. We argue that one-size-fits-all assessments disadvantage students who are English Language Learners in the USA, as well as students with limited economic resources, special needs, and not reading on…

  20. TorX: Automated Model-Based Testing

    NARCIS (Netherlands)

    Tretmans, G.J.; Brinksma, Hendrik; Hartman, A.; Dussa-Ziegler, K.


    Systematic testing is very important for assessing and improving the quality of software systems. Yet, testing turns out to be expensive, laborious, time-consuming and error-prone. The Dutch research and development project Côte de Resyste worked on methods, techniques and tools for automating

  1. An integrated service excellence model for strategic military test

    African Journals Online (AJOL)


    processes exist in these facilities without any common quality assurance and control and performance management systems. The purpose of this article is to introduce an ISEM for empowering the leadership core of the test and evaluation facilities to provide strategic military test and evaluation facility services and to ...

  2. Testing the reliability of ice-cream cone model (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui


    The properties of coronal mass ejections (CMEs) are important not only for the physics itself but also for space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove projection effects from the properties observed by spacecraft. Using SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo coronal mass ejections) in the 24th solar cycle up to July 2012 with the ice-cream cone model. Because 3D parameters obtained from multi-satellite, multi-angle CME observations have higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those from the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  3. Creating a Test Validated Structural Dynamic Finite Element Model of the Multi-Utility Technology Test Bed Aircraft (United States)

    Pak, Chan-Gi; Truong, Samson S.


    Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test Bed, X-56A, aircraft is the flight demonstration of active flutter suppression; therefore, this study identifies the primary and secondary modes for structural model tuning based on the flutter analysis of the X-56A. A ground-vibration-test-validated structural dynamic finite element model of the X-56A is created in this study, and the model is improved using a model tuning tool. Two different weight configurations of the X-56A have been improved in a single optimization run.

  4. Test technique development in interference free testing, flow visualization, and remote control model technology at Langley's Unitary Plan wind tunnel (United States)

    Corlett, W. A.


    A metric half-span model is considered as a means of mechanical support for a wind-tunnel model which allows measurement of aerodynamic forces and moments without support interference or model distortion. This technique can be applied to interference-free propulsion models. The vapor screen method of flow visualization at supersonic Mach numbers is discussed. The use of smoke instead of water vapor as a medium to produce the screen is outlined. Vapor screen data are being used in the development of analytical vortex tracking programs. Test results for a remote control model system are evaluated. Detailed control effectiveness and cross-coupling data were obtained with a single run. For the afterbody tail configuration, tested control boundaries at several roll orientations were established utilizing the facility's on-line capability to 'fly' the model in the wind tunnel.

  5. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  6. Development of a model and test equipment for cold flow tests at 500 atm of small nuclear light bulb configurations (United States)

    Jaminet, J. F.


    A model and test equipment were developed and cold-flow-tested at greater than 500 atm in preparation for future high-pressure rf plasma experiments and in-reactor tests with small nuclear light bulb configurations. With minor exceptions, the model chamber is similar in design and dimensions to a proposed in-reactor geometry for tests with fissioning uranium plasmas in the nuclear furnace. The model and the equipment were designed for use with the UARL 1.2-MW rf induction heater in tests with rf plasmas at pressures up to 500 atm. A series of cold-flow tests of the model was then conducted at pressures up to about 510 atm. At 504 atm, the flow rates of argon and cooling water were 3.35 liter/sec (STP) and 26 gal/min, respectively. It was demonstrated that the model is capable of being operated for extended periods at the 500-atm pressure level and is, therefore, ready for use in initial high-pressure rf plasma experiments.

  7. NASA Langley Distributed Propulsion VTOL Tilt-Wing Aircraft Testing, Modeling, Simulation, Control, and Flight Test Development (United States)

    Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.


    Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing-borne flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long endurance, VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys key disciplines required to develop an effective control system for this challenging vehicle in this on-going effort.

  8. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    Energy Technology Data Exchange (ETDEWEB)

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.


    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  9. Channel Modelling for Multiprobe Over-the-Air MIMO Testing

    Directory of Open Access Journals (Sweden)

    Pekka Kyösti


    a fading emulator, an anechoic chamber, and multiple probes. Creation of a propagation environment inside an anechoic chamber requires unconventional radio channel modelling, namely, a specific mapping of the original models onto the probe antennas. We introduce two novel methods to generate fading emulator channel coefficients; the prefaded signals synthesis and the plane wave synthesis. To verify both methods we present a set of simulation results. We also show that the geometric description is a prerequisite for the original channel model.

  10. Testing Neural Models of the Development of Infant Visual Attention


    Richards, John E.; Hunter, Sharon K.


    Several models of the development of infant visual attention have used information about neural development. Most of these models have been based on nonhuman animal studies and have relied on indirect measures of neural development in human infants. This article discusses methods for studying a “neurodevelopmental” model of infant visual attention using indirect and direct measures of cortical activity. We concentrate on the effect of attention on eye movement control and show how animal-base...

  11. Modelling and test of aeration tank settling (ATS)

    DEFF Research Database (Denmark)

    Nielsen, M. K.; Bechmann, H.; Henze, Mogens


    The use of aeration tank settling during high hydraulic loads on large wastewater treatment plants has previously been demonstrated as a reliable technique and proven valuable. The paper proposes a simplified deterministic model to predict the efficiency of the method. It is shown that a qualitatively correct model can be established. The simplicity of the model allows for on-line identification of the necessary parameters, so that no maintenance is needed to use the on-line model for control. The practical implementation on three plants indicates that implementation of STAR with ATS control...

  12. Testing a biobehavioral model of irritable bowel syndrome. (United States)

    van der Veek, Patrick P J; Dusseldorp, Elise; van Rood, Yanda R; Masclee, Ad A M


    The pathogenesis of irritable bowel syndrome (IBS) is probably multifactorial, with dysfunction at different levels of the brain-gut axis. The aim of this study was to evaluate an existing biobehavioral model of IBS symptom generation in a large group of patients. In 104 IBS patients, we assessed symptom severity by a symptom diary, visceral hypersensitivity using a barostat, autonomic function by measuring arterial baroreflex sensitivity, and psychological functioning using questionnaires. Structural equation modeling was used to calculate the reciprocal and chronological relationships between the model variables. Analysis of the adjusted original model indicated poor fit [Satorra-Bentler chi=28.47; degrees of freedom (df)=11, P<0.01], so two direct paths were added (illness behavior-IBS symptoms and trauma-IBS symptoms). The revised model yielded a reasonable fit (chi=13.88, df=9, P=0.13; CFI=0.94). The model explained 18.7% of the variance in IBS symptoms. Illness behavior completely mediated the effect of cognitions on IBS symptoms and partly mediated the effect of trauma on IBS symptoms. The fit of this alternative model was good (chi=9.85, df=8, P=0.28; CFI=0.98). The alternative model explained 20.0% of the variance in IBS symptoms. The proposed biobehavioral model could not be validated. Although visceral hypersensitivity and IBS symptom severity significantly correlate, autonomic function and IBS symptoms do not. Cognitive-behavioral aspects are important in the clinical expression of IBS, with illness behavior playing an intermediate and central role.
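    The reported chi-square fits can be sanity-checked from the quoted statistics alone. The sketch below uses the Wilson-Hilferty cube-root normal approximation to the chi-square tail; this is a standard-library approximation chosen for self-containment, not the Satorra-Bentler correction itself.

```python
from statistics import NormalDist

def chi2_tail(x: float, df: int) -> float:
    """Approximate P(X > x) for a chi-square variable with df degrees of
    freedom, via the Wilson-Hilferty cube-root normal approximation."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / (2 / (9 * df)) ** 0.5
    return 1 - NormalDist().cdf(z)

# Revised-model fit quoted in the record: chi = 13.88, df = 9, P = 0.13.
# The approximation reproduces roughly that tail probability.
print(chi2_tail(13.88, 9))

# Original-model fit: chi = 28.47, df = 11 is well below P = 0.01,
# consistent with the record's "poor fit" conclusion.
print(chi2_tail(28.47, 11))
```

    Calling `chi2_tail(13.88, 9)` returns approximately 0.13, matching the reported P value; the original model's statistic is far out in the tail.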

  13. Modeling and Testing of Unbalanced Loading and Voltage Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Davis, M. W.; Broadwater, R.; Hambrick, J.


    This report covers work to (1) develop and validate distribution circuit models, (2) determine optimum distributed generator operating conditions, and (3) determine distributed generation penetration limits.

  14. Animal models for microbicide safety and efficacy testing. (United States)

    Veazey, Ronald S


    Early studies have cast doubt on the utility of animal models for predicting success or failure of HIV-prevention strategies, but results of multiple human phase 3 microbicide trials, and interrogations into the discrepancies between human and animal model trials, indicate that animal models were, and are, predictive of safety and efficacy of microbicide candidates. Recent studies have shown that topically applied vaginal gels, and oral prophylaxis using single or combination antiretrovirals are indeed effective in preventing sexual HIV transmission in humans, and all of these successes were predicted in animal models. Further, prior discrepancies between animal and human results are finally being deciphered as inadequacies in study design in the model, or quite often, noncompliance in human trials, the latter being increasingly recognized as a major problem in human microbicide trials. Successful microbicide studies in humans have validated results in animal models, and several ongoing studies are further investigating questions of tissue distribution, duration of efficacy, and continued safety with repeated application of these, and other promising microbicide candidates in both murine and nonhuman primate models. Now that we finally have positive correlations with prevention strategies and protection from HIV transmission, we can retrospectively validate animal models for their ability to predict these results, and more importantly, prospectively use these models to select and advance even safer, more effective, and importantly, more durable microbicide candidates into human trials.

  15. Results From a Pressure Sensitive Paint Test Conducted at the National Transonic Facility on Test 197: The Common Research Model (United States)

    Watkins, A. Neal; Lipford, William E.; Leighty, Bradley D.; Goodman, Kyle Z.; Goad, William K.; Goad, Linda R.


    This report will serve to present results of a test of the pressure sensitive paint (PSP) technique on the Common Research Model (CRM). This test was conducted at the National Transonic Facility (NTF) at NASA Langley Research Center. PSP data was collected on several surfaces with the tunnel operating in both cryogenic mode and standard air mode. This report will also outline lessons learned from the test as well as possible approaches to challenges faced in the test that can be applied to later entries.

  16. Testing Intertemporal Substitution, Implicit Contracts, and Hours Restriction Models of the Labor Market Using Micro Data


    HAM, John C.; Reilly, Kevin T


    We present new tests of three theories of the labor market: intertemporal substitution, hours restrictions, and implicit contracts. The intertemporal substitution test we implement is an exclusion test robust to many specification errors and we consistently reject this model. We model hours restrictions as part of an endogenous switching model. We compare the implicit probit equation to an unrestricted probit equation for unemployment and reject the hours restriction model. For the implicit c...

  17. Tunnel fire testing and modeling the Morgex North tunnel experiment

    CERN Document Server

    Borghetti, Fabio; Gandini, Paolo; Frassoldati, Alessio; Tavelli, Silvia


    This book aims to cast light on all aspects of tunnel fires, based on experimental activities and theoretical and computational fluid dynamics (CFD) analyses. In particular, the authors describe a transient full-scale fire test (~15 MW), explaining how they designed and performed the experimental activity inside the Morgex North tunnel in Italy. The entire organization of the experiment is described, from preliminary evaluations to the solutions found for management of operational difficulties and safety issues. This fire test allowed the collection of different measurements (temperature, air velocity, smoke composition, pollutant species) useful for validating and improving CFD codes and for testing the real behavior of the tunnel and its safety systems during a diesel oil fire with a significant heat release rate. Finally, the fire dynamics are compared with empirical correlations, CFD simulations, and literature measurements obtained in other similar tunnel fire tests. This book will be of interest to all ...

  18. Model Uncertainty and Test of a Segmented Mirror Telescope (United States)


    Mode shapes were obtained and used to design a test plan for modal experimentation on the SMT. Previous laser vibrometer testing was conducted by the Air... The experimental setup used is shown in Figure 6, "2011 laser vibrometer data collection" (Jennings & Cobb, 2013, p. 4).

  19. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C


    The spallation reaction is considered as a 2-step process. First comes a very quick stage (10{sup -22}, 10{sup -29} s) corresponding to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second comes a slower stage (10{sup -18}, 10{sup -19} s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, {sup 3}He, {sup 4}He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on the 2-step process and on several models of the intranuclear cascade (the Bertini model, the Cugnon model, the Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculation results is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on the determination of the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  20. Application of multiple-point geostatistics on modelling pumping tests and tracer tests in heterogeneous environments with complex geological structures (United States)

    Huysmans, Marijke; Dassargues, Alain


    In heterogeneous environments with complex geological structures, analysis of pumping and tracer tests is often problematic. Standard interpretation methods either do not account for heterogeneity or simulate it by introducing empirical zonation of the calibrated parameters or by using variogram-based geostatistical techniques that are often unable to describe realistic heterogeneity in complex geological environments where, e.g., sedimentary structures, multi-facies deposits, structures with large connectivity, or curvi-linear structures can be present. Multiple-point geostatistics aims to overcome the limitations of the variogram and can be applied in different research domains to simulate heterogeneity in complex environments. In this project, multiple-point geostatistics is applied to the interpretation of pumping tests and a tracer test in an actual case of a sandy heterogeneous aquifer. This study allows us to deduce the main advantages and disadvantages of this technique compared to variogram-based techniques for interpretation of pumping tests and tracer tests. A pumping test and a tracer test were performed in the same sandbar deposit consisting of cross-bedded units composed of materials with different grain sizes and hydraulic conductivities. The pumping test and the tracer test are analyzed with a local 3D groundwater model in which fine-scale sedimentary heterogeneity is modelled using multiple-point geostatistics. To reduce CPU and RAM requirements of the multiple-point geostatistical simulation steps, edge properties indicating the presence of irregularly-shaped surfaces are directly simulated. Results show that for the pumping test as well as for the tracer test, incorporating heterogeneity results in a better fit between observed and calculated drawdowns/concentrations. The improvement of the fit is, however, not as large as expected. In this paper, the reasons for these somewhat unsatisfactory results are explored and recommendations for future

  1. Tests of control in the Audit Risk Model : Effective? Efficient?

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans)


    Lately, the Audit Risk Model has been subject to criticism. To gauge its validity, this paper confronts the Audit Risk Model as incorporated in International Standard on Auditing No. 400, with the real-life situations faced by auditors in auditing financial statements. This confrontation exposes

  2. Approximate Tests of Hypotheses in Regression Models with Grouped Data (United States)


    in terms of the Kolmogorov-Smirnov statistic in the next section. Two models have been considered for simulations. Model I. Yuk...
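    The fragment above references the Kolmogorov-Smirnov statistic. As a reminder of what that statistic measures, the sketch below computes the one-sample KS distance between an empirical CDF and a hypothesized CDF; the Uniform(0, 1) CDF used in the example is an arbitrary stand-in, not the record's model.

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov distance: the largest vertical gap
    between the empirical CDF of `sample` and the hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # The empirical CDF jumps from (i-1)/n to i/n at x, so the gap
        # must be checked on both sides of the jump.
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Example: four points tested against the Uniform(0, 1) CDF F(x) = x.
print(ks_statistic([0.1, 0.4, 0.6, 0.9], lambda x: x))  # approximately 0.15
```

    Large values of the distance indicate that the hypothesized distribution fits the data poorly.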

  3. Explaining Cooperation in Groups: Testing Models of Reciprocity and Learning (United States)

    Biele, Guido; Rieskamp, Jorg; Czienskowski, Uwe


    What are the cognitive processes underlying cooperation in groups? This question is addressed by examining how well a reciprocity model, two learning models, and social value orientation can predict cooperation in two iterated n-person social dilemmas with continuous contributions. In the first of these dilemmas, the public goods game,…

  4. Testing the specifications of parametric models using anchoring vignettes

    NARCIS (Netherlands)

    van Soest, A.H.O.; Vonkova, H.

    Comparing assessments on a subjective scale across countries or socio-economic groups is often hampered by differences in response scales across groups. Anchoring vignettes help to correct for such differences, either in parametric models (the compound hierarchical ordered probit (CHOPIT) model and

  5. Testing the Solow model in Nigeria's economy | Ahuru | Journal of ...

    African Journals Online (AJOL)

    While the Basic Solow model was completely validated using the Nigerian economy, the Augmented Solow model was non-compliant with the prescriptions of Mankiw, Romer & Weil. The study recommended, among others, the creation of an enabling environment for an effective macroeconomic policy framework that supports the ...

  6. A Test Based on the Contingency Model of Leadership. (United States)

    Vecchio, Robert P.


    The contingency model of leadership was extended to investigate subordinate satisfaction. It was hypothesized that subordinate satisfaction with a leader would yield evidence of an interaction between leadership style and situational parameters. Results indicated moderate support for an extension of the contingency model formulation. (RC)

  7. Choice of a High-Level Fault Model for the Optimization of Validation Test Set Reused for Manufacturing Test

    Directory of Open Access Journals (Sweden)

    Yves Joannon


    With the growing complexity of wireless systems on chip integrating hundreds of millions of transistors, electronic design methods need to be upgraded to reduce time-to-market. In this paper, the test benches defined for design validation or characterization of AMS & RF SoCs are optimized and reused for production testing. Although the original validation test set allows the verification of both design functionalities and performances, this test set is not well adapted to manufacturing test due to its high execution time and high test equipment costs. The optimization of this validation test set is based on the evaluation of each test vector. This evaluation relies on high-level fault modeling and fault simulation. Hence, a fault model based on variations of the parameters of high-abstraction-level descriptions, and its related qualification metric, are presented. The choice of functional or behavioral abstraction levels is discussed by comparing their impact on structural fault coverage. Experiments are performed on the receiver part of a WCDMA transceiver. Results show that for this SoC, using the behavioral abstraction level is justified for the generation of manufacturing test benches.

  8. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming


    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model the exchange of axial momentum between the tunnel and the ambient room is represented by a simple formula, derived from actuator disc computations. The correction model is validated against Navier-Stokes computations of the flow about a wind turbine rotor. Generally, the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections.
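    The one-dimensional momentum approach underlying the correction model can be illustrated with the classical unconfined actuator-disc relations. This is textbook background only, using the standard axial induction factor `a`; it is not the paper's tunnel-correction formula, which additionally accounts for momentum exchange with the tunnel walls.

```python
def thrust_coefficient(a: float) -> float:
    """1-D momentum theory thrust coefficient of an actuator disc,
    CT = 4a(1 - a), for axial induction factor a."""
    return 4 * a * (1 - a)

def power_coefficient(a: float) -> float:
    """1-D momentum theory power coefficient, CP = 4a(1 - a)^2."""
    return 4 * a * (1 - a) ** 2

# In unconfined flow CP peaks at a = 1/3 (the Betz limit, 16/27);
# wall interference shifts the effective operating point, which is
# what a tunnel correction model must account for.
print(power_coefficient(1 / 3))  # 16/27, about 0.593
```

    Comparing such unconfined relations with in-tunnel measurements is the usual starting point for deriving blockage corrections.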

  9. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan


    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  10. Modeling the Impact of Test Anxiety and Test Familiarity on the Criterion-Related Validity of Cognitive Ability Tests (United States)

    Reeve, Charlie L.; Heggestad, Eric D.; Lievens, Filip


    The assessment of cognitive abilities, whether it is for purposes of basic research or applied decision making, is potentially susceptible to both facilitating and debilitating influences. However, relatively little research has examined the degree to which these factors might moderate the criterion-related validity of cognitive ability tests. To…

  11. Complete Model-Based Equivalence Class Testing for the ETCS Ceiling Speed Monitor

    DEFF Research Database (Denmark)

    Braunstein, Cécile; Haxthausen, Anne Elisabeth; Huang, Wen-ling


    In this paper we present a new test model written in SysML and an associated blackbox test suite for the Ceiling Speed Monitor (CSM) of the European Train Control System (ETCS). The model is publicly available and intended to serve as a novel benchmark for investigating new testing theories...

  12. 2-D Model Test Study of the Breakwater at Porto de Dande , Angola

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Ramirez, Jorge Robert Rodriguez; Burcharth, Hans F.

    This report deals with a two-dimensional model test study of the new breakwater at Porto de Dande, Angola. One cross-section was tested for stability and overtopping in various sea conditions. The length scale used for the model tests was 1:32. Unless otherwise specified all values given in this ...

  13. A match-mismatch test of a stage model of behaviour change in tobacco smoking

    NARCIS (Netherlands)

    Dijkstra, A; Conijn, B; De Vries, H

    Aims An innovation offered by stage models of behaviour change is that of stage-matched interventions. Match-mismatch studies are the primary test of this idea but also the primary test of the validity of stage models. This study aimed at conducting such a test among tobacco smokers using the Social

  14. 3-D Hydraulic Model Testing of the New Roundhead in Suape, Brazil

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Burcharth, Hans F.; Sipavicius, A.

    This report deals with a three-dimensional model test study of the extension of the breakwater in Suape, Brazil. The roundhead was tested for stability in various sea conditions. The length scale used for the model tests was 1:35. Unless otherwise specified all values given in this report...

  15. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter


    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space, the j...

  16. The Rasch model for speed tests and some extensions with applications to incomplete designs

    NARCIS (Netherlands)

    Jansen, MGH


    In psychological measurement a distinction can be made between speed and power tests. Although most tests are partially speeded, the speed element is usually neglected. Here, the focus will be on latent trait models for pure speed tests. A particularly simple model has been developed by Rasch for
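    As background, the dichotomous Rasch item response function is the simplest member of this model family. The sketch below shows only that basic form, not the pure speed-test model the record discusses.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model:
    P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty."""
    return 1 / (1 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly one half.
print(rasch_probability(0.0, 0.0))  # 0.5
```

    Speed-test variants replace this response-probability form with a model for response times, but keep the Rasch principle of separating person and item parameters.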

  17. Modeling the Extremely Lightweight Zerodur Mirror (ELZM) Thermal Soak Test (United States)

    Brooks, Thomas E.; Eng, Ron; Hull, Tony; Stahl, H. Philip


    Exoplanet science requires extreme wavefront stability (10 pm change/10 minutes), so every source of wavefront error (WFE) must be characterized in detail. This work illustrates the testing and characterization process that will be used to determine how much surface figure error (SFE) is produced by mirror substrate materials' CTE distributions. Schott's extremely lightweight Zerodur mirror (ELZM) was polished to a sphere, mounted, and tested at Marshall Space Flight Center (MSFC) in the X-Ray and Cryogenic Test Facility (XRCF). The test transitioned the mirror's temperature from an isothermal state at 292K to isothermal states at 275K, 250K and 230K to isolate the effects of the mirror's CTE distribution. The SFE was measured interferometrically at each temperature state and finite element analysis (FEA) has been completed to assess the predictability of the change in the mirror's surface due to a change in the mirror's temperature. The coefficient of thermal expansion (CTE) distribution in the ELZM is unknown, so the analysis has been correlated to the test data. The correlation process requires finding the sensitivity of SFE to a given CTE distribution in the mirror. A novel hand calculation is proposed to use these sensitivities to estimate thermally induced SFE. The correlation process was successful and is documented in this paper. The CTE map that produces the measured SFE is in line with the measured data of typical boules of Schott's Zerodur glass.
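    The kind of hand calculation described can be illustrated to first order: a lateral CTE variation makes the substrate thickness expand by different amounts at different points on the mirror, producing surface figure error. All numerical values in this sketch are illustrative assumptions, not measured ELZM properties.

```python
def thermal_surface_displacement(delta_cte: float, delta_t: float,
                                 thickness: float) -> float:
    """First-order surface displacement (m) from a lateral CTE variation:
    a CTE difference delta_cte (1/K) between two points, acting through a
    substrate of the given thickness (m) under a soak of delta_t (K)."""
    return delta_cte * delta_t * thickness

# Illustrative assumptions (NOT values from the XRCF test): a 10 ppb/K
# lateral CTE difference, a 62 K soak (292 K -> 230 K), and a 50 mm
# thick substrate.
dz = thermal_surface_displacement(10e-9, 62.0, 0.050)
print(f"{dz * 1e9:.1f} nm")  # 31.0 nm
```

    Scaling estimates like this show why picometer-level wavefront stability requires characterizing the CTE distribution rather than only its mean value.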

  18. Wall Correction Model for Wind Tunnels with Open Test Section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming


In this paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. Generally, the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections. Keywords: Wind tunnel correction, momentum theory
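The one-dimensional momentum relation underlying such corrections links the thrust coefficient of an actuator disc to the axial induction factor via CT = 4a(1 − a); wall interference appears as a departure from this unconfined baseline. A minimal sketch of the unconfined relation and its inversion:

```python
def thrust_coefficient(a):
    # 1-D momentum theory for an unconfined actuator disc: CT = 4a(1 - a)
    return 4.0 * a * (1.0 - a)

def axial_induction(ct):
    # invert CT = 4a(1 - a) on the physical branch a <= 1/2
    return 0.5 * (1.0 - (1.0 - ct) ** 0.5)
```

At the Betz optimum a = 1/3 the thrust coefficient is 8/9; in a tunnel, the measured thrust at a given tunnel speed must first be referred to an equivalent free-stream velocity before this inversion applies, which is the job of the correction model.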

  19. Thermomechanical modeling of the Spent Fuel Test-Climax

    Energy Technology Data Exchange (ETDEWEB)

    Butkovich, T.R.; Patrick, W.C.


    The Spent Fuel Test-Climax (SFT-C) was conducted to evaluate the feasibility of retrievable deep geologic storage of commercially generated spent nuclear-reactor fuel assemblies. One of the primary aspects of the test was to measure the thermomechanical response of the rock mass to the extensive heating of a large volume of rock. Instrumentation was emplaced to measure stress changes, relative motion of the rock mass, and tunnel closures during three years of heating from thermally decaying heat sources, followed by a six-month cooldown period. The calculations reported here were performed using the best available input parameters, thermal and mechanical properties, and power levels which were directly measured or inferred from measurements made during the test. This report documents the results of these calculations and compares the results with selected measurements made during heating and cooling of the SFT-C.

  20. Sunscreens--what is the ideal testing model? (United States)

    Cole, Curtis


Sunscreen protection assessment methodologies have been evolving in tandem with the innovation and evolution of sunscreen products themselves; from initial human testing in the Swiss Alps, to laboratory testing with high-intensity solar simulators, to spectrophotometers with modern CCD array photocells and diffuse reflectance spectroscopy techniques. Progress in the science informs the regulatory development of standard methods and provides new and improved ways to assess sunscreen protection properties. This review scans much of the history of the development of these methods and highlights the latest development in non-invasive sunscreen testing as an opportunity to improve accuracy while eliminating human UV exposures. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Randomised testing of a microprocessor model using SMT-solver state generation


    Campbell, Brian; Stark, Ian


    We validate a HOL4 model of the ARM Cortex-M0 microcontroller core by testing the model's behaviour on randomly chosen instructions against real chips from several manufacturers. The model and our intended application involve precise timing information about instruction execution, but the implementations are pipelined, so checking the behaviour of single instructions would not give us sufficient confidence in the model. Thus we test the model using sequences of randomly chosen instructions. T...

  2. Modelling Test of Autothermal Gasification Process Using CFD (United States)

    Janoszek, Tomasz; Stańczyk, Krzysztof; Smoliński, Adam


Among the many complex physical and chemical processes that take place, the most notable are chemical reactions, mass and energy transport, and phase transitions. The process itself takes place in a block of coal whose properties are variable and not always easy to determine over the whole volume. The complexity of the phenomena calls for the construction of a complex model in order to study the process by simulation. In the present study, attempts were made to develop a numerical model of the fixed-bed coal gasification process in a homogeneous solid block with a given geometry. On the basis of the analysis and description of underground coal gasification simulated in the ex-situ experiment, a numerical model of the coal gasification process was developed. The model was implemented using computational fluid dynamics (CFD) methods. Simulations were conducted using a commercial CFD code and the results were verified against the experimental data.

  3. Tests of the standard electroweak model in beta decay

    CERN Document Server

    Severijns, N; Naviliat-Cuncic, O


    We review the current status of precision measurements in allowed nuclear beta decay, including neutron decay, with emphasis on their potential to look for new physics beyond the standard electroweak model. The experimental results are interpreted in the framework of phenomenological model-independent descriptions of nuclear beta decay as well as in some specific extensions of the standard model. The values of the standard couplings and the constraints on the exotic couplings of the general beta decay Hamiltonian are updated. For the ratio between the axial and the vector couplings we obtain C_A/C_V = -1.26992(69) under the standard model assumptions. Particular attention is devoted to the discussion of the sensitivity and complementarity of different precision experiments in direct beta decay. The prospects and the impact of recent developments of precision tools and of high intensity low energy beams are also addressed.


    Directory of Open Access Journals (Sweden)

    A. P. Melnikov


Full Text Available It is shown that the constructed mathematical model of molding sand enables control of its characteristics and forecasting of the technological parameters required to provide the given characteristics.

  5. Tests of the standard electroweak model in beta decay

    Energy Technology Data Exchange (ETDEWEB)

    Severijns, N.; Beck, M. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium); Naviliat-Cuncic, O. [Caen Univ., CNRS-ENSI, 14 (France). Lab. de Physique Corpusculaire


We review the current status of precision measurements in allowed nuclear beta decay, including neutron decay, with emphasis on their potential to look for new physics beyond the standard electroweak model. The experimental results are interpreted in the framework of phenomenological model-independent descriptions of nuclear beta decay as well as in some specific extensions of the standard model. The values of the standard couplings and the constraints on the exotic couplings of the general beta decay Hamiltonian are updated. For the ratio between the axial and the vector couplings we obtain C_A/C_V = -1.26992(69) under the standard model assumptions. Particular attention is devoted to the discussion of the sensitivity and complementarity of different precision experiments in direct beta decay. The prospects and the impact of recent developments of precision tools and of high intensity low energy beams are also addressed. (author)

  6. Modeling Asteroid Dynamics using AMUSE: First Test Cases (United States)

    Frantseva, Kateryna; Mueller, Michael; van der Tak, Floris; Helmich, Frank P.


We are creating a dynamic model of the current asteroid population. The goal is to reproduce measured impact rates in the current Solar System, from which we'll derive delivery rates of water and organic material by tracing low-albedo C-class asteroids (using the measured albedo distribution from the WISE catalog), the parent bodies of carbonaceous chondrite meteorites. Ultimately, we aim at studying the role of "exo-asteroids" in the delivery of water to exoplanets. Our model is set up using the Astrophysical Multipurpose Software Environment (AMUSE). AMUSE provides a common Python wrapper around numerous astrophysical codes, including N-body gravity codes such as Mercury and Huayno. We report first results towards a validation of our model: long-term integrations of the planets alone as well as studies of the depletion of the Kirkwood gaps in the asteroid belt. Further model developments will be discussed.

  7. Vehicle-Snow Interaction: Modeling, Testing and Validation (United States)


Stochastic in nature: stochastic models at each scale (e.g., a Gaussian random field at the mesoscale, a semi-variogram at the macroscale). Approved for public release. Background and needs: the effect of microstructure (uncertainty) has not been assessed, and a better understanding of deformation is needed. Goals and approaches: develop models for the mechanical properties of different types of snow, and quantify the associated uncertainties and...

  8. Scalable Power-Component Models for Concept Testing (United States)


Technology: permanent-magnet brushless DC machine. Model: self-generating torque-speed-efficiency map; four-quadrant operation. Future improvements: induction machine. Example of adding a system to the standard driveline: BAS system, a 3 kW ISG block (Rev. 2.0). Scope: scalable, generic MATLAB/Simulink models in three areas, including electromechanical machines (integrated starter...

  9. Nonparametric Kernel Testing in Semiparametric Autoregressive Conditional Duration Model


    Pipat Wongsaart; Jiti Gao


    A crucially important advantage of the semiparametric regression approach to the nonlinear autoregressive conditional duration (ACD) model developed in Wongsaart et al. (2011), i.e. the so-called Semiparametric ACD (SEMI-ACD) model, is the fact that its estimation method does not require a parametric assumption on the conditional distribution of the standardized duration process and, therefore, the shape of the baseline hazard function. The research in this paper complements that of Wongsaart...

  10. Testing the Empirical Shock Arrival Model Using Quadrature Observations (United States)

    Gopalswamy, N.; Makela, P.; Xie, H.; Yashiro, S.


The empirical shock arrival (ESA) model was developed based on quadrature data from Helios (in situ) and P-78 (remote sensing) to predict the Sun-Earth travel time of coronal mass ejections (CMEs). The ESA model requires earthward CME speed as input, which is not directly measurable from coronagraphs along the Sun-Earth line. The Solar Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory (SOHO) were in quadrature during 2010-2012, so the speeds of Earth-directed CMEs were observed with minimal projection effects. We identified a set of 20 full halo CMEs in the field of view of SOHO that were also observed in quadrature by STEREO. We used the earthward speed from STEREO measurements as input to the ESA model and compared the resulting travel times with the observed ones from L1 monitors. We find that the model predicts the CME travel time within about 7.3 h, which is similar to the predictions by the ENLIL model. We also find that CME-CME and CME-coronal hole interaction can lead to large deviations from model predictions.
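The ESA model itself is an empirical fit; as a stand-in capturing the same physics (fast CMEs decelerating toward the ambient solar-wind speed), the analytic drag-based kinematics below illustrate how a 1 AU travel time follows from an earthward initial speed. The drag parameter and wind speed are assumed illustrative values, not the ESA coefficients:

```python
import math

AU_KM = 1.496e8  # 1 astronomical unit in km

def travel_time_hours(v0, w=400.0, gamma=2e-8):
    """Time for a CME to reach 1 AU under drag-based kinematics.

    dv/dt = -gamma * (v - w)**2 with v0 > w (v0, w in km/s; gamma in 1/km)
    has the closed-form distance  r(t) = w*t + log(1 + gamma*(v0 - w)*t) / gamma,
    which we solve for r(t) = 1 AU by bisection.
    """
    def r(t):
        return w * t + math.log1p(gamma * (v0 - w) * t) / gamma

    lo, hi = 0.0, 5e6  # seconds; w*5e6 km already exceeds 1 AU, so the root is bracketed
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if r(mid) < AU_KM:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / 3600.0
```

A 1000 km/s CME arrives in roughly 2.5 days under these assumed parameters; faster CMEs arrive sooner, but the deceleration toward w compresses the spread of travel times, the qualitative behavior the ESA fit encodes.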


    Energy Technology Data Exchange (ETDEWEB)

Hu, Renyu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Demory, Brice-Olivier [Astrophysics Group, Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Seager, Sara; Lewis, Nikole [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Showman, Adam P. [Department of Planetary Sciences, University of Arizona, Tucson, AZ 85721 (United States)


Kepler has detected numerous exoplanet transits by measuring stellar light in a single visible-wavelength band. In addition to detection, the precise photometry provides phase curves of exoplanets, which can be used to study the dynamic processes on these planets. However, the interpretation of these observations can be complicated by the fact that visible-wavelength phase curves can represent both thermal emission and scattering from the planets. Here we present a semi-analytical model framework that can be applied to study Kepler and future visible-wavelength phase curve observations of exoplanets. The model efficiently computes reflection and thermal emission components for both rocky and gaseous planets, considering both homogeneous and inhomogeneous surfaces or atmospheres. We analyze the phase curves of the gaseous planet Kepler-7b and the rocky planet Kepler-10b using the model. In general, we find that a hot exoplanet’s visible-wavelength phase curve having a significant phase offset can usually be explained by two classes of solutions: one class requires a thermal hot spot shifted to one side of the substellar point, and the other class requires reflective clouds concentrated on the same side of the substellar point. Particularly for Kepler-7b, reflective clouds located on the west side of the substellar point can best explain its phase curve. The reflectivity of the clear part of the atmosphere should be less than 7% and that of the cloudy part should be greater than 80%, and the cloud boundary should be located at 11° ± 3° to the west of the substellar point. We suggest single-band photometry surveys could yield valuable information on exoplanet atmospheres and surfaces.
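As a much-simplified illustration of how a phase offset arises, the toy curve below adds a Lambertian reflected component (symmetric about superior conjunction) to a sinusoidal thermal component whose hot spot may be shifted; all amplitudes and the offset are arbitrary illustrative values, not the paper's semi-analytical framework:

```python
import numpy as np

def lambert_phase(alpha):
    # Lambert-sphere phase function, alpha = phase angle in [0, pi]
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

def toy_phase_curve(phi, refl_amp, therm_amp, hotspot_offset):
    # phi: orbital phase in [0, 2*pi), phi = pi at superior conjunction (edge-on,
    # circular orbit, so the phase angle is alpha = |pi - phi|)
    alpha = np.abs(np.pi - phi)
    reflected = refl_amp * lambert_phase(alpha)  # symmetric, peaks at phi = pi
    thermal = therm_amp * 0.5 * (1.0 + np.cos(phi - np.pi - hotspot_offset))
    return reflected + thermal

phi = np.linspace(0.0, 2.0 * np.pi, 2001)
curve = toy_phase_curve(phi, refl_amp=30e-6, therm_amp=20e-6, hotspot_offset=0.6)
peak_shift = phi[np.argmax(curve)] - np.pi  # offset of the combined peak, radians
```

The combined peak lands between superior conjunction and the hot-spot longitude, so a single-band observer cannot tell from the offset alone whether it is thermal (shifted hot spot) or reflective (offset clouds) in origin — the degeneracy the abstract describes.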

  12. Model to Test Electric Field Comparisons in a Composite Fairing Cavity (United States)

    Trout, Dawn H.; Burford, Janessa


Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. This work is an extension of the bare aluminum fairing perfect electric conductor (PEC) model. Test and model data correlation is shown.

  13. A Model-Based Method for Content Validation of Automatically Generated Test Items (United States)

    Zhang, Xinxin; Gierl, Mark


    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  14. The Stice model of overeating: Tests in clinical and non-clinical samples

    NARCIS (Netherlands)

    Strien, T. van; Engels, R.C.M.E.; Leeuwe, J.F.J. van; Snoek, H.M.


    The present study tested the dual pathway model of Stice [Stice, E (1994). A review of the evidence for a sociocultural model of bulimia nervosa and an exploration of the mechanisms of action. Clinical Psychology Review, 14, 633-661 and Stice, E. (2001). A prospective test of the dual-pathway model

  15. Stochastic Processes as True-Score Models for Highly Speeded Mental Tests. (United States)

    Moore, William E.

    The previous theoretical development of the Poisson process as a strong model for the true-score theory of mental tests is discussed, and additional theoretical properties of the model from the standpoint of individual examinees are developed. The paper introduces the Erlang process as a family of test theory models and shows in the context of…

  16. Review Random regression test-day model for the analysis of dairy ...

    African Journals Online (AJOL)


Genetic evaluation of dairy cattle using test-day models is now common internationally. In South Africa a fixed regression test-day model is used to generate breeding values for dairy animals on a routine basis. The model is, however, often criticized for erroneously assuming a standard lactation curve for cows.

  17. Random regression test-day model for the analysis of dairy cattle ...

    African Journals Online (AJOL)

    Genetic evaluation of dairy cattle using test-day models is now common internationally. In South Africa a fixed regression test-day model is used to generate breeding values for dairy animals on a routine basis. The model is, however, often criticized for erroneously assuming a standard lactation curve for cows in similar ...
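The difference between the two approaches is in how the lactation curve enters the model: a fixed regression test-day model fits one standard curve for all cows, whereas a random regression model gives each animal its own curve coefficients, commonly on a Legendre polynomial basis over days in milk. A toy sketch of the Legendre covariates and a single-animal curve fit, with invented yields and coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

dim = np.arange(5, 306, 30)               # test-day records: days in milk 5..305
x = 2.0 * (dim - 5) / (305 - 5) - 1.0     # map to [-1, 1] for the Legendre basis
Phi = legendre.legvander(x, 2)            # order-2 Legendre covariates (3 columns)

rng = np.random.default_rng(1)
coef_true = np.array([22.0, -3.0, -1.5])  # illustrative lactation-curve coefficients
y = Phi @ coef_true + rng.normal(0.0, 0.3, x.size)  # one cow's test-day yields, kg

# In a random regression model these per-animal coefficients are random effects
# estimated jointly across the population (BLUP); here, a plain least-squares
# fit for a single animal shows the curve-recovery idea.
coef_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Because each animal gets its own coefficient vector, the model no longer forces a standard lactation shape, which is exactly the criticism of the fixed regression model noted above.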

  18. Testing the semantic differential as a model of task processes with the implicit association test. (United States)

    Xiong, Maggie J; Logan, Gordon D; Franks, Jeffery J


In this study, we examined the hypothesis that semantic judgment tasks share overlapping processes if they require processing on common dimensions but not if they require processing on orthogonal dimensions in semantic space (Osgood, Suci, & Tannenbaum, 1957). We tested the hypothesis with the implicit association test (IAT; Greenwald, McGhee, & Schwartz, 1998) in three experiments. Consistent with the hypothesis, IAT effects (costs in reaction time because of incompatible response mapping between associated judgment tasks) occurred consistently when judgment tasks tapped into common semantic dimensions, whereas no IAT effect appeared when judgment tasks entailed processing on orthogonal semantic dimensions.

  19. OPNET Modeler Simulation Testing of the New Model Used to Cooperation Between QoS and Security Mechanisms

    Directory of Open Access Journals (Sweden)

    Jan Papaj


Full Text Available In this article a performance analysis of the new model, used for integration between QoS and security, is introduced. OPNET Modeler simulation testing of the new model, with comparison against the standard model, is presented. The new model enables cooperation between QoS and security in MANETs. How the model is implemented in the OPNET Modeler simulation is also shown. The model provides for the integration and cooperation of QoS and security through a cross-layer design (CLD) with a modified security service vector (SSV). An overview of the simulation testing of the new model, a comparative study in mobile ad-hoc networks, and a description of requirements and directions for adapted solutions are presented. The main idea of the testing is to show how QoS- and security-related services can be provided simultaneously with minimal interference between the services.

  20. An Innovative Physical Model for Testing Bucket Foundations

    DEFF Research Database (Denmark)

    Foglia, Aligi; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard


    Monopod bucket foundations promise to become a reliable and cost-effective solution for offshore wind turbines. In this paper, six small scale tests of a steel bucket foundation subjected to quasi-static lateral load, are presented. When conducting small scale experiments on soil, scale effects can...

  1. Testing a Conceptual Change Model Framework for Visual Data (United States)

    Finson, Kevin D.; Pedersen, Jon E.


    An emergent data analysis technique was employed to test the veracity of a conceptual framework constructed around visual data use and instruction in science classrooms. The framework incorporated all five key components Vosniadou (2007a, 2007b) described as existing in a learner's schema: framework theory, presuppositions, conceptual domains,…

  2. Modeling, Simulation, and Testing of Surf Kites for Power Generation

    NARCIS (Netherlands)

    Williams, P.; Lansdorp, B.; Ruiterkamp, R.; Ockels, W.J.


    Non-powered flight vehicles such as kites can provide a means of transmitting wind energy from higher altitudes to the ground via tethers. At Delft University of Technology, construction and testing of such a high altitude wind machine is ongoing. The concept is called the Laddermill. It generates

  3. Testing Adaptive Toolbox Models: A Bayesian Hierarchical Approach (United States)

    Scheibehenne, Benjamin; Rieskamp, Jorg; Wagenmakers, Eric-Jan


    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox…

  4. Numerical Modeling and Experimental Testing of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Zurkinden, Andrew Stephen; Kramer, Morten; Ferri, Francesco

numerical values for comparison with the experimental test results, which were carried out at the same time. This is why Chapter 4 consists exclusively of numerical values. Experimental values and measured time series of wave elevations have been used throughout the report in order to a...

  5. How Tests Change Teaching: A Model for Reference (United States)

    Shih, Chih-Min


    The purpose of this study was to investigate the washback effects of the General English Proficiency Test (GEPT) on English teaching in two applied foreign language departments in Taiwan. One had prescribed its GEPT requirement to its day-division students whereas the other had not. Overall, the GEPT did not induce a high level of washback on…

  6. Development of Phenomenological Models of Underground Nuclear Tests on Pahute Mesa, Nevada Test Site - BENHAM and TYBO

    Energy Technology Data Exchange (ETDEWEB)

    Pawloski, G.A.


    Although it is well accepted that underground nuclear explosions modify the in situ geologic media around the explosion point, the details of these changes are neither well understood nor well documented. As part of the engineering and containment process before a nuclear test, the physical environment is characterized to some extent to predict how the explosion will interact with the in situ media. However, a more detailed characterization of the physical environment surrounding an expended site is needed to successfully model radionuclide transport in the groundwater away from the detonation point. It is important to understand how the media have been altered and where the radionuclides are deposited. Once understood, this information on modified geologic media can be incorporated into a phenomenological model that is suitable for input to computer simulations of groundwater flow and radionuclide transport. The primary goals of this study are to (1) identify the modification of the media at a pertinent scale, and (2) provide this information to researchers modeling radionuclide transport in groundwater for the US Department of Energy (DOE) Nevada Operations Office Underground Test Area (UGTA) Project. Results from this study are most applicable at near-field scale (a model domain of about 500 m) and intermediate-field scale (a model domain of about 5 km) for which detailed information can be maximized as it is incorporated in the modeling grids. UGTA collected data on radionuclides in groundwater during recent drilling at the ER-20-5 site, which is near BENHAM and TYBO on Pahute Mesa at the Nevada Test Site (NTS). Computer simulations are being performed to better understand radionuclide transport. The objectives of this modeling effort include: evaluating site-specific information from the BENHAM and TYBO tests on Pahute Mesa; augmenting the above data set with generalized containment data; and developing a phenomenological model suitable for input to

  7. Data Collecting and Processing System and Hydraulic Control System of Hydraulic Support Model Test

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU


Full Text Available The hydraulic support is an important piece of equipment for mechanized caving-coal mining in modern coal mines. A hydraulic support must pass the national strength test before quantity production and use. Hydraulic support model testing based on similarity theory is a new and effective method for hydraulic support design and testing. Test information such as displacement, stress and strain can be generalized to the hydraulic support prototype, which can inform hydraulic support design. To satisfy the needs of hydraulic support model testing, a data collecting and processing system was established and the relevant software was programmed, including stress computation software for the measured test data, providing a practical and convenient research method for hydraulic support model testing. With this software, users can collect, display in real time, save, analyze and process strain signals. The construction of the load equipment and hydraulic control system likewise provides a practical and convenient research platform for hydraulic support model testing.

  8. Development of a Model Job Performance Test for a Combat Occupational Specialty. Volume I. Test Development (United States)


References on testing in general: Thorndike, Robert L. (Ed.), Educational Measurement; Fitzpatrick, R., and Morrison, E.J., Performance and Product Evaluation.

  9. The methodology of the pedagogical tests in computer science based on the integrated model

    Directory of Open Access Journals (Sweden)

Dmitry Vladimirovich Shoytov


Full Text Available The article deals with the creation of pedagogical tests based on an integrated model. An algorithm for the systematic design of test tasks and for the formation of student tests is defined, aimed at assessing students' mastery of the course material.

  10. A Model for Quantifying Sources of Variation in Test-day Milk Yield ...

    African Journals Online (AJOL)

    A cow's test-day milk yield is influenced by several systematic environmental effects, which have to be removed when estimating the genetic potential of an animal. The present study quantified the variation due to test date and month of test in test-day lactation yield records using full and reduced models. The data consisted ...

  11. Applicability of land use models for the Houston area test site (United States)

    Petersburg, R. K.; Bradford, L. H.


Descriptions of land use models are presented which were considered for their applicability to the Houston Area Test Site. These models are representative both of the prevailing theories of land use dynamics and of basic approaches to simulation. The models considered are: a model of metropolis, a land use simulation model, the EMPIRIC land use forecasting model, a probabilistic model for residential growth, and the regional environmental management allocation process. Sources of environmental/resource information are listed.

  12. Testing Mercury Porosimetry with 3D Printed Porosity Models (United States)

    Hasiuk, F.; Ewing, R. P.; Hu, Q.


Mercury intrusion porosimetry is one of the most widely used techniques to study the porous nature of geological and man-made materials. In the geosciences, it is commonly used to describe petroleum reservoir and seal rocks as well as to grade aggregates for the design of asphalt and Portland cement concretes. Its wide utility stems from its ability to characterize a wide range of pore throat sizes (from nanometers to around a millimeter). The fundamental physical model underlying mercury intrusion porosimetry, the Washburn equation, is based on the assumption that rock porosity can be described as a bundle of cylindrical tubes. 3D printing technology, also known as rapid prototyping, allows the construction of intricate and accurate models, exactly what is required to build models of rock porosity. We evaluate the applicability of the Washburn equation by comparing properties (like porosity, pore and pore throat size distribution, and surface area) computed on digital porosity models (built from CT data, CAD designs, or periodic geometries) to properties measured via mercury intrusion porosimetry on 3D printed versions of the same digital porosity models.
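The Washburn equation maps each intrusion pressure to a cylindrical throat diameter, d = −4γ cos θ / P, with γ ≈ 0.485 N/m the surface tension of mercury and θ ≈ 140° its contact angle (both conventional values). A direct sketch of the conversion:

```python
import math

PSI_TO_PA = 6894.76  # 1 psi in pascal

def washburn_throat_diameter_um(pressure_pa, gamma=0.485, theta_deg=140.0):
    # Washburn equation: P = -4*gamma*cos(theta) / d  ->  d = -4*gamma*cos(theta) / P
    # gamma in N/m, pressure in Pa; cos(140 deg) < 0, so d comes out positive.
    d_m = -4.0 * gamma * math.cos(math.radians(theta_deg)) / pressure_pa
    return d_m * 1e6  # metres -> micrometres
```

One psia of intrusion pressure corresponds to a throat diameter of roughly 215 µm, while at 60,000 psia the probed throats are only a few nanometres wide — the span that gives the technique its wide utility.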

  13. GEMMs as preclinical models for testing pancreatic cancer therapies. (United States)

    Gopinathan, Aarthi; Morton, Jennifer P; Jodrell, Duncan I; Sansom, Owen J


    Pancreatic ductal adenocarcinoma is the most common form of pancreatic tumour, with a very limited survival rate and currently no available disease-modifying treatments. Despite recent advances in the production of genetically engineered mouse models (GEMMs), the development of new therapies for pancreatic cancer is still hampered by a lack of reliable and predictive preclinical animal models for this disease. Preclinical models are vitally important for assessing therapies in the first stages of the drug development pipeline, prior to their transition to the clinical arena. GEMMs carry mutations in genes that are associated with specific human diseases and they can thus accurately mimic the genetic, phenotypic and physiological aspects of human pathologies. Here, we discuss different GEMMs of human pancreatic cancer, with a focus on the Lox-Stop-Lox (LSL)-Kras(G12D); LSL-Trp53(R172H); Pdx1-cre (KPC) model, one of the most widely used preclinical models for this disease. We describe its application in preclinical research, highlighting its advantages and disadvantages, its potential for predicting clinical outcomes in humans and the factors that can affect such outcomes, and, finally, future developments that could advance the discovery of new therapies for pancreatic cancer. © 2015. Published by The Company of Biologists Ltd.

  14. GEMMs as preclinical models for testing pancreatic cancer therapies

    Directory of Open Access Journals (Sweden)

    Aarthi Gopinathan


Full Text Available Pancreatic ductal adenocarcinoma is the most common form of pancreatic tumour, with a very limited survival rate and currently no available disease-modifying treatments. Despite recent advances in the production of genetically engineered mouse models (GEMMs), the development of new therapies for pancreatic cancer is still hampered by a lack of reliable and predictive preclinical animal models for this disease. Preclinical models are vitally important for assessing therapies in the first stages of the drug development pipeline, prior to their transition to the clinical arena. GEMMs carry mutations in genes that are associated with specific human diseases and they can thus accurately mimic the genetic, phenotypic and physiological aspects of human pathologies. Here, we discuss different GEMMs of human pancreatic cancer, with a focus on the Lox-Stop-Lox (LSL)-Kras(G12D); LSL-Trp53(R172H); Pdx1-cre (KPC) model, one of the most widely used preclinical models for this disease. We describe its application in preclinical research, highlighting its advantages and disadvantages, its potential for predicting clinical outcomes in humans and the factors that can affect such outcomes, and, finally, future developments that could advance the discovery of new therapies for pancreatic cancer.

  15. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.


The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully correct scaling. The paper presents an empirical formula for the estimation of the wave-induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation, can be used for the estimation of pore velocities in cores. The paper proposes that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally, the method is demonstrated...
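The Forchheimer equation gives the hydraulic gradient in the core as I = a·u + b·u², so for a given gradient the pore velocity follows from the positive root of the quadratic. The sketch below uses Forchheimer coefficient forms of the kind common in breakwater work, with assumed α, β, porosity and stone size — illustrative values, not the paper's calibrated formula:

```python
import math

NU = 1.0e-6  # kinematic viscosity of water, m^2/s
G = 9.81     # gravitational acceleration, m/s^2

def pore_velocity(gradient, d, n=0.4, alpha=1000.0, beta=1.1):
    """Solve the Forchheimer relation I = a*u + b*u**2 for the pore velocity u (m/s).

    a = alpha * (1-n)**2 / n**3 * NU / (G * d**2)   (laminar term)
    b = beta  * (1-n)   / n**3 * 1  / (G * d)       (turbulent term)
    d = characteristic stone diameter (m), n = porosity; alpha, beta assumed.
    """
    a = alpha * (1.0 - n) ** 2 / n ** 3 * NU / (G * d ** 2)
    b = beta * (1.0 - n) / n ** 3 / (G * d)
    # positive root of b*u**2 + a*u - gradient = 0
    return (-a + math.sqrt(a * a + 4.0 * b * gradient)) / (2.0 * b)

# Froude scaling: velocities scale with sqrt(length scale), so the proposal is to
# choose the model stone size d_m such that, for the characteristic gradient I,
#   pore_velocity(I, d_m) ≈ pore_velocity(I, d_prototype) / sqrt(length_scale)
```

Because the pore velocity grows with stone size, a geometrically scaled-down core is generally too impermeable, which is why the core diameter must be chosen from this condition rather than from the geometric scale.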

  16. Test of interaction models with the KASCADE hadron calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Milke, J.; Antoni, T.; Apel, W.D.; Badea, F.; Bekk, K.; Bercuci, A.; Bluemer, H.; Bozdog, H.; Brancus, I.M.; Buettner, C.; Chilingarian, A.; Daumiller, K.; Doll, P.; Engler, J.; Fessler, F.; Gils, H.J.; Glasstetter, R.; Haeusler, R.; Haungs, A.; Heck, D.; Hoerandel, J.R.; Iwan, A.; Kampert, K.-H.; Klages, H.O.; Maier, G.; Mathes, H.J.; Mayer, H.J.; Mueller, M.; Obenland, R.; Oehlschlaeger, J.; Ostapchenko, S.; Petcu, M.; Rebel, H.; Risse, M.; Roth, M.; Schatz, G.; Schieler, H.; Scholz, J.; Thouw, T.; Ulrich, H.; Weber, J.H.; Weindl, A.; Wentz, J.; Wochele, J.; Zabierowski, J.


    The interpretation of extensive air shower measurements often requires comparison with EAS simulations. These calculations rely on hadronic interaction models which have to extrapolate into kinematical and energy regions not covered by present-day collider experiments. The KASCADE experiment, with its large hadron calorimeter and its detectors for the electromagnetic and muonic components, provides experimental data to check hadronic interaction models. For the EAS simulations the program CORSIKA, with several hadronic event generators embedded, is used. Different hadronic observables are investigated, as well as their correlations with the electromagnetic and muonic components. Comparing the interaction models QGSJET 98, NEXUS II, and DPMJET 11.5, it is found that QGSJET describes the data best.

  17. Simulation Models in Testing Reliability of Transport Process

    Directory of Open Access Journals (Sweden)

    Jacyna Marianna


    The paper addresses the problem of applying simulation models to assess the reliability of services in transport networks. Investigating transport processes in terms of their reliability is a complex decision-making task. The paper describes a method for assessing the reliability of a transport process based on the criterion of minimizing the normalized lost time of vehicles, i.e. time wasted as a result of conflict situations occurring in the transport network during the transport process. The study includes stochastic distributions of system input, which enables studying the quality parameters of the transport network equipment, including service providers working under different workloads and all kinds of disturbances. The method uses simulation models; simulation studies were performed with Java Modelling Tools.

  18. Synthetic clusters of massive stars to test stellar evolution models (United States)

    Georgy, Cyril; Ekström, Sylvia


    During the last few years, the Geneva stellar evolution group has released new grids of stellar models, including the effect of rotation and with updated physical inputs (Ekström et al. 2012; Georgy et al. 2013a, b). To ease the comparison between the outputs of the stellar evolution computations and the observations, a dedicated tool was developed: the Syclist toolbox (Georgy et al. 2014). It allows users to compute interpolated stellar models, isochrones, and synthetic clusters, and to simulate the time evolution of stellar populations.

  19. Testing APT Model upon a BVB Stocks’ Portfolio

    Directory of Open Access Journals (Sweden)

    Alexandra BONTAŞ


    Applying the Arbitrage Pricing Theory (APT) model, the major factors influencing the trend of a portfolio of BVB stocks can be identified. Two variants of the APT model were considered, establishing influences on the portfolio's yield: those due to the macroeconomic environment and those due to stochastic factors. The research results indicate that, in the long term, what influences stock movements in the market is mostly the action of specific short-term factors without general coverage, such as those studied in behavioural finance (investors' preferences towards risk and towards time).
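The core of an APT test of this kind is a linear factor regression of portfolio returns on candidate factors. The sketch below fits synthetic returns to two hypothetical factors by ordinary least squares; the factors, sensitivities, and sample size are invented for illustration and are not the data analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factor realisations, e.g. a macro factor and a market factor.
n_obs = 250
factors = rng.normal(size=(n_obs, 2))

# Synthetic portfolio returns generated with known sensitivities (betas).
true_betas = np.array([0.8, -0.3])
returns = 0.01 + factors @ true_betas + 0.02 * rng.normal(size=n_obs)

# APT-style regression: return = alpha + beta1*F1 + beta2*F2 + noise.
design = np.column_stack([np.ones(n_obs), factors])
coef, *_ = np.linalg.lstsq(design, returns, rcond=None)
alpha, betas = coef[0], coef[1:]
print(alpha, betas)
```

With real data, the estimated betas would then be examined to judge which factors carry priced, systematic influence and which effects are idiosyncratic.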

  20. Sonora: A New Generation Model Atmosphere Grid for Brown Dwarfs and Young Extrasolar Giant Planets (United States)

    Marley, Mark S.; Saumon, Didier; Fortney, Jonathan J.; Morley, Caroline; Lupu, Roxana Elena; Freedman, Richard; Visscher, Channon


    Brown dwarf and giant planet atmospheric structure and composition has been studied both by forward models and, increasingly so, by retrieval methods. While indisputably informative, retrieval methods are of greatest value when judged in the context of grid model predictions. Meanwhile retrieval models can test the assumptions inherent in the forward modeling procedure. In order to provide a new, systematic survey of brown dwarf atmospheric structure, emergent spectra, and evolution, we have constructed a new grid of brown dwarf model atmospheres. We ultimately aim for our grid to span substantial ranges of atmospheric metallicity, C/O ratios, cloud properties, atmospheric mixing, and other parameters. Spectra predicted by our modeling grid can be compared to both observations and retrieval results to aid in the interpretation and planning of future telescopic observations. We thus present Sonora, a new generation of substellar atmosphere models, appropriate for application to studies of L, T, and Y-type brown dwarfs and young extrasolar giant planets. The models describe the expected temperature-pressure profile and emergent spectra of an atmosphere in radiative-convective equilibrium for ranges of effective temperatures and gravities encompassing 200 K ≤ Teff ≤ 2400 K and 2.5 ≤ log g ≤ 5.5. In our poster we briefly describe our modeling methodology, enumerate various updates since our group's previous models, and present our initial tranche of models for cloudless, solar metallicity, and solar carbon-to-oxygen ratio, chemical equilibrium atmospheres. These models will be available online and will be updated as opacities and cloud modeling methods continue to improve.
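The quoted parameter ranges define a rectangular (Teff, log g) grid; enumerating such a grid is the usual first step when building or querying a model atmosphere library. The step sizes below are illustrative choices, not the actual spacing of the Sonora grid.

```python
import numpy as np

# Ranges quoted in the abstract: 200 K <= Teff <= 2400 K, 2.5 <= log g <= 5.5.
teff_grid = np.arange(200.0, 2400.0 + 1.0, 100.0)  # hypothetical 100 K steps
logg_grid = np.arange(2.5, 5.5 + 0.01, 0.5)        # hypothetical 0.5 dex steps

# Each model atmosphere in the grid is one (Teff, log g) pair.
grid_points = [(t, g) for t in teff_grid for g in logg_grid]
print(len(grid_points))
```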

  1. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling


    Johnson, S. D.; Groff, E.


    Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity—agent-based computational modeling—that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interes...

  2. Leader Attributions and Leader Behavior. First Stage Testing of Theoretical Model (United States)



  3. An experimental test of two mathematical models applied to the size-weight illusion. (United States)

    Sarris, V; Heineken, E


    Two quantitative models, which make different quantitative predictions for the magnitude of the size-weight illusion, were tested according to the psychophysical methods employed by their respective authors (magnitude estimation versus category ratings). Each model was supported with its corresponding method. This casts doubt on Anderson's claim that the validity of both a model and the applied scale is sufficiently tested by the so-called joint testing procedure.

  4. Computer-aided-engineering system for modeling and analysis of ECLSS integration testing (United States)

    Sepahban, Sonbol


    The accurate modeling and analysis of two-phase fluid networks found in environmental control and life support systems are presently undertaken by computer-aided engineering (CAE) techniques whose generalized fluid dynamics package can solve arbitrary flow networks. The CAE system for integrated test bed modeling and analysis will also furnish interfaces and subsystem/test-article mathematical models. Three-dimensional diagrams of the test bed are generated by the system after performing the requisite simulation and analysis.

  5. Ares I Scale Model Acoustic Test Above Deck Water Sound Suppression Results (United States)

    Counter, Douglas D.; Houston, Janice D.


    The Ares I Scale Model Acoustic Test (ASMAT) program test matrix was designed to determine the acoustic reduction for the Liftoff acoustics (LOA) environment with an above deck water sound suppression system. The scale model test can be used to quantify the effectiveness of the water suppression system as well as optimize the systems necessary for the LOA noise reduction. Several water flow rates were tested to determine which rate provides the greatest acoustic reductions. Preliminary results are presented.

  6. Testing Nested Additive, Multiplicative, and General Multitrait-Multimethod Models. (United States)

    Coenders, Germa; Saris, Willem E.


    Provides alternatives to the definitions of additive and multiplicative method effects in multitrait-multimethod data given by D. Campbell and E. O'Connell (1967). The alternative definitions can be formulated by means of constraints in the parameters of the correlated uniqueness model (H. Marsh, 1989). (SLD)

  7. Torpedo modelling in TORSIM and torpedo defence test bed

    NARCIS (Netherlands)

    Grootendorst, H.J.; Benders, F.P.A.; Driessen, F.P.G.; Witberg, R.


    The validated TORSIM (TORpedo SIMulation) model simulates the behaviour and determines the effectiveness of different torpedo types (MK46 and MK48), launched from a surface ship, an air vehicle or a submarine, against different types of submarines or surface ships. Evasive manoeuvres of...

  8. Improved Testing of Distributed Lag Model in Presence of ...

    African Journals Online (AJOL)

    The finite distributed lag models (DLM) are often used in econometrics and statistics. Application of the ordinary least square (OLS) directly on the DLM for estimation may have serious problems. To overcome these problems, some alternative estimation procedures are available in the literature. One popular method to ...

  9. Modeling Asteroid Dynamics using AMUSE: First Test Cases

    NARCIS (Netherlands)

    Frantseva, Kateryna; Mueller, Michael; van der Tak, Floris; Helmich, Frank P.


    We are creating a dynamic model of the current asteroid population. The goal is to reproduce measured impact rates in the current Solar System, from which we'll derive delivery rates of water and organic material by tracing low-albedo C-class asteroids (using the measured albedo distribution from

  10. Testing a biobehavioral model of irritable bowel syndrome

    NARCIS (Netherlands)

    Veek, P.P.J. van der; Dusseldorp, E.; Rood, Y.R. van; Masclee, A.A.M.


    Objective: The pathogenesis of irritable bowel syndrome (IBS) is probably multifactorial with dysfunction at different levels of the brain-gut axis. The aim of this study was to evaluate an existing biobehavioral model of IBS symptom generation in a large group of patients. Material and Methods: In

  11. Using models to provide a virtual test of forest treatments (United States)

    Janet Sullivan; Kevin Hyde


    BEMRP's participation in the Bitterroot National Forest's proposed Trapper Bunkhouse Land Stewardship Project (Trapper-Bunkhouse Project) consists of two parts. One is the field study mentioned elsewhere in this ECO-Report that is looking into the effects of thinning and burning on various resources. The other part involves modeling to determine where...

  12. A review of experiments testing the shoving model

    DEFF Research Database (Denmark)

    Hecksher, Tina; Dyre, J. C.


    According to the shoving model the non-Arrhenius temperature dependence of supercooled liquids' relaxation time (or viscosity) derives from the fact that the high-frequency shear modulus is temperature dependent in the supercooled phase, often increasing a factor of three or four in the temperatu...

  13. Two Tests of Piaget's Equilibration Model: A Replication and Extension. (United States)

    Silverman, Irwin W.; Litman, Ruth


    Pairs of elementary school children at different concept development levels were given problems to discuss, in order to examine the prediction, derived from the equilibration model, that when two children holding different beliefs must arrive at a consensus, the child possessing the higher level of cognitive development will prevail over the child…

  14. Tests of risk premia in linear factor models

    NARCIS (Netherlands)

    Kleibergen, F.R.


    We show that inference on risk premia in linear factor models that is based on the Fama-MacBeth and GLS risk premia estimators is misleading when the β's are small and/or the number of assets is large. We propose some novel statistics that remain trustworthy in these cases. The inadequacy of...
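For context, a minimal sketch of the classical two-pass Fama-MacBeth procedure that the abstract critiques, run on synthetic single-factor data: first-pass time-series regressions estimate each asset's β, then period-by-period cross-sectional regressions of returns on those β's are averaged to estimate the risk premium. All numbers are invented for illustration; the abstract's point is precisely that this estimator can mislead when β's are small or the cross-section is large.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_periods = 25, 600

# Synthetic single-factor data with a known risk premium of 0.05 per period.
true_betas = rng.uniform(0.5, 1.5, size=n_assets)
factor = 0.05 + rng.normal(scale=0.1, size=n_periods)
returns = np.outer(factor, true_betas) + 0.05 * rng.normal(size=(n_periods, n_assets))

# Pass 1: time-series OLS of each asset's returns on the factor -> betas.
X = np.column_stack([np.ones(n_periods), factor])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]

# Pass 2: for each period, cross-sectional OLS of returns on estimated betas.
Z = np.column_stack([np.ones(n_assets), betas])
lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0][1]
                    for t in range(n_periods)])

premium = lambdas.mean()  # Fama-MacBeth risk premium estimate
print(premium)
```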

  15. Studies on statistical models for polytomously scored test items

    NARCIS (Netherlands)

    Akkermans, Wies


    This dissertation, which is structured as a collection of self-contained papers, will be concerned mainly with differences between item response models. The purpose of item response theory (IRT) is estimation of a hypothesized latent variable, such as, for example, intelligence or ability in...

  16. Relevant Criteria for Testing the Quality of Turbulence Models

    DEFF Research Database (Denmark)

    Frandsen, Sten; Jørgensen, Hans E.; Sørensen, John Dalsgaard


    ...% smaller than the IEC model, for wind turbine hub height levels. The mean is only marginally dependent on trends in time series. It is also found that the coefficient of variation of the measured length scales is about 50%. 3 s and 10 s pre-averaging of wind speed data are relevant for MW-size wind...

  17. Modeling, design and testing of the electrostatic shuffle motor

    NARCIS (Netherlands)

    Tas, Niels Roelof; Wissink, Jeroen; Lammerink, Theodorus S.J.; Sander, Louis; Elwenspoek, Michael Curt; Sander, A.F.M.


    The shuffle motor is a linear electrostatic stepper motor employing a mechanical transformation to obtain large forces and small steps. A model has been made to calculate the step size and the driving voltage as a function of the load force and the motor geometry. The motor consists of three
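The force side of such a model is commonly approximated with the parallel-plate electrostatic relation F = ε0·A·V²/(2d²); the sketch below uses that approximation with invented geometry, as the paper's actual model and dimensions are not given in the abstract.

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def clamp_force(voltage, area, gap):
    """Parallel-plate electrostatic force approximation [N]."""
    return EPS0 * area * voltage ** 2 / (2.0 * gap ** 2)

# Hypothetical geometry for illustration only:
area = 1e-6   # electrode area: 1 mm^2, in m^2
gap = 2e-6    # air gap: 2 um, in m

force = clamp_force(30.0, area, gap)
print(force)
```

In a full step-size model this clamping/deformation force would be balanced against the load force and the plate's elastic restoring force to obtain the step per drive cycle.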

  18. Testing the HTA core model: experiences from two pilot projects

    DEFF Research Database (Denmark)

    Pasternack, Iris; Anttila, Heidi; Mäkelä, Marjukka


    ...their validation feedback, questionnaires to investigators, meeting minutes, emails, and discussions in the coordinating team meetings at the Finnish Office for Health Technology Assessment (FINOHTA). RESULTS: The elementary structure of the HTA Core Model proved useful in preparing HTAs. Clear scoping and good...

  19. Numerical Modelling and Measurement in a Test Secondary Settling Tank

    DEFF Research Database (Denmark)

    Dahl, C.; Larsen, Torben; Petersen, O.


    ...and for comparing measured and calculated results. The numerical model could fairly accurately predict the measured results, and both the measured and the calculated results showed a flow field pattern identical to flow fields in full-scale secondary settling tanks. A specific calibration of the Bingham plastic...

  20. Toward Modeling the Intrinsic Complexity of Test Problems (United States)

    Shoufan, Abdulhadi


    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…