WorldWideScience

Sample records for models reproduce observed

  1. Can a regional climate model reproduce observed extreme temperatures?

    Directory of Open Access Journals (Sweden)

    Peter F. Craigmile

    2013-10-01

Full Text Available Using output from a regional Swedish climate model and observations from the Swedish synoptic observational network, we compare seasonal minimum temperatures from model output and observations using marginal extreme value modeling techniques. We make seasonal comparisons using generalized extreme value models and empirically estimate the shift in the distribution as a function of the regional climate model values, using the Doksum shift function. Spatial and temporal comparisons over south central Sweden are made by building hierarchical Bayesian generalized extreme value models for the observed minima and the regional climate model output. Generally speaking, the regional model is surprisingly well calibrated for minimum temperatures. We do, however, detect a problem in the regional model's ability to produce minimum temperatures close to 0°C. The seasonal spatial effects are quite similar between the data and the regional model. The observations indicate relatively strong warming, especially in the northern region. This signal is present in the regional model, but is not as strong.
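The marginal generalized extreme value step described in this abstract can be sketched as follows; the synthetic winter data, the SciPy calls and the 10-year return level are illustrative assumptions, not the paper's actual analysis. Since the GEV conventionally describes block maxima, seasonal minima are handled by negating the sample:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic seasonal minimum temperatures (deg C): block minima of daily values.
rng = np.random.default_rng(42)
daily = rng.normal(loc=-2.0, scale=5.0, size=(30, 90))  # 30 winters x 90 days
seasonal_min = daily.min(axis=1)

# The GEV describes block *maxima*; fit minima by negating the sample.
shape, loc, scale = genextreme.fit(-seasonal_min)

# Transform a fitted quantile back to the minimum-temperature scale:
# the 10-year return level for the minima.
ret_level_min = -genextreme.ppf(1 - 1.0 / 10, shape, loc=loc, scale=scale)
print(shape, loc, scale, ret_level_min)
```

In a real comparison, the same fit would be repeated on the regional climate model output and the two fitted distributions (or their shift function) compared.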

  2. Can a global model reproduce observed trends in summertime surface ozone levels?

    Directory of Open Access Journals (Sweden)

    S. Koumoutsaris

    2012-01-01

Full Text Available Quantifying trends in surface ozone concentrations is critical for assessing pollution control strategies. Here we use observations and results from a global chemical transport model to examine the trends (1991–2005) in daily maximum 8-hour average concentrations of summertime surface ozone at rural sites in Europe and the United States. We find a decrease in observed ozone concentrations at the high end of the probability distribution at many of the sites in both regions. The model attributes these trends to a decrease in local anthropogenic ozone precursors, although the simulated decreasing trends are overestimated in comparison with the observed ones. The low end of the observed distribution shows small upward trends over Europe and the western US and downward trends in the eastern US. The model cannot reproduce these observed trends, especially over Europe and the western US. In particular, simulated changes between the low and high ends of the distributions in these two regions are not significant. Sensitivity simulations indicate that emissions from faraway source regions do not significantly affect ozone trends at either end of the distribution. This contrasts with previously available results, which indicated that increasing ozone trends at the low percentiles may reflect an increase in ozone background associated with increasing remote sources of ozone precursors. Possible reasons for the discrepancies between observed and simulated trends are discussed.
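A minimal sketch of estimating a trend in the high end of an ozone distribution follows; the synthetic data with a prescribed decline, the choice of the 95th percentile and the ordinary least-squares fit are illustrative assumptions, not the study's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1991, 2006)
# Hypothetical daily-max 8 h ozone (ppb) per summer: 92 days per year,
# with a prescribed -0.5 ppb/yr decline built in.
base = 50 + rng.normal(0, 10, size=(years.size, 92))
ozone = base - 0.5 * (years - years[0])[:, None]

# Annual 95th percentile (the high end of the distribution), then an
# ordinary least-squares trend in ppb per year.
p95 = np.percentile(ozone, 95, axis=1)
slope, intercept = np.polyfit(years, p95, 1)
print(f"high-end trend: {slope:.2f} ppb/yr")
```

The same recipe applied to a low percentile (e.g. the 5th) would give the trend in the low tail discussed in the abstract.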

  3. Can a global model reproduce observed trends in summertime surface ozone levels?

    OpenAIRE

    S. Koumoutsaris; I. Bey

    2012-01-01

Quantifying trends in surface ozone concentrations is critical for assessing pollution control strategies. Here we use observations and results from a global chemical transport model to examine the trends (1991–2005) in daily maximum 8-hour average concentrations of summertime surface ozone at rural sites in Europe and the United States. We find a decrease in observed ozone concentrations at the high end of the probability distribution at many of the sites in both regions. The model attribut...

  4. Can model observers be developed to reproduce radiologists' diagnostic performances? Our study says not so fast!

    Science.gov (United States)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.

    2016-03-01

The purpose of this study was to determine radiologists' diagnostic performances on different image reconstruction algorithms that could be used to optimize image-based model observers. We included a total of 102 pathology-proven breast computed tomography (CT) cases (62 malignant). An iterative image reconstruction (IIR) algorithm was used to obtain 24 reconstructions with different image appearances for each case. Using quantitative image feature analysis, three IIRs and one clinical reconstruction of 50 lesions (25 malignant) were selected for a reader study. The reconstructions spanned a range of image appearance from smooth/low-noise to sharp/high-noise. The trained classifiers' AUCs on these reconstructions ranged from 0.61 (smooth reconstruction) to 0.95 (sharp reconstruction). Six experienced MQSA radiologists read 200 cases (50 lesions × 4 reconstructions) and provided the likelihood of malignancy of each lesion. Radiologists' diagnostic performances (AUC) ranged from 0.70 to 0.89. However, there was no agreement among the six radiologists on which image appearance was best, in terms of yielding the highest diagnostic performance. Specifically, two radiologists indicated that sharper image appearance was diagnostically superior, two indicated that smoother image appearance was diagnostically superior, and two indicated that all image appearances were diagnostically similar. Given the poor agreement among radiologists on the diagnostic ranking of images, it may not be possible to develop a model observer for this particular imaging task.
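The diagnostic performances quoted above are areas under the ROC curve. As a self-contained sketch, AUC can be computed from ratings via the Mann-Whitney formulation, i.e. the probability that a malignant case receives a higher score than a benign case; the four-lesion ratings below are hypothetical:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie), by pairwise comparison."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    # Compare every (positive, negative) pair of scores.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical likelihood-of-malignancy ratings for 4 lesions (1 = malignant).
print(auc_mann_whitney([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```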

  5. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2017-02-01

Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP): the tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds at middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic regions and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to the CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by the CFMIP2 models. The effects of the CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF calculated at the surface and RLCRF at the top of the atmosphere) are studied using a principal component regression method. Results show that the CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds), i.e. stronger (weaker) REs per unit of low/middle (high) cloud, in the simulated global mean cloud profiles; the other is eigenvector biases in the CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to the improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which

  6. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines and buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in predicting the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued, time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data from various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
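The distortion-matrix step admits a compact sketch: with a real, time-independent 2×2 matrix D linearly relating the measured and computed horizontal fields, D can be estimated by least squares over the storm time series. The synthetic fields, the "true" matrix and the noise level below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic horizontal electric fields (Ex, Ey) at one site, n time samples.
n = 1000
E_comp = rng.normal(size=(n, 2))                 # computed (modelled) field
D_true = np.array([[1.3, -0.4], [0.2, 0.8]])     # assumed distortion matrix
E_meas = E_comp @ D_true.T + 0.05 * rng.normal(size=(n, 2))  # noisy "measured"

# Least-squares estimate of the real, time-independent distortion matrix D,
# from the linear relation E_meas ≈ E_comp @ D.T
X, *_ = np.linalg.lstsq(E_comp, E_meas, rcond=None)
D_est = X.T
print(D_est)
```

Re-estimating D from a second, independent storm interval and checking that it matches is the reliability test the abstract describes.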

  7. Reproducing the observed energy-dependent structure of Earth's electron radiation belts during storm recovery with an event-specific diffusion model

    Science.gov (United States)

    Ripoll, J.-F.; Reeves, G. D.; Cunningham, G. S.; Loridan, V.; Denton, M.; Santolík, O.; Kurth, W. S.; Kletzing, C. A.; Turner, D. L.; Henderson, M. G.; Ukhorskiy, A. Y.

    2016-06-01

We present dynamic simulations of energy-dependent losses in the radiation belt "slot region" and the formation of the two-belt structure for the quiet days after the 1 March storm. The simulations combine radial diffusion with a realistic scattering model, based on data-driven, spatially and temporally resolved whistler-mode hiss wave observations from the Van Allen Probes satellites. The simulations reproduce Van Allen Probes observations for all energies and L shells (2-6), including (a) the strong energy dependence of the radiation belt dynamics, (b) an energy-dependent outer boundary of the inner zone that extends to higher L shells at lower energies, and (c) an "S-shaped" energy-dependent inner boundary of the outer zone that results from the competition between diffusive radial transport and losses. We find that the characteristic energy-dependent structure of the radiation belts and slot region is dynamic and can be formed gradually in ~15 days, although the "S shape" can also be reproduced by assuming equilibrium conditions. The highest-energy electrons (E > 300 keV) of the inner region of the outer belt (L ~ 4-5) also constantly decay, demonstrating that hiss wave scattering affects the outer belt during times of an extended plasmasphere. Through these simulations, we explain the full structure in energy and L shell of the belts and the slot formation by hiss scattering during storm recovery. We show the power and complexity of looking dynamically at the effects over all energies and L shells and the need for using data-driven, event-specific conditions.
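The core competition named above, radial diffusion versus hiss-driven loss, can be caricatured with a one-dimensional explicit finite-difference scheme for df/dt = L² d/dL (D_LL/L² df/dL) − f/τ(L). Everything here (the D_LL profile, the loss lifetimes, the grid and boundary conditions) is a made-up minimal sketch, not the authors' event-specific model:

```python
import numpy as np

# Grid in L-shell and assumed (hypothetical) coefficients.
L = np.linspace(2.0, 6.0, 81)
dL = L[1] - L[0]
D_LL = 1e-3 * (L / 4.0) ** 6                       # radial diffusion coeff, 1/day
tau = np.where((L > 2.5) & (L < 4.0), 2.0, 50.0)   # hiss loss lifetime, days

f = np.exp(-((L - 4.5) ** 2) / 0.5)                # initial phase-space density
dt = 0.4 * dL ** 2 / D_LL.max()                    # explicit stability limit

L_face = 0.5 * (L[1:] + L[:-1])
D_face = 0.5 * (D_LL[1:] + D_LL[:-1]) / L_face ** 2

for _ in range(int(15 / dt)):                      # integrate ~15 days
    flux = D_face * np.diff(f) / dL                # (D_LL / L^2) df/dL at faces
    f[1:-1] += dt * (L[1:-1] ** 2 * np.diff(flux) / dL - f[1:-1] / tau[1:-1])
    f[0], f[-1] = 0.0, f[-2]                       # crude boundary conditions

print(round(float(f.max()), 4))
```

With fast losses confined to 2.5 < L < 4, the profile develops a depleted slot between the two regions over the ~15-day integration, the same gradual slot formation described in the abstract.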

  8. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience, as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and in the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing (and thinking about) complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  9. Assessment of Modeling Capability for Reproducing Storm Impacts on TEC

    Science.gov (United States)

    Shim, J. S.; Kuznetsova, M. M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Foerster, M.; Foster, B.; Fuller-Rowell, T. J.; Huba, J. D.; Goncharenko, L. P.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2014-12-01

During geomagnetic storms, the energy transfer from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing the capability of models to reproduce those impacts are important for specifying and forecasting space weather. To quantify storm impacts on TEC, we considered several parameters: TEC changes relative to quiet time (the day before the storm), the TEC difference between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of these parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree longitude sectors. Latitudinal variations were also studied in the two of the eight sectors where data coverage is best. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
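One common form of skill score compares a model's RMSE against a reference forecast such as persistence; whether CCMC used exactly this definition is not stated here, so treat the snippet (and its TEC numbers) as a generic, hypothetical sketch:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def skill_score(obs, model, reference):
    """1 - RMSE(model)/RMSE(reference): 1 is perfect, 0 matches the
    reference forecast (e.g. persistence), negative is worse than it."""
    return 1.0 - rmse(obs, model) / rmse(obs, reference)

obs = [10, 12, 15, 14]          # observed TEC, TECU (hypothetical)
model = [11, 12, 14, 14]        # model output
persistence = [10, 10, 10, 10]  # quiet-day persistence baseline
print(skill_score(obs, model, persistence))
```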

  10. Venusian Polar Vortex reproduced by a general circulation model

    Science.gov (United States)

    Ando, Hiroki; Sugimoto, Norihiko; Takagi, Masahiro

    2016-10-01

Unlike the polar vortices observed in the atmospheres of Earth, Mars and Titan, the observed Venus polar vortex is warmer than the mid-latitudes at cloud-top levels (~65 km). This warm polar vortex is zonally surrounded by a cold latitude band located at ~60 degrees latitude, a unique feature of the Venus atmosphere called the 'cold collar' [e.g. Taylor et al. 1980; Piccioni et al. 2007]. Although these structures have been seen in numerous previous observations, their formation mechanism is still unknown. In addition, an axi-asymmetric feature is always seen in the warm polar vortex. It changes temporally and sometimes shows a hot polar dipole or S-shaped structure, as shown by numerous infrared measurements [e.g. Garate-Lopez et al. 2013; 2015]. However, its vertical structure has not been investigated. To address these problems, we performed a numerical simulation of the Venus atmospheric circulation using a general circulation model named AFES for Venus [Sugimoto et al. 2014] and reproduced these puzzling features. The reproduced structures of the atmosphere and the axi-asymmetric feature are then compared with previous observational results. In addition, a quasi-periodic zonal-mean zonal wind fluctuation is also seen in the Venus polar vortex reproduced in our model. This might explain some observational results [e.g. Luz et al. 2007] and implies that polar vacillation might also occur in the Venus atmosphere, similar to that in the Earth's polar atmosphere. We will also show some initial results on this point in this presentation.

  11. Using a 1-D model to reproduce diurnal SST signals

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2014-01-01

of measurement. A generally preferred approach to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper-ocean temperature. This ESA-supported study focuses on the implementation of the 1-dimensional General Ocean Turbulence Model (GOTM), in order to resolve...... profiles, along with the selection of the coefficients for the 2-band parametrisation of light's penetration in the water column, hold a key role in the agreement of the modelled output with observations. To improve the surface heat budget and the distribution of heat, the code was modified to include...... Institution Upper Ocean Processes Group archive. The successful implementation of the new parametrisations is verified as the model reproduces the diurnal signals seen in in situ measurements. Special focus is given to testing and validation of different set-ups using campaign data from the Atlantic...

  12. Reproducing the observed Cosmic microwave background anisotropies with causal scaling seeds

    OpenAIRE

    Durrer, R.; Kunz, M.; Melchiorri, A.

    2000-01-01

    During the last years it has become clear that global O(N) defects and U(1) cosmic strings do not lead to the pronounced first acoustic peak in the power spectrum of anisotropies of the cosmic microwave background which has recently been observed to high accuracy. Inflationary models cannot easily accommodate the low second peak indicated by the data. Here we construct causal scaling seed models which reproduce the first and second peak. Future, more precise CMB anisotropy and polarization ex...

  13. The inter-observer reproducibility of Shafer's sign.

    Science.gov (United States)

    Qureshi, F; Goble, R

    2009-03-01

Pigment cells in the anterior vitreous (Shafer's sign) are known to be associated with retinal breaks. We sought to assess the reproducibility of Shafer's sign between different grades of ophthalmic staff. In all, 47 patients were examined for Shafer's sign by a consultant vitreo-retinal surgeon, a senior house officer (SHO) and an optician. Cohen's kappa for consultant vs SHO assessment of Shafer's sign was 0.55, while for consultant vs optician assessment kappa was 0.28. Retinal tears were present in 63.8% of our series. Comparing consultant assessment of Shafer's sign with fundoscopy findings, we found a specificity of 93.5% and a sensitivity of 93.8%. Kappa for consultant assessment of Shafer's sign vs break presence was 0.86. Consultant and SHO assessment of Shafer's sign is in moderate agreement, while optician assessment is in fair agreement. These results suggest a relationship between training and the assessment of Shafer's sign. We feel this study suggests caution against undue reliance on Shafer's sign, particularly for inexperienced members of staff.
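Cohen's kappa, used above to quantify inter-observer agreement, corrects the observed agreement for the agreement expected by chance. A small self-contained implementation, with hypothetical gradings rather than the study's data:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2               # by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Shafer's sign gradings (1 = pigment present) by two observers.
consultant = [1, 1, 1, 0, 0, 0, 1, 0]
sho        = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(cohen_kappa(consultant, sho), 2))  # 0.5
```

On the usual verbal scale, 0.41-0.60 is "moderate" and 0.21-0.40 "fair" agreement, matching the abstract's reading of 0.55 and 0.28.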

  14. Reproducing the observed Cosmic microwave background anisotropies with causal scaling seeds

    CERN Document Server

Durrer, R.; Kunz, M.; Melchiorri, A.

    2001-01-01

    During the last years it has become clear that global O(N) defects and U(1) cosmic strings do not lead to the pronounced first acoustic peak in the power spectrum of anisotropies of the cosmic microwave background which has recently been observed to high accuracy. Inflationary models cannot easily accommodate the low second peak indicated by the data. Here we construct causal scaling seed models which reproduce the first and second peak. Future, more precise CMB anisotropy and polarization experiments will however be able to distinguish them from the ordinary adiabatic models.

  15. SNe Ia: Can Chandrasekhar Mass Explosions Reproduce the Observed Zoo?

    CERN Document Server

    Baron, E

    2014-01-01

The question of the nature of the progenitor of Type Ia supernovae (SNe Ia) is important both for our detailed understanding of stellar evolution and for their use as cosmological probes of the dark energy. Many of the basic features of SNe Ia can be understood directly from the nuclear physics, a fact which Gerry would have appreciated. We present an overview of the current observational and theoretical situation and show that it is not incompatible with most SNe Ia being the result of thermonuclear explosions near the Chandrasekhar mass.

  16. Can a coupled meteorology–chemistry model reproduce the ...

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990 to 2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both the top of atmosphere (TOA) and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and northern Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA a

  17. Reproducibility of LCA models of crude oil production.

    Science.gov (United States)

    Vafi, Kourosh; Brandt, Adam R

    2014-11-04

Scientific models are ideally reproducible, with results that converge despite varying methods. In practice, divergence between models often remains, due to varied assumptions, incompleteness, or simply avoidable flaws. We examine LCA greenhouse gas (GHG) emissions models to test the reproducibility of their estimates for well-to-refinery inlet gate (WTR) GHG emissions. We use the Oil Production Greenhouse gas Emissions Estimator (OPGEE), an open-source engineering-based life cycle assessment (LCA) model, as the reference model for this analysis. We examine seven prior studies based on six models, testing the reproducibility of their results by successive experiments that align model assumptions and boundaries. The root-mean-square error (RMSE) between results varies between ∼1 and 8 g CO2 eq/MJ LHV when model inputs are not aligned. After model alignment, RMSE generally decreases only slightly. The proprietary nature of some of the models hinders explanations for divergence between the results. Because verification of the results of LCA GHG emissions is often not possible by direct measurement, we recommend the development of open-source models for use in energy policy. Such practice will lead to iterative scientific review, improvement of models, and a more reliable understanding of emissions.
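The RMSE comparison and the effect of aligning model assumptions can be sketched as below; the per-field estimates and the 1.5 g/MJ "omitted flaring term" are invented for illustration, not taken from the study:

```python
import numpy as np

def rmse(x, y):
    return float(np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2)))

# Hypothetical WTR estimates (g CO2 eq/MJ) for five fields from two models.
ref_model   = np.array([6.0, 9.5, 12.0, 7.2, 15.0])   # reference (OPGEE-like)
other_model = np.array([4.0, 8.0, 15.0, 6.0, 11.0])

print(round(rmse(ref_model, other_model), 2))

# "Aligning" a boundary: suppose the other model omitted a 1.5 g/MJ flaring
# term that the reference includes; adding it back shrinks the divergence.
print(round(rmse(ref_model, other_model + 1.5), 2))
```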

  18. Reproducibility Issues : Avoiding Pitfalls in Animal Inflammation Models

    NARCIS (Netherlands)

Laman, Jon D.; Kooistra, Susanne M.; Clausen, Björn E.

    2017-01-01

    In light of an enhanced awareness of ethical questions and ever increasing costs when working with animals in biomedical research, there is a dedicated and sometimes fierce debate concerning the (lack of) reproducibility of animal models and their relevance for human inflammatory diseases. Despite

  19. Modeling and evaluating repeatability and reproducibility of ordinal classifications

    NARCIS (Netherlands)

    J. de Mast; W.N. van Wieringen

    2010-01-01

    This paper argues that currently available methods for the assessment of the repeatability and reproducibility of ordinal classifications are not satisfactory. The paper aims to study whether we can modify a class of models from Item Response Theory, well established for the study of the reliability

  20. Reproducibility Issues: Avoiding Pitfalls in Animal Inflammation Models.

    Science.gov (United States)

    Laman, Jon D; Kooistra, Susanne M; Clausen, Björn E

    2017-01-01

In light of an enhanced awareness of ethical questions and the ever-increasing costs of working with animals in biomedical research, there is a dedicated and sometimes fierce debate concerning the (lack of) reproducibility of animal models and their relevance for human inflammatory diseases. Despite evident advancements in searching for alternatives, that is, replacing, reducing, and refining animal experiments (the three R's of Russell and Burch, 1959), understanding the complex interactions of the cells of the immune system, the nervous system and the affected tissue/organ during inflammation critically relies on in vivo models. Consequently, scientific advancement and ultimately novel therapeutic interventions depend on improving the reproducibility of animal inflammation models. As a prelude to the hands-on protocols described in the remainder of this volume, here we summarize potential pitfalls of preclinical animal research and provide resources and background reading on how to avoid them.

  1. The variability of Sun-like stars: reproducing observed photometric trends

    CERN Document Server

    Shapiro, A I; Krivova, N A; Schmutz, W K; Ball, W T; Knaack, R; Rozanov, E V; Unruh, Y C

    2014-01-01

The Sun and stars with low magnetic activity levels become photometrically brighter when their activity increases. Magnetically more active stars display the opposite behaviour and get fainter when their activity increases. We reproduce the observed photometric trends in stellar variations with a model that treats stars as hypothetical Suns with coverage by magnetic features different from that of the Sun. The presented model attributes the variability of stellar spectra to the imbalance between the contributions from different components of the solar atmosphere, such as dark starspots and bright faculae. A stellar spectrum is calculated from the spectra of the individual components by weighting them with the corresponding disc area coverages. The latter are obtained by extrapolating the solar dependences of spot and facular disc area coverages on chromospheric activity to stars with different levels of mean chromospheric activity. We have found that the contribution by starspots to the variability increases faster...

  2. Reproducibility and Transparency in Ocean-Climate Modeling

    Science.gov (United States)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and the variety of supercomputing technology. We have made progress on this problem in the context of a large project: the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets, we provide the tools for generating these from the sources, rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model, we provide (version-controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
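The idea of keeping checksums on experiment output under version control can be sketched as a manifest of SHA-256 digests; the output name below is hypothetical and the scheme is a generic sketch, not MOM6's actual tooling:

```python
import hashlib
import json

def checksum_outputs(named_outputs):
    """Build a version-controllable manifest of SHA-256 checksums of
    experiment outputs: if a code or input change alters any answer,
    the manifest diff shows exactly which output moved."""
    manifest = {name: hashlib.sha256(data).hexdigest()
                for name, data in named_outputs.items()}
    return json.dumps(manifest, indent=2, sort_keys=True)

# Hypothetical output file content for illustration.
print(checksum_outputs({"ocean.stats": b"hello"}))
```

Committing the manifest alongside the configuration means a later `git diff` distinguishes answer-changing commits from answer-preserving refactors.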

  3. Research Spotlight: Improved model reproduces the 2003 European heat wave

    Science.gov (United States)

    Schultz, Colin

    2011-04-01

    In August 2003, record-breaking temperatures raged across much of Europe. In France, maximum temperatures of 37°C (99°F) persisted for 9 days straight, the longest such stretch since 1873. About 40,000 deaths (14,000 in France alone) were attributed to the extreme heat and low humidity. Various climate conditions must come into alignment to produce extreme weather like the 2003 heat wave, and despite a concerted effort, forecasting models have so far been unable to accurately reproduce the event—including the modern European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble modeling system for seasonal forecasts, which went into operation in 2007. (Geophysical Research Letters, doi:10.1029/2010GL046455, 2011)

  4. Reproducibility of UAV-based photogrammetric surface models

    Science.gov (United States)

    Anders, Niels; Smith, Mike; Cammeraat, Erik; Keesstra, Saskia

    2016-04-01

Soil erosion, rapid geomorphological change and vegetation degradation are major threats to the human and natural environment in many regions. Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry are invaluable tools for the collection of highly detailed aerial imagery and the subsequent low-cost production of 3D landscapes for an assessment of landscape change. Despite the widespread use of UAVs for image acquisition in monitoring applications, the reproducibility of UAV data products has not been explored in detail. This paper investigates this reproducibility by comparing the surface models and orthophotos derived from different UAV flights that vary in flight direction and altitude. The study area is located near Lorca, Murcia, SE Spain, a semi-arid medium-relief locale. The area comprises terraced agricultural fields that were abandoned about 40 years ago and have suffered subsequent damage through piping and gully erosion. In this work we focused on variation in cell size, vertical and horizontal accuracy, and the horizontal positioning of recognizable landscape features. The results suggest that flight altitude has a significant impact on reconstructed point density and related cell size, whilst flight direction affects the spatial distribution of vertical accuracy. The horizontal positioning of landscape features is relatively consistent between the different flights. We conclude that UAV data products are suitable for monitoring campaigns for land cover purposes or geomorphological mapping, but special care is required when they are used for monitoring changes in elevation.

  5. A reproducible nonlethal animal model for studying cyanide poisoning.

    Science.gov (United States)

    Vick, J; Marino, M T; von Bredow, J D; Kaminskis, A; Brewer, T

    2000-12-01

Previous studies using bolus intravenous injections of sodium cyanide have been used to model the sudden exposure to high concentrations of cyanide that could occur on the battlefield. This study was designed to develop a model that would simulate the type of exposure that could occur during actual low-level continuous exposure to cyanide gas, and then compare it with the bolus model. Cardiovascular and respiratory recordings taken from anesthetized dogs have been used previously to characterize the lethal effects of cyanide. The intravenous, bolus injection of 2.5 mg/kg sodium cyanide provides a model in which a greater than lethal concentration is attained. In contrast, our model uses a slow, intravenous infusion of cyanide to titrate each animal to its own inherent end point, which coincides with the amount of cyanide needed to induce death through respiratory arrest. In this model, therapeutic intervention can be used to restore respiration and allow for the complete recovery of the animals. After recovery, the same animal can be given a second infusion of cyanide, followed again by treatment and recovery, providing a reproducible end point. This end point can then be expressed as the total amount of cyanide per body weight (mg/kg) required to kill. In this study, the average dose of sodium cyanide among 12 animals was 1.21 mg/kg, which is approximately half the cyanide used in the bolus model. Thus, titration to respiratory arrest followed by resuscitation provides a repetitive-use animal model that can be used to test the efficacy of various forms of pretreatment and/or therapy without the loss of a single animal.

  6. Feasibility and observer reproducibility of speckle tracking echocardiography in congenital heart disease patients.

    Science.gov (United States)

    Mokhles, Palwasha; van den Bosch, Annemien E; Vletter-McGhie, Jackie S; Van Domburg, Ron T; Ruys, Titia P E; Kauer, Floris; Geleijnse, Marcel L; Roos-Hesselink, Jolien W

    2013-09-01

    The twisting motion of the heart has an important role in the function of the left ventricle. Speckle tracking echocardiography is able to quantify left ventricular (LV) rotation and twist. So far this new technique has not been used in congenital heart disease patients. The aim of our study was to investigate the feasibility and the intra- and inter-observer reproducibility of LV rotation parameters in adult patients with congenital heart disease. The study population consisted of 66 consecutive patients seen in the outpatient clinic (67% male, mean age 31 ± 7.7 years, NYHA class 1 ± 0.3) with a variety of congenital heart disease. First, feasibility was assessed in all patients. Intra- and inter-observer reproducibility was assessed for the patients in which speckle tracking echocardiography was feasible. Adequate image quality, for performing speckle echocardiography, was found in 80% of patients. The bias for the intra-observer reproducibility of the LV twist was 0.0°, with 95% limits of agreement of -2.5° and 2.5° and for interobserver reproducibility the bias was 0.0°, with 95% limits of agreement of -3.0° and 3.0°. Intra- and inter-observer measurements showed a strong correlation (0.86 and 0.79, respectively). Also a good repeatability was seen. The mean time to complete full analysis per subject for the first and second measurement was 9 and 5 minutes, respectively. Speckle tracking echocardiography is feasible in 80% of adult patients with congenital heart disease and shows excellent intra- and inter-observer reproducibility. © 2013, Wiley Periodicals, Inc.
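The bias and 95% limits of agreement quoted above follow the standard Bland-Altman construction: the mean of the paired differences, plus and minus 1.96 times their standard deviation. A minimal sketch of that computation (the LV-twist readings below are invented for illustration, not taken from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeated LV-twist readings (degrees) by the same observer
first  = [12.1, 10.4, 14.9, 9.8, 11.5]
second = [11.8, 10.9, 14.5, 10.2, 11.9]
bias, lo, hi = bland_altman(first, second)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A bias near 0° with narrow limits of agreement, as reported in the abstract, indicates that neither observer systematically over- or under-reads the twist.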

  7. A reproducible brain tumour model established from human glioblastoma biopsies

    Directory of Open Access Journals (Sweden)

    Li Xingang

    2009-12-01

Full Text Available Abstract Background Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. Methods In this work, we collected data on growth kinetics from a material of 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. Results The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower grade tumours engrafted. Average time from transplantation to the onset of symptoms was 125 days ± 11.5 SEM. Histologically, the primary xenografts recapitulated the invasive features of the parent tumours while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth, several months prior to the onset of symptoms. Conclusions In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression.

  8. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    negligible and the procedure was simple to perform and easily reproduced. It may be a useful tool in the investigation of antiangiogenic and anticancer therapeutics. Ultrasound was found to be a highly accurate tool for tumor diagnosis, localization and measurement and may be recommended for monitoring tumor growth in this model.

  9. A force-based model to reproduce stop-and-go waves in pedestrian dynamics

    CERN Document Server

    Chraibi, Mohcine; Schadschneider, Andreas

    2015-01-01

Stop-and-go waves in single-file movement are a phenomenon that is observed empirically in pedestrian dynamics. It manifests itself in the coexistence of two phases: moving and stopping pedestrians. We show analytically, based on a simplified one-dimensional scenario, that under some conditions the system can have unstable homogeneous solutions; hence, oscillations in the trajectories and instabilities emerge during simulations. To our knowledge there exists no force-based model which is collision- and oscillation-free and at the same time can reproduce phase separation. We develop a new force-based model for pedestrian dynamics able to reproduce qualitatively the phenomenon of phase separation. We investigate analytically the stability condition of the model and define regimes of parameter values where phase separation can be observed. We show by means of simulations that the predefined conditions lead in fact to the expected behavior and validate our model with respect to empirical findings.

  10. Recent reproducibility estimates indicate that negative evidence is observed 30-200 times before publication

    CERN Document Server

    Ingre, Michael

    2016-01-01

The Open Science Collaboration recently reported that 36% of published findings from psychological studies were reproducible by their independent team of researchers. We can use this information to estimate the statistical power needed to produce these findings under various assumptions about prior probabilities and type-1 errors, and from that calculate the expected distribution of positive and negative evidence. Comparing this distribution to observations indicating that 90% of published findings in the psychological literature are statistically significant and support the authors' hypothesis yields an estimate of publication bias. This estimate indicates that, under plausible priors, negative evidence was expected to be observed 30-200 times before one negative finding was published.
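The bookkeeping behind such an estimate can be sketched with elementary probability: with prior probability of a true effect, statistical power, and type-1 error rate, the fraction of significant outcomes is prior × power + (1 − prior) × α; everything else is negative evidence. Folding in the observed 90% share of positive findings in the published literature gives the expected number of negatives observed per published negative. The specific prior and power below are illustrative assumptions, not the paper's fitted values:

```python
def negatives_per_published_negative(prior, power, alpha=0.05,
                                     published_positive_share=0.9):
    """Expected number of negative results observed for each negative result
    that reaches publication (positives assumed always published)."""
    p_pos = prior * power + (1 - prior) * alpha   # fraction of significant outcomes
    neg_per_pos = (1 - p_pos) / p_pos             # negatives occurring per positive
    # For each published negative there are share/(1-share) published positives
    pos_per_pub_neg = published_positive_share / (1 - published_positive_share)
    return neg_per_pos * pos_per_pub_neg

# A long-shot hypothesis tested with modest power
print(negatives_per_published_negative(prior=0.1, power=0.2))
```

With these example inputs the result (about 129) falls inside the 30-200 range quoted in the title.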

  11. New model for datasets citation and extraction reproducibility in VAMDC

    CERN Document Server

    Zwölf, Carlo Maria; Dubernet, Marie-Lise

    2016-01-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favour reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  12. New model for datasets citation and extraction reproducibility in VAMDC

    Science.gov (United States)

    Zwölf, Carlo Maria; Moreau, Nicolas; Dubernet, Marie-Lise

    2016-09-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favor reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  13. Radiographic signs of scaphoid union after bone grafting: The analysis of inter-observer agreement and intra-observer reproducibility

    Directory of Open Access Journals (Sweden)

    Mirić Dragan

    2005-01-01

Full Text Available INTRODUCTION The diagnosis of radiological union of scaphoid bone after bone grafting requires clear evidence of bony trabeculae traversing the graft from the proximal to the distal pole on at least two of four standard scaphoid views. This sign is the only objective assessment of union. Radiographs of the scaphoid taken 18 weeks after operation, however, can be difficult to interpret. This fact led us to question whether radiographs of the scaphoid at 18 weeks provide a reliable and objective indication of union. OBJECTIVE Our study was, therefore, designed to determine the reliability of the radiographic diagnosis of scaphoid union after bone grafting by testing the degree of inter-observer agreement and reproducibility. METHODS Out of 30 sets of the scaphoid bone radiographs after bone grafting taken 18 weeks after operation, 15 of good quality were selected. Each set included four views: postero-anterior, lateral, semi-pronated and semi-supinated. Seven observers were tested: three orthopedic consultants, three residents and one consultant in radiology. Each was presented with 15 sets of radiographs designated from 1 to 15 and each was asked to answer the question: "Are there trabeculae crossing the fracture site?" Possible answers were 'yes' or 'no'. Eight weeks later, the same 15 sets of radiographs were marked in alphabetic order from A to K and presented to the same seven observers. Data was then analyzed and expressed in terms of interobserver agreement in pairs and intra-observer reproducibility. Calculation was done by kappa statistics so that the degree of disagreement was taken into account and allowance was made for chance agreement. Kappa values can vary from -1.0 (complete disagreement) through 0 (chance agreement) to +1.0 (complete agreement). RESULTS For all 15 sets of radiographs, the degree of agreement between each pair of observers was illustrated in Table 2. It demonstrated the level of agreement between each pair of the seven
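The kappa statistic described above is κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each rater's marginal frequencies. A minimal sketch for two observers' yes/no calls on 15 radiograph sets (the ratings below are invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

obs_a = ['yes'] * 9 + ['no'] * 6
obs_b = ['yes'] * 8 + ['no'] * 7   # disagrees with obs_a on one item
print(round(cohens_kappa(obs_a, obs_b), 3))
```

Dividing out the chance term is exactly the "allowance for chance agreement" the abstract mentions: 14/15 raw agreement here shrinks to κ ≈ 0.865.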

  14. Global Simulations of the March 17, 2013 Storm: Importance of Boundary Conditions in Reproducing Ring Current Observations

    Science.gov (United States)

    Yu, Y.; Jordanova, V.; Larsen, B.; Claudepierre, S. G.; Welling, D. T.; Skoug, R. M.; Kletzing, C.

    2013-12-01

    As modeling capabilities become increasingly available for the study of inner magnetospheric dynamics, the models' boundary conditions remain a crucial controlling factor in reproducing observations. In this study, we use the kinetic Ring current-Atmosphere Interaction Model (RAM) two-way coupled with the global MHD model BATS-R-US to study the evolution of the ring current and its feedback to the ionospheric electrodynamics during the March 17, 2013 storm. The MHD code solves fluid quantities and provides the inner magnetosphere code with plasma sheet plasma, which is the primary source for the development of the ring current. In this study, we examine the effect of different boundary conditions in specifying the plasma sheet plasma source on reproducing observations of the inner magnetospheric/subauroral region, such as in-situ observations (e.g., flux, magnetic fields, and electric fields) from Van Allen Probes (RBSP), field-aligned currents from AMPERE, and global convection maps from SuperDARN. These different boundary settings include a Maxwellian distribution assumption with MHD single-fluid temperature and density, a Kappa distribution assumption with MHD single-fluid temperature and density, and a bi-Maxwellian distribution with anisotropic pressures passed from the MHD code. Results indicate that a Kappa distribution at the boundary of RAM leads to a better ring current flux prediction than that with a Maxwellian distribution assumption, as well as a more realistic spatial distribution of ion anisotropy, which is important in driving electromagnetic ion cyclotron waves. The anisotropic pressure coupling between the kinetic code and the MHD code with a bi-Maxwellian function significantly improves the agreement with observations, especially the Dst index prediction.

  15. A model project for reproducible papers: critical temperature for the Ising model on a square lattice

    CERN Document Server

    Dolfi, M; Hehn, A; Imriška, J; Pakrouski, K; Rønnow, T F; Troyer, M; Zintchenko, I; Chirigati, F; Freire, J; Shasha, D

    2014-01-01

In this paper we present a simple, yet typical simulation in statistical physics, consisting of large scale Monte Carlo simulations followed by an involved statistical analysis of the results. The purpose is to provide an example publication to explore tools for writing reproducible papers. The simulation estimates the critical temperature where the Ising model on the square lattice becomes magnetic to be Tc/J = 2.26934(6) using a finite size scaling analysis of the crossing points of Binder cumulants. We provide a virtual machine which can be used to reproduce all figures and results.
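The Binder cumulant used in the crossing analysis is U = 1 − ⟨m⁴⟩/(3⟨m²⟩²), computed from magnetization samples at each lattice size; curves for different sizes cross near Tc. A minimal sketch of the estimator itself (the paper's full workflow ships in its virtual machine; this is only the formula):

```python
import numpy as np

def binder_cumulant(m):
    """Binder cumulant U = 1 - <m^4> / (3 <m^2>^2) from magnetization samples."""
    m = np.asarray(m, dtype=float)
    return 1.0 - (m**4).mean() / (3.0 * (m**2).mean()**2)

# Fully ordered samples (m = ±1) give the low-temperature limit U = 2/3;
# a zero-mean Gaussian (high-temperature limit) gives U -> 0.
print(binder_cumulant([1.0, -1.0, 1.0, 1.0]))
```

Because the two limits are size-independent, plotting U against temperature for several lattice sizes and locating the common crossing is what yields the Tc estimate quoted above.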

  16. A structured model of video reproduces primary visual cortical organisation.

    Directory of Open Access Journals (Sweden)

    Pietro Berkes

    2009-09-01

    Full Text Available The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1. In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely parallelled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.

  17. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    Science.gov (United States)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.
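A first-order rate equation dP/dt = −kP, of the kind the abstract reduces the kinetics to, has the closed-form solution P(t) = P₀e^(−kt). A generic sketch comparing that analytic solution with simple forward-Euler integration (the rate constant and initial value are arbitrary placeholders, not fitted peroxidation parameters):

```python
import math

def analytic(p0, k, t):
    """Closed-form solution of dP/dt = -k P."""
    return p0 * math.exp(-k * t)

def euler(p0, k, t, steps=100000):
    """Forward-Euler integration of the same equation."""
    dt = t / steps
    p = p0
    for _ in range(steps):
        p -= k * p * dt
    return p

p0, k, t = 1.0, 0.8, 2.0
print(analytic(p0, k, t), euler(p0, k, t))
```

The agreement between the two illustrates why reducing the full system to first order makes prediction cheap: no numerical integration is needed at all.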

  18. Reproducible Infection Model for Clostridium perfringens in Broiler Chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments w...

  19. Current reinforcement model reproduces center-in-center vein trajectory of Physarum polycephalum.

    Science.gov (United States)

    Akita, Dai; Schenz, Daniel; Kuroda, Shigeru; Sato, Katsuhiko; Ueda, Kei-Ichi; Nakagaki, Toshiyuki

    2017-06-01

    Vein networks span the whole body of the amoeboid organism in the plasmodial slime mould Physarum polycephalum, and the network topology is rearranged within an hour in response to spatio-temporal variations of the environment. It has been reported that this tube morphogenesis is capable of solving mazes, and a mathematical model, named the 'current reinforcement rule', was proposed based on the adaptability of the veins. Although it is known that this model works well for reproducing some key characters of the organism's maze-solving behaviour, one important issue is still open: In the real organism, the thick veins tend to trace the shortest possible route by cutting the corners at the turn of corridors, following a center-in-center trajectory, but it has not yet been examined whether this feature also appears in the mathematical model, using corridors of finite width. In this report, we confirm that the mathematical model reproduces the center-in-center trajectory of veins around corners observed in the maze-solving experiment. © 2017 Japanese Society of Developmental Biologists.
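One common form of the current reinforcement rule adapts each vein's conductivity toward the flux it carries, dD/dt = |Q| − D, with Poiseuille-type flux Q = (D/L)Δp. Already on two parallel tubes this dynamic selects the shorter route, which is the corner-cutting tendency discussed above. A minimal sketch (parameters are illustrative, not taken from the paper):

```python
def current_reinforcement(L_short=1.0, L_long=2.0, total_flow=1.0,
                          dt=0.01, steps=5000):
    """Two parallel tubes between a source and a sink; dD/dt = |Q| - D."""
    D = [1.0, 1.0]                      # initial conductivities
    L = [L_short, L_long]
    for _ in range(steps):
        g = [D[0] / L[0], D[1] / L[1]]  # edge conductances
        dp = total_flow / (g[0] + g[1]) # pressure drop from flow conservation
        Q = [g[0] * dp, g[1] * dp]      # flux through each tube
        D = [d + dt * (abs(q) - d) for d, q in zip(D, Q)]
    return D

D_short, D_long = current_reinforcement()
print(D_short, D_long)  # the shorter tube keeps its conductivity, the longer decays
```

The report's contribution is to show that the same feedback, run on corridors of finite width, also reproduces the center-in-center trajectory at corners.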

  20. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  1. Extreme Rainfall Events Over Southern Africa: Assessment of a Climate Model to Reproduce Daily Extremes

    Science.gov (United States)

    Williams, C.; Kniveton, D.; Layberry, R.

    2007-12-01

It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to extreme events, due to a number of factors including extensive poverty, disease and political instability. Rainfall variability and the identification of rainfall extremes are functions of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of a state-of-the-art climate model's ability to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. Once the model's ability to reproduce extremes has been assessed, idealised regions of SST anomalies are used to force the model, with the overall aim of investigating the ways in which SST anomalies influence rainfall extremes over southern Africa. In this paper, results from sensitivity testing of the UK Meteorological Office Hadley Centre climate model's domain size are firstly presented. Then simulations of current climate from the model, operating in both regional and global mode, are compared to the MIRA dataset at daily timescales. Thirdly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with extremes from the MIRA dataset. Finally, the results from the idealised SST experiments are briefly presented, suggesting associations between rainfall extremes and both local and remote SST anomalies.

  2. Accuracy and reproducibility of measurements on plaster models and digital models created using an intraoral scanner.

    Science.gov (United States)

    Camardella, Leonardo Tavares; Breuning, Hero; de Vasconcellos Vilella, Oswaldo

    2017-05-01

    The purpose of the present study was to evaluate the accuracy and reproducibility of measurements made on digital models created using an intraoral color scanner compared to measurements on dental plaster models. This study included impressions of 28 volunteers. Alginate impressions were used to make plaster models, and each volunteers' dentition was scanned with a TRIOS Color intraoral scanner. Two examiners performed measurements on the plaster models using a digital caliper and measured the digital models using Ortho Analyzer software. The examiners measured 52 distances, including tooth diameter and height, overjet, overbite, intercanine and intermolar distances, and the sagittal relationship. The paired t test was used to assess intra-examiner performance and measurement accuracy of the two examiners for both plaster and digital models. The level of clinically relevant differences between the measurements according to the threshold used was evaluated and a formula was applied to calculate the chance of finding clinically relevant errors on measurements on plaster and digital models. For several parameters, statistically significant differences were found between the measurements on the two different models. However, most of these discrepancies were not considered clinically significant. The measurement of the crown height of upper central incisors had the highest measurement error for both examiners. Based on the interexaminer performance, reproducibility of the measurements was poor for some of the parameters. Overall, our findings showed that most of the measurements on digital models created using the TRIOS Color scanner and measured with Ortho Analyzer software had a clinically acceptable accuracy compared to the same measurements made with a caliper on plaster models, but the measuring method can affect the reproducibility of the measurements.

  3. Quantitative Evaluation of Ionosphere Models for Reproducing Regional TEC During Geomagnetic Storms

    Science.gov (United States)

    Shim, J. S.; Kuznetsova, M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B.; Foster, B.; Fuller-Rowell, T. J.; Goncharenko, L. P.; Huba, J.; Mitchell, C. N.; Ridley, A. J.; Fedrizzi, M.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2015-12-01

    TEC (Total Electron Content) is one of the key parameters in description of the ionospheric variability that has influence on the accuracy of navigation and communication systems. To assess current TEC modeling capability of ionospheric models during geomagnetic storms and to establish a baseline against which future improvement can be compared, we quantified the ionospheric models' performance by comparing modeled vertical TEC values with ground-based GPS TEC measurements and Multi-Instrument Data Analysis System (MIDAS) TEC. The comparison focused on North America and Europe sectors during selected two storm events: 2006 AGU storm (14-15 Dec. 2006) and 2013 March storm (17-19 Mar. 2013). The ionospheric models used for this study range from empirical to physics-based, and physics-based data assimilation models. We investigated spatial and temporal variations of TEC during the storms. In addition, we considered several parameters to quantify storm impacts on TEC: TEC changes compared to quiet time, rate of TEC change, and maximum increase/decrease during the storms. In this presentation, we focus on preliminary results of the comparison of the models performance in reproducing the storm-time TEC variations using the parameters and skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.

  4. Fourier modeling of the BOLD response to a breath-hold task: Optimization and reproducibility.

    Science.gov (United States)

    Pinto, Joana; Jorge, João; Sousa, Inês; Vilela, Pedro; Figueiredo, Patrícia

    2016-07-15

Cerebrovascular reactivity (CVR) reflects the capacity of blood vessels to adjust their caliber in order to maintain a steady supply of brain perfusion, and it may provide a sensitive disease biomarker. Measurement of the blood oxygen level dependent (BOLD) response to a hypercapnia-inducing breath-hold (BH) task has been frequently used to map CVR noninvasively using functional magnetic resonance imaging (fMRI). However, the best modeling approach for the accurate quantification of CVR maps remains an open issue. Here, we compare and optimize Fourier models of the BOLD response to a BH task with a preparatory inspiration, and assess the test-retest reproducibility of the associated CVR measurements, in a group of 10 healthy volunteers studied over two fMRI sessions. Linear combinations of sine-cosine pairs at the BH task frequency and its successive harmonics were added sequentially in a nested models approach, and were compared in terms of the adjusted coefficient of determination and corresponding variance explained (VE) of the BOLD signal, as well as the number of voxels exhibiting significant BOLD responses, the estimated CVR values, and their test-retest reproducibility. The brain average VE increased significantly with the Fourier model order, up to the 3rd order. However, the number of responsive voxels increased significantly only up to the 2nd order, and started to decrease from the 3rd order onwards. Moreover, no significant relative underestimation of CVR values was observed beyond the 2nd order. Hence, the 2nd order model was concluded to be the optimal choice for the studied paradigm. This model also yielded the best test-retest reproducibility results, with intra-subject coefficients of variation of 12 and 16% and an intra-class correlation coefficient of 0.74. In conclusion, our results indicate that a Fourier series set consisting of a sine-cosine pair at the BH task frequency and its two harmonics is a suitable model for BOLD-fMRI CVR measurements.
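The nested Fourier models can be sketched as a design matrix of sine-cosine pairs at the task frequency and its harmonics, fit by ordinary least squares, with the adjusted R² used to compare model orders. The task frequency, sampling, and "BOLD" signal below are synthetic stand-ins, not the study's acquisition parameters:

```python
import numpy as np

def fourier_design(t, f0, order):
    """Columns: intercept + sin/cos pairs at f0 and its harmonics up to `order`."""
    cols = [np.ones_like(t)]
    for h in range(1, order + 1):
        cols += [np.sin(2 * np.pi * h * f0 * t), np.cos(2 * np.pi * h * f0 * t)]
    return np.column_stack(cols)

def adjusted_r2(y, X):
    """Adjusted coefficient of determination from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = (resid**2).sum()
    ss_tot = ((y - y.mean())**2).sum()
    n, p = X.shape
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

t = np.arange(0, 300, 1.0)          # 300 s of 1-s samples
f0 = 1 / 60                         # one breath-hold cycle per minute (assumed)
# Synthetic "BOLD" response containing the fundamental and one higher harmonic
y = 1.5 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.cos(2 * np.pi * 2 * f0 * t)
print(adjusted_r2(y, fourier_design(t, f0, order=1)),
      adjusted_r2(y, fourier_design(t, f0, order=2)))
```

Because adjusted R² penalizes each added sine-cosine pair, it only rewards a higher order when the extra harmonic captures genuine signal, mirroring the nested-model comparison used to pick the optimal order.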

  5. Models that include supercoiling of topological domains reproduce several known features of interphase chromosomes.

    Science.gov (United States)

    Benedetti, Fabrizio; Dorier, Julien; Burnier, Yannis; Stasiak, Andrzej

    2014-03-01

    Understanding the structure of interphase chromosomes is essential to elucidate regulatory mechanisms of gene expression. During recent years, high-throughput DNA sequencing expanded the power of chromosome conformation capture (3C) methods that provide information about reciprocal spatial proximity of chromosomal loci. Since 2012, it is known that entire chromatin in interphase chromosomes is organized into regions with strongly increased frequency of internal contacts. These regions, with the average size of ∼1 Mb, were named topological domains. More recent studies demonstrated presence of unconstrained supercoiling in interphase chromosomes. Using Brownian dynamics simulations, we show here that by including supercoiling into models of topological domains one can reproduce and thus provide possible explanations of several experimentally observed characteristics of interphase chromosomes, such as their complex contact maps.

  6. An analytical nonlinear model for laminate multiferroic composites reproducing the DC magnetic bias dependent magnetoelectric properties.

    Science.gov (United States)

    Lin, Lizhi; Wan, Yongping; Li, Faxin

    2012-07-01

    In this work, we propose an analytical nonlinear model for laminate multiferroic composites in which the magnetic-field-induced strain in the magnetostrictive phase is described by a standard square law that takes the stress effect into account, whereas the ferroelectric phase retains a linear piezoelectric response. Furthermore, differing from previous models that assume uniform deformation, we take into account stress attenuation and adopt non-uniform deformation along the layer thickness in both the piezoelectric and magnetostrictive phases. Applying this model to the L-T and L-L modes of sandwiched Terfenol-D/lead zirconate titanate/Terfenol-D composites reproduces well the observed dc magnetic field (H(dc)) dependence of the magnetoelectric coefficients, which all reach their maximum at an H(dc) of about 500 Oe. The model also suggests that stress attenuation along the layer thickness in practical composites should be taken into account. Finally, the model indicates that a high volume fraction of the magnetostrictive phase is required to obtain giant magnetoelectric coupling, in agreement with existing models.

  7. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    Science.gov (United States)

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
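    The flavor of reward-modulated, perturbation-based learning can be conveyed with a toy example. The sketch below is an illustrative stand-in, not the paper's rule or network: a single linear readout is trained with exploratory weight perturbations whose effect is reinforced by a delayed, scalar, end-of-trial reward relative to a running baseline.

```python
import numpy as np

rng = np.random.default_rng(3)
w = np.zeros(4)                            # trainable weights
x = np.array([1.0, -1.0, 0.5, 2.0])        # fixed input pattern (arbitrary)
target = 1.5                               # desired end-of-trial output
baseline = 0.0                             # running estimate of expected reward

for trial in range(500):
    noise = 0.1 * rng.standard_normal(4)   # exploratory perturbation
    y = (w + noise) @ x                    # trial output under perturbation
    reward = -(y - target) ** 2            # delayed scalar reward, end of trial only
    w += 0.1 * (reward - baseline) * noise # reinforce perturbations that paid off
    baseline += 0.3 * (reward - baseline)  # update the reward expectation

print(round(float(w @ x), 2))
```

    Only a scalar reward delivered after each trial guides the updates, yet the output converges toward the target, illustrating why such rules are considered biologically plausible alternatives to real-time error feedback.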

  8. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    Directory of Open Access Journals (Sweden)

    R. Li

    2015-11-01

    Full Text Available Reproducibility and reliability are fundamental principles of scientific research. A compiling setup, which includes a specific compiler version and compiler flags, provides essential technical support for Earth system modeling. With the fast development of computer software and hardware, compiling setups have to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation obtained with an original compiling setup may be irreproducible with a newer compiling setup, because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and that a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results obtained with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.
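    Why a change of compiling setup can break bitwise reproducibility is easy to demonstrate. The toy below is an illustration of the underlying floating-point effect, not the paper's code: optimizing compilers may reassociate a floating-point sum, and a reordered evaluation can produce a result that differs at the bit level, which a long model integration can then amplify.

```python
import struct

# Values chosen so that evaluation order visibly changes the result
xs = [0.1] * 10 + [1e16, -1e16]

as_written = 0.0
for x in xs:                       # strict left-to-right accumulation
    as_written += x                # the small terms are absorbed by 1e16

reassociated = (1e16 + -1e16) + sum([0.1] * 10)   # one legal reordering

bits = lambda v: struct.pack('<d', v).hex()       # raw IEEE-754 bit pattern
print(bits(as_written), bits(reassociated))       # different bit patterns
```

    Both expressions are mathematically identical, yet one yields 0.0 and the other roughly 1.0: exactly the kind of "trivial round-off" divergence the study's bitwise identical compiling setups are designed to rule out.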

  9. Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques

    NARCIS (Netherlands)

    Hazeveld, Aletta; Huddleston Slater, James J. R.; Ren, Yijin

    INTRODUCTION: Rapid prototyping is a fast-developing technique that might play a significant role in the eventual replacement of plaster dental models. The aim of this study was to investigate the accuracy and reproducibility of physical dental models reconstructed from digital data by several rapid

  10. Voxel-level reproducibility assessment of modality independent elastography in a pre-clinical murine model

    Science.gov (United States)

    Flint, Katelyn M.; Weis, Jared A.; Yankeelov, Thomas E.; Miga, Michael I.

    2015-03-01

    Changes in tissue mechanical properties, measured non-invasively by elastography methods, have been shown to be an important diagnostic tool, particularly for cancer. Tissue elasticity information, tracked over the course of therapy, may be an important prognostic indicator of tumor response to treatment. While many elastography techniques exist, this work reports on the use of a novel form of elastography that uses image texture to reconstruct elastic property distributions in tissue (i.e., a modality independent elastography (MIE) method) within the context of a pre-clinical breast cancer system [1,2]. The elasticity results have previously shown good correlation with independent mechanical testing [1]. Furthermore, MIE has been successfully utilized to localize and characterize lesions in both phantom experiments and simulation experiments with clinical data [2,3]. However, the reproducibility of this method has not been characterized in previous work. The goal of this study is to evaluate voxel-level reproducibility of MIE in a pre-clinical model of breast cancer. Bland-Altman analysis of co-registered repeat MIE scans in this preliminary study showed a reproducibility index of 24.7% (scaled to a percent of maximum stiffness) at the voxel level. As opposed to many reports in the magnetic resonance elastography (MRE) literature that speak to reproducibility measures of the bulk organ, these results establish MIE reproducibility at the voxel level; i.e., the reproducibility of locally-defined mechanical property measurements throughout the tumor volume.
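    A voxel-level Bland-Altman reproducibility index of the kind quoted above can be sketched as follows. This is a hedged illustration with synthetic repeat "scans", and the exact scaling convention (1.96 x SD of paired differences, expressed as a percent of maximum stiffness) is an assumption about the metric, not the authors' published definition.

```python
import numpy as np

# Synthetic co-registered repeat stiffness maps (arbitrary units)
rng = np.random.default_rng(1)
scan1 = rng.uniform(1.0, 10.0, size=1000)         # voxel stiffness, scan 1
scan2 = scan1 + rng.normal(0.0, 0.5, size=1000)   # repeat scan with measurement noise

diff = scan1 - scan2
loa = 1.96 * diff.std(ddof=1)                     # half-width of the limits of agreement
index_pct = 100.0 * loa / max(scan1.max(), scan2.max())  # percent of maximum stiffness
print(round(index_pct, 1))
```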

  11. Assessment of a climate model to reproduce rainfall variability and extremes over Southern Africa

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2010-01-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the ability of a state-of-the-art climate model to simulate climate at daily timescales is assessed using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with

  12. A stable and reproducible human blood-brain barrier model derived from hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Romeo Cecchelli

    Full Text Available The human blood-brain barrier (BBB) is a selective barrier formed by human brain endothelial cells (hBECs), which is important to ensure adequate neuronal function and to protect the central nervous system (CNS) from disease. The development of human in vitro BBB models is thus of utmost importance for drug discovery programs related to CNS diseases. Here, we describe a method to generate a human BBB model using cord blood-derived hematopoietic stem cells. The cells were initially differentiated into ECs, followed by the induction of BBB properties by co-culture with pericytes. The brain-like endothelial cells (BLECs) express tight junctions and transporters typically observed in brain endothelium and maintain expression of most in vivo BBB properties for at least 20 days. The model is very reproducible, since it can be generated from stem cells isolated from different donors and in different laboratories, and could be used to predict the CNS distribution of compounds in humans. Finally, we provide evidence that the Wnt/β-catenin signaling pathway mediates in part the BBB-inductive properties of pericytes.

  13. Development of new criteria for cortical bone histomorphometry in femoral neck: intra- and inter-observer reproducibility.

    Science.gov (United States)

    Tong, Xiao-Yu; Malo, Markus; Tamminen, Inari S; Isaksson, Hanna; Jurvelin, Jukka S; Kröger, Heikki

    2015-01-01

    Histomorphometry is commonly applied to study bone remodeling. Histological definitions of cortical bone boundaries have not been consistent. In this study, new criteria for specific definition of the transitional zone between the cortical and cancellous bone in the femoral neck were developed. The intra- and inter-observer reproducibility of this method was determined by quantitative histomorphometry and areal overlapping analysis. The undecalcified histological sections of femoral neck specimens (n = 6; from men aged 17-59 years) were processed and scanned to acquire histological images of complete bone sections. Specific criteria were applied to define histological boundaries. "Absolute cortex area" consisted of pure cortical bone tissue only, and was defined mainly based on the size of composite canals and their distance to an additional "guide" boundary (so-called "preliminary cortex boundary," the clear demarcation line of density between compact cortex and sparse trabeculae). Endocortical bone area was defined by recognizing characteristic endocortical structures adjacent to the preliminary cortical boundary. The present results suggested moderate to high reproducibility for low-magnification parameters (e.g., cortical bone area). The coefficient of variation (CV %) ranged from 0.02 to 5.61 in the intra-observer study and from 0.09 to 16.41 in the inter-observer study. However, the intra-observer reproducibility of some high-magnification parameters (e.g., osteoid perimeter/endocortical perimeter) was lower (CV %, 0.33-87.9). The overlapping of three histological areas in repeated analyses revealed highest intra- and inter-observer reproducibility for the absolute cortex area. This study provides specific criteria for the definition of histological boundaries for femoral neck bone specimens, which may aid more precise cortical bone histomorphometry.

  14. Impact of soil parameter and physical process on reproducibility of hydrological processes by land surface model in semiarid grassland

    Science.gov (United States)

    Miyazaki, S.; Yorozu, K.; Asanuma, J.; Kondo, M.; Saito, K.

    2014-12-01

    Land surface models (LSMs) represent land-atmosphere interactions in the Earth system models used for climate change research. In this study, we evaluated the impact of soil parameters and physical processes on the reproducibility of hydrological processes by the LSM Minimal Advanced Treatments of Surface Interaction and RunOff (MATSIRO; Takata et al., 2003, GPC), forced by meteorological data observed at grasslands in the semiarid climates of China and Mongolia. The testing of MATSIRO was carried out in offline mode over the semiarid grassland sites at Tongyu (44.42 deg. N, 122.87 deg. E, altitude: 184 m) in China, and Kherlen Bayan Ulaan (KBU; 47.21 deg. N, 108.74 deg. E, altitude: 1,235 m) and Arvaikheer (46.23 deg. N, 102.82 deg. E, altitude: 1,813 m) in Mongolia. Although all sites are located in semiarid grassland, the climate conditions differ among them: annual air temperature and precipitation are 5.7 deg. C and 388 mm (Tongyu), 1.2 deg. C and 180 mm (KBU), and 0.4 deg. C and 245 mm (Arvaikheer), which allows us to evaluate the effect of climate conditions on model performance. Three kinds of experiments were carried out: runs with the default parameters (CTL), with observed parameters (OBS) for soil physics, hydrology, and vegetation, and with a refined MATSIRO that includes the effect of ice in the thermal parameters and of unfrozen water below freezing, using the same parameters as the OBS run (OBSr). The validation data were provided by CEOP (http://www.ceop.net/), RAISE (http://raise.suiri.tsukuba.ac.jp/), and GAME-AAN (Miyazaki et al., 2004, JGR) for Tongyu, KBU, and Arvaikheer, respectively. The net radiation, soil temperature (Ts), and latent heat flux (LE) were well reproduced by the OBS and OBSr runs. The change of soil physical and hydraulic parameters affected the reproducibility of soil temperature (Ts) and soil moisture (SM) as well as the energy flux components, especially the sensible heat flux (H) and soil heat flux (G). The reason for the great improvement on the

  15. A standardised and reproducible model of intra-abdominal infection and abscess formation in rats

    NARCIS (Netherlands)

    Bosscha, K; Nieuwenhuijs, VB; Gooszen, AW; van Duijvenbode-Beumer, H; Visser, MR; Verweij, Willem; Akkermans, LMA

    2000-01-01

    Objective: To develop a standardised and reproducible model of intra-abdominal infection and abscess formation in rats. Design: Experimental study. Setting: University hospital, The Netherlands. Subjects: 36 adult male Wistar rats. Interventions: In 32 rats, peritonitis was produced using two differ

  16. Reproducibility of summertime diurnal precipitation over northern Eurasia simulated by CMIP5 climate models

    Science.gov (United States)

    Hirota, N.; Takayabu, Y. N.

    2015-12-01

    Reproducibility of diurnal precipitation over northern Eurasia simulated by CMIP5 climate models in their historical runs was evaluated, in comparison with station data (NCDC-9813) and satellite data (GSMaP-V5). We first calculated diurnal cycles by averaging precipitation at each local solar time (LST) in June-July-August during 1981-2000 over the continent of northern Eurasia (0-180E, 45-90N). Then we examined the occurrence time of maximum precipitation and the contribution of diurnally varying precipitation to the total precipitation. The contribution of diurnal precipitation was about 21% in both NCDC-9813 and GSMaP-V5. The maximum precipitation occurred at 18 LST in NCDC-9813 but 16 LST in GSMaP-V5, indicating some uncertainties even in the observational datasets. The diurnal contribution of the CMIP5 models varied largely, from 11% to 62%, and their timing of the precipitation maximum ranged from 11 LST to 20 LST. Interestingly, the contribution and the timing showed a strong negative correlation of -0.65: the models with larger diurnal precipitation showed a precipitation maximum earlier, around noon. Next, we compared the sensitivity of precipitation to surface temperature and tropospheric humidity between the 5 models with large diurnal precipitation (LDMs) and the 5 models with small diurnal precipitation (SDMs). Precipitation in LDMs showed high sensitivity to surface temperature, indicating a close relationship with local instability. On the other hand, synoptic disturbances were more active in SDMs, with a dominant role for large-scale condensation, and precipitation in SDMs was more related to tropospheric moisture. Therefore, the relative importance of local instability and synoptic disturbances is suggested to be an important factor in determining the contribution and timing of diurnal precipitation. Acknowledgment: This study is supported by Green Network of Excellence (GRENE) Program by the Ministry of Education, Culture, Sports, Science and Technology
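    The two diagnostics described above, the occurrence time of maximum precipitation and the diurnal contribution, can be sketched on synthetic data. The definition of "contribution" used below (the positive deviation of the mean cycle from its daily mean, relative to the total) is an illustrative assumption, not necessarily the authors' exact metric.

```python
import numpy as np

# Synthetic hourly precipitation: 92 summer days x 24 local solar hours,
# with an imposed afternoon peak near 16 LST (values are arbitrary)
rng = np.random.default_rng(2)
hours = np.arange(24)
base = 1.0 + 0.8 * np.cos(2 * np.pi * (hours - 16) / 24)
precip = base + 0.05 * rng.standard_normal((92, 24))

cycle = precip.mean(axis=0)                 # mean diurnal cycle at each LST
peak_lst = int(hours[np.argmax(cycle)])     # occurrence time of maximum precipitation
contribution = (cycle - cycle.mean()).clip(min=0).sum() / cycle.sum()

print(peak_lst, round(100 * contribution, 1))
```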

  17. An uncombed inversion of multi-wavelength observations reproducing the Net Circular Polarization in a sunspot's penumbra

    CERN Document Server

    Beck, C

    2010-01-01

    I derived a geometrical model of the penumbral magnetic field topology from an uncombed inversion setup that aimed at reproducing the NCP of simultaneous spectra in near-IR (1.56 μm) and VIS (630 nm) spectral lines. I inverted the spectra of five photospheric lines with a model that mimicked vertically interlaced magnetic fields with two components, labeled background field and flow channels. The flow channels were modeled as a perturbation of the background field with a Gaussian shape using the SIRGAUS code. The location and extension of the Gaussian perturbation on the optical depth scale were then converted to a geometrical height scale. I investigated the relative amount of magnetic flux in the flow channels and the background field atmosphere. The uncombed model is able to reproduce the NCP well on the limb side of the spot and less well on the center side; the VIS lines are better reproduced than the near-IR lines. The Evershed flow happens along nearly horizontal field lines close to the solar surface. The ...

  18. Traction force needed to reproduce physiologically observed uterine movement: technique development, feasibility assessment, and preliminary findings.

    Science.gov (United States)

    Swenson, Carolyn W; Luo, Jiajia; Chen, Luyun; Ashton-Miller, James A; DeLancey, John O L

    2016-08-01

    This study aimed to describe a novel strategy to determine the traction forces needed to reproduce physiologic uterine displacement in women with and without prolapse. Participants underwent dynamic stress magnetic resonance imaging (MRI) testing as part of a study examining apical uterine support. Physiologic uterine displacement was determined by analyzing uterine location in images taken at rest and at maximal Valsalva. Force-displacement curves were calculated based on intraoperative cervical traction testing. The intraoperative force required to achieve the uterine displacement measured during MRI was then estimated from these curves. Women were categorized into three groups based on pelvic organ support: group 1 (normal apical and vaginal support), group 2 (normal apical support but vaginal prolapse present), and group 3 (apical prolapse). Data from 19 women were analyzed: five in group 1, five in group 2, and nine in group 3. Groups were similar in terms of age, body mass index (BMI), and parity. Median operating room (OR) force required for uterine displacement measured during MRI was 0.8 N [interquartile range (IQR) 0.62-3.22], and apical ligament stiffness determined using MRI uterine displacement was 0.04 N/mm (IQR 0.02-0.08); differences between groups were nonsignificant. Uterine locations determined at rest and during maximal traction were lower in the OR compared with MRI in all groups. Using this investigative strategy, we determined that only 0.8 N of traction force in the OR was required to achieve maximal physiologic uterine displacement seen during dynamic (maximal Valsalva) MRI testing, regardless of the presence or absence of prolapse.
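    The core of the strategy above is reading an intraoperative force off a force-displacement curve at the displacement observed on MRI. The sketch below illustrates that interpolation step with entirely synthetic curve values; the numbers and the local-stiffness estimate are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Hypothetical intraoperative traction curve (synthetic values)
force = np.array([0.0, 0.5, 1.0, 2.0, 4.0])     # N, applied cervical traction
disp = np.array([0.0, 12.0, 22.0, 38.0, 60.0])  # mm, resulting uterine displacement

mri_disp = 20.0                                 # mm, displacement seen at maximal Valsalva
est_force = float(np.interp(mri_disp, disp, force))          # force matching MRI displacement
stiffness = (force[2] - force[1]) / (disp[2] - disp[1])      # local slope, N/mm

print(round(est_force, 2), round(stiffness, 3))
```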

  19. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Full Text Available Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  20. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer.

    Science.gov (United States)

    Kondo, Kosuke; Nemoto, Masaaki; Masuda, Hiroyuki; Okonogi, Shinichi; Nomoto, Jun; Harada, Naoyuki; Sugo, Nobuo; Miyazaki, Chikao

    2015-01-01

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysms based on computed tomography angiography (CTA) image data using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of the cerebral aneurysm were compared between the CTA image and the rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and the rapid prototyping model, but errors were noted in their thickness. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and the size of the cerebral aneurysm should be judged comprehensively, together with other neuroimaging, in consideration of these errors.

  1. MRI assessment of knee osteoarthritis: Knee Osteoarthritis Scoring System (KOSS) - inter-observer and intra-observer reproducibility of a compartment-based scoring system

    Energy Technology Data Exchange (ETDEWEB)

    Kornaat, Peter R.; Ceulemans, Ruth Y.T.; Kroon, Herman M.; Bloem, Johan L. [Leiden University Medical Center, Department of Radiology, Leiden (Netherlands); Riyazi, Naghmeh; Kloppenburg, Margreet [Leiden University Medical Center, Department of Rheumatology, Leiden (Netherlands); Carter, Wayne O.; Woodworth, Thasia G. [Pfizer Groton, Groton, Connecticut (United States)

    2005-02-01

    To develop a scoring system for quantifying osteoarthritic changes of the knee as identified by magnetic resonance (MR) imaging, and to determine its inter- and intra-observer reproducibility, in order to monitor medical therapy in research studies. Two independent observers evaluated 25 consecutive MR examinations of the knee in patients with previously defined clinical symptoms and radiological signs of osteoarthritis. On a 1.5 T system, we acquired coronal and sagittal proton density- and T2-weighted dual spin echo (SE) images, sagittal three-dimensional T1-weighted gradient echo (GE) images with fat suppression, and axial dual turbo SE images with fat suppression. Images were scored for the presence of cartilaginous lesions, osteophytes, subchondral cysts, bone marrow edema, and for meniscal abnormalities. Presence and size of effusion, synovitis and Baker's cyst were recorded. All parameters were ranked on a previously defined, semiquantitative scale, reflecting increasing severity of findings. Kappa, weighted kappa and the intraclass correlation coefficient (ICC) were used to determine inter- and intra-observer variability. Inter-observer reproducibility was good (ICC value 0.77). Inter- and intra-observer reproducibility for individual parameters was good to very good (inter-observer ICC value 0.63-0.91; intra-observer ICC value 0.76-0.96). The presented comprehensive MR scoring system for osteoarthritic changes of the knee has a good to very good inter-observer and intra-observer reproducibility. Thus the score form with its definitions can be used for standardized assessment of osteoarthritic changes to monitor medical therapy in research studies. (orig.)
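    Of the agreement statistics named above, Cohen's kappa is the simplest to state in code. The sketch below uses synthetic ordinal scores from two hypothetical observers; the study additionally used weighted kappa and the ICC, which are not shown here.

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa: agreement between two raters beyond chance."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                        # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (po - pe) / (1.0 - pe)

# Synthetic semiquantitative scores (0-3) from two observers
obs1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
obs2 = [0, 1, 2, 1, 3, 1, 0, 2, 3, 2]
print(round(cohens_kappa(obs1, obs2), 2))
```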

  2. Rainfall variability and extremes over southern Africa: Assessment of a climate model to reproduce daily extremes

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2009-04-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the ability of a state-of-the-art climate model to simulate climate at daily timescales is assessed using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. The ability of a climate model to simulate current climate provides some indication of how much confidence can be applied to its future predictions. In this paper, simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. This concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of rainfall variability over southern Africa. 
Secondly, the ability of the model to reproduce daily rainfall extremes will

  3. The intra-observer reproducibility of cardiovascular magnetic resonance myocardial feature tracking strain assessment is independent of field strength

    Energy Technology Data Exchange (ETDEWEB)

    Schuster, Andreas, E-mail: andreas_schuster@gmx.net [Division of Imaging Sciences and Biomedical Engineering, King's College London British Heart Foundation BHF Centre of Excellence, National Institute of Health Research NIHR Biomedical Research Centre at Guy's and St. Thomas' NHS Foundation Trust, Wellcome Trust and Engineering and Physical Sciences Research Council EPSRC Medical Engineering Centre, The Rayne Institute, St. Thomas' Hospital, London (United Kingdom); Department of Cardiology and Pulmonology and Heart Research Centre, Georg-August-University, Göttingen (Germany); Morton, Geraint, E-mail: geraint.morton@kcl.ac.uk [Division of Imaging Sciences and Biomedical Engineering, King's College London British Heart Foundation BHF Centre of Excellence, National Institute of Health Research NIHR Biomedical Research Centre at Guy's and St. Thomas' NHS Foundation Trust, Wellcome Trust and Engineering and Physical Sciences Research Council EPSRC Medical Engineering Centre, The Rayne Institute, St. Thomas' Hospital, London (United Kingdom); Hussain, Shazia T., E-mail: shazia.1.hussain@kcl.ac.uk [Division of Imaging Sciences and Biomedical Engineering, King's College London British Heart Foundation BHF Centre of Excellence, National Institute of Health Research NIHR Biomedical Research Centre at Guy's and St. Thomas' NHS Foundation Trust, Wellcome Trust and Engineering and Physical Sciences Research Council EPSRC Medical Engineering Centre, The Rayne Institute, St. Thomas' Hospital, London (United Kingdom); and others

    2013-02-15

    Background: Cardiovascular magnetic resonance myocardial feature tracking (CMR-FT) is a promising novel method for quantification of myocardial wall mechanics from standard steady-state free precession (SSFP) images. We sought to determine whether magnetic field strength affects the intra-observer reproducibility of CMR-FT strain analysis. Methods: We studied 2 groups, each consisting of 10 healthy subjects, at 1.5 T or 3 T. Analysis was performed at baseline and after 4 weeks using dedicated CMR-FT prototype software (Tomtec, Germany) to analyze standard SSFP cine images. Right ventricular (RV) and left ventricular (LV) longitudinal strain (Ell{sub RV} and Ell{sub LV}) and LV long-axis radial strain (Err{sub LAX}) were derived from the 4-chamber cine, and LV short-axis circumferential and radial strains (Ecc{sub SAX}, Err{sub SAX}) from the short-axis orientation. Strain parameters were assessed together with LV ejection fraction (EF) and volumes. Intra-observer reproducibility was determined by comparing the first and the second analysis in both groups. Results: In all volunteers resting strain parameters were successfully derived from the SSFP images. There was no difference in strain parameters, volumes and EF between field strengths (p > 0.05). In general, Ecc{sub SAX} was the most reproducible strain parameter as determined by the coefficient of variation (CV) at 1.5 T (CV 13.3% and 46%, global and segmental respectively) and 3 T (CV 17.2% and 31.1%, global and segmental respectively). The least reproducible parameter was Ell{sub RV} (CV 1.5 T 28.7% and 53.2%; 3 T 43.5% and 63.3%, global and segmental respectively). Conclusions: CMR-FT results are similar, with reasonable intra-observer reproducibility, in different groups of volunteers at 1.5 T and 3 T. CMR-FT is a promising novel technique and our data indicate that results might be transferable between field strengths. However there is a considerable amount of segmental variability indicating that further

  4. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Jun-fang, E-mail: tianhustbjtu@hotmail.com [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China)

    2012-09-10

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown has been explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and of free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown has been explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.
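
    The model above extends the cellular-automaton family of traffic models. For readers unfamiliar with the approach, one update step of the classical Nagel-Schreckenberg automaton (a simpler ancestor, not the average space gap model of this paper) can be sketched as follows; all parameter values are illustrative:

```python
import random

def nasch_step(positions, velocities, road_length, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a circular road. positions/velocities: equal-length lists, cars
    sorted by position; cells hold at most one car."""
    n = len(positions)
    new_v = []
    for i in range(n):
        # free cells ahead, up to the next car (periodic boundary)
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)    # 1. acceleration
        v = min(v, gap)                      # 2. braking to avoid collision
        if v > 0 and rng.random() < p_slow:  # 3. random slowdown (dawdling)
            v -= 1
        new_v.append(v)
    new_pos = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_pos, new_v

# With randomization disabled, three cars simply advance one cell each:
pos, vel = nasch_step([0, 3, 6], [0, 0, 0], road_length=10, p_slow=0.0)
print(pos, vel)  # [1, 4, 7] [1, 1, 1]
```

    Iterating this step at sufficiently high density spontaneously produces backward-moving jams; the three-phase models discussed above refine exactly these rules to also capture synchronized flow.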

  5. Hidden-variable models for the spin singlet: I. Non-local theories reproducing quantum mechanics

    CERN Document Server

    Di Lorenzo, Antonio

    2011-01-01

    A non-local hidden variable model reproducing the quantum mechanical probabilities for a spin singlet is presented. The non-locality is concentrated in the distribution of the hidden variables. The model otherwise satisfies both the hypothesis of outcome independence, made in the derivation of Bell inequality, and of compliance with Malus's law, made in the derivation of Leggett inequality. It is shown through the prescription of a protocol that the non-locality can be exploited to send information instantaneously provided that the hidden variables can be measured, even though they cannot be controlled.

  6. On some problems with reproducing the Standard Model fields and interactions in five-dimensional warped brane world models

    CERN Document Server

    Smolyakov, Mikhail N

    2015-01-01

    In the present paper we discuss some problems which arise when the matter, gauge and Higgs fields are allowed to propagate in the bulk of five-dimensional brane world models with a compact extra dimension, and their zero Kaluza-Klein modes are supposed to exactly reproduce the Standard Model fields and their interactions.

  7. Inter-observer reproducibility of measurements of range of motion in patients with shoulder pain using a digital inclinometer

    Directory of Open Access Journals (Sweden)

    de Winter Andrea F

    2004-06-01

    Full Text Available Abstract Background Reproducible measurements of the range of motion are an important prerequisite for the interpretation of study results. The digital inclinometer is considered to be a useful instrument because it is inexpensive and easy to use. No previous study assessed inter-observer reproducibility of range of motion measurements with a digital inclinometer by physical therapists in a large sample of patients. Methods Two physical therapists independently measured the passive range of motion of glenohumeral abduction and external rotation in 155 patients with shoulder pain. Agreement was quantified by calculating the mean difference between the observers, the standard deviation (SD) of this difference, and the limits of agreement, defined as the mean difference ± 1.96*SD of this difference. Reliability was quantified by means of the intraclass correlation coefficient (ICC). Results The limits of agreement were 0.8 ± 19.6 for glenohumeral abduction and -4.6 ± 18.8 for external rotation (affected side), and quite similar for the contralateral side and for the differences between sides. The percentage agreement within 10° for these measurements was 72% and 70%, respectively. The ICC ranged from 0.28 to 0.90 (0.83 and 0.90 for the affected side). Conclusions The inter-observer agreement was found to be poor. If individual patients are assessed by two different observers, differences in range of motion of less than 20–25 degrees cannot be distinguished from measurement error. In contrast, acceptable reliability was found for the inclinometric measurements of the affected side and the differences between the sides, indicating that the inclinometer can be used in studies in which groups are compared.
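
    The agreement statistics described above (mean difference, SD of the differences, and 95% limits of agreement) follow the standard Bland-Altman method and are straightforward to reproduce. A minimal sketch, using hypothetical paired measurements rather than the study's data:

```python
import statistics

def bland_altman_limits(obs1, obs2):
    """Mean difference and 95% limits of agreement (mean ± 1.96*SD)
    between two observers' paired measurements."""
    diffs = [a - b for a, b in zip(obs1, obs2)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the differences
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Hypothetical paired abduction measurements (degrees) by two observers
obs_a = [95, 110, 120, 88, 132, 101]
obs_b = [92, 115, 118, 90, 128, 104]
mean_d, lo, hi = bland_altman_limits(obs_a, obs_b)
```

    If measurement error were negligible, the interval (lo, hi) would be narrow around zero; limits of roughly ±20°, as reported above, mean two observers can disagree by that much on the same patient purely through measurement error.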

  8. Reproducibility blues.

    Science.gov (United States)

    Pulverer, Bernd

    2015-11-12

    Research findings advance science only if they are significant, reliable and reproducible. Scientists and journals must publish robust data in a way that renders it optimally reproducible. Reproducibility has to be incentivized and supported by the research infrastructure but without dampening innovation.

  9. Individual Colorimetric Observer Model.

    Directory of Open Access Journals (Sweden)

    Yuta Asano

    Full Text Available This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches.

  10. Reproducibility of parameter learning with missing observations in naive Wnt Bayesian network trained on colorectal cancer samples and doxycycline-treated cell lines.

    Science.gov (United States)

    Sinha, Shriprakash

    2015-07-01

    In this manuscript the reproducibility of parameter learning with missing observations in a naive Bayesian network, and its effect on the prediction results for Wnt signaling activation in colorectal cancer, is tested. The training of the network is carried out separately on doxycycline-treated LS174T cell lines (GSE18560) as well as normal and adenoma samples (GSE8671). A computational framework to test the reproducibility of the parameters is designed in order to check the veracity of the prediction results. Detailed experimental analysis suggests that the prediction results are accurate and reproducible, with negligible deviations. Anomalies in the estimated parameters are attributed to representation issues of the Bayesian network model. High prediction accuracies are reported for normal (N) and colon-related adenoma (AD), colorectal cancer (CRC), carcinoma (C), adenocarcinoma (ADC) and replication error colorectal cancer (RER CRC) test samples. Test samples from inflammatory bowel diseases (IBD) do not fare well in the prediction test. Also, an interesting case regarding hypothesis testing came up while proving the statistical significance of the different design setups of the Bayesian network model. It was found that hypothesis testing may not be the correct way to check the significance between design setups, especially when the structure of the model is the same and the model is trained on a single piece of test data. The significance test does have value when the datasets are independent. Finally, in comparison to biologically inspired models, the naive Bayesian model may give accurate results, but this accuracy comes at the cost of a loss of crucial biological knowledge which might help reveal hidden relations among intra/extracellular factors affecting the Wnt pathway.

  11. Validation of EURO-CORDEX regional climate models in reproducing the variability of precipitation extremes in Romania

    Science.gov (United States)

    Dumitrescu, Alexandru; Busuioc, Aristita

    2016-04-01

    EURO-CORDEX is the European branch of the international CORDEX initiative that aims to provide improved regional climate change projections for Europe. The main objective of this paper is to document the performance of individual models in reproducing the variability of precipitation extremes in Romania. Here an ensemble of three EURO-CORDEX regional climate models (RCMs), under scenario RCP4.5, is analysed and inter-compared: DMI-HIRHAM5, KNMI-RACMO2.2 and MPI-REMO. Compared to previous studies, in which RCM validation for the Romanian climate has mainly been performed on the mean state and at station scale, a more quantitative approach to precipitation extremes is proposed. In this respect, to allow a more reliable comparison with observations, a high-resolution daily precipitation gridded data set was used as the observational reference (CLIMHYDEX project). The comparison between the RCM outputs and observed grid point values has been made by calculating three extreme precipitation indices, recommended by the Expert Team on Climate Change Detection Indices (ETCCDI), for the 1976-2005 period: R10MM, annual count of days when precipitation ≥10mm; RX5DAY, annual maximum 5-day precipitation; and R95P%, fraction of annual total precipitation due to daily precipitation > the 95th percentile. The RCMs' capability to reproduce the mean state for these variables, as well as the main modes of their spatial variability (given by the first three EOF patterns), are analysed. The investigation confirms the ability of the RCMs to simulate the main features of precipitation extreme variability over Romania, but some deficiencies in reproducing their regional characteristics were found (for example, overestimation of the mean state, especially over the extra-Carpathian regions). 
This work has been realised within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian

  12. [Amniocentesis trainer: development of a cheap and reproducible new training model].

    Science.gov (United States)

    Tassin, M; Cordier, A-G; Laher, G; Benachi, A; Mandelbrot, L

    2012-11-01

    Amniocentesis is the most common invasive procedure for prenatal diagnosis. It is essential to master this sampling technique prior to performing more complex ultrasound-guided interventions (cordocentesis, drain insertion). Training is a challenge because of the risks associated with the procedure, as well as the impact on the patient's anxiety. An amniocentesis simulator allows for safe training and repeat interventions, thus accelerating the learning curve, and also allows for periodic evaluation of proficiency. We present here a new, simple, and cost-effective amniotrainer model that reproduces real-life conditions, using chicken breast and condoms filled with water.

  13. A novel highly reproducible and lethal nonhuman primate model for orthopox virus infection.

    Directory of Open Access Journals (Sweden)

    Marit Kramski

    Full Text Available The intentional re-introduction of Variola virus (VARV), the agent of smallpox, into the human population is of great concern due to its bio-terroristic potential. Moreover, zoonotic infections with Cowpox virus (CPXV) and Monkeypox virus (MPXV) cause severe diseases in humans. Smallpox vaccines presently available can have severe adverse effects that are no longer acceptable. The efficacy and safety of new vaccines and antiviral drugs for use in humans can only be demonstrated in animal models. The existing nonhuman primate models, using VARV and MPXV, need very high viral doses that have to be applied intravenously or intratracheally to induce a lethal infection in macaques. To overcome these drawbacks, the infectivity and pathogenicity of a particular CPXV was evaluated in the common marmoset (Callithrix jacchus). A CPXV named calpox virus was isolated from a lethal orthopox virus (OPV) outbreak in New World monkeys. We demonstrated that marmosets infected with calpox virus, not only via the intravenous but also the intranasal route, reproducibly develop symptoms resembling smallpox in humans. Infected animals died within 1-3 days after onset of symptoms, even when very low infectious viral doses of 5x10^2 pfu were applied intranasally. Infectious virus was demonstrated in blood, saliva and all organs analyzed. We present the first characterization of a new OPV infection model inducing a disease in common marmosets comparable to smallpox in humans. Intranasal virus inoculation, mimicking the natural route of smallpox infection, led to reproducible infection. In vivo titration resulted in an MID50 (minimal monkey infectious dose, 50%) of 8.3x10^2 pfu of calpox virus, which is approximately 10,000-fold lower than the MPXV and VARV doses applied in the macaque models. Therefore, the calpox virus/marmoset model is a suitable nonhuman primate model for the validation of vaccines and antiviral drugs. 
Furthermore, this model can help study mechanisms of OPV pathogenesis.

  14. Digital versus plaster study models: how accurate and reproducible are they?

    Science.gov (United States)

    Abizadeh, Neilufar; Moles, David R; O'Neill, Julian; Noar, Joseph H

    2012-09-01

    Objective: To compare measurements of occlusal relationships and arch dimensions taken from digital study models with those taken from plaster models. Design: Laboratory study. Setting: The Orthodontic Department, Kettering General Hospital, Kettering, UK. Methods and materials: One hundred and twelve sets of study models with a range of malocclusions and various degrees of crowding were selected. Occlusal features were measured manually with digital callipers on the plaster models. The same measurements were performed on digital images of the study models. Each method was carried out twice in order to check for intra-operator variability. The repeatability and reproducibility of the methods were assessed. Results: Statistically significant differences between the two methods were found. In 8 of the 16 occlusal features measured, the plaster measurements were more repeatable. However, those differences were not of sufficient magnitude to have clinical relevance. In addition there were statistically significant systematic differences for 12 of the 16 occlusal features, with the plaster measurements being greater for 11 of these, indicating that the digital model scans were not a true representation of the plaster models. Conclusions: The repeatability of digital models compared with plaster models is satisfactory for clinical applications, although this study demonstrated some systematic differences. Digital study models can therefore be considered for use as an adjunct to clinical assessment of the occlusion, but as yet may not supersede current methods for scientific purposes.

  15. Geomagnetic Observations and Models

    CERN Document Server

    Mandea, Mioara

    2011-01-01

    This volume provides comprehensive and authoritative coverage of all the main areas linked to geomagnetic field observation, from instrumentation to methodology, on ground or near-Earth. Efforts are also focused on a 21st century e-Science approach to open access to all geomagnetic data, but also to the data preservation, data discovery, data rescue, and capacity building. Finally, modeling magnetic fields with different internal origins, with their variation in space and time, is an attempt to draw together into one place the traditional work in producing models as IGRF or describing the magn

  16. On the reproducibility of spatiotemporal traffic dynamics with microscopic traffic models

    CERN Document Server

    Knorr, Florian

    2012-01-01

    Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of...

  17. On the reproducibility of spatiotemporal traffic dynamics with microscopic traffic models

    Science.gov (United States)

    Knorr, Florian; Schreckenberg, Michael

    2012-10-01

    Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.

  18. Reproducibility, reliability and validity of measurements obtained from Cecile3 digital models

    Directory of Open Access Journals (Sweden)

    Gustavo Adolfo Watanabe-Kanno

    2009-09-01

    Full Text Available The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners each measured, three times, the mesiodistal width of all the teeth present, the intercanine, interpremolar and intermolar distances, and the overjet and overbite. The plaster models were measured using a digital vernier calliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in Orthodontics.

  19. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique, in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package, commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created; three of these layers were used as the input images and the fourth layer was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error, the mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images from the maximum approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the smallest, and the reconstructed resistivities of the blocks are closer to the true blocks than for any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than from individual data sets.

  20. Classical signal model reproducing quantum probabilities for single and coincidence detections

    Science.gov (United States)

    Khrennikov, Andrei; Nilsson, Börje; Nordebo, Sven

    2012-05-01

    We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the presence of a detector's threshold and calibration procedure have to be treated not as simply experimental technicalities, but as basic counterparts of the theoretical model. We call this approach the threshold signal detection model (TSD). The experiment on coincidence detection which was done by Grangier in 1986 [22] played a crucial role in the rejection of (semi-)classical field models in favour of quantum mechanics (QM): the impossibility to resolve the wave-particle duality in favour of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient g(2)(0), is zero (for one-photon states), but in (semi-)classical models g(2)(0) >= 1. In TSD the coefficient g(2)(0) decreases as 1/ɛd², where ɛd > 0 is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient g(2)(0) essentially less than 1. The TSD prediction can be tested experimentally in new Grangier-type experiments presenting a detailed monitoring of the dependence of the coefficient g(2)(0) on the detection threshold. Structurally our model has some similarity with the prequantum model of Grossing et al. Subquantum stochasticity is composed of two counterparts: a stationary process in the space of internal degrees of freedom and a random walk type motion describing the temporal dynamics.

  1. Accuracy and reproducibility of linear measurements of resin, plaster, digital and printed study-models.

    Science.gov (United States)

    Saleh, Waleed K; Ariffin, Emy; Sherriff, Martyn; Bister, Dirk

    2015-01-01

    To compare the accuracy and reproducibility of measurements of on-screen three-dimensional (3D) digital surface models captured by a 3Shape R700™ laser-scanner with measurements made using a digital caliper on acrylic models, plaster models or model replicas. Four sets of typodont models were used. Acrylic models, alginate impressions, plaster models and physical replicas were measured. The 3Shape R700™ laser-scanning device with 3Shape™ software was used for scans and measurements. Linear measurements were recorded for selected landmarks, on each of the physical models and on the 3D digital surface models, on ten separate occasions by a single examiner. Comparing measurements taken on the physical models, the mean difference of the measurements was 0.32 mm (SD 0.15 mm). For the different methods (physical versus digital) the mean difference was 0.112 mm (SD 0.15 mm). None of the values showed a statistically significant difference between the digital measurements and those taken on the plaster and acrylic models. The comparison of measurements on the physical models also showed no significant difference. The 3Shape R700™ is a reliable device for capturing surface details of models in a digital format. When comparing measurements taken manually and digitally there was no statistically significant difference. The Objet Eden 250™ 3D prints proved to be as accurate as the original acrylic models, plaster models, or alginate impressions, as was shown by the accuracy of the measurements taken. This confirms that using virtual study models can be a reliable method, replacing traditional plaster models.

  2. Reproducibility of VPCT parameters in the normal pancreas: comparison of two different kinetic calculation models.

    Science.gov (United States)

    Kaufmann, Sascha; Schulze, Maximilian; Horger, Thomas; Oelker, Aenne; Nikolaou, Konstantin; Horger, Marius

    2015-09-01

    To assess the reproducibility of volume computed tomographic perfusion (VPCT) measurements in normal pancreatic tissue using two different kinetic perfusion calculation models at three different time points. Institutional ethics board approval was obtained for retrospective analysis of pancreas perfusion data sets generated by our prospective, institutionally approved study for liver response monitoring to local therapy in patients with unresectable hepatocellular carcinoma. VPCT of the entire pancreas was performed in 41 patients (mean age, 64.8 years) using 26 consecutive volume measurements and intravenous injection of 50 mL of iodinated contrast at a flow rate of 5 mL/s. Blood volume (BV) and blood flow (BF) were calculated using two mathematical methods: maximum slope + Patlak analysis (method 1) versus the deconvolution method (method 2). Pancreas perfusion was calculated using two volumes of interest. The median interval between the first and the second VPCT was 2 days, and between the second and the third VPCT 82 days. Variability was assessed with within-patient coefficients of variation (CVs) and Bland-Altman analyses. Interobserver agreement for all perfusion parameters was calculated using intraclass correlation coefficients (ICCs). BF and BV values varied widely by method of analysis, as did within-patient CVs for BF and BV: at the second versus the first VPCT, 22.4%/50.4% (method 1) and 24.6%/24.0% (method 2) in the pancreatic head, and 18.4%/62.6% (method 1) and 23.8%/28.1% (method 2) in the pancreatic corpus; at the third versus the first VPCT, 21.7%/61.8% (method 1) and 25.7%/34.5% (method 2) in the pancreatic head, and 19.1%/66.1% (method 1) and 22.0%/31.8% (method 2) in the pancreatic corpus. Interobserver agreement measured with the ICC shows fair-to-good reproducibility. VPCT performed with the presented examinational protocol is reproducible and can be used for monitoring

  3. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
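
    The core idea above — oncogenic mutations accumulating at random in a stem-cell pool until some cell collects enough "hits" — can be illustrated with a toy Monte Carlo sketch. All parameter values below are illustrative placeholders, not the paper's calibrated estimates of stem cell number, division rate or mutation rate:

```python
import random

def simulate_incidence(n_cells=1000, divisions_per_year=10, mu=1e-4,
                       hits_needed=5, years=100, seed=1):
    """Toy multi-hit model: each cell may gain an oncogenic 'hit' with
    probability mu per division; a tumor initiates once any cell reaches
    hits_needed. Returns the count of newly initiated tumors per year."""
    rng = random.Random(seed)
    hits = [0] * n_cells
    incidence = [0] * years
    for year in range(years):
        for i in range(n_cells):
            if hits[i] >= hits_needed:
                continue  # this lineage already initiated a tumor
            for _ in range(divisions_per_year):
                if rng.random() < mu:
                    hits[i] += 1
            if hits[i] >= hits_needed:
                incidence[year] += 1
    return incidence
```

    With a high (illustrative) mutation rate the per-year incidence first rises with age, as early hits accumulate, and later falls as the susceptible pool is exhausted — qualitatively echoing the peaked age-incidence curve the paper reproduces with empirically grounded parameters and an aging-dependent stem cell pool.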

  4. Can a global model chemical mechanism reproduce NO, NO2, and O3 measurements above a tropical rainforest?

    Directory of Open Access Journals (Sweden)

    C. N. Hewitt

    2009-12-01

    Full Text Available A cross-platform field campaign, OP3, was conducted in the state of Sabah in Malaysian Borneo between April and July of 2008. Among the suite of observations recorded, the campaign included measurements of NOx and O3 – crucial outputs of any model chemistry mechanism. We describe the measurements of these species made from both the ground site and aircraft. We examine the output from the global model p-TOMCAT at two resolutions for this location during the April campaign period. The models exhibit reasonable ability in capturing the NOx diurnal cycle, but ozone is overestimated. We use a box model containing the same chemical mechanism to explore the weaknesses in the global model and the ability of the simplified global model chemical mechanism to capture the chemistry at the rainforest site. We achieve a good fit to the data for all three species (NO, NO2, and O3), though the model is much more sensitive to changes in the treatment of physical processes than to changes in the chemical mechanism. Indeed, without some parameterization of the nighttime boundary layer-free troposphere mixing, a time-dependent box model will not reproduce the observations. The final simulation uses this mixing parameterization for NO and NO2 but not O3, as determined by the vertical structure of each species, and matches the measurements well.

  5. Classical signal model reproducing quantum probabilities for single and coincidence detections

    CERN Document Server

    Khrennikov, Andrei; Nordebo, Sven

    2011-01-01

    We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the presence of a detector's threshold and calibration procedure have to be treated not as simply experimental technicalities, but as basic counterparts of the theoretical model. We call this approach the threshold signal detection model (TSD). The experiment on coincidence detection which was done by Grangier in 1986 [Grangier] played a crucial role in the rejection of (semi-)classical field models in favor of quantum mechanics (QM): the impossibility to resolve the wave-particle duality in favor of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient g(2)(0), is zero (for one-photon states), but in (semi-)classical models g(2)(0) >= 1. In TSD the coefficient g(2)(0) decreases as 1/E_d^2, where E_d > 0 is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient g(2)...

  6. Validation and reproducibility assessment of modality independent elastography in a pre-clinical model of breast cancer

    Science.gov (United States)

    Weis, Jared A.; Kim, Dong K.; Yankeelov, Thomas E.; Miga, Michael I.

    2014-03-01

    Clinical observations have long suggested that cancer progression is accompanied by extracellular matrix remodeling and concomitant increases in mechanical stiffness. Due to the strong association of mechanics and tumor progression, there has been considerable interest in incorporating methodologies to diagnose cancer through the use of mechanical stiffness imaging biomarkers, resulting in commercially available US and MR elastography products. Extension of this approach towards monitoring longitudinal changes in mechanical properties along a course of cancer therapy may provide means for assessing early response to therapy; therefore a systematic study of the elasticity biomarker in characterizing cancer for therapeutic monitoring is needed. The elastography method we employ, modality independent elastography (MIE), can be described as a model-based inverse image-analysis method that reconstructs elasticity images using two acquired image volumes in a pre/post state of compression. In this work, we present preliminary data towards validation and reproducibility assessment of our elasticity biomarker in a pre-clinical model of breast cancer. The goal of this study is to determine the accuracy and reproducibility of MIE and therefore the magnitude of changes required to determine statistical differences during therapy. Our preliminary results suggest that the MIE method can accurately and robustly assess mechanical properties in a pre-clinical system and provide considerable enthusiasm for the extension of this technique towards monitoring therapy-induced changes to breast cancer tissue architecture.

  7. A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity.

    Science.gov (United States)

    Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel

    2016-05-01

    The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.

  8. A novel, stable and reproducible acute lung injury model induced by oleic acid in immature piglet

    Institute of Scientific and Technical Information of China (English)

    ZHU Yao-bin; LING Feng; ZHANG Yan-bo; LIU Ai-jun; LIU Dong-hai; QIAO Chen-hui; WANG Qiang; LIU Ying-long

    2011-01-01

    Background Young children are susceptible to pulmonary injury, and acute lung injury (ALI) often results in high mortality and financial costs in pediatric patients. A good ALI model will help us gain a better understanding of the real pathophysiological picture and evaluate novel treatment approaches to acute respiratory distress syndrome (ARDS) more accurately and liberally. This study aimed to establish a hemodynamically stable and reproducible model of ALI in piglets induced by oleic acid. Methods Six Chinese mini-piglets were used to establish ALI models with oleic acid. Hemodynamic and pulmonary function data were measured, and histopathological assessment was performed. Results Mean blood pressure, heart rate (HR), cardiac output (CO), central venous pressure (CVP), and left atrial pressure (LAP) decreased sharply after oleic acid was given, while mean pulmonary arterial pressure (MPAP) increased in comparison with baseline (P < 0.05). pH, arterial partial pressure of O2 (PaO2), PaO2/inspired O2 fraction (FiO2), and lung compliance decreased, while PaCO2 and airway pressure increased in comparison with baseline (P < 0.05). Lung histology showed severe inflammation, hyaline membranes, and intra-alveolar and interstitial hemorrhage. Conclusion This experiment established a stable model which allows for a diversity of studies on early lung injury.

  9. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    Science.gov (United States)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  10. Exploring predictive and reproducible modeling with the single-subject FIAC dataset.

    Science.gov (United States)

    Chen, Xu; Pereira, Francisco; Lee, Wayne; Strother, Stephen; Mitchell, Tom

    2006-05-01

    Predictive modeling of functional magnetic resonance imaging (fMRI) has the potential to expand the amount of information extracted and to enhance our understanding of brain systems by predicting brain states, rather than emphasizing the standard spatial mapping. Based on the block datasets of Functional Imaging Analysis Contest (FIAC) Subject 3, we demonstrate the potential and pitfalls of predictive modeling in fMRI analysis by investigating the performance of five models (linear discriminant analysis, logistic regression, linear support vector machine, Gaussian naive Bayes, and a variant) as a function of preprocessing steps and feature selection methods. We found that: (1) independent of the model, temporal detrending and feature selection assisted in building a more accurate predictive model; (2) the linear support vector machine and logistic regression often performed better than either of the Gaussian naive Bayes models in terms of the optimal prediction accuracy; and (3) the optimal prediction accuracy obtained in a feature space using principal components was typically lower than that obtained in a voxel space, given the same model and same preprocessing. We show that due to the existence of artifacts from different sources, high prediction accuracy alone does not guarantee that a classifier is learning a pattern of brain activity that might be usefully visualized, although cross-validation methods do provide fairly unbiased estimates of true prediction accuracy. The trade-off between the prediction accuracy and the reproducibility of the spatial pattern should be carefully considered in predictive modeling of fMRI. We suggest that unless the experimental goal is brain-state classification of new scans on well-defined spatial features, prediction alone should not be used as an optimization procedure in fMRI data analysis.
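The kind of model comparison the study describes can be sketched with a hand-rolled Gaussian naive Bayes classifier evaluated by k-fold cross-validation. Everything below (data, dimensions, fold count) is synthetic and illustrative; the study itself used the FIAC fMRI dataset and five models:

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Fit per-class Gaussian naive Bayes parameters (means, variances, priors)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def gaussian_nb_predict(params, X):
    """Predict by maximizing the per-class log-likelihood plus log-prior."""
    classes = sorted(params.keys())
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(ll + np.log(prior))
    return np.array(classes)[np.argmax(np.column_stack(scores), axis=1)]

def cross_val_accuracy(X, y, k=5):
    """k-fold cross-validated prediction accuracy."""
    idx = np.arange(len(X))
    np.random.default_rng(0).shuffle(idx)
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        params = gaussian_nb_fit(X[train], y[train])
        accs.append((gaussian_nb_predict(params, X[test]) == y[test]).mean())
    return float(np.mean(accs))

# Synthetic two-condition "voxel" data: two Gaussian clusters in 20 dimensions.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)), rng.normal(0.8, 1.0, (100, 20))])
y = np.array([0] * 100 + [1] * 100)
print(round(cross_val_accuracy(X, y), 3))
```

Swapping in a different classifier or feature-selection step while holding the cross-validation loop fixed is exactly the kind of controlled comparison the abstract reports.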

  11. Animal models that best reproduce the clinical manifestations of human intoxication with organophosphorus compounds.

    Science.gov (United States)

    Pereira, Edna F R; Aracava, Yasco; DeTolla, Louis J; Beecham, E Jeffrey; Basinger, G William; Wakayama, Edgar J; Albuquerque, Edson X

    2014-08-01

    The translational capacity of data generated in preclinical toxicological studies is contingent upon several factors, including the appropriateness of the animal model. The primary objectives of this article are: 1) to analyze the natural history of acute and delayed signs and symptoms that develop following an acute exposure of humans to organophosphorus (OP) compounds, with an emphasis on nerve agents; 2) to identify animal models of the clinical manifestations of human exposure to OPs; and 3) to review the mechanisms that contribute to the immediate and delayed OP neurotoxicity. As discussed in this study, clinical manifestations of an acute exposure of humans to OP compounds can be faithfully reproduced in rodents and nonhuman primates. These manifestations include an acute cholinergic crisis in addition to signs of neurotoxicity that develop long after the OP exposure, particularly chronic neurologic deficits consisting of anxiety-related behavior and cognitive deficits, structural brain damage, and increased slow electroencephalographic frequencies. Because guinea pigs and nonhuman primates, like humans, have low levels of circulating carboxylesterases-the enzymes that metabolize and inactivate OP compounds-they stand out as appropriate animal models for studies of OP intoxication. These are critical points for the development of safe and effective therapeutic interventions against OP poisoning because approval of such therapies by the Food and Drug Administration is likely to rely on the Animal Efficacy Rule, which allows exclusive use of animal data as evidence of the effectiveness of a drug against pathologic conditions that cannot be ethically or feasibly tested in humans.

  12. A Semi-Analytic dynamical friction model that reproduces core stalling

    CERN Document Server

    Petts, James A; Read, Justin I

    2015-01-01

    We present a new semi-analytic model for dynamical friction based on Chandrasekhar's formalism. The key novelty is the introduction of physically motivated, radially varying, maximum and minimum impact parameters. With these, our model gives an excellent match to full N-body simulations for isotropic background density distributions, both cuspy and shallow, without any fine-tuning of the model parameters. In particular, we are able to reproduce the dramatic core-stalling effect that occurs in shallow/constant density cores, for the first time. This gives us new physical insight into the core-stalling phenomenon. We show that core stalling occurs in the limit in which the product of the Coulomb logarithm and the local fraction of stars with velocity lower than the infalling body tends to zero. For cuspy backgrounds, this occurs when the infalling mass approaches the enclosed background mass. For cored backgrounds, it occurs at larger distances from the centre, due to a combination of a rapidly increasing minim...

  13. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

    We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We find also that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
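The notion of a "volatility return interval" can be made concrete with a toy simulation. The sketch below is not the paper's herding model; it integrates a generic mean-reverting stochastic-volatility SDE, dv = kappa*(theta - v) dt + sigma*sqrt(v) dW, by Euler-Maruyama and measures the waiting times between exceedances of a volatility threshold (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.5, 1e-2, 100_000

# Euler-Maruyama with full truncation so the square root stays real.
v = np.empty(n)
v[0] = theta
for t in range(1, n):
    vp = max(v[t - 1], 0.0)
    v[t] = v[t - 1] + kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * rng.normal()

# Return intervals: waiting times between exceedances of a high quantile.
q = np.quantile(v, 0.95)
exceed = np.flatnonzero(v > q)
intervals = np.diff(exceed)
intervals = intervals[intervals > 1]   # drop consecutive samples within one episode
print(f"mean return interval: {intervals.mean():.1f} steps")
```

In the paper's analysis, the scaling and memory of the distribution of such intervals is the statistic the agent-based SDEs are asked to reproduce across markets and time-scales.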

  14. Hippocampal Astrocyte Cultures from Adult and Aged Rats Reproduce Changes in Glial Functionality Observed in the Aging Brain.

    Science.gov (United States)

    Bellaver, Bruna; Souza, Débora Guerini; Souza, Diogo Onofre; Quincozes-Santos, André

    2017-05-01

    Astrocytes are dynamic cells that maintain brain homeostasis, regulate neurotransmitter systems, and process synaptic information, energy metabolism, antioxidant defenses, and inflammatory response. Aging is a biological process that is closely associated with hippocampal astrocyte dysfunction. In this sense, we demonstrated that hippocampal astrocytes from adult and aged Wistar rats reproduce the glial functionality alterations observed in aging by evaluating several senescence, glutamatergic, oxidative and inflammatory parameters commonly associated with the aging process. Here, we show that the p21 senescence-associated gene and classical astrocyte markers, such as glial fibrillary acidic protein (GFAP), vimentin, and actin, changed their expressions in adult and aged astrocytes. Age-dependent changes were also observed in glutamate transporters (glutamate aspartate transporter (GLAST) and glutamate transporter-1 (GLT-1)) and glutamine synthetase immunolabeling and activity. Additionally, according to in vivo aging, astrocytes from adult and aged rats showed an increase in oxidative/nitrosative stress with mitochondrial dysfunction, an increase in RNA oxidation, NADPH oxidase (NOX) activity, superoxide levels, and inducible nitric oxide synthase (iNOS) expression levels. Changes in antioxidant defenses were also observed. Hippocampal astrocytes also displayed age-dependent inflammatory response with augmentation of proinflammatory cytokine levels, such as TNF-α, IL-1β, IL-6, IL-18, and messenger RNA (mRNA) levels of cyclo-oxygenase 2 (COX-2). Furthermore, these cells secrete neurotrophic factors, including glia-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), S100 calcium-binding protein B (S100B) protein, and transforming growth factor-β (TGF-β), which changed in an age-dependent manner. Classical signaling pathways associated with aging, such as nuclear factor erythroid-derived 2-like 2 (Nrf2), nuclear factor kappa B (NFκ

  15. Commentary on the integration of model sharing and reproducibility analysis to scholarly publishing workflow in computational biomechanics.

    Science.gov (United States)

    Erdemir, Ahmet; Guess, Trent M; Halloran, Jason P; Modenese, Luca; Reinbolt, Jeffrey A; Thelen, Darryl G; Umberger, Brian R

    2016-10-01

    The overall goal of this paper is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated into the scientific review process in biomechanics. As part of a special issue on model sharing and reproducibility in the IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and Schmitz and Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. There was general agreement between the simulation results of the authors and those of the reviewers, and discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers' feedback; these changes may otherwise have been missed if explicit model sharing and simulation reproducibility analysis had not been conducted in the review process. An increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, was noted. When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models.

  16. Stratospheric dryness: model simulations and satellite observations

    Directory of Open Access Journals (Sweden)

    J. Lelieveld

    2007-01-01

    Full Text Available The mechanisms responsible for the extreme dryness of the stratosphere have been debated for decades. A key difficulty has been the lack of comprehensive models which are able to reproduce the observations. Here we examine results from the coupled lower-middle atmosphere chemistry general circulation model ECHAM5/MESSy1 together with satellite observations. Our model results match observed temperatures in the tropical lower stratosphere and realistically represent the seasonal and inter-annual variability of water vapor. The model reproduces the very low water vapor mixing ratios (below 2 ppmv) periodically observed at the tropical tropopause near 100 hPa, as well as the characteristic tape recorder signal up to about 10 hPa, providing evidence that the dehydration mechanism is well captured. Our results confirm that the entry of tropospheric air into the tropical stratosphere is forced by large-scale wave dynamics, whereas radiative cooling regionally decelerates upwelling and can even cause downwelling. Thin cirrus forms in the cold air above cumulonimbus clouds, and the associated sedimentation of ice particles between 100 and 200 hPa reduces water mass fluxes by nearly two orders of magnitude compared to air mass fluxes. Transport into the stratosphere is supported by regional net radiative heating, to a large extent in the outer tropics. During summer, very deep monsoon convection over Southeast Asia, centered over Tibet, moistens the stratosphere.

  17. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic resonance imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others; i.e., small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. 
Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a
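The accuracy-versus-reproducibility trade-off described above can be sketched in a few lines of numpy. This is not FaSTGLZ or the paper's elastic net: closed-form ridge weights on synthetic data stand in for decoding maps, bootstrap resamples supply both out-of-bag accuracy and a map-similarity score, and sweeping the regularization strength traces out the trade-off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a sparse "brain map" w_true drives a binary label.
n, p = 200, 50
w_true = np.zeros(p)
w_true[:5] = 1.0
X = rng.normal(size=(n, p))
y = np.sign(X @ w_true + rng.normal(scale=2.0, size=n))

def ridge_weights(X, y, lam):
    """Closed-form ridge solution; sign(X @ w) is the class prediction."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def accuracy_and_reproducibility(lam, n_boot=20):
    """Bootstrap the training set; score out-of-bag accuracy and map similarity."""
    ws, accs = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), idx)
        w = ridge_weights(X[idx], y[idx], lam)
        ws.append(w / np.linalg.norm(w))
        accs.append((np.sign(X[oob] @ w) == y[oob]).mean())
    # Reproducibility: mean pairwise correlation between bootstrap weight maps.
    C = np.corrcoef(np.array(ws))
    repro = C[np.triu_indices_from(C, k=1)].mean()
    return float(np.mean(accs)), float(repro)

for lam in (0.1, 10.0, 1000.0):
    acc, rep = accuracy_and_reproducibility(lam)
    print(f"lambda={lam:7.1f}  accuracy={acc:.3f}  reproducibility={rep:.3f}")
```

Stronger regularization yields more similar maps across resamples; the Pareto-optimal subset of such (accuracy, reproducibility) pairs is what the paper's convex-hull construction identifies.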

  18. Composite model to reproduce the mechanical behaviour of methane hydrate bearing soils

    Science.gov (United States)

    De la Fuente, Maria

    2016-04-01

    Methane hydrate bearing sediments (MHBS) are naturally-occurring materials containing different components in the pores that may undergo phase changes under relatively small temperature and pressure variations, for conditions typically prevailing a few hundred meters below sea level. Their modelling needs to account for heat and mass balance equations of the different components, and several strategies already exist to combine them (e.g., Rutqvist & Moridis, 2009; Sánchez et al., 2014). These equations have to be completed by restrictions and constitutive laws reproducing the phenomenology of heat and fluid flows, phase change conditions, and mechanical response. While the formulation of the non-mechanical laws generally includes explicitly the mass fraction of methane in each phase, which allows for a natural update of parameters during phase changes, mechanical laws are, in most cases, stated for the whole solid skeleton (Uchida et al., 2012; Soga et al., 2006). In this paper, a mechanical model is proposed to cope with the response of MHBS. It is based on a composite approach that allows defining the thermo-hydro-mechanical response of the mineral skeleton and the solid hydrates independently. The global stress-strain-temperature response of the solid phase (grains + hydrate) is then obtained by combining both responses according to the energy principle, following the work by Pinyol et al. (2007). In this way, dissociation of MH can be assessed on the basis of the stress state and temperature prevailing locally within the hydrate component. Besides, its structuring effect is naturally accounted for by the model according to patterns of MH inclusions within soil pores. This paper describes the fundamental hypotheses behind the model and its formulation. Its performance is assessed by comparison with laboratory data presented in the literature. An analysis of MHBS response to several stress-temperature paths representing potential field cases is finally presented. References

  19. Can a stepwise steady flow computational fluid dynamics model reproduce unsteady particulate matter separation for common unit operations?

    Science.gov (United States)

    Pathapati, Subbu-Srikanth; Sansalone, John J

    2011-07-01

    Computational fluid dynamics (CFD) is emerging as a model for resolving the fate of particulate matter (PM) in unit operations subject to rainfall-runoff loadings. However, compared to steady flow CFD models, unsteady hydrodynamic and PM loading models have much greater computational requirements. This study therefore examines whether a stepwise steady flow CFD model can reproduce PM separation by common unit operations subject to unsteady flow and PM loadings, thereby reducing computational effort. Using monitored unit operation data from unsteady events as a metric, this study compares the two CFD modeling approaches for a hydrodynamic separator (HS), a primary clarifier (PC) tank, and a volumetric clarifying filtration system (VCF). Results indicate that while unsteady CFD models reproduce the PM separation of each unit operation, stepwise steady CFD models deviate significantly from monitored data for the HS and PC models, overestimating the physical size of each unit required to reproduce the monitored PM separation. In contrast, the stepwise steady flow approach reproduces PM separation by the VCF, a combined gravitational sedimentation and media filtration unit operation that attenuates turbulent energy and flow velocity.

  20. Augmenting a Large-Scale Hydrology Model to Reproduce Groundwater Variability

    Science.gov (United States)

    Stampoulis, D.; Reager, J. T., II; Andreadis, K.; Famiglietti, J. S.

    2016-12-01

    To understand the influence of groundwater on terrestrial ecosystems and society, global assessment of groundwater temporal fluctuations is required. A water table was initialized in the Variable Infiltration Capacity (VIC) hydrologic model in a semi-realistic approach to account for groundwater variability. Global water table depth data derived from observations at nearly 2 million well sites compiled from government archives and published literature, as well as groundwater model simulations, were used to create a new soil layer of varying depth for each model grid cell. The new 4-layer version of VIC, hereafter named VIC-4L, was run with and without assimilating NASA's Gravity Recovery and Climate Experiment (GRACE) observations. The results were compared with simulations using the original VIC version (named VIC-3L) with GRACE assimilation, while all runs were compared with well data.

  1. A novel, recovery, and reproducible minimally invasive cardiopulmonary bypass model with lung injury in rats

    Institute of Scientific and Technical Information of China (English)

    LI Ling-ke; CHENG Wei; LIU Dong-hai; ZHANG Jing; ZHU Yao-bin; QIAO Chen-hui; ZHANG Yan-bo

    2013-01-01

    Background Cardiopulmonary bypass (CPB) has been shown to be associated with a systemic inflammatory response leading to postoperative organ dysfunction. Elucidating the underlying mechanisms and developing protective strategies for the pathophysiological consequences of CPB have been hampered by the absence of a satisfactory recovery animal model. The purpose of this study was to establish a good rat model of CPB to study the pathophysiology of potential complications. Methods Twenty adult male Sprague-Dawley rats weighing 450-560 g were randomly divided into a CPB group (n=10) and a control group (n=10). All rats were anaesthetized and mechanically ventilated. The carotid artery and jugular vein were cannulated. The blood was drained from the right atrium via the right jugular vein and transferred by a miniaturized roller pump to a hollow fiber oxygenator and back to the rat via the left carotid artery. Priming consisted of 8 ml of homologous blood and 8 ml of colloid. The surface of the hollow fiber oxygenator was 0.075 m2. CPB was conducted for 60 minutes at a flow rate of 100-120 ml.kg-1.min-1 in the CPB group. The oxygen flow/perfusion flow ratio was 0.8 to 1.0, and the mean arterial pressure remained 60-80 mmHg. Blood gas analysis, hemodynamic investigations, and lung histology were subsequently examined. Results All CPB rats recovered from the operative process without incident. Normal cardiac function after successful weaning was confirmed by electrocardiography and blood pressure measurements. Mean arterial pressure remained stable. The results of blood gas analysis at different times were within the normal range. Levels of IL-1β and TNF-α were higher in the lung tissue in the CPB group (P < 0.005). Histological examination revealed marked increases in interstitial congestion, edema, and inflammation in the CPB group. Conclusion This novel, recovery, and reproducible minimally invasive CPB model may open the field for various studies on the pathophysiological process of CPB and systemic

  2. Assessing reproducibility by the within-subject coefficient of variation with random effects models.

    Science.gov (United States)

    Quan, H; Shih, W J

    1996-12-01

    In this paper we consider the use of within-subject coefficient of variation (WCV) for assessing the reproducibility or reliability of a measurement. Application to assessing reproducibility of biochemical markers for measuring bone turnover is described and the comparison with intraclass correlation is discussed. Both maximum likelihood and moment confidence intervals of WCV are obtained through their corresponding asymptotic distributions. Normal and log-normal cases are considered. In general, WCV is preferred when the measurement scale bears intrinsic meaning and is not subject to arbitrary shifting. The intraclass correlation may be preferred when a fixed population of subjects can be well identified.
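The WCV point estimate is straightforward to compute from repeated measurements. The snippet below uses simulated duplicate data purely for illustration; the paper's maximum likelihood and moment confidence intervals are not reproduced here:

```python
import numpy as np

# Simulated repeated measurements: each subject has a true level plus
# within-subject measurement error (values here are invented).
rng = np.random.default_rng(3)
n_subjects, n_reps = 30, 3
true_levels = rng.normal(100.0, 15.0, size=n_subjects)        # between-subject spread
data = true_levels[:, None] + rng.normal(0.0, 5.0, size=(n_subjects, n_reps))

# WCV = within-subject SD / overall mean (normal-scale definition).
within_var = data.var(axis=1, ddof=1).mean()   # pooled within-subject variance
grand_mean = data.mean()
wcv = np.sqrt(within_var) / grand_mean
print(f"WCV = {wcv:.3f}")
```

With a within-subject SD of 5 around a mean near 100, the estimate should land near 0.05, which illustrates why WCV is only meaningful when the measurement scale has a fixed, non-arbitrary zero.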

  3. Assessment of colorectal liver metastases using MRI and CT: Impact of observer experience on diagnostic performance and inter-observer reproducibility with histopathological correlation

    Energy Technology Data Exchange (ETDEWEB)

    Albrecht, Moritz H., E-mail: MoritzAlbrecht@gmx.net [University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Wichmann, Julian L.; Müller, Cindy [University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Schreckenbach, Theresa [University Hospital Frankfurt, Department of General and Visceral Surgery, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Sakthibalan, Sreekanth [Barts and the London, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Hammerstingl, Renate [University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Bechstein, Wolf O. [University Hospital Frankfurt, Department of General and Visceral Surgery, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Zangos, Stephan [University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Ackermann, Hanns [University Hospital Frankfurt, Department of Biostatistics and Medical Information, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Vogl, Thomas J. [University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany)

    2014-10-15

    Highlights: • We investigate the impact of experience on CT and MRI reporting of colorectal liver metastases. • Diagnostic quality is significantly influenced by observer experience for both CT and MRI. • MRI is more affected by experience than CT when reporting cases of colorectal liver metastases. - Abstract: Introduction: To compare the diagnostic performance and inter-observer reproducibility of CT and MRI in detecting colorectal liver metastases (CRLM) for observers with different levels of experience. Materials and methods: Data from 51 CT and 54 MRI examinations of 105 patients with CRLM were analysed. Intraoperative and histopathological findings served as the reference standard. Analyses were performed by four observers with varying levels of experience in imaging of CRLM (reviewers A, B, C and D with >20, >5, <1 and 0 years of experience, respectively). Per-segment sensitivity, specificity, Cohen's kappa (κ) for diagnosed segments, and intra-class correlation coefficients (ICC) for the reported number of lesions were calculated. Results: CT sensitivity/specificity was 89.71%/94.41% for reviewer A, 78.50%/88.37% for B, 63.55%/85.58% for C, and 84.11%/78.60% for D; for MRI it was 90.40%/95.43% (A), 74.40%/90.04% (B), 60.00%/85.89% (C), and 65.60%/75.90% (D). The overall inter-observer agreement was higher for CT (κ = 0.43, p < 0.001; ICC = 0.75, p < 0.001) than for MRI (κ = 0.38, p < 0.001; ICC = 0.65, p < 0.001). The experienced reviewers A and B achieved better agreement for MRI (κ = 0.54, p < 0.001; ICC = 0.77, p < 0.001) than for CT (κ = 0.52, p < 0.001; ICC = 0.76, p < 0.001), unlike the less experienced C and D (MRI κ = 0.38, ICC = 0.63 and CT κ = 0.41, ICC = 0.74, respectively, p < 0.001). Conclusions: Proficiency in the detection of CRLM is significantly influenced by observer experience, although CT interpretation is less affected than MRI analysis.

  4. Shelter models and observations

    DEFF Research Database (Denmark)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide;

    This report documents part of the work performed by work package (WP) 3 of the ‘Online WAsP’ project funded by the Danish Energy Technology and Demonstration Program (EUDP). WP3 initially identified the shortcomings of the current WAsP engine for small and medium wind turbines (Peña et al., 2014b...... in the wake of a fence. The experiment is the basis of the study of the error and uncertainty of the obstacle models....

  5. Reproducible long-term disc degeneration in a large animal model

    NARCIS (Netherlands)

    Hoogendoorn, R.J.W.; Helder, M.N.; Kroeze, R.J.; Bank, R.A.; Smit, T.H.; Wuisman, P.I.J.M.

    2008-01-01

    STUDY DESIGN. Twelve goats were chemically degenerated and the development of the degenerative signs was followed for 26 weeks to evaluate the progression of the induced degeneration. The results were also compared with a previous study to determine the reproducibility. OBJECTIVES. The purpose of th

  6. Reproducing the Wechsler Intelligence Scale for Children-Fifth Edition: Factor Model Results

    Science.gov (United States)

    Beaujean, A. Alexander

    2016-01-01

    One of the ways to increase the reproducibility of research is for authors to provide a sufficient description of the data analytic procedures so that others can replicate the results. The publishers of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) do not follow these guidelines when reporting their confirmatory factor…

  7. Evaluation of NASA's MERRA Precipitation Product in Reproducing the Observed Trend and Distribution of Extreme Precipitation Events in the United States

    Science.gov (United States)

    Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin; Bosilovich, Michael G.; Lee, Jaechoul; Wehner, Michael F.; Collow, Allison

    2016-01-01

    This study evaluates the performance of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) precipitation product in reproducing the trend and distribution of extreme precipitation events. Utilizing extreme value theory, time-invariant and time-variant extreme value distributions are developed to model the trends and changes in the patterns of extreme precipitation events over the contiguous United States during 1979-2010. The Climate Prediction Center (CPC) U.S. Unified gridded observation data are used as the observational dataset. The CPC analysis shows that the eastern and western parts of the United States are experiencing positive and negative trends in annual maxima, respectively. The continental-scale patterns of change found in MERRA reasonably mirror the observed patterns of change found in CPC. This was not expected a priori, given the difficulty of constraining precipitation in reanalysis products. MERRA tends to overestimate the frequency at which the 99th percentile of precipitation is exceeded, because this threshold is lower in MERRA and therefore easier to exceed; this feature is most pronounced during the summer months. MERRA reproduces the spatial patterns of the scale and location parameters of the generalized extreme value and generalized Pareto distributions. However, it underestimates these parameters, particularly over the Gulf Coast states, leading to lower magnitudes in extreme precipitation events. Two issues in MERRA are identified: 1) MERRA shows a spurious negative trend in Nebraska and Kansas, most likely related to changes over time in the satellite observing system that have apparently affected the water cycle in the central United States, and 2) the patterns of positive trend over the Gulf Coast states and along the East Coast seem to be correlated with tropical cyclone activity in these regions. 
The analysis of the trends in the seasonal precipitation extremes indicates that
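The extreme-value step this record describes can be illustrated with a minimal sketch: fitting the Gumbel distribution (the shape-zero member of the GEV family) to block maxima by the method of moments and computing a return level. This is a generic illustration under stated assumptions, not the study's time-variant GEV/GPD code; function and variable names are invented.

```python
import numpy as np

def fit_gumbel(block_maxima):
    """Method-of-moments fit of a Gumbel distribution (GEV with shape 0).

    Returns (location mu, scale beta) estimated from a 1-D array of
    block maxima, e.g. annual maximum daily precipitation.
    """
    x = np.asarray(block_maxima, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi   # scale from the sample std
    mu = x.mean() - np.euler_gamma * beta         # location from the sample mean
    return mu, beta

def return_level(mu, beta, T):
    """T-block return level: the quantile exceeded on average once every T blocks."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))
```

A full trend analysis, as in the record, would additionally let mu and beta vary in time; this sketch covers only the stationary case.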

  8. Assessment of the potential forecasting skill of a global hydrological model in reproducing the occurrence of monthly flow extremes

    NARCIS (Netherlands)

    Candogan Yossef, N.A.N.N.; Beek, L.P.H. van; Kwadijk, J.C.J.; Bierkens, M.F.P.

    2012-01-01

    As an initial step in assessing the prospect of using global hydrological models (GHMs) for hydrological forecasting, this study investigates the skill of the GHM PCRGLOBWB in reproducing the occurrence of past extremes in monthly discharge on a global scale. Global terrestrial hydrology from 1958

  9. Can a coupled meteorology–chemistry model reproduce the historical trend in aerosol direct radiative effects over the Northern Hemisphere?

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere h...

  11. Reproducibility of scratch assays is affected by the initial degree of confluence: Experiments, modelling and model selection.

    Science.gov (United States)

    Jin, Wang; Shah, Esha T; Penington, Catherine J; McCue, Scott W; Chopin, Lisa K; Simpson, Matthew J

    2016-02-01

    Scratch assays are difficult to reproduce. Here we identify a previously overlooked source of variability which could partially explain this difficulty. We analyse a suite of scratch assays in which we vary the initial degree of confluence (initial cell density). Our results indicate that the rate of re-colonisation is very sensitive to the initial density. To quantify the relative roles of cell migration and proliferation, we calibrate the solution of the Fisher-Kolmogorov model to cell density profiles to provide estimates of the cell diffusivity, D, and the cell proliferation rate, λ. This procedure indicates that the estimates of D and λ are very sensitive to the initial density. This dependence suggests that the Fisher-Kolmogorov model does not accurately represent the details of the collective cell spreading process, since this model assumes that D and λ are constants that ought to be independent of the initial density. Since higher initial cell density leads to enhanced spreading, we also calibrate the solution of the Porous-Fisher model to the data as this model assumes that the cell flux is an increasing function of the cell density. Estimates of D and λ associated with the Porous-Fisher model are less sensitive to the initial density, suggesting that the Porous-Fisher model provides a better description of the experiments.
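The Fisher-Kolmogorov model named in this record can be sketched with a simple explicit finite-difference scheme for u_t = D u_xx + lam u(1 - u), where u is a scaled cell density. The parameter values and the step-like initial condition below are illustrative choices, not the paper's calibrated estimates.

```python
import numpy as np

def fisher_kpp(D=1.0, lam=1.0, L=20.0, nx=201, dt=1e-3, t_end=2.0):
    """Explicit finite-difference solution of u_t = D*u_xx + lam*u*(1 - u).

    The initial condition mimics a scratch assay: a confluent region
    (u = 1) on the left, cleared space (u = 0) on the right.
    Zero-flux boundaries are imposed via ghost-node reflection.
    """
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    assert dt <= dx * dx / (2.0 * D), "explicit scheme stability limit"
    u = np.where(x < 2.0, 1.0, 0.0)
    for _ in range(int(round(t_end / dt))):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2.0 * (u[1] - u[0]) / dx**2      # reflecting boundary
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
        u = u + dt * (D * lap + lam * u * (1.0 - u))
    return x, u
```

Calibration, as described in the record, would then adjust D and lam so that the simulated density profiles match the measured ones; the sensitivity to initial density appears as a dependence of the fitted (D, lam) on the starting profile.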

  12. An exponent tunable network model for reproducing density driven superlinear relation

    CERN Document Server

    Qin, Yuhao; Xu, Lida; Gao, Zi-You

    2014-01-01

    Previous works have shown the universality of allometric scalings under density and total value at the city level, but our understanding of how region size affects them is still poor. Here, we revisit the scaling relations between gross domestic production (GDP) and population (POP) under total and density values. We first show that superlinear scaling is a general feature of density values across different regions. The scaling exponent $\beta$ under density values falls into the range $(1.0, 2.0]$, which unexpectedly goes beyond the range observed by Pan et al. (Nat. Commun. vol. 4, p. 1961 (2013)). To deal with the wider range, we propose a network model based on a 2D lattice space with the spatial correlation factor $\alpha$ as a parameter. Numerical experiments show that the generated scaling exponent $\beta$ in our model is fully tunable by the spatial correlation factor $\alpha$. We conjecture that our model provides a general platform for extensive urban and regional studies.
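The scaling exponent $\beta$ discussed in this record is conventionally estimated as the slope of a log-log regression of GDP density on population density. A minimal sketch, with invented variable names and synthetic-data conventions:

```python
import numpy as np

def scaling_exponent(pop_density, gdp_density):
    """Least-squares estimate of beta in gdp_density ~ C * pop_density**beta.

    Fits log(gdp_density) = beta*log(pop_density) + log(C) and returns beta.
    """
    beta, _ = np.polyfit(np.log(pop_density), np.log(gdp_density), 1)
    return beta
```

A superlinear relation corresponds to beta > 1; the record reports values in the range (1.0, 2.0].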

  13. Can a coupled meteorology-chemistry model reproduce the historical trend in aerosol direct radiative effects over the Northern Hemisphere?

    Directory of Open Access Journals (Sweden)

    J. Xing

    2015-05-01

    The ability of a coupled meteorology-chemistry model, i.e., WRF-CMAQ, to reproduce the historical trend in AOD and clear-sky short-wave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21 years of simulated results with observation-derived records from 1990–2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua, as well as long-term historical records from 11 AERONET sites, were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both TOA and the surface, as well as surface SWR data derived from seven SURFRAD sites, were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and north Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA are comparable with those derived from measurements. Compared to GCMs, the model exhibits better estimates of the surface aerosol direct radiative efficiency (Eτ). However, the surface DRE tends to be underestimated due to the underestimated AOD over land and dust regions. Further investigation of TOA Eτ estimates, as well as of the dust module used for estimating windblown-dust emissions, is needed.

  14. Some problems with reproducing the Standard Model fields and interactions in five-dimensional warped brane world models

    Science.gov (United States)

    Smolyakov, Mikhail N.; Volobuev, Igor P.

    2016-01-01

    In this paper we examine, from the purely theoretical point of view and in a model-independent way, the case, when matter, gauge and Higgs fields are allowed to propagate in the bulk of five-dimensional brane world models with compact extra dimension, and the Standard Model fields and their interactions are supposed to be reproduced by the corresponding zero Kaluza-Klein modes. An unexpected result is that in order to avoid possible pathological behavior in the fermion sector, it is necessary to impose constraints on the fermion field Lagrangian. In the case when the fermion zero modes are supposed to be localized at one of the branes, these constraints imply an additional relation between the vacuum profile of the Higgs field and the form of the background metric. Moreover, this relation between the vacuum profile of the Higgs field and the form of the background metric results in the exact reproduction of the gauge boson and fermion sectors of the Standard Model by the corresponding zero mode four-dimensional effective theory in all the physically relevant cases, allowed by the absence of pathologies. Meanwhile, deviations from these conditions can lead either back to pathological behavior in the fermion sector or to a variance between the resulting zero mode four-dimensional effective theory and the Standard Model, which, depending on the model at hand, may, in principle, result in constraints putting the theory out of the reach of the present day experiments.

  15. Elusive reproducibility.

    Science.gov (United States)

    Gori, Gio Batta

    2014-08-01

    Reproducibility remains a mirage for many biomedical studies because inherent experimental uncertainties generate idiosyncratic outcomes. The authentication and error rates of primary empirical data are often elusive, while multifactorial confounders beset experimental setups. Substantive methodological remedies are difficult to conceive, signifying that many biomedical studies yield more or less plausible results, depending on the attending uncertainties. Real life applications of those results remain problematic, with important exceptions for counterfactual field validations of strong experimental signals, notably for some vaccines and drugs, and for certain safety and occupational measures. It is argued that industrial, commercial and public policies and regulations could not ethically rely on unreliable biomedical results; rather, they should be rationally grounded on transparent cost-benefit tradeoffs.

  16. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    The ability to reproduce published scientific findings is a foundational principle of scientific research. Independent observation helps to verify the legitimacy of individual findings; build upon sound observations so that we can evolve hypotheses (and models) of how catchments function; and move them from specific circumstances to more general theory. The rise of computational research has brought increased focus on the issue of reproducibility across the broader scientific literature. This is because publications based on computational research typically do not contain sufficient information to enable the results to be reproduced, and therefore verified. Given the rise of computational analysis in hydrology over the past 30 years, to what extent is reproducibility, or a lack thereof, a problem in hydrology? Whilst much hydrological code is accessible, the actual code and workflow that produced, and therefore documents the provenance of, published scientific findings is rarely available. We argue that in order to advance, and make more robust, the process of hypothesis testing and knowledge creation within the computational hydrological community, we need to build on existing open data initiatives and adopt common standards and infrastructures to: first, make code re-useable and easy to find through consistent use of metadata; second, publish well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; finally, use unique persistent identifiers (e.g. DOIs) to reference re-useable and reproducible code, thereby clearly showing the provenance of published scientific findings. Whilst extra effort is required to make work reproducible, there are benefits to both the individual and the broader community in doing so, which will improve the credibility of the science in the face of the need for societies to adapt to changing hydrological environments.

  17. Identification of Nonlinear Spatiotemporal Dynamical Systems With Nonuniform Observations Using Reproducing-Kernel-Based Integral Least Square Regulation.

    Science.gov (United States)

    Ning, Hanwen; Qing, Guangyan; Jing, Xingjian

    2016-11-01

    The identification of nonlinear spatiotemporal dynamical systems given by partial differential equations has attracted a lot of attention in the past decades. Several methods, such as searching-principle-based algorithms, partially linear kernel methods, and coupled lattice methods, have been developed to address the identification problems. However, most existing methods impose restrictions on the sampling process: the sampling intervals should usually be very small and uniformly distributed in the spatiotemporal domain. These requirements are not realistic in some practical applications. In this paper, to tackle this issue, a novel kernel-based learning algorithm named integral least square regularization regression (ILSRR) is proposed, which can be used to achieve accurate derivative estimation for nonlinear functions in the time domain. With this technique, a discretization method named inverse meshless collocation is then developed to realize the dimensional reduction of the system to be identified. Thereafter, with this novel inverse meshless collocation model, the ILSRR, and a multiple-kernel-based learning algorithm, a multistep identification method is systematically proposed to address the identification problem of spatiotemporal systems with pointwise nonuniform observations. Numerical studies for benchmark systems with necessary discussions are presented to illustrate the effectiveness and the advantages of the proposed method.
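The ILSRR algorithm itself is not reproduced in this record, but the reproducing-kernel least-squares machinery it builds on can be sketched generically: Gaussian-kernel regression with Tikhonov regularisation, which handles nonuniformly sampled data naturally. Kernel width, regularisation strength, and all names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def kernel_ridge_fit(x, y, gamma=10.0, reg=1e-6):
    """Reproducing-kernel least squares with Tikhonov regularisation.

    Solves (K + reg*I) alpha = y for a Gaussian kernel
    K_ij = exp(-gamma*(x_i - x_j)^2); a generic sketch, not ILSRR.
    """
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    return np.linalg.solve(K + reg * np.eye(len(x)), y)

def kernel_ridge_predict(x_train, alpha, x_new, gamma=10.0):
    """Evaluate the fitted function at new points: K(x_new, x_train) @ alpha."""
    K = np.exp(-gamma * (x_new[:, None] - x_train[None, :]) ** 2)
    return K @ alpha
```

Because the fit is defined through the kernel rather than a grid, the training points x may be arbitrarily (nonuniformly) spaced, which is the property the record's method exploits.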

  18. Reproducibility of the heat/capsaicin skin sensitization model in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Cavallone LF

    2013-11-01

    Laura F Cavallone,1 Karen Frey,1 Michael C Montana,1 Jeremy Joyal,1 Karen J Regina,1 Karin L Petersen,2 Robert W Gereau IV1; 1Department of Anesthesiology, Washington University in St Louis, School of Medicine, St Louis, MO, USA; 2California Pacific Medical Center Research Institute, San Francisco, CA, USA. Introduction: Heat/capsaicin skin sensitization is a well-characterized human experimental model to induce hyperalgesia and allodynia. Using this model, gabapentin, among other drugs, was shown to significantly reduce cutaneous hyperalgesia compared to placebo. Since the larger thermal probes used in the original studies to produce heat sensitization are now commercially unavailable, we decided to assess whether previous findings could be replicated with a currently available smaller probe (heated area 9 cm2 versus 12.5–15.7 cm2). Study design and methods: After Institutional Review Board approval, 15 adult healthy volunteers participated in two study sessions, scheduled 1 week apart (Part A). In both sessions, subjects were exposed to the heat/capsaicin cutaneous sensitization model. Areas of hypersensitivity to brush stroke and von Frey (VF) filament stimulation were measured at baseline and after rekindling of skin sensitization. Another group of 15 volunteers was exposed to an identical schedule and set of sensitization procedures but, in each session, received either gabapentin or placebo (Part B). Results: Unlike previous reports, a similar reduction of areas of hyperalgesia was observed in all groups/sessions. Fading of areas of hyperalgesia over time was observed in Part A. In Part B, there was no difference in area reduction after gabapentin compared to placebo. Conclusion: When using smaller thermal probes than originally proposed, modifications of other parameters of the sensitization and/or rekindling process may be needed to allow the heat/capsaicin sensitization protocol to be used as initially intended. Standardization and validation of

  19. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    Directory of Open Access Journals (Sweden)

    Sonal eGoswami

    2012-06-01

    Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual cues (i.e., requiring the hippocampus or not) the rats could use to identify novel items. After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze. Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats, but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma, PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  20. The ability of a GCM-forced hydrological model to reproduce global discharge variability

    Directory of Open Access Journals (Sweden)

    F. C. Sperna Weiland

    2010-08-01

    Data from General Circulation Models (GCMs) are often used to investigate hydrological impacts of climate change. However, GCM data are known to have large biases, especially for precipitation. In this study the usefulness of GCM data for hydrological studies, with a focus on discharge variability and extremes, was tested by using bias-corrected daily climate data of the 20CM3 control experiment from a selection of twelve GCMs as input to the global hydrological model PCR-GLOBWB. Results of these runs were compared with discharge observations of the GRDC and discharges calculated from model runs based on two meteorological datasets constructed from the observation-based CRU TS2.1 and ERA-40 reanalysis. In the first dataset the CRU TS 2.1 monthly timeseries were downscaled to daily timeseries using the ERA-40 dataset (ERA6190). This dataset served as a best guess of the past climate and was used to analyze the performance of PCR-GLOBWB. The second dataset was created from the ERA-40 timeseries bias-corrected with the CRU TS 2.1 dataset using the same bias-correction method as applied to the GCM datasets (ERACLM). Through this dataset the influence of the bias-correction method was quantified. The bias-correction was limited to monthly mean values of precipitation, potential evaporation and temperature, as our focus was on the reproduction of inter- and intra-annual variability.

    After bias-correction the spread in discharge results of the GCM based runs decreased and results were similar to results of the ERA-40 based runs, especially for rivers with a strong seasonal pattern. Overall the bias-correction method resulted in a slight reduction of global runoff and the method performed less well in arid and mountainous regions. However, deviations between GCM results and GRDC statistics did decrease for Q, Q90 and IAV. After bias-correction consistency amongst
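A bias correction limited to monthly mean values, as this record describes, can be sketched as a multiplicative scaling that forces each calendar month's model mean to match the observed mean over a common calibration period. Variable names and the daily/monthly layout are illustrative assumptions.

```python
import numpy as np

def monthly_scaling_correction(model_p, obs_p, months):
    """Scale a daily model precipitation series so that each calendar
    month's mean matches the observed monthly mean.

    model_p, obs_p : daily series over the same calibration period
    months         : calendar month (1-12) of each day
    """
    model_p = np.asarray(model_p, dtype=float)
    obs_p = np.asarray(obs_p, dtype=float)
    corrected = model_p.copy()
    for m in range(1, 13):
        sel = months == m
        if not sel.any():
            continue
        model_mean = model_p[sel].mean()
        if model_mean > 0:
            corrected[sel] *= obs_p[sel].mean() / model_mean
    return corrected
```

Because only the monthly means are constrained, the day-to-day (intra-monthly) variability of the model series is preserved, which matches the record's stated focus on inter- and intra-annual variability.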

  1. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at the basin level, two types of statistical approaches have been suggested. One is statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, is the use of stochastic models to conditionally simulate rainfall series based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the developed downscaling schemes involved complex weather identification procedures while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris () developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of: (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris (), in reproducing the historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, i.e., the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
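The "nonparametric distribution mapping" in approach (a) is commonly implemented as empirical quantile mapping: each model value is sent through the model's empirical CDF and then through the inverse of the observed empirical CDF. A minimal sketch with invented names, not the paper's exact implementation:

```python
import numpy as np

def quantile_map(model_values, model_hist, obs_hist):
    """Empirical quantile mapping: corrected = F_obs^{-1}(F_model(x)).

    model_hist and obs_hist are model and observed series over a common
    calibration period; model_values are the values to be corrected.
    """
    model_sorted = np.sort(np.asarray(model_hist, dtype=float))
    obs_sorted = np.sort(np.asarray(obs_hist, dtype=float))
    # empirical CDF value of each input under the model climatology
    p = np.searchsorted(model_sorted, model_values, side="right") / len(model_sorted)
    p = np.clip(p, 1.0 / len(obs_sorted), 1.0)
    # invert the observed empirical CDF at those probabilities
    return np.quantile(obs_sorted, p)
```

Applied to the calibration period itself, the corrected series reproduces the observed distribution by construction; the open question studied in the record is how well such a mapping transfers outside the calibration period, especially for extremes.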

  2. How well do CMIP5 climate models reproduce explosive cyclones in the extratropics of the Northern Hemisphere?

    Science.gov (United States)

    Seiler, C.; Zwiers, F. W.

    2016-02-01

    Extratropical explosive cyclones are rapidly intensifying low pressure systems with severe wind speeds and heavy precipitation, affecting livelihoods and infrastructure primarily in coastal and marine environments. This study evaluates how well the most recent generation of climate models reproduces extratropical explosive cyclones in the Northern Hemisphere for the period 1980-2005. An objective feature-tracking algorithm is used to identify and track cyclones from 25 climate models and three reanalysis products. Model biases are compared to biases in the sea surface temperature (SST) gradient, the polar jet stream, the Eady growth rate, and model resolution. Most models accurately reproduce the spatial distribution of explosive cyclones when compared to reanalysis data (R = 0.94), with high frequencies along the Kuroshio Current and the Gulf Stream. Three quarters of the models however significantly underpredict explosive cyclone frequencies, by a third on average and by two thirds in the worst case. This frequency bias is significantly correlated with jet stream speed in the inter-model spread (R ≥ 0.51), which in the Atlantic is correlated with a negative meridional SST gradient (R = -0.56). The importance of the jet stream versus other variables considered in this study also applies to the interannual variability of explosive cyclone frequency. Furthermore, models with fewer explosive cyclones tend to underpredict the corresponding deepening rates (R ≥ 0.88). A follow-up study will assess the impacts of climate change on explosive cyclones, and evaluate how the model biases presented in this study affect the projections.
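Explosive cyclones are conventionally defined by the Sanders and Gyakum (1980) criterion: a latitude-normalised central pressure fall of at least 24 hPa per 24 h (one "Bergeron"). The record does not give its exact threshold formula, so the standard definition is sketched here; function and argument names are invented.

```python
import math

def deepening_rate_bergeron(p_start_hpa, p_end_hpa, hours, lat_deg):
    """Normalised deepening rate in Bergerons (Sanders and Gyakum, 1980).

    A cyclone is classed as 'explosive' when its central pressure falls
    by the latitude-adjusted equivalent of >= 24 hPa per 24 h at 60 deg
    latitude, i.e. when this function returns >= 1.
    """
    fall_per_24h = (p_start_hpa - p_end_hpa) * 24.0 / hours
    return (fall_per_24h / 24.0) * (math.sin(math.radians(60.0))
                                    / math.sin(math.radians(lat_deg)))
```

The sin(60°)/sin(latitude) factor makes a given pressure fall count for more at lower latitudes, where the background pressure gradient implied by geostrophy is weaker.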

  3. A computational model for histone mark propagation reproduces the distribution of heterochromatin in different human cell types.

    Science.gov (United States)

    Schwämmle, Veit; Jensen, Ole Nørregaard

    2013-01-01

    Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 that is methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 lysine 4 (H3K4me) and acetylation of several lysines, which are associated with euchromatin and active genes. We show that the related histone modifications form antagonistic domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effects of changes in the nucleation site distribution and in propagation rates were studied. The former occurs mainly with the aim of (de-)activating single genes or gene groups, while the latter has the power to control the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
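The core mechanism of the record (nucleation sites, bidirectional propagation, and boundaries where antagonistic marks meet) can be caricatured as a deterministic one-dimensional cellular automaton. This is a toy illustration, far simpler than the paper's stochastic model; all parameters and site positions are invented.

```python
import numpy as np

def propagate_marks(n=201, silent_site=50, active_site=150, steps=60):
    """Toy bidirectional spreading of antagonistic histone marks.

    States per nucleosome: +1 = H3K9me (heterochromatin),
    -1 = H3K4me/acetylation (euchromatin), 0 = unmarked.
    An unmarked nucleosome adopts a neighbouring mark each step; where
    opposing fronts meet, the cell stays unmarked, forming a boundary.
    """
    state = np.zeros(n, dtype=int)
    state[silent_site] = 1    # heterochromatic nucleation site
    state[active_site] = -1   # euchromatic nucleation site
    for _ in range(steps):
        new = state.copy()
        for i in range(1, n - 1):
            if state[i] == 0:
                left, right = state[i - 1], state[i + 1]
                if left != 0 and (right == 0 or right == left):
                    new[i] = left
                elif right != 0 and left == 0:
                    new[i] = right
                # opposing neighbours: remain unmarked (boundary element)
        state = new
    return state
```

With the symmetric parameters above, the two fronts spread one nucleosome per step and halt where they collide, leaving a stable unmarked boundary between a heterochromatic and a euchromatic domain.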

  4. A novel, comprehensive, and reproducible porcine model for determining the timing of bruises in forensic pathology

    DEFF Research Database (Denmark)

    Barington, Kristiane; Jensen, Henrik Elvang

    2016-01-01

    that resulted in bruises were inflicted on the back. In addition, 2 control pigs were included in the study. The pigs were euthanized consecutively from 1 to 10 h after the infliction of bruises. Following gross evaluation, skin, and muscle tissues were sampled for histology. Results Grossly, the bruises...... appeared uniform and identical to the tramline bruises seen in humans and pigs subjected to blunt trauma. Histologically, the number of neutrophils in the subcutis, the number of macrophages in the muscle tissue, and the localization of neutrophils and macrophages in muscle tissue showed a time...... in order to identify gross and histological parameters that may be useful in determining the age of a bruise. Methods The mechanical device was able to apply a single reproducible stroke with a plastic tube that was equivalent to being struck by a man. In each of 10 anesthetized pigs, four strokes...

  5. The link between the Barents Sea and ENSO events reproduced by NEMO model

    Directory of Open Access Journals (Sweden)

    V. N. Stepanov

    2012-05-01

    An analysis of observational data in the Barents Sea along a meridian at 33°30´ E between 70°30´ and 72°30´ N has revealed a negative correlation between El Niño/La Niña-Southern Oscillation (ENSO) events and water temperature in the top 200 m: the temperature drops by about 0.5 °C during warm ENSO events, while during cold ENSO events the top 200 m layer of the Barents Sea is warmer. Results from 1- and 1/4-degree global NEMO models show a similar response for the whole Barents Sea. During the strong warm ENSO event in 1997–1998, an anticyclonic atmospheric circulation settled over the Barents Sea instead of the usual cyclonic circulation. This change enhanced heat losses in the Barents Sea and substantially influenced the Barents Sea inflow from the North Atlantic via changes in ocean currents. Under normal conditions there is a warm current along the Scandinavian peninsula entering the Barents Sea from the North Atlantic; however, after the 1997–1998 event this current was weakened.

    During 1997–1998 the model annual mean temperature in the Barents Sea is decreased by about 0.8 °C, also resulting in a higher sea ice volume. In contrast during the cold ENSO events in 1999–2000 and 2007–2008 the model shows a lower sea ice volume, and higher annual mean temperatures in the upper layer of the Barents Sea of about 0.7 °C.

    An analysis of model data shows that the Barents Sea inflow is the main source of variability in the Barents Sea heat content, and is forced by changing pressure and winds in the North Atlantic. However, surface heat exchange with the atmosphere can also play a dominant role in the annual heat balance of the Barents Sea, especially in the year following ENSO events.

  6. Assessment of the performance of numerical modeling in reproducing a replenishment of sediments in a water-worked channel

    Science.gov (United States)

    Juez, C.; Battisacco, E.; Schleiss, A. J.; Franca, M. J.

    2016-06-01

    The artificial replenishment of sediment is used as a method to re-establish sediment continuity downstream of a dam. However, the impact of this technique on the hydraulic conditions, and the resulting bed morphology, is yet to be understood. Several numerical tools for modeling sediment transport and morphology evolution have been developed in recent years and can be used for this application. These models range from 1D to 3D approaches: the former is overly simplistic for simulating such a complex geometry; the latter often requires a prohibitive computational effort. 2D models, however, are computationally efficient and may already provide sufficiently accurate predictions of the morphology evolution caused by sediment replenishment in a river. Here, the 2D shallow water equations are solved in combination with the Exner equation by means of a weakly coupled strategy. The classical friction approach used to reproduce the bed channel roughness has been modified to account for the morphological effect of replenishment, which causes a fining of the channel bed. Computational outcomes are compared with four sets of experimental data obtained from several replenishment configurations studied in the laboratory. The experiments differ in terms of placement volume and configuration. A set of analysis parameters is proposed for the experimental-numerical comparison, with particular attention to the spreading, covered surface and travel distance of the placed replenishment grains. The numerical tool reliably reproduces the overall tendency shown by the experimental data. The effect of roughness fining is better reproduced with the approach proposed herein. However, the sediment clusters found in the experiments are not well reproduced numerically in regions of the channel with a limited number of sediment grains.
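
    The weakly coupled strategy described above can be illustrated with a minimal 1D sketch: the flow field is held fixed while the bed is advanced with the Exner equation. The Grass-type bedload law q_s = A·u³, the parameter values and the prescribed velocity field are assumptions made for illustration, not the closure used by the authors.

```python
import numpy as np

def exner_step(z, u, dx, dt, A=1e-3, porosity=0.4):
    """Advance bed elevation z one step with the Exner equation:
    (1 - porosity) * dz/dt = -d(q_s)/dx, differenced upwind in space."""
    qs = A * u**3                      # bedload flux (Grass-type law, assumed)
    dqs = np.diff(qs)                  # flux divergence between nodes
    z = z.copy()
    z[1:] -= dt / ((1.0 - porosity) * dx) * dqs
    return z

# Usage sketch: a decelerating (frozen) flow deposits sediment downstream.
x = np.linspace(0.0, 10.0, 21)
dx = x[1] - x[0]
u = 1.0 - 0.05 * x                     # prescribed velocity field
z = np.zeros_like(x)
for _ in range(100):
    z = exner_step(z, u, dx, dt=0.1)
```

    In a genuinely weak-coupled model the velocity field would be recomputed from the shallow water equations between bed updates; here it is frozen to keep the sketch short.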

  7. The diverse broad-band light-curves of Swift GRBs reproduced with the cannonball model

    CERN Document Server

    Dado, Shlomo; De Rújula, A

    2009-01-01

    Two radiation mechanisms, inverse Compton scattering (ICS) and synchrotron radiation (SR), suffice within the cannonball (CB) model of long gamma-ray bursts (LGRBs) and X-ray flashes (XRFs) to provide a very simple and accurate description of their observed prompt emission and afterglows. Simple as they are, the two mechanisms and the burst environment generate the rich structure of the light curves at all frequencies and times. This is demonstrated for 33 selected Swift LGRBs and XRFs, which are well sampled from early time until late time and well represent the entire diversity of the broad-band light curves of Swift LGRBs and XRFs. Their prompt gamma-ray and X-ray emission is dominated by ICS of glory light. During their fast decline phase, ICS is taken over by SR, which dominates their broad-band afterglow. The pulse shape and spectral evolution of the gamma-ray peaks and the early-time X-ray flares, and even the delayed optical `humps' in XRFs, are correctly predicted. The canonical and non-canonical X-ra...

  8. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Moreover, thanks to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
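
    As a rough illustration of kinetic-model fitting by grid search (a single-threaded, much-simplified cousin of the grid parameter searching mentioned above), the sketch below fits a one-tissue compartment model, C_t(t) = K1 · (Cp ⊗ e^(−k2·t)), to a noise-free synthetic curve. The plasma input function and the parameter grids are invented for the example.

```python
import numpy as np

def tissue_curve(t, cp, K1, k2):
    """One-tissue compartment model: K1 * convolution of the plasma
    input cp with an exponential clearance kernel exp(-k2*t)."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: len(t)] * dt

def grid_fit(t, cp, measured, K1_grid, k2_grid):
    """Exhaustive grid search minimizing the sum of squared errors."""
    best, best_sse = None, np.inf
    for K1 in K1_grid:
        for k2 in k2_grid:
            sse = np.sum((tissue_curve(t, cp, K1, k2) - measured) ** 2)
            if sse < best_sse:
                best, best_sse = (K1, k2), sse
    return best

# Usage sketch: recover known parameters from a synthetic, noise-free curve.
t = np.linspace(0.0, 60.0, 241)
cp = np.exp(-0.1 * t)                        # synthetic plasma input
target = tissue_curve(t, cp, K1=0.5, k2=0.2)
fit = grid_fit(t, cp, target,
               np.linspace(0.1, 1.0, 10), np.linspace(0.05, 0.5, 10))
```

    With real, noisy data the grid minimum would only approximate the true parameters; the competition between such grid candidates and an iterative fit is the idea the abstract describes.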

  9. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Moreover, thanks to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  10. Cellular automaton model with dynamical 2D speed-gap relation reproduces empirical and experimental features of traffic flow

    CERN Document Server

    Tian, Junfang; Ma, Shoufeng; Zhu, Chenqiang; Jiang, Rui; Ding, YaoXian

    2015-01-01

    This paper proposes an improved cellular automaton traffic flow model based on the brake light model, which takes into account that the desired time gap of vehicles is remarkably larger than one second. Although the hypothetical steady state of vehicles in the deterministic limit corresponds to a unique relationship between speeds and gaps in the proposed model, the traffic states of vehicles dynamically span a two-dimensional region in the plane of speed versus gap, due to the various randomizations. It is shown that the model reproduces well (i) free flow, synchronized flow and jams, as well as the transitions among the three phases; (ii) the evolution features of disturbances and the spatiotemporal patterns in a car-following platoon; and (iii) the empirical time series of traffic speed obtained from NGSIM data. Therefore, we argue that a model can potentially reproduce the empirical and experimental features of traffic flow, provided that the traffic states are able to dynamically span a 2D speed-gap...
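
    A minimal cellular-automaton update in the spirit of the model above (in fact the far simpler Nagel–Schreckenberg rules, without brake lights or a 2D speed-gap relation) can be sketched as follows; all parameters are illustrative.

```python
import random

def step(positions, speeds, road_len, v_max=5, p_slow=0.3,
         rng=random.Random(0)):
    """One parallel update on a circular road: accelerate, respect the
    gap to the car ahead, then apply a random slowdown."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_pos, new_spd = positions[:], speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)      # accelerate, never collide
        if v > 0 and rng.random() < p_slow:     # random slowdown
            v -= 1
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % road_len
    return new_pos, new_spd

# Usage sketch: four cars on a 100-cell ring.
pos = [0, 10, 20, 30]
spd = [0, 0, 0, 0]
for _ in range(50):
    pos, spd = step(pos, spd, road_len=100)
```

    The brake-light model of the abstract adds, among other things, a state-dependent slowdown probability; the gap-limited parallel update shown here is the common CA backbone.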

  11. Comparing models of star formation simulating observed interacting galaxies

    Science.gov (United States)

    Quiroga, L. F.; Muñoz-Cuartas, J. C.; Rodrigues, I.

    2017-07-01

    In this work, we compare different models of star formation in reproducing observed interacting galaxies. We use observational data to model the evolution of a pair of galaxies undergoing a minor merger. Minor mergers represent situations that deviate only weakly from the equilibrium configuration, yet significant changes in star formation (SF) efficiency can take place; minor mergers therefore provide a unique setting to study SF in galaxies in a realistic yet simple way. Reproducing observed systems also gives us the opportunity to compare the results of the simulations with observations, which in the end can be used as probes to characterize the SF models implemented in the comparison. In this work we compare two different star formation recipes implemented in the Gadget3 and GIZMO codes. Both codes share the same numerical background, and differences arise mainly in the star formation recipe they use. We use observations from the Pico dos Dias and GEMINI telescopes and show how we use observational data of the interacting pair in AM2229-735 to characterize it. We then use this information to simulate the evolution of the system and finally reproduce the observations: mass distribution, morphology and the main features of the merger-induced star formation burst. We show that both methods manage to reproduce the star formation activity roughly. Through a careful study, we show that resolution plays a major role in the reproducibility of the system. In that sense, the star formation recipe implemented in the GIZMO code has shown a more robust performance. Acknowledgements: This work is supported by Colciencias, Doctorado Nacional - 617 program.

  12. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S. [Stanford Linear Accelerator Center, Menlo Park, CA (United States)

    1996-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modelling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances. (author)

  13. Observations involving broadband impedance modelling

    Energy Technology Data Exchange (ETDEWEB)

    Berg, J.S.

    1995-08-01

    Results for single- and multi-bunch instabilities can be significantly affected by the precise model that is used for the broadband impedance. This paper discusses three aspects of broadband impedance modeling. The first is an observation of the effect that a seemingly minor change in an impedance model has on the single-bunch mode coupling threshold. The second is a successful attempt to construct a model for the high-frequency tails of an r.f. cavity. The last is a discussion of requirements for the mathematical form of an impedance which follow from the general properties of impedances.

  14. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models.

    Science.gov (United States)

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-10-07

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that, surprisingly, the biomass reaction is blocked (unable to sustain a non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic-lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
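
    The floating-point pitfall that exact arithmetic removes can be shown in a few lines: a steady-state balance S·v = 0 that holds exactly over the rationals appears violated in binary floating point. The tiny stoichiometric row below is invented for the example; MONGOOSE itself operates on genome-scale models.

```python
from fractions import Fraction

# One row of a (made-up) stoichiometric matrix and a candidate flux vector.
S_float = [0.1, 0.2, -0.3]
v = [1, 1, 1]

# In binary floating point 0.1 + 0.2 - 0.3 is a tiny nonzero number,
# so the balance check S @ v == 0 spuriously fails.
residual_float = sum(s * x for s, x in zip(S_float, v))

# The same row as exact rationals: the residual is exactly zero.
S_exact = [Fraction(1, 10), Fraction(2, 10), Fraction(-3, 10)]
residual_exact = sum(s * x for s, x in zip(S_exact, v))
```

    A solver thresholding `residual_float` against a tolerance may classify the same reaction as blocked or unblocked depending on rounding behavior, which is one way software-dependent results of the kind described above can arise.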

  15. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations (… modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training in the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease.

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S; Kovács, Attila D; Meyerholz, David K; Trantzas, Constantin; Lambertz, Allyn M; Darbro, Benjamin W; Weber, Krystal L; White, Katherine A M; Rheeden, Richard V; Kruer, Michael C; Dacken, Brian A; Wang, Xiao-Jun; Davis, Bryan T; Rohret, Judy A; Struzynski, Jason T; Rohret, Frank A; Weimer, Jill M; Pearce, David A

    2015-11-15

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children, leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, and increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated showing some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions, including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT, and could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and under physiological conditions.

  17. Observational modeling of topological spaces

    Energy Technology Data Exchange (ETDEWEB)

    Molaei, M.R. [Department of Mathematics, Shahid Bahonar University of Kerman, Kerman 76169-14111 (Iran, Islamic Republic of)], E-mail: mrmolaei@mail.uk.ac.ir

    2009-10-15

    In this paper a model for a multi-dimensional observer using fuzzy theory is presented. A relative form of the Tychonoff theorem is proved. The notion of topological entropy is extended, and the persistence of relative topological entropy under the relative conjugate relation is proved.

  18. Energy and nutrient deposition and excretion in the reproducing sow: model development and evaluation

    DEFF Research Database (Denmark)

    Hansen, A V; Strathe, A B; Theil, Peter Kappel;

    2014-01-01

    Air and nutrient emissions from swine operations raise environmental concerns. During the reproduction phase, sows consume and excrete large quantities of nutrients. The objective of this study was to develop a mathematical model to describe energy and nutrient partitioning and predict manure excretion and composition and methane emissions on a daily basis. The model was structured to contain gestation and lactation modules, which can be run separately or sequentially, with outputs from the gestation module used as inputs to the lactation module. In the gestation module, energy and protein ... production, and maternal growth with body tissue losses constrained within biological limits. Global sensitivity analysis showed that nonlinearity in the parameters was small. The model outputs considered were the total protein and fat deposition, average urinary and fecal N excretion, average methane...

  19. A simple branching model that reproduces language family and language population distributions

    Science.gov (United States)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data on real languages. The model is based on the recent finding that language changes occur independently of population size. We find agreement with the data when additionally assuming that languages may be distinguished by at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, in keeping with the linguistics tradition since Swadesh.
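
    A heavily simplified branching sketch in the spirit of the model above: each language is a tuple of discrete features; at every step a language may split with a fixed probability, independent of population size (per the finding cited above), with the daughter differing in one feature; the Swadesh-like distance is the number of differing features. The rates, feature counts and split rule are all invented for illustration.

```python
import random

def distance(a, b):
    """Swadesh-like distance: number of differing features."""
    return sum(x != y for x, y in zip(a, b))

def simulate(steps=20, n_feat=8, n_values=3, p_split=0.3, seed=1):
    rng = random.Random(seed)
    languages = [tuple(0 for _ in range(n_feat))]    # single ancestor
    for _ in range(steps):
        for lang in list(languages):                 # snapshot of this step
            if rng.random() < p_split:
                i = rng.randrange(n_feat)
                daughter = list(lang)
                daughter[i] = rng.randrange(n_values)  # mutate one feature
                languages.append(tuple(daughter))
    return languages

langs = simulate()
```

    From the final list one can tabulate family sizes (languages within a small distance of each other) and compare the resulting distributions to data, which is the kind of comparison the abstract describes.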

  20. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    National Research Council Canada - National Science Library

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    .... Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic...

  1. Reproducible infection model for Clostridium perfringens in broiler chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments w...

  2. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    Directory of Open Access Journals (Sweden)

    Scott J. Rapp, MD

    2015-02-01

    Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following the burn, with thickness appearing to coincide with the location along the dorsal axis. With minimal pig-to-pig variation, we describe our technique to provide a testable immature scar model.

  3. Accuracy and reproducibility of dental measurements on tomographic digital models: a systematic review and meta-analysis.

    Science.gov (United States)

    Ferreira, Jamille B; Christovam, Ilana O; Alencar, David S; da Motta, Andréa F J; Mattos, Claudia T; Cury-Saramago, Adriana

    2017-04-26

    The aim of this systematic review with meta-analysis was to assess the accuracy and reproducibility of dental measurements obtained from digital study models generated from CBCT compared with those acquired from plaster models. The electronic databases Cochrane Library, Medline (via PubMed), Scopus, VHL, Web of Science, and System for Information on Grey Literature in Europe were screened to identify articles from 1998 until February 2016. The inclusion criteria were: prospective and retrospective clinical trials in humans; validation and/or comparison articles of dental study models obtained from CBCT and plaster models; and articles that used dental linear measurements as an assessment tool. The methodological quality of the studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A meta-analysis was performed to validate all comparative measurements. The database search identified a total of 3160 items, and 554 duplicates were excluded. After reading titles and abstracts, 12 articles were selected. Five articles were included after reading in full. The methodological quality obtained through QUADAS-2 was poor to moderate. In the meta-analysis, there were statistically significant differences between the mesiodistal widths of mandibular incisors, maxillary canines and premolars, and the overall Bolton analysis. Therefore, the measurements considered accurate were maxillary and mandibular crowding, intermolar width and the mesiodistal widths of maxillary incisors, mandibular canines and premolars, and molars in both arches. Digital models obtained from CBCT were not accurate for all the measures assessed. The differences were clinically acceptable for all dental linear measurements except maxillary arch perimeter. Digital models are reproducible for all measurements when intraexaminer assessment is considered, and need improvement in interexaminer evaluation.

  4. Experimental and Numerical Models of Complex Clinical Scenarios; Strategies to Improve Relevance and Reproducibility of Joint Replacement Research.

    Science.gov (United States)

    Bechtold, Joan E; Swider, Pascal; Goreham-Voss, Curtis; Soballe, Kjeld

    2016-02-01

    This research review aims to focus attention on the effect of specific surgical and host factors on implant fixation, and on the importance of accounting for them in experimental and numerical models. These factors affect (a) eventual clinical applicability and (b) reproducibility of findings across research groups. Proper function and longevity of orthopedic joint replacement implants rely on secure fixation to the surrounding bone. Technology and surgical technique have improved over the last 50 years, and robust ingrowth and decades of implant survival are now routinely achieved for healthy patients and first-time (primary) implantation. Second-time (revision) implantation presents with bone loss, with interfacial bone gaps in areas vital for secure mechanical fixation. Patients with medical comorbidities such as infection, smoking, congestive heart failure, kidney disease, and diabetes have a diminished healing response, poorer implant fixation, and greater revision risk. It is these more difficult clinical scenarios that require research to evaluate more advanced treatment approaches. Such treatments can include osteogenic or antimicrobial implant coatings, allo- or autogenous cellular or tissue-based approaches, local and systemic drug delivery, and surgical approaches. Regarding implant-related approaches, most experimental and numerical models do not generally impose conditions that represent mechanical instability at the implant interface, or recalcitrant healing. Many treatments will work well in forgiving settings, but fail in complex human settings with disease, bone loss, or previous surgery. Ethical considerations mandate that we justify and limit the number of animals tested, which restricts experimental permutations of treatments. Numerical models provide flexibility to evaluate multiple parameters and combinations, but generally need to employ simplifying assumptions. The objectives of this paper are (a) to highlight the importance of mechanical

  5. Evaluation of Nitinol staples for the Lapidus arthrodesis in a reproducible biomechanical model

    Directory of Open Access Journals (Sweden)

    Nicholas Alexander Russell

    2015-12-01

    Full Text Available While the Lapidus procedure is a widely accepted technique for the treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study was to evaluate the biomechanical properties of new Shape Memory Alloy staples arranged in different configurations in a repeatable 1st tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n = 5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested in dorsal four-point bending, medial four-point bending, dorsal three-point bending and plantar cantilever bending with the staples activated at 37 °C. The peak load, stiffness and plantar gapping were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a significant (p < 0.05) increase in peak load in the two-staple constructs compared to the single-staple constructs for all testing modalities. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero and contact area following loading in the two-staple constructs (p < 0.05). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models were demonstrated, with no construct failures due to hardware or model breakdown. Shape memory alloy staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis under a range of loading conditions.

  6. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing.
geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  7. Validation of the 3D Skin Comet assay using full thickness skin models: transferability and reproducibility

    Directory of Open Access Journals (Sweden)

    Kerstin Reisinger

    2015-06-01

    Full Text Available The 3D Skin Comet assay was developed to improve the in vitro prediction of the genotoxic potential of dermally applied chemicals. For this purpose, a classical read-out for genotoxicity (i.e. comet formation) was combined with reconstructed 3D skin models as well-established test systems. Five laboratories (BASF, BfR (Federal Institute for Risk Assessment), Henkel, Procter & Gamble and TNO Triskelion) started to validate this assay using the Phenion® Full-Thickness (FT) Skin Model and 8 coded chemicals, with financial support from Cosmetics Europe and the German Ministry of Education & Research. There was an excellent overall predictivity of the expected genotoxicity (>90%). Four labs correctly identified all chemicals and the fifth correctly identified 80% of the chemicals. Background DNA damage was low, and values for solvent (acetone) and positive (methyl methanesulfonate, MMS) controls were comparable among labs. Inclusion of the DNA-polymerase inhibitor aphidicolin (APC) in the protocol improved the predictivity of the assay, since it enabled robust detection of pro-mutagens, e.g. 7,12-dimethylbenz[a]anthracene and benzo[a]pyrene. Therefore, all negative findings are now confirmed by additional APC experiments before coming to a final conclusion. Furthermore, MMC, which intercalates between DNA strands causing covalent binding, was detected with the standard protocol, in which it gave weak but statistically significant responses. Stronger responses, however, were obtained using a cross-linker-specific protocol in which MMC reduced the migration of MMS-induced DNA damage. These data support the use of the Phenion® FT in the Comet assay: no false-positive and only one false-negative finding in a single lab. Testing will continue to obtain data for 30 chemicals. Once validated, the 3D Skin Comet assay is foreseen to be used as a follow-up test for positive results from the current in vitro genotoxicity test battery.

  8. PAMELA positron and electron spectra are reproduced by 3-dimensional cosmic-ray modeling

    CERN Document Server

    Gaggero, Daniele; Maccione, Luca; Di Bernardo, Giuseppe; Evoli, Carmelo

    2013-01-01

    The PAMELA collaboration recently released the $e^+$ absolute spectrum between 1 and 300 GeV, in addition to the positron fraction and $e^-$ spectrum previously measured in the same time period. We use the newly developed 3-dimensional upgrade of the DRAGON code and the charge-dependent solar modulation HelioProp code to consistently describe those data. We obtain very good fits of all data sets if a hard $e^+ + e^-$ extra component peaked at 1 TeV is added to a softer $e^-$ background and the secondary $e^\pm$ produced by the spallation of cosmic-ray proton and helium nuclei. All sources are assumed to follow a realistic spiral-arm spatial distribution. Remarkably, the PAMELA data do not display any need for a charge-asymmetric extra component. Finally, plain-diffusion or low-reacceleration propagation models, tuned against nuclear data, nicely describe the PAMELA lepton data with no need to introduce a low-energy break in the proton and helium spectra.

  9. Inter-observer reproducibility of endometrial cytology by the Osaki Study Group method: utilising the Becton Dickinson SurePath(™) liquid-based cytology.

    Science.gov (United States)

    Norimatsu, Y; Yamaguchi, T; Taira, T; Abe, H; Sakamoto, H; Takenaka, M; Yanoh, K; Yoshinobu, M; Irino, S; Hirai, Y; Kobayashi, T K

    2016-12-01

    The purpose of the present study was to evaluate the reproducibility of the cytological diagnosis of endometrial lesions by the Osaki Study Group (OSG) method, a set of new cytological diagnostic criteria, using BD SurePath(™) (SP) liquid-based cytology (LBC). The cytological classification of the OSG method consists of six categories: (i) normal endometrium (NE); (ii) endometrial glandular and stromal breakdown (EGBD); (iii) atypical endometrial cells, cannot exclude atypical endometrial hyperplasia or more (ATEC-A); (iv) adenocarcinoma including atypical endometrial hyperplasia or malignant tumour (Malignancy); (v) endometrial hyperplasia without atypia (EH); and (vi) atypical endometrial cells of undetermined significance (ATEC-US). For this study, a total of 244 endometrial samplings were classified by two academic cytopathologists as follows: 147 NE cases, 36 EGBD cases, 47 Malignant cases, eight ATEC-A cases, two EH cases and four ATEC-US cases. To confirm the reproducibility of the diagnosis and to study inter- and intra-observer agreement further, a second review round followed at a 3-month interval, which included three additional cytopathologists. The inter-observer agreement for the NE class improved progressively from 'good to fair' to 'excellent', with values increasing from 0.70 to 0.81. The EGBD and Malignancy classes likewise improved from 'good to fair' to 'excellent', with values increasing from 0.62-0.63 to 0.84-0.95, respectively. The overall intra-observer agreement between the first and second rounds ranged from 'good to fair' to 'excellent', with values changing from 0.79 to 0.85. All kappa improvements were statistically significant. © 2016 John Wiley & Sons Ltd.
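
    Agreement figures like the ones quoted here are Cohen's kappa values, which correct raw agreement for the agreement expected by chance. A minimal sketch in Python (the labels below are invented for illustration, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical category calls by two cytopathologists:
a = ["NE", "NE", "EGBD", "Malignant", "NE", "EGBD"]
b = ["NE", "NE", "EGBD", "Malignant", "EGBD", "NE"]
kappa = cohens_kappa(a, b)
```

    Here 4 of 6 calls agree (p_o ≈ 0.67) while chance alone predicts p_e ≈ 0.39, giving kappa ≈ 0.45, i.e. "moderate" on the usual interpretation scale.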

  10. Modelling of an explosive event observed by SUMER & TRACE

    Science.gov (United States)

    Price, Daniel; Taroyan, Youra; Ishak, Bebe

    2016-07-01

    To fully understand coronal heating, we must first understand the different solar processes that move energy through the solar atmosphere. TRACE observations have revealed a short cold loop evolving on a short timescale, seemingly with multiple explosive events occurring along its length. An adaptive hydrodynamic radiation code was used to simulate the loop under non-equilibrium ionization. Footpoint heating and cold plasma injection were considered as possible scenarios to reproduce the observations. The simulation results were converted into synthetic observations through forward modelling, for comparison with SOHO/SUMER spectral observations of the loop.

  11. The statistics of repeating patterns of cortical activity can be reproduced by a model network of stochastic binary neurons.

    Science.gov (United States)

    Roxin, Alex; Hakim, Vincent; Brunel, Nicolas

    2008-10-15

    Calcium imaging of the spontaneous activity in cortical slices has revealed repeating spatiotemporal patterns of transitions between so-called down states and up states (Ikegaya et al., 2004). Here we fit a model network of stochastic binary neurons to data from these experiments, and in doing so reproduce the distributions of such patterns. We use two versions of this model: (1) an unconnected network in which neurons are activated as independent Poisson processes; and (2) a network with an interaction matrix, estimated from the data, representing effective interactions between the neurons. The unconnected model (model 1) is sufficient to account for the statistics of repeating patterns in 11 of the 15 datasets studied. Model 2, with interactions between neurons, is required to account for pattern statistics of the remaining four. Three of these four datasets are the ones that contain the largest number of transitions, suggesting that long datasets are in general necessary to render interactions statistically visible. We then study the topology of the matrix of interactions estimated for these four datasets. For three of the four datasets, we find sparse matrices with long-tailed degree distributions and an overrepresentation of certain network motifs. The remaining dataset exhibits a strongly interconnected, spatially localized subgroup of neurons. In all cases, we find that interactions between neurons facilitate the generation of long patterns that do not repeat exactly.
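
    The simpler of the two models (the unconnected network, model 1) is easy to sketch: each neuron is activated independently with its own rate, so joint pattern probabilities factorize into products of per-neuron rates. A toy version in Python (the rates are invented for illustration):

```python
import random

def simulate_unconnected(n_neurons, n_frames, rates, seed=0):
    """Model 1: each neuron transitions to the up state independently with
    its own per-frame probability; returns a list of binary activity frames."""
    rng = random.Random(seed)
    return [[1 if rng.random() < rates[i] else 0 for i in range(n_neurons)]
            for _ in range(n_frames)]

rates = [0.05, 0.10, 0.20]   # hypothetical per-neuron transition probabilities
frames = simulate_unconnected(3, 20000, rates)

# Under independence, empirical rates match the parameters, and joint
# activations factorize into products of the individual rates.
emp = [sum(f[i] for f in frames) / len(frames) for i in range(3)]
joint_01 = sum(f[0] and f[1] for f in frames) / len(frames)
```

    Checking whether observed pattern statistics deviate from this factorized prediction is precisely how the paper decides, per dataset, whether an interaction matrix (model 2) is needed.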

  12. Isokinetic eccentric exercise as a model to induce and reproduce pathophysiological alterations related to delayed onset muscle soreness

    DEFF Research Database (Denmark)

    Lund, Henrik; Vestergaard-Poulsen, P; Kanstrup, I.L.

    1998-01-01

    Physiological alterations following unaccustomed eccentric exercise in an isokinetic dynamometer of the right m. quadriceps until exhaustion were studied, in order to create a model in which the physiological responses to physiotherapy could be measured. In experiment I (exp. I), seven selected...... parameters were measured bilaterally in 7 healthy subjects at day 0 as a control value. Then after a standardized bout of eccentric exercise the same parameters were measured daily for the following 7 d (test values). The measured parameters were: the ratio of phosphocreatine to inorganic phosphate (PCr...... (133Xenon washout technique). This was repeated in experiment II (exp. II) 6-12 months later in order to study reproducibility. In experiment III (exp. III), the normal fluctuations over 8 d of the seven parameters were measured, without intervention with eccentric exercise in 6 other subjects. All...

  13. Efficient and Reproducible Myogenic Differentiation from Human iPS Cells: Prospects for Modeling Miyoshi Myopathy In Vitro

    Science.gov (United States)

    Tanaka, Akihito; Woltjen, Knut; Miyake, Katsuya; Hotta, Akitsu; Ikeya, Makoto; Yamamoto, Takuya; Nishino, Tokiko; Shoji, Emi; Sehara-Fujisawa, Atsuko; Manabe, Yasuko; Fujii, Nobuharu; Hanaoka, Kazunori; Era, Takumi; Yamashita, Satoshi; Isobe, Ken-ichi; Kimura, En; Sakurai, Hidetoshi

    2013-01-01

    The establishment of human induced pluripotent stem cells (hiPSCs) has enabled the production of in vitro, patient-specific cell models of human disease. In vitro recreation of disease pathology from patient-derived hiPSCs depends on efficient differentiation protocols producing relevant adult cell types. However, myogenic differentiation of hiPSCs has faced obstacles, namely, low efficiency and/or poor reproducibility. Here, we report the rapid, efficient, and reproducible differentiation of hiPSCs into mature myocytes. We demonstrated that inducible expression of myogenic differentiation1 (MYOD1) in immature hiPSCs for at least 5 days drives cells along the myogenic lineage, with efficiencies reaching 70–90%. Myogenic differentiation driven by MYOD1 occurred even in immature, almost completely undifferentiated hiPSCs, without mesodermal transition. Myocytes induced in this manner reach maturity within 2 weeks of differentiation as assessed by marker gene expression and functional properties, including in vitro and in vivo cell fusion and twitching in response to electrical stimulation. Miyoshi Myopathy (MM) is a congenital distal myopathy caused by defective muscle membrane repair due to mutations in DYSFERLIN. Using our induced differentiation technique, we successfully recreated the pathological condition of MM in vitro, demonstrating defective membrane repair in hiPSC-derived myotubes from an MM patient and phenotypic rescue by expression of full-length DYSFERLIN (DYSF). These findings not only facilitate the pathological investigation of MM, but could potentially be applied in modeling of other human muscular diseases by using patient-derived hiPSCs. PMID:23626698

  14. The solar dynamo: inferences from observations and modeling

    CERN Document Server

    Kitchatinov, L L

    2014-01-01

    It can be shown on observational grounds that two basic effects of dynamo theory for solar activity - production of the toroidal field from the poloidal one by differential rotation and reverse conversion of the toroidal field to the poloidal configuration by helical motions - are operating in the Sun. These two effects, however, do not suffice for constructing a realistic model for the solar dynamo. Only when a non-local version of the alpha-effect is applied, is downward diamagnetic pumping included and field advection by the equatorward meridional flow near the base of the convection zone allowed for, can the observed activity cycles be closely reproduced. Fluctuations in the alpha-effect can be estimated from sunspot data. Dynamo models with fluctuating parameters reproduce irregularities of solar cycles including the grand activity minima. The physics of parametric excitation of irregularities remains, however, to be understood.

  15. A short-term mouse model that reproduces the immunopathological features of rhinovirus-induced exacerbation of COPD.

    Science.gov (United States)

    Singanayagam, Aran; Glanville, Nicholas; Walton, Ross P; Aniscenko, Julia; Pearson, Rebecca M; Pinkerton, James W; Horvat, Jay C; Hansbro, Philip M; Bartlett, Nathan W; Johnston, Sebastian L

    2015-08-01

    Viral exacerbations of chronic obstructive pulmonary disease (COPD), commonly caused by rhinovirus (RV) infections, are poorly controlled by current therapies. This is due to a lack of understanding of the underlying immunopathological mechanisms. Human studies have identified a number of key immune responses that are associated with RV-induced exacerbations including neutrophilic inflammation, expression of inflammatory cytokines and deficiencies in innate anti-viral interferon. Animal models of COPD exacerbation are required to determine the contribution of these responses to disease pathogenesis. We aimed to develop a short-term mouse model that reproduced the hallmark features of RV-induced exacerbation of COPD. Evaluation of complex protocols involving multiple dose elastase and lipopolysaccharide (LPS) administration combined with RV1B infection showed suppression rather than enhancement of inflammatory parameters compared with control mice infected with RV1B alone. Therefore, these approaches did not accurately model the enhanced inflammation associated with RV infection in patients with COPD compared with healthy subjects. In contrast, a single elastase treatment followed by RV infection led to heightened airway neutrophilic and lymphocytic inflammation, increased expression of tumour necrosis factor (TNF)-α, C-X-C motif chemokine 10 (CXCL10)/IP-10 (interferon γ-induced protein 10) and CCL5 [chemokine (C-C motif) ligand 5]/RANTES (regulated on activation, normal T-cell expressed and secreted), mucus hypersecretion and preliminary evidence for increased airway hyper-responsiveness compared with mice treated with elastase or RV infection alone. In summary, we have developed a new mouse model of RV-induced COPD exacerbation that mimics many of the inflammatory features of human disease. 
This model, in conjunction with human models of disease, will provide an essential tool for studying disease mechanisms and allow testing of novel therapies with potential to

  16. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    Science.gov (United States)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in vivo metabolism with a sensitivity and specificity that were not previously possible. Biological conversion of HP agents, specifically in cancer, has been shown to correlate with the presence of disease, its stage and its response to therapy. For metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, owing to the non-renewable nature of the transient HP magnetization. Moreover, due to current practical limitations in the generation and evolution of hyperpolarized agents, it is not feasible to characterize measurement and processing strategies fully by experiment. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of the measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue-dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisition and processing methods.
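
    The pharmacokinetic core of such simulators is typically a two-site exchange model: hyperpolarized pyruvate magnetization relaxes with T1, converts to lactate at a rate k_pl, and is partially consumed by every excitation pulse. A heavily simplified Euler-integration sketch (all parameter values are invented for illustration, not taken from this paper):

```python
import math

def simulate_hp_exchange(kpl=0.05, r1=1 / 30, flip_deg=20.0, tr=2.0, n_ex=30):
    """Toy longitudinal-magnetization model of pyruvate -> lactate conversion:
    T1 relaxation + unidirectional chemical exchange + RF consumption of the
    non-renewable hyperpolarized magnetization."""
    pyr, lac = 1.0, 0.0
    cos_f = math.cos(math.radians(flip_deg))
    sin_f = math.sin(math.radians(flip_deg))
    pyr_sig, lac_sig = [], []
    for _ in range(n_ex):
        # Observed signal at each excitation is proportional to sin(flip) * Mz.
        pyr_sig.append(pyr * sin_f)
        lac_sig.append(lac * sin_f)
        # Each RF pulse leaves only cos(flip) of the longitudinal magnetization.
        pyr *= cos_f
        lac *= cos_f
        # Euler step over the repetition time TR: relaxation and exchange.
        dpyr = (-r1 * pyr - kpl * pyr) * tr
        dlac = (-r1 * lac + kpl * pyr) * tr
        pyr += dpyr
        lac += dlac
    return pyr_sig, lac_sig

pyr_sig, lac_sig = simulate_hp_exchange()
```

    Even this toy version shows the core trade-off the paper studies: larger flip angles boost instantaneous signal but deplete the finite polarization faster, so apparent conversion metrics depend on the acquisition parameters.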

  17. An Effective and Reproducible Model of Ventricular Fibrillation in Crossbred Yorkshire Swine (Sus scrofa) for Use in Physiologic Research.

    Science.gov (United States)

    Burgert, James M; Johnson, Arthur D; Garcia-Blanco, Jose C; Craig, W John; O'Sullivan, Joseph C

    2015-10-01

    Transcutaneous electrical induction (TCEI) has been used to induce ventricular fibrillation (VF) in laboratory swine for physiologic and resuscitation research. Many studies do not describe the method of TCEI in detail, thus making replication by future investigators difficult. Here we describe a detailed method of electrically inducing VF that was used successfully in a prospective, experimental resuscitation study. Specifically, an electrical current was passed through the heart to induce VF in crossbred Yorkshire swine (n = 30); the current was generated by using two 22-gauge spinal needles, with one placed above and one below the heart, and three 9V batteries connected in series. VF developed in 28 of the 30 pigs (93%) within 10 s of beginning the procedure. In the remaining 2 swine, VF was induced successfully after medial redirection of the superior parasternal needle. The TCEI method is simple, reproducible, and cost-effective. TCEI may be especially valuable to researchers with limited access to funding, sophisticated equipment, or colleagues experienced in interventional cardiology techniques. The TCEI method might be most appropriate for pharmacologic studies requiring VF, VF resulting from the R-on-T phenomenon (as in prolonged QT syndrome), and VF arising from other ectopic or reentrant causes. However, the TCEI method does not accurately model the most common cause of VF, acute coronary occlusive disease. Researchers must consider the limitations of TCEI that may affect internal and external validity of collected data, when designing experiments using this model of VF.

  18. A rat tail temporary static compression model reproduces different stages of intervertebral disc degeneration with decreased notochordal cell phenotype.

    Science.gov (United States)

    Hirata, Hiroaki; Yurube, Takashi; Kakutani, Kenichiro; Maeno, Koichiro; Takada, Toru; Yamamoto, Junya; Kurakawa, Takuto; Akisue, Toshihiro; Kuroda, Ryosuke; Kurosaka, Masahiro; Nishida, Kotaro

    2014-03-01

    The intervertebral disc nucleus pulposus (NP) has two phenotypically distinct cell types: notochordal cells (NCs) and non-notochordal chondrocyte-like cells. In human discs, NCs are lost during adolescence, which is also when discs begin to show degenerative signs. However, little evidence exists regarding the link between NC disappearance and the pathogenesis of disc degeneration. To clarify this, a rat tail disc degeneration model induced by static compression at 1.3 MPa for 0, 1, or 7 days was designed and assessed for up to 56 postoperative days. Radiography, MRI, and histomorphology showed degenerative disc findings in response to the compression period. Immunofluorescence displayed that the number of DAPI-positive NP cells decreased with compression; particularly, the decrease was notable in larger, vacuolated, cytokeratin-8- and galectin-3-co-positive cells, identified as NCs. The proportion of TUNEL-positive cells, which predominantly comprised non-NCs, increased with compression. Quantitative PCR demonstrated isolated mRNA up-regulation of ADAMTS-5 in the 1-day loaded group and MMP-3 in the 7-day loaded group. Aggrecan-1 and collagen type 2α-1 mRNA levels were down-regulated in both groups. This rat tail temporary static compression model, which exhibits decreased NC phenotype, increased apoptotic cell death, and imbalanced catabolic and anabolic gene expression, reproduces different stages of intervertebral disc degeneration.

  19. A first attempt to reproduce basaltic soil chronosequences using a process-based soil profile model: implications for our understanding of soil evolution

    Science.gov (United States)

    Johnson, M.; Gloor, M.; Lloyd, J.

    2012-04-01

    Soils are complex systems which hold a wealth of information on both current and past conditions and on many biogeochemical processes. The ability to model soil-forming processes and predict soil properties will enable us to quantify such conditions and contribute to our understanding of long-term biogeochemical cycles, particularly the carbon cycle and plant nutrient cycles. However, attempts to confront such soil model predictions with data are rare, although more and more data from chronosequence studies are becoming available for this purpose. Here we present initial results of an attempt to reproduce soil properties with a process-based soil evolution model similar to the model of Kirkby (1985, J. Soil Science). We specifically focus on the basaltic soils of Hawaii and north Queensland, Australia. These soils are formed on a series of volcanic lava flows which provide sequences of different-aged soils, all with a relatively uniform parent material. These soil chronosequences provide a snapshot of a soil profile during different stages of development. Steep rainfall gradients in these regions also provide a system which allows us to test the model's ability to reproduce soil properties under differing climates. The mechanistic soil evolution model presented here includes the major processes of soil formation: i) mineral weathering, ii) percolation of rainfall through the soil, iii) leaching of solutes out of the soil profile, iv) surface erosion and v) vegetation and biotic interactions. The model consists of a vertical profile and assumes a simple geometry with a constantly sloping surface. The timescales of interest are on the order of tens to hundreds of thousands of years. The specific properties the model predicts are soil depth, the proportion of original elemental oxides remaining in each soil layer, the pH of the soil solution, the organic carbon distribution, and CO2 production and concentration.
The presentation will focus on a brief introduction of the
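
    The coupling between rainfall percolation and mineral depletion that such models encode can be caricatured in a few lines: water enters at the surface, loses weathering capacity as it equilibrates downward, and each layer's remaining oxide fraction decays accordingly. A toy sketch (all rates and the depth-attenuation factor are invented, not from this model):

```python
def evolve_profile(n_layers=5, years=100000, dt=100.0,
                   rainfall=1.0, weather_rate=2e-6):
    """Toy layered soil-evolution sketch: percolating rainfall dissolves a
    mobile oxide in each layer; the flux (and thus depletion) is strongest
    at the surface and attenuates as the water equilibrates with depth."""
    remaining = [1.0] * n_layers        # fraction of original oxide per layer
    for _ in range(int(years / dt)):
        flux = rainfall
        for i in range(n_layers):
            # First-order loss proportional to water flux and remaining stock.
            remaining[i] -= weather_rate * flux * remaining[i] * dt
            flux *= 0.5                 # water loses aggressiveness with depth
    return remaining

profile = evolve_profile()
```

    Even this caricature reproduces the qualitative chronosequence signature the abstract describes: depletion of original oxides decreasing with depth, and deeper depletion fronts on older (longer-run) surfaces.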

  20. An update on the rotenone models of Parkinson's disease: their ability to reproduce the features of clinical disease and model gene-environment interactions.

    Science.gov (United States)

    Johnson, Michaela E; Bobrovskaya, Larisa

    2015-01-01

    Parkinson's disease (PD) is the second most common neurodegenerative disorder that is characterized by two major neuropathological hallmarks: the degeneration of dopaminergic neurons in the substantia nigra (SN) and the presence of Lewy bodies in the surviving SN neurons, as well as other regions of the central and peripheral nervous system. Animal models have been invaluable tools for investigating the underlying mechanisms of the pathogenesis of PD and testing new potential symptomatic, neuroprotective and neurorestorative therapies. However, the usefulness of these models is dependent on how precisely they replicate the features of clinical PD with some studies now employing combined gene-environment models to replicate more of the affected pathways. The rotenone model of PD has become of great interest following the seminal paper by the Greenamyre group in 2000 (Betarbet et al., 2000). This paper reported for the first time that systemic rotenone was able to reproduce the two pathological hallmarks of PD as well as certain parkinsonian motor deficits. Since 2000, many research groups have actively used the rotenone model worldwide. This paper will review rotenone models, focusing upon their ability to reproduce the two pathological hallmarks of PD, motor deficits, extranigral pathology and non-motor symptoms. We will also summarize the recent advances in neuroprotective therapies, focusing on those that investigated non-motor symptoms and review rotenone models used in combination with PD genetic models to investigate gene-environment interactions.

  1. QSAR model reproducibility and applicability: a case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles.

    Science.gov (United States)

    Roy, Partha Pratim; Kovarich, Simona; Gramatica, Paola

    2011-08-01

    The crucial importance of the three central OECD principles for quantitative structure-activity relationship (QSAR) model validation is highlighted in a case study of tropospheric degradation of volatile organic compounds (VOCs) by OH, applied to two CADASTER chemical classes (PBDEs and (benzo-)triazoles). The application of any QSAR model to chemicals without experimental data largely depends on model reproducibility by the user. The reproducibility of an unambiguous algorithm (OECD Principle 2) is guaranteed by redeveloping MLR models based on both an updated version of the DRAGON software for molecular descriptor calculation and some freely available online descriptors. The Genetic Algorithm has confirmed its ability to always select the most informative descriptors independently of the input pool of variables. The ability of the GA-selected descriptors to model chemicals not used in model development is verified by three different splittings (random by response, K-ANN and K-means clustering), thus ensuring the external predictivity of the new models, independently of the training/prediction set composition (OECD Principle 5). The relevance of checking the structural applicability domain becomes very evident on comparing the predictions for CADASTER chemicals, using the new models proposed herein, with those obtained by EPI Suite. Copyright © 2011 Wiley Periodicals, Inc.
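
    The MLR backbone of such QSAR models is ordinary least squares over a handful of selected descriptors. A self-contained sketch via the normal equations (descriptor values and the generating rule below are synthetic, purely to show the mechanics):

```python
def fit_mlr(X, y):
    """Ordinary least squares for a small descriptor matrix: solve the
    normal equations (A^T A) b = A^T y by Gaussian elimination, where A
    is X with a prepended intercept column."""
    A = [[1.0] + list(row) for row in X]
    p = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(p)]
         for i in range(p)]
    v = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for c in range(col, p):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, p))) / M[i][i]
    return b  # [intercept, one coefficient per descriptor]

# Synthetic check: "rate constants" generated by a known linear rule.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0], [0.0, 5.0]]
y = [2.0 + 3.0 * x1 - 1.0 * x2 for x1, x2 in X]
b = fit_mlr(X, y)
```

    Reproducibility in the OECD Principle 2 sense means exactly this: given the same descriptors and algorithm, any user recovers the same coefficients.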

  2. Inter-observer reproducibility before and after web-based education in the Gleason grading of the prostate adenocarcinoma among the Iranian pathologists.

    Science.gov (United States)

    Abdollahi, Alireza; Sheikhbahaei, Sara; Meysamie, Alipasha; Bakhshandeh, Mohammadreza; Hosseinzadeh, Hasan

    2014-01-01

    This study was aimed at determining intra- and inter-observer concordance rates in the Gleason scoring of prostatic adenocarcinoma, before and after a web-based educational course. In this self-controlled study, 150 tissue samples of prostatic adenocarcinoma were re-examined and scored according to the Gleason scoring system. All pathologists then attended a free web-based course. Afterwards, the same 150 samples [with different codes compared to the previous ones] were distributed differently among the pathologists to be assigned Gleason scores. After gathering the data, the concordance between the first and second reports of the pathologists was determined. Before web-based education, the mean kappa value for inter-observer agreement was 0.25 [fair agreement]. After web-based education it improved significantly, to a mean kappa value of 0.52 [moderate agreement]. Using weighted kappa values, significant improvement in inter-observer agreement was observed for higher Gleason scores; for a score of 10, the mean kappa value after web-based education was 0.68 [substantial agreement], compared with 0.25 [fair agreement] before. Web-based training courses are attractive to pathologists as they do not need to spend much time or money. Therefore, such training courses are strongly recommended for significant pathological issues, including the grading of prostate adenocarcinoma. Through web-based education, pathologists can exchange views and contribute to the rise in the level of reproducibility. Such programs need to be included in post-graduation programs.

  3. Inter-observer reproducibility before and after web-based education in the Gleason grading of the prostate adenocarcinoma among the Iranian pathologists.

    Directory of Open Access Journals (Sweden)

    Alireza Abdollahi

    2014-05-01

    Full Text Available This study was aimed at determining intra- and inter-observer concordance rates in the Gleason scoring of prostatic adenocarcinoma, before and after a web-based educational course. In this self-controlled study, 150 tissue samples of prostatic adenocarcinoma were re-examined and scored according to the Gleason scoring system. All pathologists then attended a free web-based course. Afterwards, the same 150 samples [with different codes compared to the previous ones] were distributed differently among the pathologists to be assigned Gleason scores. After gathering the data, the concordance between the first and second reports of the pathologists was determined. Before web-based education, the mean kappa value for inter-observer agreement was 0.25 [fair agreement]. After web-based education it improved significantly, to a mean kappa value of 0.52 [moderate agreement]. Using weighted kappa values, significant improvement in inter-observer agreement was observed for higher Gleason scores; for a score of 10, the mean kappa value after web-based education was 0.68 [substantial agreement], compared with 0.25 [fair agreement] before. Web-based training courses are attractive to pathologists as they do not need to spend much time or money. Therefore, such training courses are strongly recommended for significant pathological issues, including the grading of prostate adenocarcinoma. Through web-based education, pathologists can exchange views and contribute to the rise in the level of reproducibility. Such programs need to be included in post-graduation programs.

  4. Skill of ship-following large-eddy simulations in reproducing MAGIC observations across the northeast Pacific stratocumulus to cumulus transition region

    Science.gov (United States)

    McGibbon, J.; Bretherton, C. S.

    2017-06-01

    During the Marine ARM GPCI Investigation of Clouds (MAGIC) in October 2011 to September 2012, a container ship making periodic cruises between Los Angeles, CA, and Honolulu, HI, was instrumented with surface meteorological, aerosol and radiation instruments, a cloud radar and ceilometer, and radiosondes. Here large-eddy simulation (LES) is performed in a ship-following frame of reference for 13 four-day transects from the MAGIC field campaign. The goal is to assess whether LES can skillfully simulate the broad range of observed cloud characteristics and boundary layer structure across the subtropical stratocumulus to cumulus transition region sampled during different seasons and meteorological conditions. Results from Leg 15A, which sampled a particularly well-defined stratocumulus to cumulus transition, demonstrate the approach. The LES reproduces the observed timing of decoupling and transition from stratocumulus to cumulus and matches the observed evolution of boundary layer structure, cloud fraction, liquid water path, and precipitation statistics remarkably well. Considering the simulations of all 13 cruises, the LES skillfully simulates the mean diurnal variation of key measured quantities, including liquid water path (LWP), cloud fraction, measures of decoupling, and cloud radar-derived precipitation. The daily mean quantities are well represented, and daily mean LWP and cloud fraction show the expected correlation with estimated inversion strength. There is a -0.6 K low bias in LES near-surface air temperature that results in a high bias of 5.6 W m-2 in sensible heat flux (SHF).
    Overall, these results build confidence in the ability of LES to represent the northeast Pacific stratocumulus to trade cumulus transition region.

  5. Reproducing the organic matter model of anthropogenic dark earth of Amazonia and testing the ecotoxicity of functionalized charcoal compounds

    Directory of Open Access Journals (Sweden)

    Carolina Rodrigues Linhares

    2012-05-01

    Full Text Available The objective of this work was to obtain organic compounds similar to the ones found in the organic matter of anthropogenic dark earth of Amazonia (ADE) using a chemical functionalization procedure on activated charcoal, as well as to determine their ecotoxicity. Based on the study of the organic matter from ADE, an organic model was proposed and an attempt to reproduce it is described. Activated charcoal was oxidized with sodium hypochlorite at different concentrations. Nuclear magnetic resonance was performed to verify whether the spectra of the obtained products were similar to those of humic acids from ADE. The similarity between the spectra indicated that the obtained products were polycondensed aromatic structures with carboxyl groups: a soil amendment that can contribute to soil fertility and to its sustainable use. An ecotoxicological test with Daphnia similis was performed on the more soluble fraction (fulvic acids) of the produced soil amendment. Aryl chloride was formed during the synthesis of the organic compounds from activated charcoal functionalization and was partially removed through a purification process. However, it is probable that some aryl chloride remained in the final product, since the ecotoxicological test indicated that the chemically functionalized soil amendment is moderately toxic.

  6. Enhancement of accuracy and reproducibility of parametric modeling for estimating abnormal intra-QRS potentials in signal-averaged electrocardiograms.

    Science.gov (United States)

    Lin, Chun-Cheng

    2008-09-01

    This work analyzes and attempts to enhance the accuracy and reproducibility of parametric modeling in the discrete cosine transform (DCT) domain for the estimation of abnormal intra-QRS potentials (AIQP) in signal-averaged electrocardiograms. One hundred sets of white noise with a flat frequency response were introduced to simulate the unpredictable, broadband AIQP when quantitatively analyzing estimation error. Further, a high-frequency AIQP parameter was defined to minimize estimation error caused by the overlap between normal QRS and AIQP in low-frequency DCT coefficients. Seventy-two patients from Taiwan were recruited for the study, comprising 30 patients with ventricular tachycardia (VT) and 42 without VT. Analytical results showed that VT patients had a significant decrease in the estimated AIQP. The global diagnostic performance (area under the receiver operating characteristic curve) of AIQP rose from 73.0% to 84.2% in lead Y, and from 58.3% to 79.1% in lead Z, when the high-frequency range fell from 100% to 80%. The combination of AIQP and ventricular late potentials further enhanced performance to 92.9% (specificity=90.5%, sensitivity=90%). Therefore, the significantly reduced AIQP in VT patients, possibly also including dominant unpredictable potentials within the normal QRS complex, may be new promising evidence of ventricular arrhythmias.
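
    The underlying idea — represent the averaged QRS in the DCT domain, attribute the smooth normal QRS to low-order coefficients, and treat residual high-frequency energy as the AIQP estimate — can be sketched as follows (pure-Python DCT-II; the signal, amplitudes and cutoff are invented for illustration):

```python
import math

def dct2(x):
    """Orthonormal DCT-II (O(n^2); fine for short averaged beats)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def aiqp_energy(signal, low_freq_cutoff):
    """Energy in DCT coefficients above the cutoff, attributed to AIQP."""
    coeffs = dct2(signal)
    return sum(c * c for c in coeffs[low_freq_cutoff:])

n = 64
# Smooth hump standing in for the normal QRS, plus a small high-frequency
# component standing in for abnormal intra-QRS potentials.
smooth_qrs = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
aiqp = [0.05 * math.cos(math.pi * 24 * (2 * i + 1) / (2 * n)) for i in range(n)]
energy = aiqp_energy([s + a for s, a in zip(smooth_qrs, aiqp)], low_freq_cutoff=10)
```

    Because the DCT is orthonormal, the ripple's energy survives intact above the cutoff while the smooth component contributes almost nothing there, which is exactly the separation the paper's high-frequency AIQP parameter exploits.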

  7. Model Observers in Medical Imaging Research

    OpenAIRE

    He, Xin; Park, Subok

    2013-01-01

    Model observers play an important role in the optimization and assessment of imaging devices. In this review paper, we first discuss the basic concepts of model observers, which include the mathematical foundations and psychophysical considerations in designing both optimal observers for optimizing imaging systems and anthropomorphic observers for modeling human observers. Second, we survey a few state-of-the-art computational techniques for estimating model observers and the principles of im...

  8. Assessment of an ensemble of ocean-atmosphere coupled and uncoupled regional climate models to reproduce the climatology of Mediterranean cyclones

    Science.gov (United States)

    Flaounas, Emmanouil; Kelemen, Fanni Dora; Wernli, Heini; Gaertner, Miguel Angel; Reale, Marco; Sanchez-Gomez, Emilia; Lionello, Piero; Calmanti, Sandro; Podrascanin, Zorica; Somot, Samuel; Akhtar, Naveed; Romera, Raquel; Conte, Dario

    2016-11-01

This study aims to assess the skill of regional climate models (RCMs) at reproducing the climatology of Mediterranean cyclones. Seven RCMs are considered, five of which were also coupled with an oceanic model. All simulations were forced at the lateral boundaries by the ERA-Interim reanalysis for a common 20-year period (1989-2008). Six different cyclone tracking methods have been applied to all twelve RCM simulations and to the ERA-Interim reanalysis in order to assess the RCMs from the perspective of different cyclone definitions. All RCMs reproduce the main areas of high cyclone occurrence in the region south of the Alps, in the Adriatic, Ionian and Aegean Seas, as well as in the areas close to Cyprus and to the Atlas Mountains. The RCMs tend to underestimate intense cyclone occurrences over the Mediterranean Sea and reproduce 24-40 % of these systems, as identified in the reanalysis. The use of grid nudging in one of the RCMs is shown to be beneficial, reproducing about 60 % of the intense cyclones and keeping a better track of the seasonal cycle of intense cyclogenesis. Finally, the most intense cyclones tend to be similarly reproduced in coupled and uncoupled model simulations, suggesting that modeling atmosphere-ocean coupled processes has only a weak impact on the climatology and intensity of Mediterranean cyclones.

  9. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models gene

  10. Reproducible ion-current-based approach for 24-plex comparison of the tissue proteomes of hibernating versus normal myocardium in swine models.

    Science.gov (United States)

    Qu, Jun; Young, Rebeccah; Page, Brian J; Shen, Xiaomeng; Tata, Nazneen; Li, Jun; Duan, Xiaotao; Fallavollita, James A; Canty, John M

    2014-05-02

Hibernating myocardium is an adaptive response to repetitive myocardial ischemia that is clinically common, but the mechanism of adaptation is poorly understood. Here we compared the proteomes of hibernating versus normal myocardium in a porcine model with 24 biological replicates. Using the ion-current-based proteomic strategy optimized in this study to expand upon previous proteomic work, we identified differentially expressed proteins in new molecular pathways of cardiovascular interest. The methodological strategy includes efficient extraction with detergent cocktail; precipitation/digestion procedure with high, quantitative peptide recovery; reproducible nano-LC/MS analysis on a long, heated column packed with small particles; and quantification based on ion-current peak areas. Under the optimized conditions, high efficiency and reproducibility were achieved for each step, which enabled a reliable comparison of the 24 myocardial samples. To achieve confident discovery of differentially regulated proteins in hibernating myocardium, we used highly stringent criteria to define "quantifiable proteins". These included the filtering criteria of low peptide FDR and S/N > 10 for peptide ion currents, and each protein was quantified independently from ≥2 distinct peptides. For a broad methodological validation, the quantitative results were compared with a parallel, well-validated 2D-DIGE analysis of the same model. Excellent agreement between the two orthogonal methods was observed (R = 0.74), and the ion-current-based method quantified almost one order of magnitude more proteins. In hibernating myocardium, 225 significantly altered proteins were discovered with a low false-discovery rate (∼3%). These proteins are involved in biological processes including metabolism, apoptosis, stress response, contraction, cytoskeleton, transcription, and translation. This provides compelling evidence that hibernating myocardium adapts to chronic ischemia. The major metabolic
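
    The "quantifiable protein" criteria named above (peptide S/N > 10, low peptide FDR, ≥2 distinct peptides per protein) amount to a small filter over the peptide table. A hedged sketch: the data layout and the 1% FDR cutoff are assumptions, since the abstract only says "low peptide FDR".

```python
def quantifiable_proteins(peptides, min_sn=10.0, min_peptides=2, max_fdr=0.01):
    """Apply the stringent 'quantifiable protein' filter described above:
    keep peptides with ion-current S/N > min_sn and FDR <= max_fdr, then
    keep proteins quantified by >= min_peptides distinct peptide sequences.
    `peptides` is a list of dicts with keys: protein, sequence, sn, fdr."""
    kept = [p for p in peptides if p["sn"] > min_sn and p["fdr"] <= max_fdr]
    by_protein = {}
    for p in kept:
        by_protein.setdefault(p["protein"], set()).add(p["sequence"])
    return sorted(prot for prot, seqs in by_protein.items()
                  if len(seqs) >= min_peptides)
```

    Requiring two independent peptides per protein is what makes the per-protein quantification robust to a single aberrant ion-current measurement.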

  11. Compliant bipedal model with the center of pressure excursion associated with oscillatory behavior of the center of mass reproduces the human gait dynamics.

    Science.gov (United States)

    Jung, Chang Keun; Park, Sukyung

    2014-01-03

Although the compliant bipedal model could reproduce qualitative ground reaction force (GRF) of human walking, the model with a fixed pivot showed overestimations in stance leg rotation and the ratio of horizontal to vertical GRF. The human walking data showed a continuous forward progression of the center of pressure (CoP) during the stance phase and the suspension of the CoP near the forefoot before the onset of step transition. To better describe human gait dynamics with a minimal expense of model complexity, we proposed a compliant bipedal model with an accelerated pivot, which associated the CoP excursion with the oscillatory behavior of the center of mass (CoM) using the existing simulation parameters and leg stiffness. Owing to the pivot acceleration defined to emulate the human CoP profile, the arrival of the CoP at the limit of the stance foot over the single stance duration initiated the step-to-step transition. The proposed model showed an improved match to walking data. As the forward motion of the CoM during single stance was partly accounted for by forward pivot translation, the previously overestimated rotation of the stance leg was reduced and the corresponding horizontal GRF became closer to human data. The walking solutions of the model ranged over higher speeds (~1.7 m/s) than those of the fixed-pivot compliant bipedal model (~1.5 m/s) and exhibited other gait parameters, such as touchdown angle, step length and step frequency, comparable to the experimental observations. The good matches between the model and experimental GRF data imply that the continuous pivot acceleration associated with CoM oscillatory behavior could serve as a useful framework for bipedal models.
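
    The basic element of the compliant bipedal model described above is a massless spring leg whose force acts along the line from the pivot (CoP) to the CoM. A minimal sketch of that single element, with an assumed linear spring law; the function name and parameter values are illustrative, not the paper's:

```python
import math

def compliant_leg_grf(k, l0, foot, com):
    """Ground reaction force of one compliant (spring) leg in 2-D: the force
    acts along the leg from the foot pivot to the CoM, with magnitude
    k * (l0 - l), so compression (l < l0) pushes the CoM away from the foot."""
    dx, dy = com[0] - foot[0], com[1] - foot[1]
    l = math.hypot(dx, dy)
    f = k * (l0 - l)  # positive in compression
    return (f * dx / l, f * dy / l)
```

    During double support two such legs act simultaneously; moving the foot pivot forward over the stance phase is what the accelerated-pivot extension adds on top of this element.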

  12. Spatial-temporal reproducibility assessment of global seasonal forecasting system version 5 model for Dam Inflow forecasting

    Science.gov (United States)

    Moon, S.; Suh, A. S.; Soohee, H.

    2016-12-01

The GloSea5 (Global Seasonal forecasting system version 5) is provided and operated by the KMA (Korea Meteorological Administration). GloSea5 provides Forecast (FCST) and Hindcast (HCST) data, and its horizontal resolution is about 60 km (0.83° x 0.56°) in the mid-latitudes. In order to use this data in watershed-scale water management, GloSea5 needs spatial-temporal downscaling. As such, statistical downscaling was used to correct for systematic biases of variables and to improve data reliability. HCST data is provided in ensemble format, and the highest statistical correlation (R2 = 0.60, RMSE = 88.92, NSE = 0.57) of ensemble precipitation was reported for the Yongdam Dam watershed on the #6 grid. Additionally, the original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) compared to observations (816.1 mm) during the summer flood season. However, downscaled GloSea5 was shown to have only a -3.1% error rate. Most of the underestimated results corresponded to precipitation levels during the flood season, and the downscaled GloSea5 showed important restoration of precipitation levels. Per the analysis results of spatial autocorrelation using seasonal Moran's I, the spatial distribution was shown to be statistically significant. These results can reduce the uncertainty of the original GloSea5 and substantiate its spatial-temporal accuracy and validity. The spatial-temporal reproducibility assessment will play a very important role as basic data for watershed-scale water management.
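
    The three skill scores quoted above (R2, RMSE, NSE) have standard definitions and can be computed directly from paired observed and simulated series. A minimal sketch; the function name is illustrative and R2 is taken here as the squared Pearson correlation:

```python
import math

def skill_scores(obs, sim):
    """R^2 (squared Pearson correlation), RMSE, and Nash-Sutcliffe
    efficiency (NSE) for paired observed and simulated values."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    r2 = cov * cov / (vo * vs)
    rmse = math.sqrt(sq_err / n)
    nse = 1.0 - sq_err / vo  # 1 = perfect, 0 = no better than the obs mean
    return r2, rmse, nse
```

    NSE compares the model's squared error against the variance of the observations, so NSE = 0 means the model is no better than simply predicting the observed mean.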

  13. Model observers in medical imaging research.

    Science.gov (United States)

    He, Xin; Park, Subok

    2013-10-04

    Model observers play an important role in the optimization and assessment of imaging devices. In this review paper, we first discuss the basic concepts of model observers, which include the mathematical foundations and psychophysical considerations in designing both optimal observers for optimizing imaging systems and anthropomorphic observers for modeling human observers. Second, we survey a few state-of-the-art computational techniques for estimating model observers and the principles of implementing these techniques. Finally, we review a few applications of model observers in medical imaging research.

  14. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Abstract Numerical morphological modeling of braided rivers, using a physics‐based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth‐averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high‐flow event. Evaluation of model performance primarily focused upon using high‐resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach‐scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477

  15. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river.

    Science.gov (United States)

    Williams, R D; Measures, R; Hicks, D M; Brasington, J

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
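
    The reach-scale sediment budgets used above come from a DEM of Difference: subtract the pre-event DEM from the post-event DEM, discard changes below a level-of-detection threshold, and sum the remainder into deposition and erosion volumes. A hedged sketch; the function name, grid layout and threshold value are assumptions, not the study's exact workflow:

```python
def dod_budget(dem_pre, dem_post, cell_area=1.0, lod=0.1):
    """Morphological sediment budget from a DEM of Difference: sum elevation
    changes above a level-of-detection threshold (lod, m) into deposition
    and erosion volumes. DEMs are equal-shaped lists of rows of elevations
    (m); cell_area is the grid cell area (m^2)."""
    deposition = erosion = 0.0
    for row_pre, row_post in zip(dem_pre, dem_post):
        for z0, z1 in zip(row_pre, row_post):
            dz = z1 - z0
            if dz > lod:
                deposition += dz * cell_area
            elif dz < -lod:
                erosion += -dz * cell_area
    return {"deposition": deposition, "erosion": erosion,
            "net": deposition - erosion}
```

    Comparing these observed volumes against the model-predicted ones is the core of the morphological-budget evaluation described in the abstract.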

  16. Current status of the ability of the GEMS/MACC models to reproduce the tropospheric CO vertical distribution as measured by MOZAIC

    Directory of Open Access Journals (Sweden)

    N. Elguindi

    2010-10-01

Full Text Available Vertical profiles of CO taken from the MOZAIC aircraft database are used to globally evaluate the performance of the GEMS/MACC models, including the ECMWF Integrated Forecasting System (IFS) model coupled to the CTM MOZART-3 with 4DVAR data assimilation, for the year 2004. This study provides a unique opportunity to compare the performance of three offline CTMs (MOZART-3, MOCAGE and TM5) driven by the same meteorology as well as one coupled atmosphere/CTM model run with data assimilation, enabling us to assess the potential gain brought by the combination of online transport and the 4DVAR chemical satellite data assimilation.

First we present a global analysis of observed CO seasonal averages and interannual variability for the years 2002–2007. Results show that despite the intense boreal forest fires that occurred during the summer in Alaska and Canada, the year 2004 had comparably lower tropospheric CO concentrations. Next we present a validation of CO estimates produced by the MACC models for 2004, including an assessment of their ability to transport pollutants originating from the Alaskan/Canadian wildfires. In general, all the models tend to underestimate CO. The coupled model and the CTMs perform best in Europe and the US where biases range from 0 to -25% in the free troposphere and from 0 to -50% in the surface and boundary layers (BL). Using the 4DVAR technique to assimilate MOPITT V4 CO significantly reduces biases by up to 50% in most regions. However, none of the models, even the IFS-MOZART-3 coupled model with assimilation, are able to reproduce well the CO plumes originating from the Alaskan/Canadian wildfires at downwind locations in the eastern US and Europe. Sensitivity tests reveal that deficiencies in the fire emissions inventory and injection height play a role.

  17. Randomised reproducing graphs

    CERN Document Server

    Jordan, Jonathan

    2011-01-01

We introduce a model for a growing random graph based on simultaneous reproduction of the vertices. The model can be thought of as a generalisation of the reproducing graphs of Southwell and Cannings and Bonato et al to allow for a random element, and there are three parameters, $\\alpha$, $\\beta$ and $\\gamma$, which are the probabilities of edges appearing between different types of vertices. We show that as the probabilities associated with the model vary there are a number of phase transitions, in particular concerning the degree sequence. If $(1+\\alpha)(1+\\gamma)>1$ then the degree of a typical vertex grows to infinity, and the proportion of vertices having any fixed degree $d$ tends to zero. We also give some results on the number of edges and on the spectral gap.

  18. Comparison between observations and model

    OpenAIRE

    Claußnitzer, Antje

    2010-01-01

    In recent years the development of numerical weather prediction models has shown great progress in the short-term and medium-range forecast of temperature, wind speed or direction and cloud coverage, but only little success in the quantitative precipitation forecast. Rainfall is one of the most difficult forecasting meteorological variable. To improve the numerical models, it is necessary to understand the rainfall processes. This thesis contributes towards an understanding since the precipit...

  19. The Need for Reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-06-27

The purpose of this presentation is to consider issues of reproducibility: specifically, whether bitwise reproducible computation is possible, whether computational research in DOE can improve its publication process, and whether reproducible results can be achieved apart from the peer review process.

  20. Thermal Infrared Observations and Thermophysical Modeling of Phobos

    Science.gov (United States)

    Smith, Nathan Michael; Edwards, Christopher Scott; Mommert, Michael; Trilling, David E.; Glotch, Timothy

    2016-10-01

Mars-observing spacecraft have the opportunity to study Phobos from Mars orbit, and have produced a sizeable record of observations using the same instruments that study the surface of the planet below. However, these observations are generally infrequent, acquired only rarely over each mission. Using observations gathered by Mars Global Surveyor's (MGS) Thermal Emission Spectrometer (TES), we can investigate the fine layer of regolith that blankets Phobos' surface, and characterize its thermal properties. The mapping of TES observations to footprints on the Phobos surface has not previously been undertaken, and must consider the orientation and position of both MGS and Phobos, and TES's pointing mirror angle. Approximately 300 fully resolved observations are available covering a significant subset of Phobos' surface at a variety of scales. The properties of the surface regolith, such as grain size, density, and conductivity, determine how heat is absorbed, transferred, and reradiated to space. Thermophysical modeling allows us to simulate these processes and predict, for a given set of assumed parameters, how the observed thermal infrared spectra will appear. By comparing models to observations, we can constrain the properties of the regolith, and see how these properties vary with depth, as well as regionally across the Phobos surface. These constraints are key to understanding how Phobos formed and evolved over time, which in turn will help inform the environment and processes that shaped the solar system as a whole. We have developed a thermophysical model of Phobos adapted from a model used for unresolved observations of asteroids. The model has been modified to integrate thermal infrared flux across each observed portion of Phobos. It will include the effects of surface roughness, temperature-dependent conductivity, as well as radiation scattered, reflected, and thermally emitted from the Martian surface. Combining this model with the newly-mapped TES

  1. Development of a Three-Dimensional Hand Model Using Three-Dimensional Stereophotogrammetry: Assessment of Image Reproducibility.

    Directory of Open Access Journals (Sweden)

    Inge A Hoevenaren

Full Text Available Using three-dimensional (3D) stereophotogrammetry, precise images and reconstructions of the human body can be produced. Over the last few years, this technique has mainly been developed in the field of maxillofacial reconstructive surgery, creating fusion images with computed tomography (CT) data for precise planning and prediction of treatment outcome. However, in hand surgery 3D stereophotogrammetry is not yet used in clinical settings. A total of 34 three-dimensional hand photographs were analyzed to investigate the reproducibility. For every individual, 3D photographs were captured at two different time points (baseline T0 and one week later T1). Using two different registration methods, the reproducibility of the methods was analyzed. Furthermore, the differences between 3D photos of men and women were compared in a distance map as a first clinical pilot testing our registration method. The absolute mean registration error for the complete hand was 1.46 mm. This reduced to an error of 0.56 mm when isolating the region to the palm of the hand. When comparing hands of both sexes, it was seen that the male hand was larger (broader base and longer fingers) than the female hand. This study shows that 3D stereophotogrammetry can produce reproducible images of the hand without harmful side effects for the patient, proving to be a reliable method for soft tissue analysis. Its potential use in everyday practice of hand surgery needs to be further explored.
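
    A registration error like the one quoted above is the mean distance between corresponding points of the two captures after alignment. A simplified sketch using translation-only (centroid) alignment of corresponding 3-D points; the study's actual surface-based registration is more elaborate, and the function name is an assumption:

```python
import math

def mean_registration_error(pts_t0, pts_t1):
    """Mean point-to-point distance between two corresponding 3-D point sets
    after centroid (translation-only) alignment; a simplified stand-in for
    full rigid surface registration."""
    n = len(pts_t0)
    c0 = [sum(p[i] for p in pts_t0) / n for i in range(3)]
    c1 = [sum(p[i] for p in pts_t1) / n for i in range(3)]
    err = 0.0
    for p, q in zip(pts_t0, pts_t1):
        d = [(p[i] - c0[i]) - (q[i] - c1[i]) for i in range(3)]
        err += math.sqrt(sum(v * v for v in d))
    return err / n
```

    A full rigid registration would additionally solve for the optimal rotation (e.g. the Kabsch algorithm); restricting the region of interest, as done for the palm, reduces the residual error.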

  2. How well can a convection-permitting climate model reproduce decadal statistics of precipitation, temperature and cloud characteristics?

    Science.gov (United States)

    Brisson, Erwan; Van Weverberg, Kwinten; Demuzere, Matthias; Devis, Annemarie; Saeed, Sajjad; Stengel, Martin; van Lipzig, Nicole P. M.

    2016-11-01

Convection-permitting climate models are promising tools for improved representation of extremes, but the number of regions for which these models have been evaluated is still too limited to draw robust conclusions. In addition, an integrated interpretation of near-surface characteristics (typically temperature and precipitation) together with cloud properties is limited. The objective of this paper is to comprehensively evaluate the performance of a `state-of-the-art' regional convection-permitting climate model for a mid-latitude coastal region with little orographic forcing. For this purpose, an 11-year integration with the COSMO-CLM model at Convection-Permitting Scale (CPS) using a grid spacing of 2.8 km was compared with in-situ and satellite-based observations of precipitation, temperature, cloud properties and radiation (both at the surface and the top of the atmosphere). CPS clearly improves the representation of precipitation, especially the diurnal cycle, intensity and spatial distribution of hourly precipitation. Improvements in the representation of temperature are less obvious. In fact, the CPS integration overestimates both low and high temperature extremes. The underlying cause for the overestimation of high temperature extremes was attributed to deficiencies in the cloud properties: the modelled cloud fraction is only 46 % whereas a cloud fraction of 65 % was observed. Surprisingly, the effect of this deficiency was less pronounced in the radiation balance at the top of the atmosphere due to a compensating error, in particular an overestimation of the reflectivity of clouds when they are present. Overall, a better representation of convective precipitation and a very good representation of the daily cycle in different cloud types were demonstrated. However, to overcome remaining deficiencies, additional efforts are necessary to improve cloud characteristics in CPS.
This will be a challenging task due to compensating deficiencies that currently

  3. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics.

    Science.gov (United States)

    González-Beltrán, Alejandra; Li, Peter; Zhao, Jun; Avila-Garcia, Maria Susana; Roos, Marco; Thompson, Mark; van der Horst, Eelke; Kaliyaperumal, Rajaram; Luo, Ruibang; Lee, Tin-Lap; Lam, Tak-Wah; Edmunds, Scott C; Sansone, Susanna-Assunta; Rocca-Serra, Philippe

    2015-01-01

    Reproducing the results from a scientific paper can be challenging due to the absence of data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent by which ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an errata. SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2/. philippe

  4. Reproducibility study of [{sup 18}F]FPP(RGD){sub 2} uptake in murine models of human tumor xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Edwin; Liu, Shuangdong; Chin, Frederick; Cheng, Zhen [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Gowrishankar, Gayatri; Yaghoubi, Shahriar [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Wedgeworth, James Patrick [Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Berndorff, Dietmar; Gekeler, Volker [Bayer Schering Pharma AG, Global Drug Discovery, Berlin (Germany); Gambhir, Sanjiv S. [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Canary Center at Stanford for Cancer Early Detection, Nuclear Medicine, Departments of Radiology and Bioengineering, Molecular Imaging Program at Stanford, Stanford, CA (United States)

    2011-04-15

    An {sup 18}F-labeled PEGylated arginine-glycine-aspartic acid (RGD) dimer [{sup 18}F]FPP(RGD){sub 2} has been used to image tumor {alpha}{sub v}{beta}{sub 3} integrin levels in preclinical and clinical studies. Serial positron emission tomography (PET) studies may be useful for monitoring antiangiogenic therapy response or for drug screening; however, the reproducibility of serial scans has not been determined for this PET probe. The purpose of this study was to determine the reproducibility of the integrin {alpha}{sub v}{beta}{sub 3}-targeted PET probe, [{sup 18}F ]FPP(RGD){sub 2} using small animal PET. Human HCT116 colon cancer xenografts were implanted into nude mice (n = 12) in the breast and scapular region and grown to mean diameters of 5-15 mm for approximately 2.5 weeks. A 3-min acquisition was performed on a small animal PET scanner approximately 1 h after administration of [{sup 18}F]FPP(RGD){sub 2} (1.9-3.8 MBq, 50-100 {mu}Ci) via the tail vein. A second small animal PET scan was performed approximately 6 h later after reinjection of the probe to assess for reproducibility. Images were analyzed by drawing an ellipsoidal region of interest (ROI) around the tumor xenograft activity. Percentage injected dose per gram (%ID/g) values were calculated from the mean or maximum activity in the ROIs. Coefficients of variation and differences in %ID/g values between studies from the same day were calculated to determine the reproducibility. The coefficient of variation (mean {+-}SD) for %ID{sub mean}/g and %ID{sub max}/g values between [{sup 18}F]FPP(RGD){sub 2} small animal PET scans performed 6 h apart on the same day were 11.1 {+-} 7.6% and 10.4 {+-} 9.3%, respectively. The corresponding differences in %ID{sub mean}/g and %ID{sub max}/g values between scans were -0.025 {+-} 0.067 and -0.039 {+-} 0.426. Immunofluorescence studies revealed a direct relationship between extent of {alpha}{sub {nu}}{beta}{sub 3} integrin expression in tumors and tumor vasculature
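
    The reproducibility metrics above (per-tumor coefficient of variation and signed difference of %ID/g between the two same-day scans, summarised as mean ± SD) follow directly from the paired values. A hedged sketch; the function name and data layout are assumptions:

```python
import math

def paired_scan_stats(scan1, scan2):
    """Per-tumor coefficient of variation (%) and signed difference of
    %ID/g values between two same-day scans; returns (mean, sample SD)
    of each across tumors."""
    cvs, diffs = [], []
    for a, b in zip(scan1, scan2):
        m = (a + b) / 2.0
        sd = abs(a - b) / math.sqrt(2.0)  # sample SD of a pair of values
        cvs.append(100.0 * sd / m)
        diffs.append(b - a)

    def mean_sd(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)
        return mu, math.sqrt(var)

    return mean_sd(cvs), mean_sd(diffs)
```

    A CV on the order of 10%, as reported, means serial scans can reliably detect therapy-induced changes larger than that measurement noise.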

  5. A right to reproduce?

    Science.gov (United States)

    Quigley, Muireann

    2010-10-01

    How should we conceive of a right to reproduce? And, morally speaking, what might be said to justify such a right? These are just two questions of interest that are raised by the technologies of assisted reproduction. This paper analyses the possible legitimate grounds for a right to reproduce within the two main theories of rights; interest theory and choice theory.

  6. Magni Reproducibility Example

    DEFF Research Database (Denmark)

    2016-01-01

An example of how to use the magni.reproducibility package for storing metadata along with results from a computational experiment. The example is based on simulating the Mandelbrot set.

  7. Long-term stability, reproducibility, and statistical sensitivity of a telemetry-instrumented dog model: A 27-month longitudinal assessment.

    Science.gov (United States)

    Fryer, Ryan M; Ng, Khing Jow; Chi, Liguo; Jin, Xidong; Reinhart, Glenn A

    2015-01-01

ICH guidelines, as well as best-practice and ethical considerations, provide strong rationale for use of telemetry-instrumented dog colonies for cardiovascular safety assessment. However, few studies have investigated the long-term stability of cardiovascular function at baseline, reproducibility in response to pharmacologic challenge, and maintenance of statistical sensitivity to define the usable life of the colony. These questions were addressed in 3 identical studies spanning 27 months and were performed in the same colony of dogs. Telemetry-instrumented dogs (n=4) received a single dose of dl-sotalol (10 mg/kg, p.o.), a β1 adrenergic and IKr blocker, or vehicle, in 3 separate studies spanning 27 months. Systemic hemodynamics, cardiovascular function, and ECG parameters were monitored for 18 h post-dose; plasma drug concentrations (Cp) were measured at 1, 3, 5, and 24 h post-dose. Baseline hemodynamic/ECG values were consistent across the 27-month study with the exception of modest age-dependent decreases in heart rate and the corresponding QT-interval. dl-Sotalol elicited highly reproducible effects in each study. Reductions in heart rate after dl-sotalol treatment ranged between -22 and -32 beats/min, and slight differences in magnitude could be ascribed to variability in dl-sotalol Cp (range = 3230-5087 ng/mL); dl-sotalol also reduced LV-dP/dtmax by 13-22%. dl-Sotalol increased the slope of the PR-RR relationship suggesting inhibition of AV-conduction. Increases in the heart-rate corrected QT-interval were not significantly different across the 3 studies and results of a power analysis demonstrated that the detection limit for QTc values was not diminished throughout the 27-month period and across a range of power assumptions despite modest, age-dependent changes in heart rate. These results demonstrate the long-term stability of a telemetry dog colony as evidenced by a stability of baseline values, consistently reproducible response to pharmacologic challenge and no
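
    Heart-rate corrected QT (QTc), central to the power analysis above, divides the measured QT by a power of the RR interval. The study's own correction model may differ; the two classical formulas below (Bazett and Fridericia) are shown only as generic examples:

```python
def qtc(qt_ms, rr_s, method="fridericia"):
    """Heart-rate corrected QT interval in ms. Bazett divides the QT by
    RR^(1/2) and Fridericia by RR^(1/3), where RR is in seconds; at
    RR = 1 s (60 bpm) both leave QT unchanged."""
    if method == "bazett":
        return qt_ms / rr_s ** 0.5
    if method == "fridericia":
        return qt_ms / rr_s ** (1.0 / 3.0)
    raise ValueError("unknown correction method: %s" % method)
```

    Because dl-sotalol also slows heart rate, an adequate rate correction is essential before attributing QT prolongation to IKr block.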

  8. Measurement of cerebral blood flow by intravenous xenon-133 technique and a mobile system. Reproducibility using the Obrist model compared to total curve analysis

    DEFF Research Database (Denmark)

    Schroeder, T; Holstein, P; Lassen, N A

    1986-01-01

    … and side-to-side asymmetry. Data were analysed according to the Obrist model and the results compared with those obtained using a model correcting for the air passage artifact. Reproducibility was of the same order of magnitude as reported using stationary equipment. The side-to-side CBF asymmetry … differences, but in low flow situations the artifact model yielded significantly more stable results. The present apparatus, equipped with 3-5 detectors covering each hemisphere, offers the opportunity of performing serial CBF measurements in situations not otherwise feasible.

  9. High critical current densities reproducibly observed for hot-isostatic-pressed PbMo6S8 wires with Mo barriers

    Science.gov (United States)

    Yamasaki, H.; Umeda, M.; Kosaka, S.

    1992-08-01

    Fabrication process, critical current densities (Jc), and microstructure of the superconducting PbMo6S8 wires with Mo barriers have been investigated. Reducing the volume fraction of the Mo barrier and using electron-beam-melted Mo with less deformation resistance than that of conventional powder-metallurgy-processed Mo, facilitate the densification of PbMo6S8 and Jc improvement by the hot-isostatic-pressing (HIP) treatments. It was possible to obtain reproducibly HIP-treated PbMo6S8 wires with homogeneously high Jc of not less than 1 × 10^8 A/m^2 at 22 T and 4.2 K, which is promising for the production of future high field (greater than 20 T) superconducting magnets.

  10. Reproducible Research in Speech Sciences

    Directory of Open Access Journals (Sweden)

    Kálmán Abari

    2012-11-01

    Full Text Available Reproducible research is the minimum standard of scientific claims in cases when independent replication proves to be difficult. With the special combination of available software tools, we provide a reproducibility recipe for the experimental research conducted in some fields of speech sciences. We have based our model on the triad of the R environment, the EMU-format speech database, and the executable publication. We present the use of three typesetting systems (LaTeX, Markdown, Org) with the help of a mini research example.

  11. Reproducibility in Seismic Imaging

    Directory of Open Access Journals (Sweden)

    González-Verdejo O.

    2012-04-01

    Full Text Available Within the field of exploration seismology, there is interest at the national level in integrating reproducibility into applied, educational and research activities related to seismic processing and imaging. This reproducibility implies the description and organization of the elements involved in numerical experiments, so that a researcher, teacher or student can study, verify, repeat, and modify them independently. In this work, we document and adapt reproducibility in seismic processing and imaging to spread this concept and its benefits, and to encourage the use of open source software in this area within our academic and professional environment. We present an enhanced seismic imaging example, of interest in both academic and professional environments, using Mexican seismic data. As a result of this research, we show that it is possible to assimilate, adapt and transfer technology at low cost, using open source software and following a reproducible research scheme.

  12. Safety and Reproducibility of a Clinical Trial System Using Induced Blood Stage Plasmodium vivax Infection and Its Potential as a Model to Evaluate Malaria Transmission

    Science.gov (United States)

    Elliott, Suzanne; Sekuloski, Silvana; Sikulu, Maggy; Hugo, Leon; Khoury, David; Cromer, Deborah; Davenport, Miles; Sattabongkot, Jetsumon; Ivinson, Karen; Ockenhouse, Christian; McCarthy, James

    2016-01-01

    Background Interventions to interrupt transmission of malaria from humans to mosquitoes represent an appealing approach to assist malaria elimination. A limitation has been the lack of systems to test the efficacy of such interventions before proceeding to efficacy trials in the field. We have previously demonstrated the feasibility of induced blood stage malaria (IBSM) infection with Plasmodium vivax. In this study, we report further validation of the IBSM model, and its evaluation for assessment of transmission of P. vivax to Anopheles stephensi mosquitoes. Methods Six healthy subjects (three cohorts, n = 2 per cohort) were infected with P. vivax by inoculation with parasitized erythrocytes. Parasite growth was monitored by quantitative PCR, and gametocytemia by quantitative reverse transcriptase PCR (qRT-PCR) for the mRNA pvs25. Parasite multiplication rate (PMR) and size of inoculum were calculated by linear regression. Mosquito transmission studies were undertaken by direct and membrane feeding assays over 3 days prior to commencement of antimalarial treatment, and midguts of blood fed mosquitoes dissected and checked for presence of oocysts after 7–9 days. Results The clinical course and parasitemia were consistent across cohorts, with all subjects developing mild to moderate symptoms of malaria. No serious adverse events were reported. Asymptomatic elevated liver function tests were detected in four of six subjects; these resolved without treatment. Direct feeding of mosquitoes was well tolerated. The estimated PMR was 9.9 fold per cycle. Low prevalence of mosquito infection was observed (1.8%; n = 32/1801) from both direct (4.5%; n = 20/411) and membrane (0.9%; n = 12/1360) feeds. Conclusion The P. vivax IBSM model proved safe and reliable. The clinical course and PMR were reproducible when compared with the previous study using this model. The IBSM model presented in this report shows promise as a system to test transmission-blocking interventions
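The parasite multiplication rate (PMR) estimate described above, obtained by linear regression of log-transformed parasitemia against time, can be sketched on synthetic data. The sampling days, counts, and 48-hour cycle length below are illustrative assumptions, not trial data.

```python
import numpy as np

# Synthetic parasitemia time course with a true PMR of 10 per 48 h cycle:
# parasitemia gains one decade (10x) every 2 days after inoculation.
days = np.array([4, 6, 8, 10, 12], dtype=float)     # days post-inoculation
parasites_per_ml = 50.0 * 10 ** ((days - 4) / 2)
log10_p = np.log10(parasites_per_ml)

# PMR from the slope of log10 parasitemia vs. time (one cycle = 2 days)
slope, intercept = np.polyfit(days, log10_p, 1)     # log10 units per day
pmr_per_cycle = 10 ** (slope * 2)
print(f"PMR ~ {pmr_per_cycle:.1f} per 48 h cycle")
```

Back-extrapolating the fitted line to the inoculation time gives the log of the inoculum size, the second regression-derived quantity mentioned in the abstract.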

  13. Siberian Arctic black carbon sources constrained by model and observation

    Science.gov (United States)

    Winiger, Patrik; Andersson, August; Eckhardt, Sabine; Stohl, Andreas; Semiletov, Igor P.; Dudarev, Oleg V.; Charkin, Alexander; Shakhova, Natalia; Klimont, Zbigniew; Heyes, Chris; Gustafsson, Örjan

    2017-02-01

    Black carbon (BC) in haze and deposited on snow and ice can have strong effects on the radiative balance of the Arctic. There is a geographic bias in Arctic BC studies toward the Atlantic sector, with lack of observational constraints for the extensive Russian Siberian Arctic, spanning nearly half of the circum-Arctic. Here, 2 y of observations at Tiksi (East Siberian Arctic) establish a strong seasonality in both BC concentrations (8 ng·m⁻³ to 302 ng·m⁻³) and dual-isotope-constrained sources (19 to 73% contribution from biomass burning). Comparisons between observations and a dispersion model, coupled to an anthropogenic emissions inventory and a fire emissions inventory, give mixed results. In the European Arctic, this model has proven to simulate BC concentrations and source contributions well. However, the model is less successful in reproducing BC concentrations and sources for the Russian Arctic. Using a Bayesian approach, we show that, in contrast to earlier studies, contributions from gas flaring (6%), power plants (9%), and open fires (12%) are relatively small, with the major sources instead being domestic (35%) and transport (38%). The observation-based evaluation of reported emissions identifies errors in spatial allocation of BC sources in the inventory and highlights the importance of improving emission distribution and source attribution, to develop reliable mitigation strategies for efficient reduction of BC impact on the Russian Arctic, one of the fastest-warming regions on Earth.
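The dual-isotope source constraint mentioned above can be sketched, in its simplest form, as a two-endmember radiocarbon mass balance with Monte Carlo propagation of endmember uncertainty (the full study uses a more elaborate Bayesian treatment). All endmember values and uncertainties below are illustrative assumptions, not the study's numbers.

```python
import numpy as np

# Two-endmember mixing: Δ14C of sampled BC lies between a contemporary
# (biomass-burning) endmember and a 14C-free fossil endmember. The
# biomass-burning fraction follows from a linear mass balance.
rng = np.random.default_rng(42)
n = 100_000

d14c_sample = rng.normal(-600.0, 30.0, n)    # measured Δ14C of BC (permil), assumed
d14c_biomass = rng.normal(107.5, 60.0, n)    # contemporary-carbon endmember, assumed
d14c_fossil = -1000.0                        # fossil carbon is 14C-free

f_bb = (d14c_sample - d14c_fossil) / (d14c_biomass - d14c_fossil)
f_bb = f_bb[(f_bb >= 0) & (f_bb <= 1)]       # keep physically meaningful draws

lo, med, hi = np.percentile(f_bb, [2.5, 50, 97.5])
print(f"biomass-burning fraction: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```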

  14. Advancing an Information Model for Environmental Observations

    Science.gov (United States)

    Horsburgh, J. S.; Aufdenkampe, A. K.; Hooper, R. P.; Lehnert, K. A.; Schreuders, K.; Tarboton, D. G.; Valentine, D. W.; Zaslavsky, I.

    2011-12-01

    Observational data are fundamental to hydrology and water resources, and the way they are organized, described, and shared either enables or inhibits the analyses that can be performed using the data. The CUAHSI Hydrologic Information System (HIS) project is developing cyberinfrastructure to support hydrologic science by enabling better access to hydrologic data. HIS is composed of three major components. HydroServer is a software stack for publishing time series of hydrologic observations on the Internet as well as geospatial data using standards-based web feature, map, and coverage services. HydroCatalog is a centralized facility that catalogs the data contents of individual HydroServers and enables search across them. HydroDesktop is a client application that interacts with both HydroServer and HydroCatalog to discover, download, visualize, and analyze hydrologic observations published on one or more HydroServers. All three components of HIS are founded upon an information model for hydrologic observations at stationary points that specifies the entities, relationships, constraints, rules, and semantics of the observational data and that supports its data services. Within this information model, observations are described with ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. Physical implementations of this information model include the Observations Data Model (ODM) for storing hydrologic observations, Water Markup Language (WaterML) for encoding observations for transmittal over the Internet, the HydroCatalog metadata catalog database, and the HydroDesktop data cache database. The CUAHSI HIS and this information model have now been in use for several years, and have been deployed across many different academic institutions as well as across several national agency data repositories. Additionally, components of the HIS

  15. Observational constraints on the LLTB model

    CERN Document Server

    Marra, Valerio

    2010-01-01

    We directly compare the concordance LCDM model to the inhomogeneous matter-only alternative represented by LTB void models. To achieve a "democratic" confrontation we explore LLTB models with non-vanishing cosmological constant and perform a global likelihood analysis in the parameter space of cosmological constant and void radius. In our analysis we carefully consider SNe, Hubble constant, CMB and BAO measurements, marginalizing over the age of the universe and the background curvature. We find that the LCDM model is not the only possibility compatible with the observations, and that a matter-only void model is a viable alternative to the concordance model only if the BAO constraints are relaxed. Moreover, we will show that the areas of the parameter space which give a good fit to the observations are always disconnected with the result that a small local void does not significantly affect the parameter extraction for LCDM models.

  16. Cocaine addiction related reproducible brain regions of abnormal default-mode network functional connectivity: a group ICA study with different model orders.

    Science.gov (United States)

    Ding, Xiaoyu; Lee, Seong-Whan

    2013-08-26

    Model order selection in group independent component analysis (ICA) has a significant effect on the obtained components. This study investigated the reproducible brain regions of abnormal default-mode network (DMN) functional connectivity related with cocaine addiction through different model order settings in group ICA. Resting-state fMRI data from 24 cocaine addicts and 24 healthy controls were temporally concatenated and processed by group ICA using model orders of 10, 20, 30, 40, and 50, respectively. For each model order, the group ICA approach was repeated 100 times using the ICASSO toolbox and after clustering the obtained components, centrotype-based anterior and posterior DMN components were selected for further analysis. Individual DMN components were obtained through back-reconstruction and converted to z-score maps. A whole brain mixed effects factorial ANOVA was performed to explore the differences in resting-state DMN functional connectivity between cocaine addicts and healthy controls. The hippocampus, which showed decreased functional connectivity in cocaine addicts for all the tested model orders, might be considered as a reproducible abnormal region in DMN associated with cocaine addiction. This finding suggests that using group ICA to examine the functional connectivity of the hippocampus in the resting-state DMN may provide an additional insight potentially relevant for cocaine-related diagnoses and treatments.
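The model-order question examined above can be illustrated on synthetic data: run ICA at several component counts and check whether a reference spatial source is still recovered at each order. This is a toy sketch with scikit-learn's FastICA on simulated "fMRI-like" mixtures, not the group-ICA/ICASSO pipeline of the study; all sizes and noise levels are assumed.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic data: 4 non-Gaussian spatial sources mixed over time, plus noise.
rng = np.random.default_rng(0)
n_voxels, n_time = 500, 120
sources = rng.laplace(size=(4, n_voxels))        # ground-truth spatial maps
mixing = rng.normal(size=(n_time, 4))            # time courses
data = mixing @ sources + 0.1 * rng.normal(size=(n_time, n_voxels))

# Spatial ICA at several model orders; match components to source 0 by
# maximum absolute correlation across voxels.
for order in (2, 4, 8):
    ica = FastICA(n_components=order, random_state=0, max_iter=1000)
    comps = ica.fit_transform(data.T).T          # (order, n_voxels) maps
    r = np.abs(np.corrcoef(np.vstack([sources[0:1], comps]))[0, 1:])
    print(f"order={order}: best match |r|={r.max():.2f}")
```

A region whose group difference survives this matching at every tested order is "reproducible across model orders" in the sense used by the abstract.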

  17. Observational Challenges for the Standard FLRW Model

    CERN Document Server

    Buchert, Thomas; Kleinert, Hagen; Roukema, Boudewijn F; Wiltshire, David L

    2015-01-01

    In the context of the "Fourteenth Marcel Grossmann Meeting on General Relativity" parallel session DE3, "Large-scale Structure and Statistics", concerning observational issues in cosmology, we summarise some of the main observational challenges for the standard FLRW model and describe how the results presented in the session are related to these challenges.

  18. The Proximal Medial Sural Nerve Biopsy Model: A Standardised and Reproducible Baseline Clinical Model for the Translational Evaluation of Bioengineered Nerve Guides

    Directory of Open Access Journals (Sweden)

    Ahmet Bozkurt

    2014-01-01

    Full Text Available Autologous nerve transplantation (ANT) is the clinical gold standard for the reconstruction of peripheral nerve defects. A large number of bioengineered nerve guides have been tested under laboratory conditions as an alternative to the ANT. The step from experimental studies to the implementation of the device in the clinical setting is often substantial and the outcome is unpredictable. This is mainly linked to the heterogeneity of clinical peripheral nerve injuries, which is very different from standardized animal studies. In search of a reproducible human model for the implantation of bioengineered nerve guides, we propose the reconstruction of sural nerve defects after routine nerve biopsy as a first or baseline study. Our concept uses the medial sural nerve of patients undergoing diagnostic nerve biopsy (≥2 cm). The biopsy-induced nerve gap was immediately reconstructed by implantation of the novel microstructured nerve guide, Neuromaix, as part of an ongoing first-in-human study. Here we present (i) a detailed list of inclusion and exclusion criteria, (ii) a detailed description of the surgical procedure, and (iii) a follow-up concept with multimodal sensory evaluation techniques. The proximal medial sural nerve biopsy model can serve as a preliminary or baseline nerve lesion model. In a subsequent step, newly developed nerve guides could be tested in more unpredictable and challenging clinical peripheral nerve lesions (e.g., following trauma), which have reduced comparability due to the different nature of the injuries (e.g., site of injury and length of nerve gap).

  19. Whole-body skeletal imaging in mice utilizing microPET: optimization of reproducibility and applications in animal models of bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Frank [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); Department of Nuclear Medicine, Ludwig-Maximilians-University, Munich (Germany); Lee, Yu-Po; Lieberman, Jay R. [Department of Orthopedic Surgery, University of California School of Medicine, Los Angeles, California (United States); Loening, Andreas M.; Chatziioannou, Arion [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); Freedland, Stephen J.; Belldegrun, Arie S. [Department of Urology, University of California School of Medicine, Los Angeles, California (United States); Leahy, Richard [University of Southern California School of Bioengineering, Los Angeles, California (United States); Sawyers, Charles L. [Department of Medicine, University of California School of Medicine, Los Angeles, California (United States); Gambhir, Sanjiv S. [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); UCLA-Jonsson Comprehensive Cancer Center and Department of Biomathematics, University of California School of Medicine, Los Angeles, California (United States)

    2002-09-01

    The aims were to optimize reproducibility and establish [18F]fluoride ion bone scanning in mice, using a dedicated small animal positron emission tomography (PET) scanner (microPET), and to correlate functional findings with anatomical imaging using computed tomography (microCAT). Optimal tracer uptake time for [18F]fluoride ion was determined by performing dynamic microPET scans. Quantitative reproducibility was measured using region of interest (ROI)-based counts normalized to (a) the injected dose, (b) the integral of the heart time-activity curve, or (c) an ROI over the whole skeleton. Bone lesions were repetitively imaged. Functional images were correlated with X-ray and microCAT. The plateau of [18F]fluoride uptake occurs 60 min after injection. The highest reproducibility was achieved by normalizing to an ROI over the whole skeleton, with a mean percent coefficient of variation [(SD/mean) × 100] of <15%-20%. Benign and malignant bone lesions were successfully repetitively imaged. Preliminary correlation of microPET with microCAT demonstrated the high sensitivity of microPET and the ability of microCAT to detect small osteolytic lesions. Whole-body [18F]fluoride ion bone imaging using microPET is reproducible and can be used to serially monitor normal and pathological changes to the mouse skeleton. Morphological imaging with microCAT is useful to display correlative changes in anatomy. Detailed in vivo studies of the murine skeleton in various small animal models of bone diseases should now be possible.
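The reproducibility metric used above, the percent coefficient of variation of repeated ROI counts after normalization, can be sketched in a few lines. The counts below are synthetic illustrations of why normalizing to a whole-skeleton ROI (which cancels scan-to-scan variation in delivered activity) lowers the %CoV; they are not study data.

```python
import numpy as np

def percent_cov(values):
    """Percent coefficient of variation: (SD / mean) x 100."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Repeated scans of one animal: raw lesion-ROI counts vary with injected
# activity, but the lesion/skeleton ratio is nearly constant.
lesion_counts = np.array([1200.0, 1350.0, 1100.0, 1280.0])
skeleton_counts = np.array([52000.0, 58000.0, 47500.0, 55000.0])

print(f"raw lesion ROI:      %CoV = {percent_cov(lesion_counts):.1f}")
print(f"skeleton-normalized: %CoV = {percent_cov(lesion_counts / skeleton_counts):.1f}")
```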

  20. Observations and Modelling of DQ White Dwarfs

    CERN Document Server

    Vornanen, Tommi; Berdyugin, Andrei

    2012-01-01

    We present spectropolarimetric observations and modelling of 12 DQ white dwarfs. Modelling is based on the method presented in Berdyugina et al. (2005). We use the model to fit the C_2 absorption bands to obtain atmospheric parameters in different configurations, including stellar spots and stratified atmospheres, searching for the best possible fit. We still have problems to solve before we can give temperature estimates based on the Swan bands alone.

  1. Observational & modeling analysis of surface heat and moisture fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Smith, E. [Florida State Univ., Tallahassee, FL (United States)

    1995-09-01

    An observational and modeling study was conducted to help assess how well current GCMs are predicting surface fluxes under the highly variable cloudiness and flow conditions characteristic of the real atmosphere. The observational data base for the study was obtained from a network of surface flux stations operated during the First ISLSCP Field Experiment (FIFE). The study included examination of a surface-driven secondary circulation in the boundary layer resulting from a persistent cross-site gradient in soil moisture, to demonstrate the sensitivity of boundary layer dynamics to heterogeneous surface fluxes. The performance of a biosphere model in reproducing the measured surface fluxes was evaluated with and without the use of satellite retrievals of three key canopy variables with RMS uncertainties commensurate with those of the measurements themselves. Four sensible heat flux closure schemes currently being used in GCMs were then evaluated against the FIFE observations. Results indicate that the methods by which closure models are calibrated lead to exceedingly large errors when the schemes are applied to variable boundary layer conditions. 4 refs., 2 figs.

  2. Models and observations of sunspot penumbrae

    Institute of Scientific and Technical Information of China (English)

    Borrero, Juan Manuel

    2009-01-01

    The mysteries of sunspot penumbrae have been under intense scrutiny for the past 10 years. During this time, some models have been proposed and refuted, while the surviving ones had to be modified, adapted and evolved to explain the ever-increasing array of observational constraints. In this contribution I will review two of the present models, emphasizing their contributions to this field, but also pinpointing some of their inadequacies to explain a number of recent observations at very high spatial resolution (0.32″). To help explain these new observations I propose some modifications to each of those models. These modifications bring those two seemingly opposite models closer together into a general picture that agrees well with recent 3D magneto-hydrodynamic simulations.

  3. Can four-zero-texture mass matrix model reproduce the quark and lepton mixing angles and CP violating phases?

    CERN Document Server

    Matsuda, K; Matsuda, Koichi; Nishiura, Hiroyuki

    2006-01-01

    We reconsider a universal mass matrix model which has a seesaw-invariant structure with four-zero texture common to all quarks and leptons. The CKM quark and MNS lepton mixing matrices of the model are analyzed analytically. We show that the model can be consistent with all the experimental data of neutrino oscillations and quark mixings by tuning the free parameters of the model. It is also shown that the model predicts a relatively large value for the (1,3) element of the MNS lepton mixing matrix, |(U_{MNS})_{13}|^2 \simeq 2.6 \times 10^{-2}. Using the seesaw mechanism, we also discuss the conditions on the components of the Dirac and right-handed Majorana neutrino mass matrices which lead to a neutrino mass matrix consistent with the experimental data.

  4. Apparent diffusion coefficient measurements in diffusion-weighted magnetic resonance imaging of the anterior mediastinum: inter-observer reproducibility of five different methods of region-of-interest positioning.

    Science.gov (United States)

    Priola, Adriano Massimiliano; Priola, Sandro Massimo; Parlatano, Daniela; Gned, Dario; Giraudo, Maria Teresa; Giardino, Roberto; Ferrero, Bruno; Ardissone, Francesco; Veltri, Andrea

    2017-04-01

    To investigate inter-reader reproducibility of five different region-of-interest (ROI) protocols for apparent diffusion coefficient (ADC) measurements in the anterior mediastinum. In eighty-one subjects, on ADC mapping, two readers measured the ADC using five methods of ROI positioning that encompassed the entire tissue (whole tissue volume [WTV], three slices observer-defined [TSOD], single-slice [SS]) or more restricted areas (one small round ROI [OSR], multiple small round ROI [MSR]). Inter-observer variability was assessed with the intraclass correlation coefficient (ICC), coefficient of variation (CoV), and Bland-Altman analysis. Nonparametric tests were performed to compare the ADC between ROI methods. The measurement time was recorded and compared between ROI methods. All methods showed excellent inter-reader agreement, with the best and worst reproducibility for WTV and OSR, respectively (ICC, 0.937/0.874; CoV, 7.3 %/16.8 %; limits of agreement, ±0.44/±0.77 × 10^-3 mm^2/s). ADC values of OSR and MSR were significantly lower compared to the other methods in both readers (p < 0.001). The SS and OSR methods required less measurement time (14 ± 2 s) compared to the others (p < 0.0001), while the WTV method required the longest measurement time (90 ± 56 and 77 ± 49 s for each reader) (p < 0.0001). All methods demonstrate excellent inter-observer reproducibility, with the best agreement for WTV, although it requires the longest measurement time. • All ROI protocols show excellent inter-observer reproducibility. • WTV measurements provide the most reproducible ADC values. • ROI size and positioning influence ADC measurements in the anterior mediastinum. • ADC values of OSR and MSR are significantly lower than other methods. • OSR and WTV methods require the shortest and longest measurement time, respectively.
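Two of the agreement statistics reported above, the coefficient of variation and the Bland-Altman limits of agreement, can be sketched for a pair of readers. The ADC values below are synthetic (in units of 10^-3 mm^2/s) with assumed reader noise; they are not the study's measurements.

```python
import numpy as np

# Simulate two readers measuring the same 80 lesions with independent noise.
rng = np.random.default_rng(1)
true_adc = rng.uniform(1.0, 2.5, size=80)
reader1 = true_adc + rng.normal(0, 0.05, size=80)
reader2 = true_adc + rng.normal(0, 0.05, size=80)

diff = reader1 - reader2
mean_pair = (reader1 + reader2) / 2

bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # Bland-Altman limits: bias +/- loa
within_sd = np.sqrt(np.mean(diff ** 2) / 2)   # within-subject SD for 2 readers
cov_pct = 100 * within_sd / mean_pair.mean()  # inter-observer CoV (%)

print(f"bias = {bias:+.3f}, limits of agreement = +/-{loa:.3f} x10^-3 mm^2/s")
print(f"inter-observer CoV = {cov_pct:.1f}%")
```

Narrower limits of agreement and a smaller CoV indicate a more reproducible ROI protocol, which is the basis on which WTV outperformed OSR in the abstract.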

  5. TIME-IGGCAS model validation: Comparisons with empirical models and observations

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The TIME-IGGCAS (Theoretical Ionospheric Model of the Earth in the Institute of Geology and Geophysics, Chinese Academy of Sciences) has been developed recently on the basis of previous works. To test its validity, we have made comparisons of model results with other typical empirical ionospheric models (IRI, NeQuick-ITUR, and Titheridge temperature models) and multi-observations (GPS, ionosondes, Topex, DMSP, FORMOSAT, and CHAMP) in this paper. Several conclusions are obtained from our comparisons. The modeled electron density and electron and ion temperatures are quantitatively in good agreement with those of empirical models and observations. TIME-IGGCAS can model the electron density variations versus several factors such as local time, latitude, and season very well and can reproduce most anomalous features of the ionosphere, including the equatorial anomaly, winter anomaly, and semiannual anomaly. These results provide a good basis for the development of an ionospheric data assimilation model in the future. TIME-IGGCAS underestimates electron temperature and overestimates ion temperature in comparison with either empirical models or observations. The model results have relatively large deviations near sunrise and sunset times and at low altitudes. These results give us a reference to improve the model and enhance its performance in the future.

  6. Models and Observations of Sunspot Penumbrae

    CERN Document Server

    Borrero, J M

    2008-01-01

    The mysteries of sunspot penumbrae have been under intense scrutiny for the past 10 years. During this time, some models have been proposed and refuted, while the surviving ones had to be modified, adapted and evolved to explain the ever-increasing array of observational constraints. In this contribution I will review two of the present models, emphasizing their contributions to this field, but also pinpointing some of their inadequacies to explain a number of recent observations at very high spatial resolution. To help explain these new observations I propose some modifications to each of them. These modifications bring those two seemingly opposite models closer together into a general picture that agrees well with recent 3D magneto-hydrodynamic simulations.

  7. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  8. Dust properties inside molecular clouds from coreshine modeling and observations

    CERN Document Server

    Lefèvre, Charlène; Juvela, Mika; Paladini, Roberta; Lallement, Rosine; Marshall, D J; Andersen, Morten; Bacmann, Aurore; Mcgee, Peregrine M; Montier, Ludovic; Noriega-Crespo, Alberto; Pelkonen, V -M; Ristorcelli, Isabelle; Steinacker, Jürgen

    2014-01-01

    Context. Using observations to deduce dust properties, grain size distribution, and physical conditions in molecular clouds is a highly degenerate problem. Aims. The coreshine phenomenon, a scattering process at 3.6 and 4.5 $\mu$m that dominates absorption, has revealed its ability to explore the densest parts of clouds. We want to use this effect to constrain the dust parameters. The goal is to investigate to what extent grain growth (at constant dust mass) inside molecular clouds is able to explain the coreshine observations. We aim to find dust models that can explain a sample of Spitzer coreshine data. We also look at the consistency with near-infrared data we obtained for a few clouds. Methods. We selected four regions with a very high occurrence of coreshine cases: Taurus-Perseus, Cepheus, Chameleon and L183/L134. We built a grid of dust models and investigated the key parameters to reproduce the general trend of surface brightnesses and intensity ratios of both coreshine and near-infrared observation...

  9. Model and observed seismicity represented in a two dimensional space

    Directory of Open Access Journals (Sweden)

    M. Caputo

    1976-06-01

    Full Text Available In recent years theoretical seismology has introduced some formulae relating the magnitude and the seismic moment of earthquakes to the size of the fault and the stress drop which generated the earthquake. In the present paper we introduce a model for the statistics of earthquakes based on these formulae. The model gives formulae which show internal consistency and are also confirmed by observations. For intermediate magnitudes the formulae also reproduce the linear trend of the statistics of magnitude and moment observed in all the seismic regions of the world. This linear trend changes into a curve of increasing slope for large magnitudes and moments. When a catalogue of the magnitudes and/or the seismic moments of the earthquakes of a seismic region is available, the model allows one to estimate the maximum magnitude possible in the region.
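The linear trend in magnitude statistics referred to above is the Gutenberg-Richter relation, log10 N(>=M) = a - b·M. As a sketch, the b-value can be fitted from a synthetic catalogue by least squares on the cumulative counts, or by the Aki maximum-likelihood estimator; the true b-value and catalogue size below are chosen for illustration only.

```python
import numpy as np

# Synthetic catalogue: magnitudes above m_min are exponential with rate
# beta = b * ln(10), which yields Gutenberg-Richter statistics with b-value b.
rng = np.random.default_rng(7)
m_min, b_true = 3.0, 1.0
mags = m_min + rng.exponential(1.0 / (b_true * np.log(10)), size=20_000)

# Least-squares fit of log10 cumulative counts vs. magnitude.
grid = np.arange(3.0, 6.0, 0.1)
counts = np.array([(mags >= m).sum() for m in grid])
b_fit = -np.polyfit(grid, np.log10(counts), 1)[0]

# Aki maximum-likelihood estimate for comparison.
b_aki = np.log10(np.e) / (mags.mean() - m_min)
print(f"b (least squares) = {b_fit:.2f}, b (Aki ML) = {b_aki:.2f}")
```

The departure from this linear trend at the largest magnitudes, and hence an estimate of the maximum possible magnitude, is what the model in the abstract extracts from a real catalogue.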

  10. A two-stage unsupervised learning algorithm reproduces multisensory enhancement in a neural network model of the corticotectal system.

    Science.gov (United States)

    Anastasio, Thomas J; Patton, Paul E

    2003-07-30

    Multisensory enhancement (MSE) is the augmentation of the response to sensory stimulation of one modality by stimulation of a different modality. It has been described for multisensory neurons in the deep superior colliculus (DSC) of mammals, which function to detect, and direct orienting movements toward, the sources of stimulation (targets). MSE would seem to improve the ability of DSC neurons to detect targets, but many mammalian DSC neurons are unimodal. MSE requires descending input to DSC from certain regions of parietal cortex. Paradoxically, the descending projections necessary for MSE originate from unimodal cortical neurons. MSE, and the puzzling findings associated with it, can be simulated using a model of the corticotectal system. In the model, a network of DSC units receives primary sensory input that can be augmented by modulatory cortical input. Connection weights from primary and modulatory inputs are trained in stages one (Hebb) and two (Hebb-anti-Hebb), respectively, of an unsupervised two-stage algorithm. Two-stage training causes DSC units to extract information concerning simulated targets from their inputs. It also causes the DSC to develop a mixture of unimodal and multisensory units. The percentage of DSC multisensory units is determined by the proportion of cross-modal targets and by primary input ambiguity. Multisensory DSC units develop MSE, which depends on unimodal modulatory connections. Removal of the modulatory influence greatly reduces MSE but has little effect on DSC unit responses to stimuli of a single modality. The correspondence between model and data suggests that two-stage training captures important features of self-organization in the real corticotectal system.
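The first (Hebb) stage of the two-stage rule described above can be sketched with a generic Oja-stabilized Hebbian update, under which a single unit's weights converge to the leading principal direction of its inputs. This is a toy illustration of the learning rule family, not the paper's exact corticotectal algorithm; the input statistics and learning rate are assumed.

```python
import numpy as np

# Correlated two-dimensional inputs (assumed statistics for illustration).
rng = np.random.default_rng(3)
x = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])

# Stage 1: Hebbian learning with Oja's normalizing decay term. The anti-Hebb
# stage 2 of the paper would then decorrelate modulatory weights from the
# unit's output; it is omitted from this sketch.
w = rng.normal(size=2)
for xi in x:
    y = w @ xi
    w += 0.01 * y * (xi - y * w)          # Hebb term y*x minus Oja decay y^2*w

cov = x.T @ x / len(x)
eigvec = np.linalg.eigh(cov)[1][:, -1]    # leading principal direction
align = abs(w @ eigvec) / np.linalg.norm(w)
print(f"alignment of learned weights with leading eigenvector: {align:.3f}")
```

The pure Hebb term alone would grow the weights without bound; the decay term keeps them near unit norm, which is why an alignment close to 1 indicates convergence.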

  11. ENLIL Global Heliospheric Modeling as a Context For Multipoint Observations

    Science.gov (United States)

    Mays, M. Leila; Odstrcil, Dusan; Luhmann, Janet; Bain, Hazel; Li, Yan; Schwadron, Nathan; Gorby, Matt; Thompson, Barbara; Jian, Lan; Möstl, Christian; Rouillard, Alexis; Davies, Jackie; Temmer, Manuela; Rastaetter, Lutz; Taktakishvili, Aleksandre; MacNeice, Peter; Kuznetsova, Maria

    2016-04-01

We present heliospheric simulation case studies using recent enhancements to WSA--ENLIL+Cone (version 2.8) at the Community Coordinated Modeling Center (CCMC). The global 3D MHD ENLIL model provides a time-dependent description of the background solar wind plasma and magnetic field, using a sequence of WSA coronal model maps as input at the inner boundary of 21.5 Rs. A homogeneous, over-pressured hydrodynamic plasma cloud is launched through the inner boundary of the heliospheric computational domain and into the background solar wind. Multipoint observations help constrain simulations, and this modeling system provides global context and arrival times of the solar wind streams and CMEs at Earth, planets, and spacecraft. Additionally, one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. ENLIL "likelihood/all-clear" forecasting maps provide the expected intensity and timing/duration of events at locations throughout the heliosphere, with "possible SEP affected areas" color-coded based on shock strength. ENLIL simulations are also useful for driving SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al., 2010). SEPMOD injects protons onto a sequence of observer-connected field lines at intensities dependent on the connected shock source strength, which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. Studies have shown that accurate descriptions of the heliosphere, and hence modeled CME arrival times and SEPs, are achieved by ENLIL only when the background solar wind is well reproduced and the CME parameters are accurate. 
It is essential to include all of the relevant CMEs and

  12. Are there consistent models giving observable NSI?

    CERN Document Server

    Martinez, Enrique Fernandez

    2013-01-01

    While the existing direct bounds on neutrino NSI are rather weak, of order 10⁻¹ for propagation and 10⁻² for production and detection, the close connection between these interactions and new NSI affecting the better-constrained charged lepton sector through gauge invariance makes these bounds hard to saturate in realistic models. Indeed, Standard Model extensions leading to neutrino NSI typically imply constraints at the 10⁻³ level. The question of whether consistent models can lead to observable neutrino NSI thus naturally arises, and it was discussed in a dedicated session at NUFACT 11. Here we summarize that discussion.

  13. Observations and NLTE modeling of Ellerman bombs

    CERN Document Server

    Berlicki, Arkadiusz

    2014-01-01

    Ellerman bombs (EBs) are short-lived and compact structures that are observed well in the wings of the hydrogen H-alpha line. EBs are also observed in the chromospheric CaII lines and in UV continua. H-alpha line profiles of EBs show a deep absorption at the line center and enhanced emission in the line wings. Similar line-profile shapes are observed for the CaII IR line at 8542 Å. It is generally accepted that EBs may be considered as compact microflares located in the lower solar atmosphere. However, it is still not clear where exactly the emission of EBs is formed in the solar atmosphere. High-resolution spectrophotometric observations of EBs were used for determining their physical parameters and for constructing semi-empirical models. In our analysis we used observations of EBs obtained in the H-alpha and CaII H lines. We also used NLTE numerical codes for the construction of grids of 243 semi-empirical models simulating EB structures. In this way, the observed emission could be compared with th...

  14. Supporting observation campaigns with high resolution modeling

    Science.gov (United States)

    Klocke, Daniel; Brueck, Matthias; Voigt, Aiko

    2017-04-01

    High-resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the modeling and observational research communities and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large-eddy-resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook is given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.

  15. Reproducing Kernel Particle Method for Non-Linear Fracture Analysis

    Institute of Scientific and Technical Information of China (English)

    Cao Zhongqing; Zhou Benkuan; Chen Dapeng

    2006-01-01

    To study non-linear fracture, a non-linear constitutive model for piezoelectric ceramics was proposed, in which polarization switching and saturation were taken into account. Based on the model, a non-linear fracture analysis was implemented using the reproducing kernel particle method (RKPM). Using the local J-integral as a fracture criterion, a curve of fracture loads against electric fields was obtained. Qualitatively, the curve is in agreement with experimental observations reported in the literature. The reproducing equation, the RKPM shape function, and the transformation method used to impose essential boundary conditions in meshless methods are also introduced. The computation was implemented using object-oriented programming.
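The reproducing kernel shape functions underlying RKPM can be illustrated in one dimension. The cubic-spline window, linear basis, and node layout below are generic textbook choices, not taken from this paper; the key property is that the corrected kernel exactly reproduces constant and linear fields.

```python
import numpy as np

def rkpm_shape(x, nodes, a):
    """1-D RKPM shape functions phi_I(x): linear basis, cubic-spline window
    of support radius a. Returns one value per node."""
    def window(d):
        z = np.abs(d) / a
        return np.where(z <= 0.5, 2/3 - 4*z**2 + 4*z**3,
               np.where(z <= 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))
    d = x - nodes                               # distances to all nodes
    w = window(d)
    H = np.vstack([np.ones_like(d), d])         # linear basis H(d) = [1, d]
    M = (H * w) @ H.T                           # 2x2 moment matrix M(x)
    c = np.linalg.solve(M, np.array([1.0, 0.0]))  # correction: M c = H(0)
    return (c @ H) * w                          # phi_I = c.H(d_I) w(d_I)

nodes = np.linspace(0.0, 1.0, 11)
phi = rkpm_shape(0.37, nodes, a=0.25)
```

By construction the functions form a partition of unity (sum to 1) and reproduce the coordinate itself (sum of phi_I * x_I equals x), which is the "reproducing" condition that gives the method its name.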

  16. Characterization and optimization of experimental variables within a reproducible bladder encrustation model and in vitro evaluation of the efficacy of urease inhibitors for the prevention of medical device-related encrustation.

    Science.gov (United States)

    Jones, David S; Djokic, Jasmina; Gorman, Sean P

    2006-01-01

    This study presents a reproducible, cost-effective in vitro encrustation model and, furthermore, describes the effects of components of the artificial urine and the presence of agents that modify the action of urease on encrustation on commercially available ureteral stents. The encrustation model involved the use of small-volume reactors (700 mL) containing artificial urine and employing an orbital incubator (at 37 degrees C) to ensure controlled stirring. The artificial urine contained sources of calcium and magnesium (both as chlorides), albumin and urease. Alteration of the ratio (% w/w) of calcium salt to magnesium salt affected the mass of encrustation, with the greatest encrustation noted whenever magnesium was excluded from the artificial urine. Increasing the concentration of albumin, designed to mimic the presence of protein in urine, significantly decreased the mass of both calcium and magnesium encrustation until a plateau was observed. Finally, exclusion of urease from the artificial urine significantly reduced encrustation due to the indirect effects of this enzyme on pH. Inclusion of the urease inhibitor, acetohydroxamic acid, or urease substrates (methylurea or ethylurea) into the artificial medium markedly reduced encrustation on ureteral stents. In conclusion, this study has described the design of a reproducible, cost-effective in vitro encrustation model. Encrustation was markedly reduced on biomaterials by the inclusion of agents that modify the action of urease. These agents may, therefore, offer a novel clinical approach to the control of encrustation on urological medical devices.

  17. Apparent diffusion coefficient measurements in diffusion-weighted magnetic resonance imaging of the anterior mediastinum: inter-observer reproducibility of five different methods of region-of-interest positioning

    Energy Technology Data Exchange (ETDEWEB)

    Priola, Adriano Massimiliano; Priola, Sandro Massimo; Parlatano, Daniela; Gned, Dario; Veltri, Andrea [San Luigi Gonzaga University Hospital, Department of Diagnostic Imaging, Regione Gonzole 10, Orbassano, Torino (Italy)]; Giraudo, Maria Teresa [University of Torino, Department of Mathematics "Giuseppe Peano", Torino (Italy)]; Giardino, Roberto; Ardissone, Francesco [San Luigi Gonzaga University Hospital, Department of Thoracic Surgery, Regione Gonzole 10, Orbassano, Torino (Italy)]; Ferrero, Bruno [San Luigi Gonzaga University Hospital, Department of Neurology, Regione Gonzole 10, Orbassano, Torino (Italy)]

    2017-04-15

    To investigate inter-reader reproducibility of five different region-of-interest (ROI) protocols for apparent diffusion coefficient (ADC) measurements in the anterior mediastinum. In eighty-one subjects, on ADC maps, two readers measured the ADC using five methods of ROI positioning that encompassed either the entire tissue (whole tissue volume [WTV], three slices observer-defined [TSOD], single-slice [SS]) or more restricted areas (one small round ROI [OSR], multiple small round ROIs [MSR]). Inter-observer variability was assessed with the intraclass correlation coefficient (ICC), coefficient of variation (CoV), and Bland-Altman analysis. Nonparametric tests were performed to compare the ADC between ROI methods. The measurement time was recorded and compared between ROI methods. All methods showed excellent inter-reader agreement, with the best and worst reproducibility for WTV and OSR, respectively (ICC, 0.937/0.874; CoV, 7.3 %/16.8 %; limits of agreement, ±0.44/±0.77 × 10⁻³ mm²/s). ADC values of OSR and MSR were significantly lower compared to the other methods for both readers (p < 0.001). The SS and OSR methods required less measurement time (14 ± 2 s) compared to the others (p < 0.0001), while the WTV method required the longest measurement time (90 ± 56 and 77 ± 49 s for the two readers) (p < 0.0001). All methods demonstrate excellent inter-observer reproducibility, with the best agreement for WTV, although it requires the longest measurement time. (orig.)
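The agreement statistics named in this abstract (ICC, CoV, Bland-Altman limits of agreement) can be sketched on synthetic two-reader data. The data, the ICC(2,1) variant, and the within-subject CoV definition below are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ADC readings (x10^-3 mm^2/s) for 81 subjects by two readers.
truth = rng.normal(1.4, 0.25, size=81)
reader1 = truth + rng.normal(0, 0.03, size=81)
reader2 = truth + rng.normal(0, 0.03, size=81)
X = np.column_stack([reader1, reader2])       # n subjects x k raters
n, k = X.shape

# Two-way random-effects, absolute-agreement ICC(2,1) from ANOVA mean squares.
grand = X.mean()
ms_r = k * ((X.mean(axis=1) - grand)**2).sum() / (n - 1)   # between subjects
ms_c = n * ((X.mean(axis=0) - grand)**2).sum() / (k - 1)   # between raters
resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0) + grand
ms_e = (resid**2).sum() / ((n - 1) * (k - 1))
icc = (ms_r - ms_e) / (ms_r + (k - 1)*ms_e + k*(ms_c - ms_e)/n)

# Within-subject coefficient of variation (%) and Bland-Altman 95% limits.
cv = 100 * np.sqrt((np.std(X, axis=1, ddof=1)**2).mean()) / grand
diff = reader1 - reader2
loa = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1)
```

With small reader noise relative to between-subject spread, the ICC is close to 1 and the limits of agreement straddle zero, mirroring the kind of numbers the study reports.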

  18. Decadal Variability Shown by the Arctic Ocean Hydrochemical Data and Reproduced by an Ice-Ocean Model

    Institute of Scientific and Technical Information of China (English)

    M. Ikeda; R. Colony; H. Yamaguchi; T. Ikeda

    2005-01-01

    The Arctic is experiencing a significant warming trend as well as a decadal oscillation. The atmospheric circulation, represented by the Polar Vortex, and the sea ice cover show decadal variabilities, while it has been difficult to detect the decadal oscillation in the ocean interior. The recently distributed Russian hydrochemical data collected from the Arctic Basin provide useful information on ocean interior variabilities. Silicate provides the most valuable data for locating the boundary between the silicate-rich Pacific Water and the silicate-poor Atlantic Water. Here, it is assumed that the silicate distribution is only weakly influenced by seasonal biological productivity and Siberian river outflow. Silicate shows a clear maximum around 100 m depth in the Canada Basin, along with a vertical gradient below 100 m, which provides information on the vertical motion of the upper boundary of the Atlantic Water at a decadal time scale. The boundary shifts upward (downward), as revealed by a silicate reduction (increase) at a fixed depth, in response to a more intense (weaker) Polar Vortex or a positive (negative) phase of the Arctic Oscillation. A coupled ice-ocean model is employed to reconstruct this decadal oscillation.

  19. Placing Observational Constraints on Massive Star Models

    Science.gov (United States)

    Rosenfield, Philip

    2011-10-01

    The lives and deaths of massive stars are intricately linked to the evolution of galaxies. Yet, despite their integral importance to understanding galaxy evolution, models of massive stars are inconsistent with observations. These uncertainties can be traced to the limited observational constraints available for improving massive star models. A sensitive test of the underlying physics of massive stars, e.g., convection, rotation, and mass loss, is to measure the ratio of blue core helium burning stars (BHeB) to red core helium burning stars (RHeB), 5-20 Msun stars in the evolutionary stage immediately following the main sequence. Even the most sophisticated models cannot accurately predict the observed ratio over a range of metallicities, suggesting an insufficient understanding of the underlying physics. However, observational measurements of this ratio over a wide range of environments would provide substantial constraints on the physical parameters governing the evolution of all stars >5 Msun. We propose to place stringent observational constraints on the physics of massive star evolution by uniformly measuring the B/R HeB ratio in a wide range of galaxies. The HST archive contains high quality optical imaging of resolved stellar populations of dozens of nearby galaxies. From the ANGST program, we identified 38 galaxies, spanning 2 dex in metallicity, that have significant BHeB and RHeB populations. Using this sample, we will empirically characterize the colors of the BHeB and RHeB sequences as a function of luminosity and metallicity, measure the B/R ratio, and constrain the lifetimes of the BHeBs and RHeBs in the Padova stellar evolution models and the Cambridge STARS code.

  20. Constraining Numerical Geodynamo Modeling with Surface Observations

    Science.gov (United States)

    Kuang, Weijia; Tangborn, Andrew

    2006-01-01

    Numerical dynamo solutions have traditionally been generated entirely by a set of self-consistent differential equations that govern the spatial-temporal variation of the magnetic field, the velocity field and other fields related to dynamo processes. In particular, those solutions are obtained with parameters very different from those appropriate for the Earth's core. Geophysical application of the numerical results therefore depends on a correct understanding of the differences (errors) between the model outputs and the true states (truth) in the outer core. Part of the truth can be observed at the surface in the form of the poloidal magnetic field. To understand these differences, or errors, we generate a new initial model state (analysis) by sequentially assimilating the model outputs with surface geomagnetic observations using an optimal interpolation scheme. The time evolution of the core state is then controlled by our MoSST core dynamics model. The final outputs (forecasts) are then compared with the surface observations as a means to test the success of the assimilation. We use surface geomagnetic data back to the year 1900 for our studies, with 5-year forecast and 20-year analysis periods. We intend to use the results to understand the time variation of the errors over the assimilation sequences, and the impact of the assimilation on otherwise unobservable quantities, such as the toroidal field and the fluid velocity in the core.

  1. The shocking transit of WASP-12b: Modelling the observed early ingress in the near ultraviolet

    CERN Document Server

    Llama, J; Jardine, M; Vidotto, A A; Helling, Ch; Fossati, L; Haswell, C A

    2011-01-01

    Near ultraviolet observations of WASP-12b have revealed an early ingress compared to the optical transit lightcurve. This has been interpreted as due to the presence of a magnetospheric bow shock which forms when the relative velocity of the planetary and stellar material is supersonic. We aim to reproduce this observed early ingress by modelling the stellar wind (or coronal plasma) in order to derive the speed and density of the material at the planetary orbital radius. From this we determine the orientation of the shock and the density of compressed plasma behind it. With this model for the density structure surrounding the planet we perform Monte Carlo radiation transfer simulations of the near UV transits of WASP-12b with and without a bow shock. We find that we can reproduce the transit lightcurves with a wide range of plasma temperatures, shock geometries and optical depths. Our results support the hypothesis that a bow shock could explain the observed early ingress.

  2. Adjoint inversion modeling of Asian dust emission using lidar observations

    Directory of Open Access Journals (Sweden)

    K. Yumimoto

    2008-06-01

    Full Text Available A four-dimensional variational (4D-Var) data assimilation system for a regional dust model (RAMS/CFORS-4DVAR; RC4) is applied to an adjoint inversion of a heavy dust event over eastern Asia during 20 March–4 April 2007. The vertical profiles of the dust extinction coefficients derived from the NIES Lidar network are directly assimilated, with validation using observation data. Two experiments assess the impact of observation site selection: Experiment A uses five Japanese observation sites located downwind of dust source regions; Experiment B uses these and two other sites near source regions. Assimilation improves the modeled dust extinction coefficients. Experiment A and Experiment B assimilation results are mutually consistent, indicating that the observations of Experiment A, distributed over Japan, can provide comprehensive information for dust emission inversion. Time series of dust AOT calculated using modeled and Lidar dust extinction coefficients improve the model results. At Seoul, Matsue, and Toyama, assimilation reduces the root mean square differences of dust AOT by 35–40%. However, at Beijing and Tsukuba, the RMS differences degrade because of fewer observations during the heavy dust event. Vertical profiles of the dust layer observed by CALIPSO are compared with assimilation results. The dense dust layer was trapped at potential temperatures (θ) of 280–300 K and was higher toward the north; the model reproduces those characteristics well. Latitudinal distributions of modeled dust AOT along the CALIPSO orbit paths agree well with those of CALIPSO dust AOT, OMI AI, and MODIS coarse-mode AOT, capturing the latitudes at which the AOTs and AI have high values. Assimilation results show increased dust emissions over the Gobi Desert and Mongolia; especially for 29–30 March, the emission flux is about 10 times greater. Strong dust uplift fluxes over the Gobi Desert and Mongolia cause the heavy dust event. Total optimized dust emissions are 57
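The core of an adjoint inversion, a forward model run followed by a backward (adjoint) sweep that yields the gradient of the observation misfit with respect to the emissions, can be illustrated with a deliberately tiny scalar "box" model. Everything here (the single emission-scaling parameter, retention factor, source profile, noise level) is a toy assumption, far simpler than RAMS/CFORS-4DVAR.

```python
import numpy as np

# Toy 1-D "dust" box: x_{t+1} = a*x_t + e*s_t, observed as y_t = x_{t+1} + noise.
# Recover the emission scaling e by minimizing J(e) = 0.5 * sum_t (x_{t+1}-y_t)^2.
a = 0.8                                          # transport/deposition retention
s = np.array([0., 1., 1., 0., 0., 1., 0., 0.])   # known source time profile
e_true = 2.5
T = len(s)

def forward(e):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + e * s[t]
    return x

rng = np.random.default_rng(1)
y = forward(e_true)[1:] + 0.01 * rng.standard_normal(T)

def cost_and_grad(e):
    x = forward(e)
    r = x[1:] - y                      # innovations (model minus observations)
    lam, grad = 0.0, 0.0
    for t in reversed(range(T)):       # adjoint (backward-in-time) sweep
        lam = r[t] + a * lam           # lam = dJ/dx_{t+1}
        grad += lam * s[t]             # accumulate dJ/de
    return 0.5 * np.sum(r**2), grad

e = 0.0                                # first guess
for _ in range(200):
    J, g = cost_and_grad(e)
    e -= 0.05 * g                      # gradient descent on J(e)
```

The same pattern, forward integration, backward adjoint integration, gradient step, is what a full 4D-Var system does with a 3-D transport model and spatially distributed emission fields.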

  3. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    Science.gov (United States)

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. The results demonstrate the ability of current strategies to consistently quantify the performance of three commercial intracranial stents, and they contribute to reinforcing confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and

  4. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Directory of Open Access Journals (Sweden)

    Rania M Nada

    Full Text Available Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre and post treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the two models were calculated at 4 regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged from 0.12 to 0.19 mm at the 4 regions. Voxel based image registration on both zones can be considered an accurate and reproducible method for CBCT superimposition. The left zygomatic arch can be used as a stable structure for the superimposition of smaller field of view CBCT scans in which the anterior cranial base is not visible.
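The mean-absolute-distance metric used above to compare two superimposed surface models can be sketched directly. The brute-force nearest-neighbour search and the toy point grids are illustrative, not the study's implementation.

```python
import numpy as np

def mean_absolute_distance(A, B):
    """Mean, over points of model A, of the distance to the nearest point of B.
    A: (n, 3) array, B: (m, 3) array of surface-model points (brute force)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (n, m) pairwise
    return d.min(axis=1).mean()

def symmetric_mad(A, B):
    """Symmetrize, since nearest-neighbour distance is not symmetric."""
    return 0.5 * (mean_absolute_distance(A, B) + mean_absolute_distance(B, A))

# Toy check: a 1 mm grid of surface points vs the same grid shifted by 0.3 mm.
g = np.stack(np.meshgrid(np.arange(5.), np.arange(5.), [0.]), axis=-1).reshape(-1, 3)
shifted = g + np.array([0.3, 0.0, 0.0])
mad = symmetric_mad(g, shifted)
```

For dense meshes a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the O(n·m) distance matrix, but the metric itself is unchanged.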

  5. Controllability, Observability, and Stability of Mathematical Models

    OpenAIRE

    Iggidr, Abderrahman

    2004-01-01

    This article presents an overview of three fundamental concepts in Mathematical System Theory: controllability, stability and observability. These properties play a prominent role in the study of mathematical models and in the understanding of their behavior. They constitute the main research subject in Control Theory. Historically, the tools and techniques of Automatic Control have been developed for artificial engineering systems, but nowadays they are more and more ap...

  6. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards the optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We compare the CNN MO to alternative MOs currently proposed and in use, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNNs, and of other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.
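The channelized Hotelling observer mentioned above can be sketched on a synthetic detection task. The binary ring channels below stand in for the Laguerre-Gauss or Gabor channels typically used, and all task parameters (image size, signal shape, sample counts) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
side, n = 16, 200
yy, xx = np.mgrid[0:side, 0:side]
r = np.hypot(xx - 7.5, yy - 7.5)          # radius from image centre

# Hypothetical channel bank: four binary radial rings.
U = np.stack([((r >= lo) & (r < hi)).astype(float).ravel()
              for lo, hi in [(0, 2), (2, 4), (4, 6), (6, 8)]])

# Synthetic detection task: flat disc signal in white Gaussian noise.
signal = (r < 3).astype(float).ravel() * 0.5
g0 = rng.standard_normal((n, side*side))             # signal-absent images
g1 = rng.standard_normal((n, side*side)) + signal    # signal-present images

v0, v1 = g0 @ U.T, g1 @ U.T                          # channel outputs
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))              # average channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))      # Hotelling template
t0, t1 = v0 @ w, v1 @ w                              # decision variables

# Nonparametric AUC (Mann-Whitney) as the task-performance figure of merit.
auc = (t1[:, None] > t0[None, :]).mean()
```

Channelization reduces a 256-pixel covariance problem to a 4x4 one, which is exactly why the CHO remains a practical baseline against which learned observers like CNNs are compared.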

  7. Observation and modelling of urban dew

    Science.gov (United States)

    Richards, Katrina

    Despite its relevance to many aspects of urban climate and to several practical questions, urban dew has largely been ignored. Here, simple observations, an out-of-doors scale model, and numerical simulation are used to investigate patterns of dewfall and surface moisture (dew + guttation) in urban environments. Observations and modelling were undertaken in Vancouver, B.C., primarily during the summers of 1993 and 1996. Surveys at several scales (0.02-25 km) show that the main controls on dew are weather, location and site configuration (geometry and surface materials). Weather effects are discussed using an empirical factor, FW. Maximum dew accumulation (up to ~0.2 mm per night) is seen on nights with moist air and high FW, i.e., cloudless conditions with light winds. Favoured sites are those with high Ysky and surfaces which cool rapidly after sunset, e.g., grass and well insulated roofs. A 1/8-scale model is designed, constructed, and run at an out-of-doors site to study dew patterns in an urban residential landscape consisting of house lots, a street and an open grassed park. The Internal Thermal Mass (ITM) approach is used to scale the thermal inertia of buildings. The model is validated using data from full-scale sites in Vancouver. Patterns in the model agree with those seen at the full scale, i.e., dew distribution is governed by weather, site geometry and substrate conditions. A correlation is shown between Ysky and surface moisture accumulation. The feasibility of using a numerical model to simulate urban dew is investigated using a modified version of a rural dew model. Results for simple isolated surfaces, a deciduous tree leaf and an asphalt shingle roof, show promise, especially for built surfaces.

  8. Adaptation and Validation of QUick, Easy, New, CHEap, and Reproducible (QUENCHER) Antioxidant Capacity Assays in Model Products Obtained from Residual Wine Pomace.

    Science.gov (United States)

    Del Pino-García, Raquel; García-Lomillo, Javier; Rivero-Pérez, María D; González-SanJosé, María L; Muñiz, Pilar

    2015-08-12

    Evaluation of the total antioxidant capacity of solid matrices without extraction steps is a very interesting alternative for food researchers and also for food industries. These methodologies have been denominated QUENCHER, from QUick, Easy, New, CHEap, and Reproducible assays. To demonstrate and highlight the validity of QUENCHER (Q) methods, Q-method validation values are shown for the first time, and they were tested with products of well-known, differing chemical properties. Furthermore, new QUENCHER assays to measure scavenging capacity against superoxide, hydroxyl, and lipid peroxyl radicals were developed. Calibration models showed good linearity (R² > 0.995), proportionality and precision (CV antioxidant capacity values significantly different from those obtained with water. The dilution of samples with powdered cellulose was discouraged because possible interferences with some of the matrices analyzed may take place.

  9. Sensitivity studies of spin cut-off models on fission fragment observables

    Directory of Open Access Journals (Sweden)

    Thulliez L.

    2016-01-01

    Full Text Available A fission fragment de-excitation code, FIFRELIN, is being developed at CEA Cadarache. It allows probing the characteristics of the prompt particles, neutrons and gammas, emitted during the de-excitation of fully accelerated fission fragments. Knowledge of the initial states of the fragments is important for accurately reproducing the fission fragment observables. In this paper, a sensitivity study of various spin cut-off models, which completely define the initial fission fragment angular momentum distribution, has been performed. This study shows that the choice of the model has a significant impact on gamma observables such as the spectrum and multiplicity, and almost none on the neutron observables.
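A spin cut-off parameter enters the initial fragment angular momentum distribution through a standard functional form; the version below is one common textbook choice, only assumed to resemble the models compared in the paper, and it shows why the choice of sigma shifts the gamma multiplicity (larger sigma populates higher spins, which de-excite through longer gamma cascades).

```python
import numpy as np

def spin_distribution(sigma, J_max=40):
    """Discrete fragment spin distribution for a given spin cut-off sigma:
    P(J) proportional to (2J+1) * exp(-J(J+1) / (2 sigma^2))."""
    J = np.arange(J_max + 1)
    p = (2*J + 1) * np.exp(-J * (J + 1) / (2.0 * sigma**2))
    return J, p / p.sum()            # normalize to a probability distribution

# Compare two spin cut-off values: the mean initial spin grows with sigma.
J, p_small = spin_distribution(4.0)
_, p_large = spin_distribution(8.0)
mean_small = (J * p_small).sum()
mean_large = (J * p_large).sum()
```

Sampling initial spins from such a distribution, per fragment and excitation energy, is the kind of input a de-excitation cascade code needs before emitting neutrons and gammas.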

  10. Observational signatures of anisotropic inflationary models

    CERN Document Server

    Ohashi, Junko; Tsujikawa, Shinji

    2013-01-01

    We study observational signatures of two classes of anisotropic inflationary models in which an inflaton field couples to (i) a vector kinetic term F_{μν}F^{μν} and (ii) a two-form kinetic term H_{μνλ}H^{μνλ}. We compute the corrections from the anisotropic sources to the power spectrum of gravitational waves as well as the two-point cross correlation between scalar and tensor perturbations. The signs of the anisotropic parameter g_* differ between the vector and two-form models, but the statistical anisotropies generally lead to a suppressed tensor-to-scalar ratio r and a smaller scalar spectral index n_s in both models. In the light of the recent Planck bounds on n_s and r, we place observational constraints on several different inflaton potentials, such as those in chaotic and natural inflation, in the presence of anisotropic interactions. In the two-form model we also find that there is no cross correlation between scalar and tensor perturbations, while in the vector ...

  11. Reproducible research in palaeomagnetism

    Science.gov (United States)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. 
We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined
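The shared-core design described above, one library of palaeomagnetic functions driven by both a desktop GUI and a batch script, can be sketched generically. This is not PuffinPlot's actual API; the function names and the PCA-based direction fit (a standard technique in demagnetization analysis) are illustrative assumptions:

```python
# Sketch of the shared-core pattern: one library of palaeomagnetic
# functions, callable from both a GUI and a batch script. Function names
# and the PCA-based fit are illustrative, not PuffinPlot's real API.
import numpy as np

def pca_direction(vectors):
    """Best-fit demagnetization direction via PCA (hypothetical core routine).

    vectors: (n, 3) array of remanence measurements.
    Returns a unit vector along the principal component.
    """
    v = np.asarray(vectors, dtype=float)
    centred = v - v.mean(axis=0)
    # Principal axis = right singular vector with the largest singular value.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]
    return direction / np.linalg.norm(direction)

def batch_main(sample_names, loader):
    """Batch entry point: same core routine, no human interaction."""
    return {name: pca_direction(loader(name)) for name in sample_names}
```

A GUI front end would call `pca_direction()` on interactively selected measurement steps, while the batch script replays the same call from saved parameters, which is what makes the analysis automatically reproducible.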

  12. Observations and Modeling of Geospace Energetic Particles

    Science.gov (United States)

    Li, Xinlin

    2016-07-01

    Comprehensive measurements of energetic particles and electric and magnetic fields from state-of-the-art instruments onboard the Van Allen Probes, in a geo-transfer-like orbit, have revealed new features of the energetic particles and fields in the inner magnetosphere and impose new challenges on any quantitative modeling of the physical processes responsible for these observations. Concurrent measurements of energetic particles by satellites in highly inclined low Earth orbits, and of plasma and fields by satellites at greater distances in the magnetosphere and in the upstream solar wind, provide the information critically needed for quantitative modeling and, eventually, for accurate forecasting of the variations of energetic particles in the magnetosphere. In this presentation, emphasis will be on the most recent advances in our understanding of energetic particles in the magnetosphere and on the missing links that must be addressed to significantly advance our modeling and forecasting capabilities.

  13. Dark energy observational evidence and theoretical models

    CERN Document Server

    Novosyadlyj, B; Shtanov, Yu; Zhuk, A

    2013-01-01

    The book elucidates the current state of the dark energy problem and presents the results of the authors, who work in this area. It describes the observational evidence for the existence of dark energy, the methods and results of constraining its parameters, the modeling of dark energy by scalar fields, space-times with extra spatial dimensions (especially Kaluza-Klein models), braneworld models with a single extra dimension, as well as the problems of positive definiteness of gravitational energy in General Relativity, energy conditions, and the consequences of their violation in the presence of dark energy. This monograph is intended for science professionals, educators and graduate students specializing in general relativity, cosmology, field theory and particle physics.

  14. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    Full Text Available A method for designing interval observers for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters is as follows: given an uncertainty bound on the system state, constraints on the initial conditions, and the set of admissible values for the vector of unknown parameters and inputs, the interval estimates of the system state variables must contain the actual state at every instant of the considered time interval. Conditions for the design of interval observers for the considered class of systems are given: boundedness of the input and state, the existence of a majorizing function bounding the uncertainty vector of the system, Lipschitz continuity or finiteness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main design condition for such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure cooperativity of the interval estimation error dynamics, a static coordinate transformation is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of such interval estimation systems include robust control, where various types of uncertainty in the system dynamics are assumed, biotechnology, environmental systems and processes, mechatronics and robotics, etc.
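The cooperativity condition at the heart of this design can be illustrated with a minimal scalar sketch (the dynamics and all numbers are illustrative assumptions, not taken from the paper). In the scalar case, cooperativity of the error dynamics reduces to non-negativity of a - L:

```python
# Minimal sketch of an interval observer for a scalar discrete-time system
#   x[k+1] = a*x[k] + w[k],  y[k] = x[k] + v[k],  |w| <= w_bar, |v| <= v_bar.
# The gain L is chosen so that 0 <= a - L < 1: non-negativity of (a - L) is
# the scalar analogue of the cooperativity condition, and |a - L| < 1 gives
# stability of the error dynamics. All numbers are illustrative.
import random

def interval_observer(a, L, w_bar, v_bar, x0_lo, x0_hi, ys):
    """Propagate guaranteed bounds [x_lo, x_hi] from a measurement sequence."""
    assert 0.0 <= a - L < 1.0, "gain must keep a - L in [0, 1)"
    lo, hi = x0_lo, x0_hi
    bounds = [(lo, hi)]
    for y in ys:
        # Worst-case combination of process noise and measurement error.
        hi = (a - L) * hi + L * y + L * v_bar + w_bar
        lo = (a - L) * lo + L * y - L * v_bar - w_bar
        bounds.append((lo, hi))
    return bounds

def simulate(a, w_bar, v_bar, x0, steps, rng):
    """True system trajectory and noisy measurements."""
    xs, ys = [x0], []
    x = x0
    for _ in range(steps):
        ys.append(x + rng.uniform(-v_bar, v_bar))
        x = a * x + rng.uniform(-w_bar, w_bar)
        xs.append(x)
    return xs, ys
```

By induction, if the true initial state lies in [x0_lo, x0_hi], the true trajectory stays inside the propagated interval for any admissible noise realization.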

  15. Can a Dusty Warm Absorber Model Reproduce the Soft X-ray Spectra of MCG-6-30-15 and Mrk 766?

    CERN Document Server

    Sako, M; Branduardi-Raymont, G; Kaastra, J S; Brinkman, A C; Page, M J; Behar, E; Paerels, F B S; Kinkhabwala, A; Liedahl, D A; Den Herder, J W A

    2003-01-01

    XMM-Newton RGS spectra of MCG-6-30-15 and Mrk 766 exhibit complex discrete structure, which was interpreted by Branduardi-Raymont et al. (2001) as evidence for relativistically broadened Lyman alpha emission from carbon, nitrogen, and oxygen, produced in the innermost regions of an accretion disk around a Kerr black hole. This suggestion was subsequently criticized by Lee et al. (2001), who argued that for MCG-6-30-15 the Chandra HETG spectrum, which partially overlaps the RGS in spectral coverage, is adequately fit by a dusty warm absorber model with no relativistic line emission. We present a reanalysis of the original RGS data sets in terms of the Lee et al. (2001) model, and demonstrate that spectral models consisting of a smooth continuum with ionized and dust absorption alone cannot reproduce the RGS spectra of both objects. The original relativistic line model with warm absorption proposed by Branduardi-Raymont et al. (2001) provides a superior fit to the...

  16. Opening Reproducible Research

    Science.gov (United States)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing in which research papers become available to the general public free of charge; it also refers to a trend in science toward making the act of doing research more open and transparent. When science transforms to open access, we mean not only access to papers, to research data being collected, or to data being generated, but also access to the data used in and the procedures carried out for the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are carried out entirely computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or in nature, computational experiments lend themselves very well to reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns about the data, or fear of embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures for publishing such information and the lack of benefits after publishing it. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access by improving the exchange of, facilitating productive access to, and simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  17. Observational Equivalence of Discrete String Models and Market Models

    NARCIS (Netherlands)

    Kerkhof, F.L.J.; Pelsser, A.

    2002-01-01

    In this paper we show that, contrary to the claim made in Longstaff, Santa-Clara, and Schwartz (2001a) and Longstaff, Santa-Clara, and Schwartz (2001b), discrete string models are not more parsimonious than market models. In fact, they are found to be observationally equivalent. We derive that, for the es

  18. Constraining Cosmological Models with Different Observations

    Science.gov (United States)

    Wei, J. J.

    2016-07-01

    With observations of Type Ia supernovae (SNe Ia), scientists discovered that the Universe is undergoing accelerated expansion, which revealed the existence of dark energy in 1998. Since this remarkable discovery, cosmology has become a hot topic in physics research. Cosmology is a subject that depends strongly on astronomical observations. Therefore, constraining different cosmological models with all kinds of observations is one of the most important tasks in modern cosmology. The goal of this thesis is to investigate cosmology using the latest observations. The observations include SNe Ia, Type Ic superluminous supernovae (SLSNe Ic), gamma-ray bursts (GRBs), angular diameter distances of galaxy clusters, strong gravitational lensing, and age measurements of old passive galaxies, etc. In Chapter 1, we briefly review the research background of cosmology and introduce some cosmological models. We then summarize in more detail the progress on cosmology from all kinds of observations. In Chapter 2, we present the results of our studies on supernova cosmology. The main difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing SN luminosities simultaneously with the parameters of an expansion model of the Universe. We have confirmed that one should optimize all of the parameters by maximum likelihood estimation in any situation where the parameters include an unknown intrinsic dispersion. The commonly used method, which estimates the dispersion by requiring the reduced χ^{2} to equal unity, does not take into account all possible variances among the parameters. We carry out such a comparison of the standard ΛCDM cosmology and the R_{h}=ct Universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its own reduced data very well. Moreover, it is quite evident that SLSNe Ic may be useful
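The likelihood argument summarized above can be sketched with synthetic data. When the intrinsic dispersion sigma_int is unknown, the full likelihood must be maximized jointly, because the ln-variance term penalizes inflating the error budget, unlike the recipe of tuning sigma_int until reduced χ² = 1. The grid search and all numbers below are illustrative assumptions, not the thesis's actual pipeline:

```python
# Fit an unknown intrinsic dispersion by maximum likelihood:
#   -2 ln L = sum_i [ (r_i - mu)^2 / (s_i^2 + sigma_int^2)
#                     + ln(s_i^2 + sigma_int^2) ]
# The ln term is what makes this a proper estimator of sigma_int.
# Synthetic data and a simple grid search, for illustration only.
import math, random

def neg2_log_like(resid, sigma_meas, mu, sigma_int):
    total = 0.0
    for r, s in zip(resid, sigma_meas):
        var = s * s + sigma_int * sigma_int
        total += (r - mu) ** 2 / var + math.log(var)
    return total

def fit(resid, sigma_meas):
    """Joint grid search over the mean offset mu and sigma_int."""
    best = None
    for i in range(101):
        sig = 0.005 * i              # sigma_int in [0, 0.5]
        for j in range(41):
            mu = -0.1 + 0.005 * j    # mu in [-0.1, 0.1]
            val = neg2_log_like(resid, sigma_meas, mu, sig)
            if best is None or val < best[0]:
                best = (val, mu, sig)
    return best[1], best[2]
```

With enough data points the joint fit recovers both the offset and the intrinsic scatter, which the reduced-χ²-equals-one recipe cannot guarantee.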

  19. Observations and Modeling of Solar Flare Atmospheric Dynamics

    Science.gov (United States)

    Li, Y.

    2015-09-01

    loops with different heating functions as inferred from their footpoint UV emission, combined with their different lengths as measured from imaging observations, give rise to different coronal plasma evolution patterns, as revealed in both models and observations. With the same method, we further analyze another C4.7 flare. From AIA imaging observations, we can identify two sets of loops in this event. EIS spectroscopic observations reveal blueshifts at the feet of both sets of loops during the impulsive phase. However, the dynamic evolutions of the two sets of loops are quite different. The first set of loops exhibits blueshifts (~10 km/s) for about 25 minutes followed by redshifts, while the second set shows stronger blueshifts (~20 km/s) which are maintained for about an hour. The long-lasting blueshifts in the second set of loops are indicative of continuous heating. The UV 1600 Å observations by AIA also show that the feet of these loops brighten twice, 15 minutes apart. The first set of loops, on the other hand, brightens only once in the UV band. We construct heating functions of the two sets of loops using spatially resolved UV light curves at their footpoints, and model plasma evolution in these loops with the EBTEL model. The results show that, for the first set of loops, the synthetic EUV light curves from the model compare favorably with the observed light curves in six AIA channels and eight EIS spectral lines, and the computed mean enthalpy flow velocities also agree with the Doppler shifts measured in EIS lines. For the second set of loops, modeled with twice-heating, there are some discrepancies between modeled and observed EUV light curves in low-temperature lines, and the model does not fully reproduce the prolonged blueshift signatures as observed. The prominent discrepancies between model and observations for the second set of loops may be caused by non-uniform heating localized especially at the loop footpoints, which cannot be modeled by the 0D

  20. Observations and Models of Galaxy Assembly Bias

    Science.gov (United States)

    Campbell, Duncan A.

    2017-01-01

    The assembly history of dark matter haloes imparts various correlations between a halo's physical properties and its large-scale environment, i.e. assembly bias. It is common for models of the galaxy-halo connection to assume that galaxy properties are a function of halo mass only, implicitly ignoring how assembly bias may affect galaxies. Recently, programs to model and constrain the degree to which galaxy properties are influenced by assembly bias have been undertaken; however, the extent and character of galaxy assembly bias remain a mystery. Nevertheless, characterizing and modeling galaxy assembly bias is an important step toward understanding galaxy evolution and limiting any systematic effects assembly bias may pose for cosmological measurements using galaxy surveys. I will present work on modeling and constraining the effect of assembly bias on two galaxy properties: stellar mass and star-formation rate. Conditional abundance matching allows these galaxy properties to be tied to halo formation history to a variable degree, making studies of the relative strength of assembly bias possible. Galaxy-galaxy clustering and galactic conformity, the degree to which galaxy color is correlated between neighbors, are sensitive observational measures of galaxy assembly bias. I will show how these measurements can be used to constrain galaxy assembly bias, and the peril of ignoring it.
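The rank-ordering step that (conditional) abundance matching builds on can be sketched as follows. Plain abundance matching assigns the largest stellar mass to the most massive halo and so on down the ranked lists; the conditional variant then reshuffles galaxies at fixed halo mass according to a secondary halo property (e.g. formation time) to dial in assembly bias. The data and implementation here are illustrative assumptions:

```python
# Rank-order (abundance) matching: pair the i-th largest stellar mass
# with the i-th most massive halo. Synthetic data, for illustration only.
import random

def abundance_match(halo_masses, stellar_masses):
    """Return the stellar mass assigned to each halo by matching ranks."""
    # Indices of haloes sorted from least to most massive.
    order = sorted(range(len(halo_masses)), key=lambda i: halo_masses[i])
    gal_sorted = sorted(stellar_masses)
    assigned = [0.0] * len(halo_masses)
    for rank, idx in enumerate(order):
        assigned[idx] = gal_sorted[rank]
    return assigned
```

The assignment is monotone by construction: a more massive halo never receives a smaller stellar mass, which is exactly the "halo mass only" assumption that assembly-bias studies relax.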

  1. Current status of the ability of the GEMS/MACC models to reproduce the tropospheric CO vertical distribution as measured by MOZAIC

    Directory of Open Access Journals (Sweden)

    N. Elguindi

    2010-04-01

    Full Text Available Vertical profiles of CO taken from the MOZAIC aircraft database are used to present (1) a global analysis of CO seasonal averages and interannual variability for the years 2002-2007 and (2) a global validation of CO estimates produced by the MACC models for 2004, including an assessment of their ability to transport pollutants originating from the Alaskan/Canadian wildfires. Seasonal averages and interannual variability from several MOZAIC sites representing different regions of the world show that CO concentrations are highest and most variable during the winter season. The inter-regional variability is significant, with concentrations increasing eastward from Europe to Japan. The impact of the intense boreal fires, particularly in Russia, during the fall of 2002 on Northern Hemisphere CO concentrations throughout the troposphere is well represented by the MOZAIC data.

    A global validation of the GEMS/MACC GRG models, which include three stand-alone CTMs (MOZART, MOCAGE and TM5) and the coupled ECMWF Integrated Forecasting System (IFS/MOZART) model with and without MOPITT CO data assimilation, shows that the models have a tendency to underestimate CO. The models perform best in Europe and the US, where biases range from 0 to -25% in the free troposphere and from 0 to -50% at the surface and in the boundary layer (BL). The biases are largest in the winter and during the daytime when emissions are highest, indicating that current inventories are too low. Data assimilation is shown to reduce biases by up to 25% in some regions. The models are not able to reproduce well the CO plumes originating from the Alaskan/Canadian wildfires at downwind locations in the eastern US and Europe, not even with assimilation. Sensitivity tests reveal that this is mainly due to deficiencies in the fire emissions inventory and injection height.

  2. Reproducible research in vadose zone sciences

    Science.gov (United States)

    A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...

  3. Spatial characteristics of the tropical cloud systems: comparison between model simulation and satellite observations

    OpenAIRE

    Guang J. Zhang; Zurovac-Jevtic, Dance; Erwin R Boer

    2011-01-01

    A Lagrangian cloud classification algorithm is applied to the cloud fields in the tropical Pacific simulated by a high-resolution regional atmospheric model. The purpose of this work is to assess the model's ability to reproduce the observed spatial characteristics of the tropical cloud systems. The cloud systems are broadly grouped into three categories: deep clouds, mid-level clouds and low clouds. The deep clouds are further divided into mesoscale convective systems and non-mesoscale convective...

  4. Lagrangian Observations and Modeling of Marine Larvae

    Science.gov (United States)

    Paris, Claire B.; Irisson, Jean-Olivier

    2017-04-01

    Just within the past two decades, studies on the early-life history stages of marine organisms have led to new paradigms in population dynamics. Unlike passive plant seeds that are transported by the wind or by animals, marine larvae have motor and sensory capabilities. As a result, marine larvae have a tremendous capacity to actively influence their dispersal. This is continuously revealed as we develop new techniques to observe larvae in their natural environment and begin to understand their ability to detect cues throughout ontogeny, process the information, and use it to ride ocean currents and navigate their way back home, or to a place like home. We present innovative in situ and numerical modeling approaches developed to understand the underlying mechanisms of larval transport in the ocean. We describe a novel concept of a Lagrangian platform, the Drifting In Situ Chamber (DISC), designed to observe and quantify complex larval behaviors and their interactions with the pelagic environment. We give a brief history of larval ecology research with the DISC, showing that swimming is directional in most species, guided by cues as diverse as the position of the sun or the underwater soundscape, and even that (unlike humans!) larvae orient better and swim faster when moving as a group. The observed Lagrangian behaviors of individual larvae are directly implemented in the Connectivity Modeling System (CMS), an open source Lagrangian tracking application. Simulations help demonstrate the impact that larval behavior has compared to passive Lagrangian trajectories. These methodologies are already the basis of exciting findings and are promising tools for documenting and simulating the behavior of other small pelagic organisms, forecasting their migration in a changing ocean.

  5. Modern observations and models of Solar flares

    Science.gov (United States)

    Gritsyk, Pavel; Somov, Boris

    As is well known, fast particles propagating along a flare loop generate bremsstrahlung hard X-ray emission and gyrosynchrotron microwave emission. We present a self-consistent kinetic description of the propagation of accelerated particles. The key point of this approach is taking into account the effect of the reverse current. In our two-dimensional model, the electric field of the reverse current strongly influences the beam of accelerated particles: it decelerates some of the electrons in the beam and turns the others back without significant energy loss. An exact analytical solution of the corresponding kinetic equation with the Landau collision integral was found. Using the derived electron distribution function, we calculate the evolution of the energy spectrum and the plasma heating, the coronal microwave emission, and the characteristics of hard X-ray emission in the corona and in the chromosphere. All results are compared with modern high-precision space observations.

  6. The Whisper of Deep Basins: Observation & Modelling

    Science.gov (United States)

    Burjanek, J.; Ermert, L. A.; Poggi, V.; Michel, C.; Fäh, D.

    2013-12-01

    Free oscillations of Earth have been used for a long time to retrieve information about the deep Earth's interior. On a much smaller scale, standing waves develop in deep sedimentary basins and can possibly be used in a similar way. The sensitivity of these waves to subsurface properties makes them a potential source of information about the deep basin characteristics. We investigated the sequence of two-dimensional resonance modes occurring in the Rhône Valley, a strongly over-deepened, glacially carved basin with a sediment fill reaching up to 900 m thickness. We obtained two synchronous linear-array recordings of ambient vibrations and analysed them with two different processing techniques. First, both 2D resonance frequencies and their corresponding modal shapes were identified by frequency-domain decomposition of the signal cross-spectral density matrix. Second, time-frequency polarization analysis was applied to support the addressing of the modes and to determine the relative contributions of the vertical and horizontal components of the fundamental in-plane mode. Simplified 2-D finite element models were then used to support the interpretation of the observations. We were able to identify several resonance modes, including previously unmeasured higher modes, at all investigated locations in the valley. Good agreement was found between the results of our study and previous studies, between the two processing techniques, and between observed and modelled results. Finally, a parametric study was performed to qualitatively assess the sensitivity of the modes' order, shape and frequency to subsurface properties like bedrock geometry, Poisson's ratio and shear wave velocity of the sediments. We concluded that the sequence of modes as they appear by frequency depends strongly on subsurface properties. Therefore, addressing the higher modes can be done reliably only with prior information on the sediment structure.

  7. Planck intermediate results XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.

    2016-01-01

    We present all-sky modelling of the high resolution Planck, IRAS, and WISE infrared (IR) observations using the physical dust model presented by Draine & Li in 2007 (DL, ApJ, 657, 810). We study the performance and results of this model, and discuss implications for future dust modelling. The present work extends the DL dust modelling carried out on nearby galaxies using Herschel and Spitzer data to Galactic dust emission. We employ the DL dust model to generate maps of the dust mass surface density Sigma(Md), the dust optical extinction A(V), and the starlight intensity heating the bulk of the dust, parametrized by U-min. The DL model reproduces the observed spectral energy distribution (SED) satisfactorily over most of the sky, with small deviations in the inner Galactic disk and in low ecliptic latitude areas, presumably due to zodiacal light contamination. In the Andromeda galaxy (M31...

  8. Observing and Modeling Earth's Energy Flows

    Science.gov (United States)

    Stevens, Bjorn; Schwartz, Stephen E.

    2012-07-01

    This article reviews, from the authors' perspective, progress in observing and modeling energy flows in Earth's climate system. Emphasis is placed on the state of understanding of Earth's energy flows and their susceptibility to perturbations, with particular emphasis on the roles of clouds and aerosols. More accurate measurements of the total solar irradiance and the rate of change of ocean enthalpy help constrain individual components of the energy budget at the top of the atmosphere to within ±2 W m-2. The measurements demonstrate that Earth reflects substantially less solar radiation and emits more terrestrial radiation than was believed even a decade ago. Active remote sensing is helping to constrain the surface energy budget, but new estimates of downwelling surface irradiance that benefit from such methods are proving difficult to reconcile with existing precipitation climatologies. Overall, the energy budget at the surface is much more uncertain than at the top of the atmosphere. A decade of high-precision measurements of the energy budget at the top of the atmosphere is providing new opportunities to track Earth's energy flows on timescales ranging from days to years, and at very high spatial resolution. The measurements show that the principal limitation in the estimate of secular trends now lies in the natural variability of the Earth system itself. The forcing-feedback-response framework, which has developed to understand how changes in Earth's energy flows affect surface temperature, is reviewed in light of recent work that shows fast responses (adjustments) of the system are central to the definition of the effective forcing that results from a change in atmospheric composition. In many cases, the adjustment, rather than the characterization of the compositional perturbation (associated, for instance, with changing greenhouse gas concentrations, or aerosol burdens), limits accurate determination of the radiative forcing. 
Changes in clouds contribute

  9. Reinjected water return at Miravalles geothermal reservoir, Costa Rica: Numerical modelling and observations

    Energy Technology Data Exchange (ETDEWEB)

    Parini, Mauro; Acuna, Jorge A.; Laudiano, Michele

    1996-01-24

    The first 55 MW power plant at Miravalles started operation in March 1994. During the first few months of production, a gradual increase in chloride content was observed in some production wells. The cause was assumed to be a rapid return of injectate from two injection wells located fairly near the main production area. A tracer test was performed and showed a relatively rapid breakthrough, confirming this assumption. Numerical modeling was then carried out to try to reproduce the observed behavior. The reservoir was modelled with an idealized three-dimensional network of fractures embedded in a low-permeability matrix. The “two waters” feature of the TOUGH2 simulator was used. The numerical simulation showed good agreement with observations. A “porous medium” model with equivalent hydraulic characteristics was unable to reproduce the observations. The fractured model, when applied to investigate the expected mid- and long-term behavior, indicated a reservoir cooling risk associated with the present injection scheme. Work is currently underway to modify this scheme.

  10. PNe as observational constraints in chemical evolution models for NGC 6822

    CERN Document Server

    Hernandez-Martinez, Liliana; Peña, Miriam; Peimbert, Manuel

    2011-01-01

    Chemical evolution models are useful for understanding the formation and evolution of stars and galaxies. Model predictions become more robust as more observational constraints are used. We present chemical evolution models for the dwarf irregular galaxy NGC 6822, using chemical abundances of old and young planetary nebulae (PNe) and H II regions as observational constraints. Two sets of chemical abundances, one derived from collisionally excited lines (CELs) and one from recombination lines (RLs), are used. We try to use our models as a tool to discriminate between the two procedures for abundance determination. In our chemical evolution code, the chemical contribution of low- and intermediate-mass stars is time delayed, while the contribution of massive stars follows the instantaneous recycling approximation. Our models have two main free parameters: the mass-loss rate of a well-mixed outflow and the upper mass limit, $M_{up}$, of the initial mass function (IMF). To reproduce the gaseous ...

  11. General Description of Fission Observables: GEF Model Code

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K.-H. [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Amouroux, C. [CEA, DSM-Saclay (France); Schmitt, C., E-mail: schmitt@ganil.fr [GANIL, Bd. Henri Becquerel, B.P. 55027, F-14076 Caen Cedex 05 (France)

    2016-01-15

    consistent with the collective enhancement of the level density. The exchange of excitation energy and nucleons between the nascent fragments on the way from saddle to scission is estimated according to statistical mechanics. As a result, excitation energy and unpaired nucleons are predominantly transferred to the heavy fragment in the superfluid regime. This description reproduces some rather peculiar observed features of the prompt-neutron multiplicities and of the even-odd effect in fission-fragment Z distributions. For completeness, some conventional descriptions are used for calculating pre-equilibrium emission, fission probabilities and statistical emission of neutrons and gamma radiation from the excited fragments. Preference is given to simple models that can also be applied to exotic nuclei, over more sophisticated models that need precise empirical input of nuclear properties, e.g. spectroscopic information. The approach reveals a high degree of regularity and provides considerable insight into the physics of the fission process. Fission observables can be calculated with a precision that complies with the needs for applications in nuclear technology, without specific adjustments to measured data of individual systems. The GEF executable runs out of the box with no need for entering any empirical data. This unique feature is of great importance, because it will never be possible to measure all the systems and energies of potential significance for fundamental and applied science. The relevance of the approach for examining the consistency of experimental results and for evaluating nuclear data is demonstrated.

  12. Modelled and observed changes in aerosols and surface solar radiation over Europe between 1960 and 2009

    Science.gov (United States)

    Turnock, S. T.; Spracklen, D. V.; Carslaw, K. S.; Mann, G. W.; Woodhouse, M. T.; Forster, P. M.; Haywood, J.; Johnson, C. E.; Dalvi, M.; Bellouin, N.; Sanchez-Lorenzo, A.

    2015-08-01

    Substantial changes in anthropogenic aerosols and precursor gas emissions have occurred over recent decades due to the implementation of air pollution control legislation and economic growth. The response of atmospheric aerosols to these changes and the impact on climate are poorly constrained, particularly in studies using detailed aerosol chemistry-climate models. Here we compare the HadGEM3-UKCA (Hadley Centre Global Environment Model-United Kingdom Chemistry and Aerosols) coupled chemistry-climate model for the period 1960-2009 against extensive ground-based observations of sulfate aerosol mass (1978-2009), total suspended particle matter (SPM, 1978-1998), PM10 (1997-2009), aerosol optical depth (AOD, 2000-2009), aerosol size distributions (2008-2009) and surface solar radiation (SSR, 1960-2009) over Europe. The model underestimates observed sulfate aerosol mass (normalised mean bias factor (NMBF) = -0.4), SPM (NMBF = -0.9), PM10 (NMBF = -0.2), aerosol number concentrations (N30 NMBF = -0.85; N50 NMBF = -0.65; and N100 NMBF = -0.96) and AOD (NMBF = -0.01) but slightly overpredicts SSR (NMBF = 0.02). Trends in aerosol over the observational period are well simulated by the model, with observed (simulated) changes in sulfate of -68 % (-78 %), SPM of -42 % (-20 %), PM10 of -9 % (-8 %) and AOD of -11 % (-14 %). Discrepancies in the magnitude of simulated aerosol mass do not affect the ability of the model to reproduce the observed SSR trends. The positive change in observed European SSR (5 %) during 1990-2009 ("brightening") is better reproduced by the model when aerosol radiative effects (ARE) are included (3 %), compared to simulations where ARE are excluded (0.2 %). The simulated top-of-the-atmosphere aerosol radiative forcing over Europe under all-sky conditions increased by > 3.0 W m-2 during the period 1970-2009 in response to changes in anthropogenic emissions and aerosol concentrations.
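The normalised mean bias factor (NMBF) quoted in the abstract is not restated there; assuming the symmetric definition in common use in model evaluation, it can be sketched as follows. Over- and under-predictions by the same factor then score values of equal magnitude and opposite sign:

```python
# Normalised mean bias factor, assuming the common symmetric definition:
#   NMBF = M/O - 1  if the model mean M >= observed mean O,
#   NMBF = 1 - O/M  otherwise.
def nmbf(model, obs):
    m = sum(model) / len(model)
    o = sum(obs) / len(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m
```

For example, a model that underestimates by a factor of two scores -1.0, mirroring a factor-of-two overestimate at +1.0, so the NMBF = -0.9 reported above for SPM indicates a severe underestimate.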

  13. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy-to-use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  14. Evaluation of guidewire path reproducibility.

    Science.gov (United States)

    Schafer, Sebastian; Hoffmann, Kenneth R; Noël, Peter B; Ionita, Ciprian N; Dmochowski, Jacek

    2008-05-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, which acts as a monorail for all devices delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during the interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final guidewire path in vessel phantom models as a function of user, material and geometry. Three vessel phantoms (vessel diameters approximately 4 mm) with tortuosity similar to the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After the guidewire had been placed, rotational c-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512(3), 0.288 mm voxel size). The reconstructed volumes were aligned. The centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root mean square difference of the repeated placements was 0.17 +/- 0.02 mm (plastic-coated guidewire), 0.73 +/- 0.55 mm (steel guidewire) and 1.15 +/- 0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position.
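The RMS path differences reported above can be computed from the registered centerlines. A minimal sketch, assuming the two centerlines have already been aligned and resampled to the same number of corresponding points (the function name is illustrative, not from the paper):

```python
import numpy as np

def rms_path_difference(path_a, path_b):
    """Root-mean-square of pointwise distances between two guidewire
    centerlines, given as (N, 3) arrays of corresponding 3-D points in mm.
    Assumes the reconstructed volumes were already registered and each
    centerline resampled to N corresponding points."""
    a = np.asarray(path_a, dtype=float)
    b = np.asarray(path_b, dtype=float)
    d = np.linalg.norm(a - b, axis=1)      # distance at each sample point
    return float(np.sqrt(np.mean(d ** 2)))
```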

  15. Revisiting the Rigidly Rotating Magnetosphere model for sigma Ori E. I. Observations and Data Analysis

    CERN Document Server

    Oksala, M E; Townsend, R H D; Owocki, S P; Kochukhov, O; Neiner, C; Alecian, E; Grunhut, J

    2011-01-01

    We have obtained 18 new high-resolution spectropolarimetric observations of the B2Vp star sigma Ori E with both the Narval and ESPaDOnS spectropolarimeters. The aim of these observations is to test, with modern data, the assumptions of the Rigidly Rotating Magnetosphere (RRM) model of Townsend & Owocki (2005), applied to the specific case of sigma Ori E by Townsend et al. (2005). This model includes a substantially offset dipole magnetic field configuration, and approximately reproduces previous observational variations in longitudinal field strength, photometric brightness, and Halpha emission. We analyze new spectroscopy, including H I, He I, C II, Si III and Fe III lines, confirming the diversity of variability in photospheric lines, as well as the double S-wave variation of circumstellar hydrogen. Using the multiline analysis method of Least-Squares Deconvolution (LSD), new, more precise longitudinal magnetic field measurements reveal a substantial variance between the shapes of the observed and RRM m...

  16. Observational signatures of holographic models of inflation

    NARCIS (Netherlands)

    P. McFadden; K. Skenderis

    2009-01-01

    We discuss the phenomenology of recently proposed holographic models of inflation, in which the very early universe is non-geometric and is described by a dual three-dimensional quantum field theory (QFT). We analyze models determined by a specific class of dual QFTs and show that they have the foll

  17. Observations and modelling of snow avalanche entrainment

    Directory of Open Access Journals (Sweden)

    B. Sovilla

    2002-01-01

    Full Text Available In this paper full-scale avalanche dynamics measurements from the Italian Pizzac and Swiss Vallée de la Sionne test sites are used to develop a snowcover entrainment model. A detailed analysis of three avalanche events shows that snowcover entrainment at the avalanche front appears to dominate over bed erosion at the basal sliding surface. Furthermore, the distribution of mass within the avalanche body is primarily a function of basal friction. We show that the mass distribution in the avalanche changes the flow dynamics significantly. Two different dynamical models, the Swiss Voellmy-fluid model and the Norwegian NIS model, are used to back-calculate the events. Various entrainment methods are investigated and compared to measurements. We demonstrate that the Norwegian NIS model is clearly better able to simulate the events once snow entrainment has been included in the simulations.

  18. Submillimetre continuum emission from Class 0 sources: Theory, Observations, and Modelling

    CERN Document Server

    Rengel, Miriam; Hodapp, Klaus; Froebrich, Dirk; Wolf, Sebastian; Eisloeffel, Jochen

    2004-01-01

    We report on a study of the thermal dust emission of the circumstellar envelopes of a sample of Class 0 sources. The physical structure (geometry, radial intensity profile, spatial temperature and spectral energy distribution) and properties (mass, size, bolometric luminosity (L_bol) and temperature (T_bol), and age) of Class 0 sources are derived here in an evolutionary context. This is done by combining SCUBA imaging at 450 and 850 microm of the thermal dust emission of envelopes of Class 0 sources in the Perseus and Orion molecular cloud complexes with a model of the envelope, with the implementation of techniques like blackbody fitting and radiative transfer calculations of dusty envelopes, and with the Smith evolutionary model for protostars. The modelling results obtained here confirm the validity of a simple spherically symmetric model envelope, and the assumptions about density and dust distributions following the standard envelope model. The spherical model reproduces reasonably well the observe...

  19. Observational Tests of Planet Formation Models

    CERN Document Server

    Sozzetti, A; Latham, D W; Carney, B W; Laird, J B; Stefanik, R P; Boss, A P; Charbonneau, D; O'Donovan, F T; Holman, M J; Winn, J N

    2007-01-01

    We summarize the results of two experiments to address important issues related to the correlation between planet frequencies and properties and the metallicity of the hosts. Our results can usefully inform formation, structural, and evolutionary models of gas giant planets.

  20. Bayesian Network Models for Local Dependence among Observable Outcome Variables

    Science.gov (United States)

    Almond, Russell G.; Mulder, Joris; Hemat, Lisa A.; Yan, Duanli

    2009-01-01

    Bayesian network models offer a large degree of flexibility for modeling dependence among observables (item outcome variables) from the same task, which may be dependent. This article explores four design patterns for modeling locally dependent observations: (a) no context--ignores dependence among observables; (b) compensatory context--introduces…

  1. Hydrological extremes in China during 1971-2000: from observations and models

    Science.gov (United States)

    Liu, Xingcai; He, Jun; Mu, Mengfei; Tang, Qiuhong

    2016-04-01

    The hydrological cycle in China has been greatly affected by both significant climate change and human disturbance since the 1970s. The ISI-MIP2 project provides a framework for studying these effects by using multiple hydrological models to reproduce the global hydrological cycle while accounting for both climate change and human impacts. However, the multimodel simulations still need validation in regional applications. In this study, we evaluate the multimodel simulations of river flow using monthly observations from about 300 hydrological stations in China during the 1970-2000 period. The Nash-Sutcliffe (NS) coefficient and mean relative error (MRE) are computed for each station to measure the performance of the multimodel simulations. Trends in river flow are also compared between observations and simulations. On the basis of this overall comparison, we evaluate the hydrological extremes derived from observations and simulations. The hydrological extremes are identified using a standardized discharge index (SDI) based on monthly river flow, which resembles the standardized precipitation index (SPI). The performance of the multimodel simulations in reproducing hydrological extremes shows regional differences, which seem to be closely associated with the intensity of human activities in the basins. The uncertainties in the multimodel simulations, arising from both the hydrological models and the forcing data, are investigated, and the uncertainty from human-impact-related inputs (irrigated area and reservoir storage) is discussed with respect to reported data in China.
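The Nash-Sutcliffe coefficient used above has a standard definition: one minus the ratio of the simulation's squared error to the variance of the observations. A minimal sketch:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation performs no better than the observed mean, and negative
    values mean it performs worse than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```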

  2. A Study of Long-Term fMRI Reproducibility Using Data-Driven Analysis Methods.

    Science.gov (United States)

    Song, Xiaomu; Panych, Lawrence P; Chou, Ying-Hui; Chen, Nan-Kuei

    2014-12-01

    The reproducibility of functional magnetic resonance imaging (fMRI) is important for fMRI-based neuroscience research and clinical applications. Previous studies show considerable variation in the amplitude and spatial extent of fMRI activation across repeated sessions on individual subjects, even with identical experimental paradigms and imaging conditions. Most existing fMRI reproducibility studies were limited by time duration and data analysis techniques. In particular, the assessment of reproducibility is complicated by the fact that fMRI results may depend on the data analysis techniques used in reproducibility studies. In this work, long-term fMRI reproducibility was investigated with a focus on data analysis methods. Two spatial smoothing techniques, a wavelet-domain Bayesian method and Gaussian smoothing, were evaluated in terms of their effects on long-term reproducibility. A multivariate support vector machine (SVM)-based method was used to identify active voxels and compared to a widely used general linear model (GLM)-based method at the group level. The reproducibility study was performed using multisession fMRI data acquired from eight healthy adults over a period of 1.5 years. Three regions of interest (ROI) related to a motor task were defined, based upon which the long-term reproducibility was examined. Experimental results indicate that different spatial smoothing techniques may lead to different reproducibility measures, and that wavelet-based spatial smoothing combined with SVM-based activation detection is a good combination for reproducibility studies. On the basis of the ROIs and multiple numerical criteria, we observed moderate to substantial within-subject long-term reproducibility. Reasonable long-term reproducibility was also observed in the inter-subject study. It was found that short-term reproducibility is usually higher than long-term reproducibility. Furthermore, the results indicate that brain
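The abstract does not list its numerical criteria; one common way to quantify session-to-session overlap of activation maps in such studies is the Dice coefficient, sketched here as an illustration rather than the paper's actual measure:

```python
import numpy as np

def dice_overlap(map_a, map_b):
    """Dice coefficient between two binary activation maps:
    2|A & B| / (|A| + |B|); 1 = identical maps, 0 = no overlap."""
    a = np.asarray(map_a, dtype=bool)
    b = np.asarray(map_b, dtype=bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))
```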

  3. Television Advertising and Children's Observational Modeling.

    Science.gov (United States)

    Atkin, Charles K.

    This paper assesses advertising effects on children and adolescents from a social learning theory perspective, emphasizing imitative performance of vicariously reinforced consumption stimuli. The basic elements of social psychologist Albert Bandura's modeling theory are outlined. Then specific derivations from the theory are applied to the problem…

  4. Bicycle Rider Control: Observations, Modeling & Experiments

    NARCIS (Netherlands)

    Kooijman, J.D.G.

    2012-01-01

    Bicycle designers traditionally develop bicycles based on experience and trial and error. Adopting modern engineering tools to model bicycle and rider dynamics and control is another method for developing bicycles. This method has the potential to evaluate the complete design space, and thereby dev

  5. Production process reproducibility and product quality consistency of transient gene expression in HEK293 cells with anti-PD1 antibody as the model protein.

    Science.gov (United States)

    Ding, Kai; Han, Lei; Zong, Huifang; Chen, Junsheng; Zhang, Baohong; Zhu, Jianwei

    2017-03-01

    Demonstration of reproducibility and consistency of process and product quality is one of the most crucial issues in using transient gene expression (TGE) technology for biopharmaceutical development. In this study, we challenged the production consistency of TGE by expressing nine batches of recombinant IgG antibody in human embryonic kidney 293 cells to evaluate reproducibility including viable cell density, viability, apoptotic status, and antibody yield in cell culture supernatant. Product quality including isoelectric point, binding affinity, secondary structure, and thermal stability was assessed as well. In addition, major glycan forms of antibody from different batches of production were compared to demonstrate glycosylation consistency. Glycan compositions of the antibody harvested at different time periods were also measured to illustrate N-glycan distribution over the culture time. From the results, it has been demonstrated that different TGE batches are reproducible from lot to lot in overall cell growth, product yield, and product qualities including isoelectric point, binding affinity, secondary structure, and thermal stability. Furthermore, major N-glycan compositions are consistent among different TGE batches and conserved during cell culture time.

  6. Contextual sensitivity in scientific reproducibility.

    Science.gov (United States)

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researchers' degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project according to the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.

  7. A New Hybrid Model Rotor Flux Observer and Its Application

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new hybrid model rotor flux observer, based on a new voltage model, is presented. First, the voltage model of an induction machine was constructed using the modeling method discussed in this paper, and the current model with flux feedback was adopted in the flux observer. Second, the two models were combined via a filter to establish the rotor flux observer. In the M-T synchronous coordinate system, the observer was analyzed theoretically and several important functions were derived. A comparison between the observer and the traditional models was made using Matlab software. The simulation results show that the observer model has better performance than the traditional models.
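Combining two flux estimates "via a filter" as described above resembles a complementary filter: low-pass the current-model estimate (accurate at low speed) and high-pass the voltage-model estimate (accurate at high speed), then sum. A minimal discrete-time sketch, offered as an illustration of the general technique and not the paper's actual implementation:

```python
def complementary_blend(current_model_flux, voltage_model_flux, alpha=0.1):
    """Blend two flux-estimate sequences with first-order filters:
    low-pass the current-model estimate, high-pass the voltage-model
    estimate, and sum. alpha (0 < alpha < 1) sets the crossover;
    all names here are illustrative."""
    out = []
    lp = current_model_flux[0]   # low-pass state
    hp = 0.0                     # high-pass state
    prev_v = voltage_model_flux[0]
    for c, v in zip(current_model_flux, voltage_model_flux):
        lp = lp + alpha * (c - lp)            # low-pass of current model
        hp = (1 - alpha) * (hp + v - prev_v)  # high-pass of voltage model
        prev_v = v
        out.append(lp + hp)
    return out
```

In steady state the high-pass branch decays to zero, so the output tracks the current-model estimate; rapid changes pass through the voltage-model branch.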

  8. Evaluation of Aerosol-cloud Interaction in the GISS Model E Using ARM Observations

    Science.gov (United States)

    DeBoer, G.; Bauer, S. E.; Toto, T.; Menon, Surabi; Vogelmann, A. M.

    2013-01-01

    Observations from the US Department of Energy's Atmospheric Radiation Measurement (ARM) program are used to evaluate the ability of the NASA GISS ModelE global climate model to reproduce observed interactions between aerosols and clouds. Included in the evaluation are comparisons of basic meteorology and aerosol properties, droplet activation, effective radius parameterizations, and surface-based evaluations of aerosol-cloud interactions (ACI). Differences between the simulated and observed ACI are generally large, but these differences may result partially from the vertical distribution of aerosol in the model, rather than from the representation of physical processes governing the interactions between aerosols and clouds. Compared to the current observations, ModelE often features elevated droplet concentrations for a given aerosol concentration, indicating that the activation parameterizations used may be too aggressive. Additionally, parameterizations for effective radius commonly used in models were tested using ARM observations, and no parameterization was clearly superior for the cases reviewed here. This lack of consensus is demonstrated to result in potentially large, statistically significant differences in surface radiative budgets, should one parameterization be chosen over another.

  9. Observation Constraints on the Simplified GCG Model

    Institute of Scientific and Technical Information of China (English)

    DONG Su-Mei; WU Pu-Xun

    2007-01-01

    A simplified version of the generalized Chaplygin gas (GCG) as a dark energy model is studied. By using the latest 162 ESSENCE type Ia supernovae (SNe Ia) data, 30 high-redshift SNe Ia data, the baryon acoustic oscillation peak from SDSS and the CMB data from WMAP3, a strong constraint on this simplified GCG model is obtained. At the 95.4% confidence level we obtain 0.21 ≤ Ωm ≤ 0.31 and 0.994 ≤ α ≤ 1.0, with the best fit Ωm = 0.25 and α = 1. This best-fit scenario corresponds to an accelerating universe with q0 ≈ -0.65 and zT ≈ 0.81 (the redshift of the cosmic phase transition from deceleration to acceleration).

  10. Using Islands to Systematically Compare Satellite Observations to Models and Theory

    Science.gov (United States)

    Sherwood, S. C.; Robinson, F.; Gerstle, D.; Liu, C.; Kirshbaum, D. J.; Hernandez-Deckers, D.; Li, Y.

    2012-12-01

    Satellite observations are our most voluminous, and perhaps most important, source of information on atmospheric convective behavior. However, testing models against them is quite difficult, especially with satellites in low Earth orbits, due to several problems including infrequent sampling, the chaotic nature of convection (which means actual storms will always differ from modeled ones even with perfect models), model initialization, and uncertain boundary conditions. This talk presents work using forcing by islands of different sizes as a strategy for overcoming these problems. We examine the systematic dependence of different characteristics of convection on island size, as a target for simple theories of convection and the sea breeze, and for CRMs (cloud resolving models). We find some nonintuitive trends of behavior with size -- some of which we can reproduce with the WRF CRM, and some which we cannot.

  11. Observations and modelling of snow avalanche entrainment

    OpenAIRE

    2002-01-01

    In this paper full scale avalanche dynamics measurements from the Italian Pizzac and Swiss Vallée de la Sionne test sites are used to develop a snowcover entrainment model. A detailed analysis of three avalanche events shows that snowcover entrainment at the avalanche front appears to dominate over bed erosion at the basal sliding surface. Furthermore, the distribution of mass within the avalanche body is primarily a function of basal fric...

  12. Structural equation modeling for observational studies

    Science.gov (United States)

    Grace, J.B.

    2008-01-01

    Structural equation modeling (SEM) represents a framework for developing and evaluating complex hypotheses about systems. This method of data analysis differs from conventional univariate and multivariate approaches familiar to most biologists in several ways. First, SEMs are multiequational and capable of representing a wide array of complex hypotheses about how system components interrelate. Second, models are typically developed based on theoretical knowledge and designed to represent competing hypotheses about the processes responsible for data structure. Third, SEM is conceptually based on the analysis of covariance relations. Most commonly, solutions are obtained using maximum-likelihood solution procedures, although a variety of solution procedures are used, including Bayesian estimation. Numerous extensions give SEM a very high degree of flexibility in dealing with nonnormal data, categorical responses, latent variables, hierarchical structure, multigroup comparisons, nonlinearities, and other complicating factors. Structural equation modeling allows researchers to address a variety of questions about systems, such as how different processes work in concert, how the influences of perturbations cascade through systems, and about the relative importance of different influences. I present 2 example applications of SEM, one involving interactions among lynx (Lynx pardinus), mongooses (Herpestes ichneumon), and rabbits (Oryctolagus cuniculus), and the second involving anuran species richness. Many wildlife ecologists may find SEM useful for understanding how populations function within their environments. Along with the capability of the methodology comes a need for care in the proper application of SEM.

  13. Cassini UVIS Observations of the Io Plasma Torus. IV. Modeling Temporal and Azimuthal Variability

    CERN Document Server

    Steffl, A J; Bagenal, F

    2007-01-01

    In this fourth paper in a series, we present the results of our efforts to model the remarkable temporal and azimuthal variability of the Io plasma torus during the Cassini encounter with Jupiter. The long-term (months) temporal variation in the average torus composition observed by the Cassini Ultraviolet Imaging Spectrograph (UVIS) can be modeled by supposing a factor of ~4 increase in the amount of material supplied to the extended neutral clouds that are the source of torus plasma, followed by a gradual decay to more "typical" values. On shorter timescales, the observed 10.07-hour torus periodicity and azimuthal variation in plasma composition, including its surprising modulation with System III longitude, is reproduced by our model using the superposition of two azimuthal variations of suprathermal electrons: a primary hot electron variation that slips 12.5 degrees/day relative to the Jovian magnetic field and a secondary variation that remains fixed in System III longitude.

  14. Reproducibility, controllability, and optimization of LENR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, David J. [The George Washington University, Washington DC 20052 (United States)

    2006-07-01

    Low-energy nuclear reaction (LENR) measurements are significantly, and increasingly, reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments. The paper concludes by underlining that it is now clear that demands for reproducible experiments in the early years of LENR research were premature. In fact, one can argue that irreproducibility should be expected for early experiments in a complex new field. As emphasized in the paper, and as has often happened in the history of science, experimental and theoretical progress can take decades. It is likely to be many years before investments in LENR experiments will yield significant returns, even for successful research programs. However, it is clear that a fundamental understanding of the anomalous effects observed in numerous experiments would significantly increase reproducibility, improve controllability, enable optimization of processes, and accelerate the economic viability of LENR.

  15. Hyperbolic L2-modules with Reproducing Kernels

    Institute of Scientific and Technical Information of China (English)

    David EELBODE; Frank SOMMEN

    2006-01-01

    In this paper, the Dirac operator on the Klein model for hyperbolic space is considered. A function space containing L2-functions on the sphere S^(m-1) in R^m, which are boundary values of solutions for this operator, is defined, and it is proved that this gives rise to a Hilbert module with a reproducing kernel.
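For context, the defining property of a reproducing kernel, stated here for the scalar Hilbert-space case (the Hilbert-module setting of the paper replaces the inner product with a Clifford-algebra-valued one):

```latex
f(x) = \langle f,\, K(\cdot, x) \rangle
\qquad \text{for all } f \in \mathcal{H},\ x \in \Omega,
```

where K(·, x) itself belongs to the space H for each fixed x, so evaluation at a point is a bounded linear functional.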

  16. 3-D microphysical model studies of Arctic denitrification: comparison with observations

    Directory of Open Access Journals (Sweden)

    S. Davies

    2005-01-01

    Full Text Available Simulations of Arctic denitrification using a 3-D chemistry-microphysics transport model are compared with observations for the winters 1994/1995, 1996/1997 and 1999/2000. The model of Denitrification by Lagrangian Particle Sedimentation (DLAPSE) couples the full chemical scheme of the 3-D chemical transport model, SLIMCAT, with a nitric acid trihydrate (NAT) growth and sedimentation scheme. We use observations from the Microwave Limb Sounder (MLS) and Improved Limb Atmospheric Sounder (ILAS) satellite instruments, the balloon-borne Michelson Interferometer for Passive Atmospheric Sounding (MIPAS-B), and the in situ NOy instrument on-board the ER-2. As well as directly comparing model results with observations, we also assess the extent to which these observations are able to validate the modelling approach taken. For instance, in 1999/2000 the model captures the temporal development of denitrification observed by the ER-2 from late January into March. However, in this winter the vortex was already highly denitrified by late January, so the observations do not provide a strong constraint on the modelled rate of denitrification. The model also reproduces the MLS observations of denitrification in early February 2000. In 1996/1997 the model captures the timing and magnitude of denitrification as observed by ILAS, although the lack of observations north of ~67° N makes it difficult to constrain the actual timing of onset. The comparison for this winter does not support previous conclusions that denitrification must be caused by an ice-mediated process. In 1994/1995 the model notably underestimates the magnitude of denitrification observed during a single balloon flight of the MIPAS-B instrument. Agreement between model and MLS HNO3 at 68 hPa in mid-February 1995 was significantly better. Sensitivity tests show that a 1.5 K overall decrease in vortex temperatures or a factor 4 increase in assumed NAT nucleation rates produce the best

  17. 3-D microphysical model studies of Arctic denitrification: comparison with observations

    Directory of Open Access Journals (Sweden)

    S. Davies

    2005-01-01

    Full Text Available Simulations of Arctic denitrification using a 3-D chemistry-microphysics transport model are compared with observations for the winters 1994/95, 1996/97 and 1999/2000. The model of Denitrification by Lagrangian Particle Sedimentation (DLAPSE) couples the full chemical scheme of the 3-D chemical transport model, SLIMCAT, with a nitric acid trihydrate (NAT) growth and sedimentation scheme. We use observations from the Microwave Limb Sounder (MLS) and Improved Limb Atmospheric Sounder (ILAS) satellite instruments, the balloon-borne Michelson Interferometer for Passive Atmospheric Sounding (MIPAS-B), and the in situ NOy instrument on-board the ER-2. As well as directly comparing model results with observations, we also assess the extent to which these observations are able to validate the modelling approach taken. For instance, in 1999/2000 the model captures the temporal development of denitrification observed by the ER-2 from late January into March. However, in this winter the vortex was already highly denitrified by late January, so the observations do not provide a strong constraint on the modelled rate of denitrification. The model also reproduces the MLS observations of denitrification in early February 2000. In 1996/97 the model captures the timing and magnitude of denitrification as observed by ILAS, although the lack of observations north of ~67° N in the beginning of February makes it difficult to constrain the actual timing of onset. The comparison for this winter does not support previous conclusions that denitrification must be caused by an ice-mediated process. In 1994/95 the model notably underestimates the magnitude of denitrification observed during a single balloon flight of the MIPAS-B instrument. Agreement between model and MLS HNO3 at 68 hPa in mid-February 1995 is significantly better. 
Sensitivity tests show that a 1.5 K overall decrease in vortex temperatures, or a factor 4 increase in assumed NAT nucleation rates, produce the best

  18. Interacting Dark Energy Models and Observations

    Science.gov (United States)

    Shojaei, Hamed; Urioste, Jazmin

    2017-01-01

    Dark energy is one of the mysteries of the twenty-first century. Although there are candidates resembling some features of dark energy, there is no single model describing all of its properties. Dark energy is believed to be the most dominant component of the cosmic inventory, yet many models do not consider any interaction between dark energy and the other constituents of the cosmic inventory. Introducing an interaction changes the equations governing the behavior of dark energy and matter and creates new ways to explain the cosmic coincidence problem. In this work we studied how the Hubble parameter and density parameters evolve with time in the presence of certain types of interaction. The interaction serves as a way to convert dark energy into matter, avoiding a dark energy-dominated universe by creating new equilibrium points for the differential equations. We then use numerical analysis to predict the values of the distance moduli at different redshifts and compare them to the distance moduli obtained by WMAP (Wilkinson Microwave Anisotropy Probe). Undergraduate Student

  19. Observations, Modeling and Theory of Debris Disks

    CERN Document Server

    Matthews, Brenda C; Wyatt, Mark C; Bryden, Geoff; Eiroa, Carlos

    2014-01-01

    Main sequence stars, like the Sun, are often found to be orbited by circumstellar material that can be categorized into two groups: planets and debris. The latter is made up of asteroids and comets, as well as the dust and gas derived from them, which makes debris disks observable in thermal emission or scattered light. These disks may persist over Gyr through steady-state evolution and/or may also experience sporadic stirring and major collisional breakups, rendering them atypically bright for brief periods of time. Most interestingly, they provide direct evidence that the physical processes (whatever they may be) that act to build large oligarchs from micron-sized dust grains in protoplanetary disks have been successful in a given system, at least to the extent of building up a significant planetesimal population comparable to that seen in the Solar System's asteroid and Kuiper belts. Such systems are prime candidates to host even larger planetary bodies as well. The recent growth in interest in debris dis...

  20. Charge state evolution in the solar wind. III. Model comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)

    2014-08-01

    We test three theoretical models of the fast solar wind with a set of remote-sensing observations and in situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models is able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Lastly, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.

  1. Modelling the observed properties of carbon-enhanced metal-poor stars using binary population synthesis

    CERN Document Server

    Abate, C; Stancliffe, R J; Izzard, R G; Karakas, A I; Beers, T C; Lee, Y S

    2015-01-01

    The stellar population in the Galactic halo is characterised by a large fraction of CEMP stars. Most CEMP stars are enriched in $s$-elements (CEMP-$s$ stars), and some of these are also enriched in $r$-elements (CEMP-$s/r$ stars). One formation scenario proposed for CEMP stars invokes wind mass transfer in the past from a TP-AGB primary star to a less massive companion star which is presently observed. We generate low-metallicity populations of binary stars to reproduce the observed CEMP-star fraction. In addition, we aim to constrain our wind mass-transfer model and investigate under which conditions our synthetic populations reproduce observed abundance distributions. We compare the CEMP fractions and the abundance distributions determined from our synthetic populations with observations. Several physical parameters of the binary stellar population of the halo are uncertain, e.g. the initial mass function, the mass-ratio and orbital-period distributions, and the binary fraction. We vary the assumptions in o...

  2. Regional gravity field modelling from GOCE observables

    Science.gov (United States)

    Pitoňák, Martin; Šprlák, Michal; Novák, Pavel; Tenzer, Robert

    2017-01-01

    In this article we discuss a regional recovery of gravity disturbances at the mean geocentric sphere approximating the Earth over the area of Central Europe from satellite gravitational gradients. For this purpose, we derive integral formulas which allow converting the gravity disturbances onto the disturbing gravitational gradients in the local north-oriented frame (LNOF). The derived formulas are free of singularities provided that r ≠ R. We then investigate three numerical approaches for solving their inverses. In the first approach, the integral formulas are modified for solving individually the near- and distant-zone contributions. While the effect of the near-zone gravitational gradients is solved as an inverse problem, the effect of the distant-zone gravitational gradients is computed by numerical integration from the global gravitational model (GGM) TIM-r4. In the second approach, we further elaborate the first scenario by reducing measured gravitational gradients for gravitational effects of topographic masses. In the third approach, we apply an additional modification by reducing gravitational gradients for the reference GGM. In all approaches we determine the gravity disturbances from each of the four accurately measured gravitational gradients separately as well as from their combination. Our regional gravitational field solutions are based on the GOCE EGG_TRF_2 gravitational gradients collected between 1 November 2009 and 11 January 2010. Obtained results are compared with EGM2008, DIR-r1, TIM-r1 and SPW-r1. The best fit, in terms of RMS (2.9 mGal), is achieved for EGM2008 while using the third approach, which combines all four well-measured gravitational gradients. This is explained by the fact that a priori information about the Earth's gravitational field up to degree and order 180 was used.

  3. Impact of Model and Observation Error on Assimilating Snow Cover Fraction Observations

    Science.gov (United States)

    Arsenault, Kristi R.

    Accurately modeling or observing snow cover fraction (SCF) estimates, which represent fractional snow cover area within a gridcell, can help with better understanding Earth system dynamics, improving weather and climate prediction, and providing end-use water solutions. Seeking to obtain more accurate snowpack estimates, high-resolution snow cover fraction observations are assimilated with different data assimilation (DA) methods within a land surface model (LSM). The LSM simulates snowpack states, snow water equivalent and snow depth, to obtain improved snowpack estimates known as the analysis. Data assimilation experiments are conducted for two mountainous areas where high spatial snow variability occurs, which can impact realistic snowpack representation for different hydrological and meteorological applications. Consequently, the experiments are conducted at higher model resolutions to better capture this variability. This study focuses on four key aspects of how assimilating SCF observations may improve snowpack estimates and impact the LSM overall. These include investigating the role of data assimilation method complexity, evaluating the impact of model and observational errors on snow state analysis estimates, improving the model's SCF representation for assimilation using observation operators, and examining subsequent model state and flux impacts when SCF observations are assimilated. A simpler direct insertion (DI) and a more complex ensemble Kalman filter (EnKF) data assimilation method were applied. The more complex method proved to be superior to the simpler one; however, this method required accounting for more realistic observational and model errors. Also, the EnKF method required an ensemble of model forecasts, in which bias in the ensemble generation was found and removed. Reducing this bias improved the model snowpack estimates.
Detection and geolocation errors in the satellite-based snow cover fraction observations also contributed to degrading
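The EnKF approach described above can be illustrated with a toy example. The sketch below is a minimal stochastic (perturbed-observation) EnKF update for a single snow water equivalent (SWE) state given one SCF observation; the depletion-curve observation operator, ensemble size, and error values are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(swe, swe_max=100.0):
    """Hypothetical depletion curve: map SWE (mm) to SCF in [0, 1]."""
    return np.clip(swe / swe_max, 0.0, 1.0)

def enkf_update(ensemble, obs, obs_var):
    """Stochastic EnKF update of an SWE ensemble with one SCF observation."""
    hx = h(ensemble)                        # predicted observations
    cov_xy = np.cov(ensemble, hx)[0, 1]     # state-observation covariance
    var_y = np.var(hx, ddof=1) + obs_var    # innovation variance
    gain = cov_xy / var_y                   # scalar Kalman gain
    # Perturb the observation for each member (stochastic EnKF variant).
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed - hx)

prior = rng.normal(60.0, 15.0, 50)          # 50-member SWE forecast (mm)
posterior = enkf_update(prior, obs=0.9, obs_var=0.01)
# An SCF observation of 0.9 (above the ~0.6 the prior implies) pulls SWE upward.
```

Unlike direct insertion, the gain here weighs the observation against the ensemble spread, which is why realistic model and observation error estimates matter for the EnKF.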

  4. Observed versus modelled u-, g-, r-, i-, z-band photometry of local galaxies - evaluation of model performance

    Science.gov (United States)

    Hansson, K. S. Alexander; Lisker, Thorsten; Grebel, Eva K.

    2012-12-01

    We test how well available stellar population models can reproduce observed u-, g-, r-, i-, z-band photometry of the local galaxy population (0.02 ≤ z ≤ 0.03) as probed by the Sloan Digital Sky Survey (SDSS). Our study is conducted from the perspective of a user of the models, who has observational data in hand and seeks to convert them into physical quantities. Stellar population models for galaxies are created by synthesizing star formation histories and chemical enrichments using single stellar populations from several groups (STARBURST99, GALAXEV, the Maraston models, GALEV). The role of dust is addressed through a simplistic, but observationally motivated, dust model that couples the amplitude of the extinction to the star formation history, metallicity and the viewing angle. Moreover, the influence of emission lines is considered (for the subset of models for which this component is included). The performance of the models is investigated by (1) comparing their predictions with the observed galaxy population in the SDSS using the (u - g)-(r - i) and (g - r)-(i - z) colour planes, (2) comparing predicted stellar mass and luminosity weighted ages and metallicities, specific star formation rates, mass-to-light ratios and total extinctions with literature values from studies based on spectroscopy. Strong differences between the various models are seen with several models occupying regions in the colour-colour diagrams where no galaxies are observed. We would therefore like to emphasize the importance of the choice of model. Using our preferred model we find that the star formation history, metallicity and also dust content can be constrained over a large part of the parameter space through the use of u-, g-, r-, i-, z-band photometry. However, strong local degeneracies are present due to overlap of models with high and low extinction in certain parts of the colour space.

  5. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model that interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and which is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare it to available observations. We focus on stratospheric dynamics and chemistry relevant to describe the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  7. Three-dimensional model-observation comparison in the Loop Current region

    Science.gov (United States)

    Rosburg, K. C.; Donohue, K. A.; Chassignet, E. P.

    2016-12-01

    Accurate high-resolution ocean models are required for hurricane and oil spill pathway predictions, and to enhance the dynamical understanding of circulation dynamics. Output from the 1/25° data-assimilating Gulf of Mexico HYbrid Coordinate Ocean Model (HYCOM31.0) is compared to daily full water column observations from a moored array, with a focus on Loop Current path variability and upper-deep layer coupling during eddy separation. Array-mean correlation was 0.93 for sea surface height, and 0.93, 0.63, and 0.75 in the thermocline for temperature, zonal, and meridional velocity, respectively. Peaks in modeled eddy kinetic energy were consistent with observations during Loop Current eddy separation, but with modeled deep eddy kinetic energy at half the observed amplitude. Modeled and observed LC meander phase speeds agreed within 8% and 2% of each other within the 100 - 40 and 40 - 20 day bands, respectively. The model reproduced observed patterns indicative of baroclinic instability, that is, a vertical offset with deep stream function leading upper stream function in the along-stream direction. While modeled deep eddies differed slightly spatially and temporally, the joint development of an upper-ocean meander along the eastern side of the LC and the successive propagation of upper-deep cyclone/anticyclone pairs that preceded separation were contained within the model solution. Overall, model-observation comparison indicated that HYCOM31.0 could provide insight into processes within the 100 - 20 day band, offering a larger spatial and temporal window than observational arrays.

  8. Anticorrelated observed and modeled trends in dissolved oceanic oxygen over the last 50 years

    Directory of Open Access Journals (Sweden)

    L. Stramma

    2012-04-01

    Full Text Available Observations and model runs indicate trends in dissolved oxygen (DO) associated with current and ongoing global warming. However, a large-scale observation-to-model comparison has been missing and is presented here. This study presents a first global compilation of DO measurements covering the last 50 years. It shows declining upper-ocean DO levels in many regions, especially the tropical oceans, whereas areas with increasing trends are found in the subtropics and in some subpolar regions. For the Atlantic Ocean south of 20° N, the DO history could even be extended back to about 70 years, showing decreasing DO in the subtropical South Atlantic. The global mean DO trend between 50° S and 50° N at 300 dbar for the period 1960 to 2010 is −0.063 μmol kg⁻¹ yr⁻¹. Results of a numerical biogeochemical Earth system model reveal that the magnitude of the observed change is consistent with CO2-induced climate change. However, the correlation between simulated and observed patterns of past DO change is negative, indicating that the model does not correctly reproduce the processes responsible for observed regional oxygen changes in the past 50 years. A negative pattern correlation is also obtained for model configurations with particularly low and particularly high diapycnal mixing, for a configuration that assumes a CO2-induced enhancement of the C:N ratios of exported organic matter and irrespective of whether climatological or realistic winds from reanalysis products are used to force the model. Depending on the model configuration the 300 dbar DO trend between 50° S and 50° N is −0.026 to −0.046 μmol kg⁻¹ yr⁻¹. Although numerical models reproduce the overall sign and, to some extent, magnitude of observed ocean deoxygenation, this degree of realism does not necessarily apply to simulated regional patterns and the representation of processes involved in their generation.
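The two diagnostics underlying this comparison, per-location linear DO trends and the observation-model pattern correlation, can be sketched with synthetic stand-in data. The array sizes, noise levels, and the sign-flipped "model" map below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1960, 2011)                    # 1960-2010, as in the study
n_loc = 200                                      # synthetic "grid points"
true_trend = rng.normal(-0.04, 0.02, n_loc)      # per-location trend (μmol kg⁻¹ yr⁻¹)
series = true_trend[:, None] * (years - years[0]) \
    + rng.normal(0.0, 1.0, (n_loc, years.size))  # DO anomaly time series + noise

# Diagnostic 1: least-squares linear trend at every location.
slopes = np.polyfit(years, series.T, 1)[0]       # one slope per location

# Diagnostic 2: spatial (pattern) correlation with a model trend map.
# A model that matches the global-mean trend but flips the regional
# anomalies yields a negative pattern correlation, as reported above.
model_map = true_trend.mean() - (true_trend - true_trend.mean())
pattern_r = float(np.corrcoef(slopes, model_map)[0, 1])
```

This separation makes the abstract's point concrete: a model can reproduce the global-mean deoxygenation (the mean of `model_map` equals the mean of `true_trend`) while still correlating negatively with the observed spatial pattern.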

  9. Observed versus modelled u,g,r,i,z-band photometry of local galaxies - Evaluation of model performance

    CERN Document Server

    Hansson, K S Alexander; Grebel, Eva K

    2012-01-01

    We test how well available stellar population models can reproduce observed u,g,r,i,z-band photometry of the local galaxy population (0.02<=z<=0.03) as probed by the SDSS. Our study is conducted from the perspective of a user of the models, who has observational data in hand and seeks to convert them into physical quantities. Stellar population models for galaxies are created by synthesizing star formation histories and chemical enrichments using single stellar populations from several groups (Starburst99, GALAXEV, Maraston2005, GALEV). The role of dust is addressed through a simplistic, but observationally motivated, dust model that couples the amplitude of the extinction to the star formation history, metallicity and the viewing angle. Moreover, the influence of emission lines is considered (for the subset of models for which this component is included). The performance of the models is investigated by: 1) comparing their predictions with the observed galaxy population in the SDSS using the (u-g)-(r-i...

  10. Fireball and cannonball models of gamma ray bursts confront observations

    OpenAIRE

    Dar, Arnon

    2005-01-01

    The two leading contenders for the theory of gamma-ray bursts (GRBs) and their afterglows, the Fireball and Cannonball models, are compared and their predictions are confronted, within space limitations, with key GRB observations, including recent observations with SWIFT.

  11. Model-independent inference on compact-binary observations

    OpenAIRE

    Mandel, Ilya; Farr, Will M.; Colonna, Andrea; Stevenson, Simon; Tiňo, Peter; Veitch, John

    2016-01-01

    The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multi-dimensional parameter space of observations t...

  12. Correlation between human observer performance and model observer performance in differential phase contrast CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Garrett, John [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2013-11-15

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers.
The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
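For readers unfamiliar with the observers named in this record, the sketch below is a minimal NPW observer performing a two-alternative forced choice (2AFC) disk-detection task. It uses white noise to stay self-contained; real DPC-CT noise is correlated, which may be one reason the NPW family can misestimate human performance there. All dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

size, radius, contrast, sigma = 64, 5, 0.8, 4.0
y, x = np.mgrid[:size, :size] - size // 2
template = (x**2 + y**2 <= radius**2).astype(float) * contrast  # disk signal

def npw_score(image):
    """NPW decision variable: cross-correlation with the signal template."""
    return float(np.sum(image * template))

trials, correct = 2000, 0
for _ in range(trials):
    with_signal = template + rng.normal(0.0, sigma, (size, size))
    without = rng.normal(0.0, sigma, (size, size))
    # 2AFC: choose the alternative giving the larger template response.
    correct += npw_score(with_signal) > npw_score(without)

pc = correct / trials   # proportion correct; relates to d' via Phi(d'/sqrt(2))
```

The prewhitening and channelized observers mentioned in the abstract differ only in the filtering applied before this template match (noise-covariance whitening, eye filters, or channel responses).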

  13. Three-feature model to reproduce the topology of citation networks and the effects from authors' visibility on their h-index

    CERN Document Server

    Amancio, Diego R; Costa, Luciano da F; 10.1016/j.joi.2012.02.005

    2013-01-01

    Various factors are believed to govern the selection of references in citation networks, but a precise, quantitative determination of their importance has remained elusive. In this paper, we show that three factors can account for the referencing pattern of citation networks for two topics, namely "graphenes" and "complex networks", thus allowing one to reproduce the topological features of the networks built with papers being the nodes and the edges established by citations. The most relevant factor was content similarity, while the other two, in-degree (i.e. citation counts) and age of publication, had varying importance depending on the topic studied. This dependence indicates that additional factors could play a role. Indeed, by intuition one should expect the reputation (or visibility) of authors and/or institutions to affect the referencing pattern, and this is only indirectly considered via the in-degree that should correlate with such reputation. Because information on reputation is not readily avai...
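A minimal simulation in the spirit of this three-factor model might look as follows; the weights and the one-dimensional stand-in for content similarity are made-up illustrative assumptions, not the paper's fitted parameters.

```python
import random

random.seed(42)

def grow_network(n_papers=300, refs_per_paper=5,
                 w_sim=1.0, w_deg=0.5, w_age=0.3):
    """Grow a citation network: each new paper cites earlier ones with
    probability weighted by content similarity, in-degree and recency."""
    topics = [random.random() for _ in range(n_papers)]  # 1-D "content"
    in_degree = [0] * n_papers
    edges = []
    for new in range(1, n_papers):
        weights = []
        for old in range(new):
            sim = 1.0 - abs(topics[new] - topics[old])   # content similarity
            deg = 1 + in_degree[old]                     # citation counts
            age = 1.0 / (new - old)                      # age of publication
            weights.append(w_sim * sim + w_deg * deg + w_age * age)
        cited = random.choices(range(new), weights=weights,
                               k=min(refs_per_paper, new))
        for c in set(cited):                             # drop duplicate refs
            in_degree[c] += 1
            edges.append((new, c))
    return in_degree, edges

deg, edges = grow_network()
# The newest paper is never cited, while early, high-degree papers keep
# accumulating citations, skewing the in-degree distribution.
```

The in-degree term is what indirectly carries author reputation in this framework, as the abstract notes.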

  14. A method to isolate bacterial communities and characterize ecosystems from food products: Validation and utilization as a reproducible chicken meat model.

    Science.gov (United States)

    Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique

    2017-04-17

    Influenced by production and storage processes and by seasonal changes, the diversity of meat-product microbiotas can be highly variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or a few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to constitute live stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts to store as frozen aliquots. We tested several methods to extract DNA from these stored communities for subsequent PCR amplification. We determined the best moment during the product shelf life to collect bacteria in sufficient amounts. Results showed that the rinsing method combined with the Mobio DNA extraction kit was the most reliable way to collect bacteria and obtain DNA for subsequent PCR amplification. Then, 23 different chicken meat microbiotas were collected using this procedure. Microbiota aliquots were stored at -80°C without important loss of viability. Their characterization by cultural methods confirmed the large variability (richness and abundance) of bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat.
They can therefore be used for performing reproducible challenge tests mimicking a true meat ecosystem and enabling the possibility to test the influence of various processing or storage conditions on complex meat

  15. 2 types of spicules "observed" in 3D realistic models

    CERN Document Server

    Martínez-Sykora, Juan

    2010-01-01

    Realistic numerical 3D models of the outer solar atmosphere show two different kinds of spicule-like phenomena, as also observed on the solar limb. The numerical models are calculated using the Oslo Staggered Code (OSC) to solve the full MHD equations with non-grey and NLTE radiative transfer and thermal conduction along the magnetic field lines. The two types of spicules arise as a natural result of the dynamical evolution in the models. We discuss the different properties of these two types of spicules, their differences from observed spicules and what needs to be improved in the models.

  16. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    One of the most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through suffi

  18. Observations and a linear model of water level in an interconnected inlet-bay system

    Science.gov (United States)

    Aretxabaleta, Alfredo L.; Ganju, Neil K.; Butman, Bradford; Signell, Richard P.

    2017-04-01

    A system of barrier islands and back-barrier bays occurs along southern Long Island, New York, and in many coastal areas worldwide. Characterizing the bay physical response to water level fluctuations is needed to understand flooding during extreme events and evaluate their relation to geomorphological changes. Offshore sea level is one of the main drivers of water level fluctuations in semienclosed back-barrier bays. We analyzed observed water levels (October 2007 to November 2015) and developed analytical models to better understand bay water level along southern Long Island. An increase (∼0.02 m change in 0.17 m amplitude) in the dominant M2 tidal amplitude (containing the largest fraction of the variability) was observed in Great South Bay during mid-2014. The observed changes in both tidal amplitude and bay water level transfer from offshore were related to the dredging of nearby inlets and possibly the changing size of a breach across Fire Island caused by Hurricane Sandy (after December 2012). The bay response was independent of the magnitude of the fluctuations (e.g., storms) at a specific frequency. An analytical model that incorporates bay and inlet dimensions reproduced the observed transfer function in Great South Bay and surrounding areas. The model predicts the transfer function in Moriches and Shinnecock bays where long-term observations were not available. The model is a simplified tool to investigate changes in bay water level and enables the evaluation of future conditions and alternative geomorphological settings.
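Analytical inlet-bay models of this kind are often idealized as damped Helmholtz-type oscillators. The sketch below uses that generic textbook form with hypothetical dimensions, not the authors' exact formulation; it illustrates how enlarging the inlet cross-section (e.g., by dredging) raises the resonant frequency and increases the M2 transfer for a choked bay.

```python
import math

g = 9.81  # gravitational acceleration, m s^-2

def bay_amplification(omega, inlet_area, inlet_length, bay_area, friction=0.3):
    """|bay tide| / |offshore tide| for a damped Helmholtz-type inlet-bay system."""
    omega0 = math.sqrt(g * inlet_area / (inlet_length * bay_area))  # resonant freq.
    r = omega / omega0
    return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (friction * r) ** 2)

omega_m2 = 2.0 * math.pi / (12.42 * 3600.0)      # M2 tide, 12.42 h period

# Hypothetical choked bay: omega_m2 lies above resonance, so transfer < 1.
amp = bay_amplification(omega_m2, inlet_area=500.0,
                        inlet_length=6000.0, bay_area=2.0e8)

# "Dredging": a larger inlet cross-section raises omega0, moving the bay
# closer to resonance and increasing the M2 transfer from offshore.
amp_dredged = bay_amplification(omega_m2, inlet_area=1000.0,
                                inlet_length=6000.0, bay_area=2.0e8)
```

Because the transfer function depends only on geometry and frequency, not on the forcing amplitude, this form is consistent with the observation that the bay response was independent of the magnitude of the fluctuations at a given frequency.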

  20. Tests of scanning model observers for myocardial SPECT imaging

    Science.gov (United States)

    Gifford, H. C.; Pretorius, P. H.; Brankov, J. G.

    2009-02-01

    Many researchers have tested and applied human-model observers as part of their evaluations of reconstruction methods for SPECT perfusion imaging. However, these model observers have generally been limited to signal-known-exactly (SKE) detection tasks. Our objective is to formulate and test scanning model observers that emulate humans in detection-localization tasks involving perfusion defects. Herein, we compare several models based on the channelized nonprewhitening (CNPW) observer. Simulated Tc-99m images of the heart with and without defects were created using a mathematical anthropomorphic phantom. Reconstructions were performed with an iterative algorithm and postsmoothed with a 3D Gaussian filter. Human and model-observer studies were conducted to assess the optimal number of iterations and the smoothing level of the filter. The human-observer study was a multiple-alternative forced-choice (MAFC) study with five defects. The CNPW observer performed the MAFC study, but also performed an SKE-but-variable (SKEV) study and a localization ROC (LROC) study. A separate LROC study applied an observer based on models of human search in mammograms. The amount of prior knowledge about the possible defects differed for these four model-observer studies. The trend was towards improved agreement with the human observers as prior knowledge decreased.

  1. Polar F-layer model-observation comparisons: a neutral wind surprise

    Directory of Open Access Journals (Sweden)

    J. J. Sojka

    2005-01-01

    The existence of a month-long continuous database of incoherent scatter radar observations of the ionosphere from the EISCAT Svalbard Radar (ESR) at Longyearbyen, Norway, provides an unprecedented opportunity for model/data comparisons. Physics-based ionospheric models, such as the Utah State University Time Dependent Ionospheric Model (TDIM), are usually only compared with observations over restricted one or two day events or against climatological averages. In this study, using the ESR observations, the daily weather, day-to-day variability, and month-long climatology can be simultaneously addressed to identify modeling shortcomings and successes. Since for this study the TDIM is driven by climatological representations of the magnetospheric convection, auroral oval, neutral atmosphere, and neutral winds, whose inputs are solar and geomagnetic indices, it is not surprising that the daily weather cannot be reproduced. What is unexpected is that the horizontal neutral wind has come to the forefront as a decisive model input parameter in matching the diurnal morphology of density structuring seen in the observations.

  2. Additive Manufacturing: Reproducibility of Metallic Parts

    Directory of Open Access Journals (Sweden)

    Konda Gokuldoss Prashanth

    2017-02-01

    The present study deals with the properties of five different metals/alloys fabricated by selective laser melting: Al-12Si, Cu-10Sn, and 316L (face-centered cubic structure), and CoCrMo and commercially pure Ti (CP-Ti) (hexagonal close-packed structure). The room temperature tensile properties of Al-12Si samples show good consistency in results within the experimental errors. Similar reproducible results were observed for sliding wear and corrosion experiments. The other metal/alloy systems also show repeatable tensile properties, with the tensile curves overlapping until the yield point. The curves may then follow the same path or show a marginal deviation (~10 MPa) until they reach the ultimate tensile strength, and a negligible difference in ductility levels (of ~0.3%) is observed between the samples. The results show that selective laser melting is a reliable fabrication method to produce metallic materials with consistent and reproducible properties.

  3. Observing and modelling phytoplankton community structure in the North Sea

    Science.gov (United States)

    Ford, David A.; van der Molen, Johan; Hyder, Kieran; Bacon, John; Barciela, Rosa; Creach, Veronique; McEwan, Robert; Ruardij, Piet; Forster, Rodney

    2017-03-01

    Phytoplankton form the base of the marine food chain, and knowledge of phytoplankton community structure is fundamental when assessing marine biodiversity. Policy makers and other users require information on marine biodiversity and other aspects of the marine environment for the North Sea, a highly productive European shelf sea. This information must come from a combination of observations and models, but currently the coastal ocean is greatly under-sampled for phytoplankton data, and outputs of phytoplankton community structure from models are therefore not yet frequently validated. This study presents a novel set of in situ observations of phytoplankton community structure for the North Sea using accessory pigment analysis. The observations allow a good understanding of the patterns of surface phytoplankton biomass and community structure in the North Sea for the observed months of August 2010 and 2011. Two physical-biogeochemical ocean models, the biogeochemical components of which are different variants of the widely used European Regional Seas Ecosystem Model (ERSEM), were then validated against these and other observations. Both models were a good match for sea surface temperature observations, and a reasonable match for remotely sensed ocean colour observations. However, the two models displayed very different phytoplankton community structures, with one better matching the in situ observations than the other. Nonetheless, both models shared some similarities with the observations in terms of spatial features and inter-annual variability. An initial comparison of the formulations and parameterizations of the two models suggests that diversity between the parameter settings of model phytoplankton functional types, along with formulations which promote a greater sensitivity to changes in light and nutrients, is key to capturing the observed phytoplankton community structure. These findings will help inform future model development, which should be coupled

  4. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

    We study the island universe model, in which initially the universe is in a cosmological constant sea, then the local quantum fluctuations violating the null energy condition create the islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model is regarded as an alternative scenario of the origin of the observable universe.

  5. Modeling astronomically observed interstellar infrared spectra by ionized carbon pentagon-hexagon molecules (c9h7) n+

    CERN Document Server

    Ota, Norio

    2015-01-01

    To model a promising carrier of the astronomically observed polycyclic aromatic hydrocarbon (PAH) infrared (IR) spectra, the IR spectra of ionized molecules (C9H7)n+ were calculated based on density functional theory (DFT). In a previous study, it was found that void-induced coronene C23H12++ could reproduce observed spectra from 3 to 15 micron; this molecule has two carbon pentagons connected with five hexagons. In this paper, we tried to test the simplest model, that is, one pentagon connected with one hexagon, which is an indene-like molecule (C9H7)n+ (n=0 to 4). DFT-based harmonic frequency analysis showed that the observed spectrum could be almost reproduced by a suitable sum of ionized C9H7n+ molecules. A typical example is C9H7++. Calculated peaks were 3.2, 7.4, 7.6, 8.4, and 12.7 micron, whereas the observed ones were 3.3, 7.6, 7.8, 8.6 and 12.7 micron. By a combination of different degrees of ionized molecules, we can expect to reproduce the total spectrum. For comparison, the hexagon-hexagon molecule naphthalene (C10H8)n+ was studied. Unfortu...

  6. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  7. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes...

  8. [Reproducibility of subjective refraction measurement].

    Science.gov (United States)

    Grein, H-J; Schmidt, O; Ritsche, A

    2014-11-01

    Reproducibility of subjective refraction measurement is limited by various factors. The main factors affecting reproducibility include the characteristics of the measurement method and of the subject and the examiner. This article presents the results of a study on this topic, focusing on the reproducibility of subjective refraction measurement in healthy eyes. The results of previous studies are not all presented in the same way by the respective authors and cannot be fully standardized without consulting the original scientific data. To the extent that they are comparable, the results of our study largely correspond with those of previous investigations: During repeated subjective refraction measurement, 95% of the deviation from the mean value was approximately ±0.2 D to ±0.65 D for the spherical equivalent and cylindrical power. The reproducibility of subjective refraction measurement in healthy eyes is limited, even under ideal conditions. Correct assessment of refraction results is only feasible after identifying individual variability. Several measurements are required. Refraction cannot be measured without a tolerance range. The English full-text version of this article is available at SpringerLink (under supplemental).

  9. Reproducible research in computational science.

    Science.gov (United States)

    Peng, Roger D

    2011-12-02

    Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.

  10. Observational evidence for various models of Moving Magnetic Features

    Science.gov (United States)

    Lee, Jeongwoo W.

    1992-01-01

    New measurements of Moving Magnetic Features (MMFs) based on the observations of the active region NOAA 5612 made at Big Bear Solar Observatory (BBSO) on August 2, 1989 are presented. The existing theoretical models are checked against the new observations, and the origin of MMFs conjectured from the deduced observational constraints is discussed.

  11. Testing models of triggered star formation: theory and observation

    CERN Document Server

    Haworth, Thomas J; Acreman, David M

    2012-01-01

    One of the main reasons that triggered star formation is contentious is the failure to accurately link the observations with models in a detailed, quantitative way. It is therefore critical to continuously test and improve the model details and methods with which comparisons to observations are made. We use the Monte Carlo radiation transport and hydrodynamics code TORUS to show that the diffuse radiation field has a significant impact on the outcome of radiatively driven implosion (RDI) models. We also calculate SEDs and synthetic images from the models to test observational diagnostics that are used to determine bright-rimmed cloud conditions and search for signs of RDI.

  12. Holonomy observables in Ponzano-Regge type state sum models

    CERN Document Server

    Barrett, John W

    2011-01-01

    We study observables on group elements in the Ponzano-Regge model. We show that these observables have a natural interpretation in terms of Feynman diagrams on a sphere and contrast them to the well studied observables on the spin labels. We elucidate this interpretation by showing how they arise from the no-gravity limit of the Turaev-Viro model and Chern-Simons theory.

  13. Limitations of a coupled regional climate model in the reproduction of the observed Arctic sea-ice retreat

    Directory of Open Access Journals (Sweden)

    W. Dorn

    2012-03-01

    The effects of internal model variability on the simulation of Arctic sea-ice extent and volume have been examined with the aid of a seven-member ensemble with a coupled regional climate model for the period 1948–2008. Beyond general weaknesses related to insufficient representation of feedback processes, it is found that the model's ability to reproduce observed summer sea-ice retreat depends mainly on two factors: the correct simulation of the atmospheric circulation during the summer months and the sea-ice volume at the beginning of the melting period. Since internal model variability shows its maximum during the summer months, the ability to reproduce the observed atmospheric summer circulation is limited. In addition, the atmospheric circulation during summer also significantly affects the sea-ice volume over the years, leading to a limited ability to start with reasonable sea-ice volume into the melting period. Furthermore, the sea-ice volume pathway shows notable decadal variability whose amplitude varies among the ensemble members. The scatter is particularly large in periods when the ice volume increases, indicating limited skill in reproducing high-ice years.

  14. ITK: Enabling Reproducible Research and Open Science

    Directory of Open Access Journals (Sweden)

    Matthew Michael McCormick

    2014-02-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high (0.46).

  15. Confronting semi-analytic galaxy models with galaxy-matter correlations observed by CFHTLenS

    CERN Document Server

    Saghiha, Hananeh; Schneider, Peter; Hilbert, Stefan

    2016-01-01

    Testing predictions of semi-analytic models of galaxy evolution against observations helps to understand the complex processes that shape galaxies. We compare predictions from the Garching and Durham models implemented on the Millennium Run with observations of galaxy-galaxy lensing (GGL) and galaxy-galaxy-galaxy lensing (G3L) for various galaxy samples with stellar masses in the range 0.5 < (M_* / 10^10 M_Sun) < 32 and photometric redshift range 0.2 < z < 0.6 in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We find that the predicted GGL and G3L signals are in qualitative agreement with CFHTLenS data. Quantitatively, the models succeed in reproducing the observed signals in the highest stellar mass bin 16 < ( M_* / 10^10 M_Sun) < 32 but show different degrees of tension for the other stellar mass samples. The Durham model is strongly excluded at the 95% confidence level by the observations as it largely over-predicts the amplitudes of GGL and G3L signals, probably owing to a la...

  16. The role of observation uncertainty in the calibration of hydrologic rainfall-runoff models

    Directory of Open Access Journals (Sweden)

    T. Ghizzoni

    2007-06-01

    Hydrologic rainfall-runoff models are usually calibrated with reference to a limited number of recorded flood events, for which rainfall and runoff measurements are available. In this framework, the consistency of model parameters depends on the number of both events and hydrograph points used for calibration, and on the reliability of the measurements. Recently, to make users aware of application limits, major attention has been devoted to the estimation of uncertainty in hydrologic modelling. Here a simple numerical experiment is proposed that allows the analysis of uncertainty in hydrologic rainfall-runoff modelling associated with both the quantity and the quality of available data.

    A distributed rainfall-runoff model based on geomorphologic concepts has been used. The experiment involves the analysis of an ensemble of model runs, and its overall setup holds if the model is to be applied in different catchments and climates, or even if a different hydrologic model is used. With reference to a set of 100 synthetic rainfall events characterized by a given rainfall volume, the effect of uncertainty in parameter calibration is studied. An artificial truth (a perfect observation) is created by using the model in a known configuration. An external source of uncertainty is introduced by assuming realistic, i.e. uncertain, discharge observations to calibrate the model. The range of parameter values able to "reproduce" the observation is studied. Finally, the model uncertainty is evaluated and discussed. The experiment gives useful indications about the number of both events and data points needed for a careful and stable calibration of a specific model, applied in a given climate and catchment. Moreover, an insight into the expected and maximum error in flood peak discharge simulations is given: errors ranging up to 40% are to be expected if parameters are calibrated on insufficient data sets.
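    The synthetic-truth setup described above can be caricatured in a few lines: a one-parameter toy model generates a hydrograph, noisy observations are drawn from it, and the range of parameter values that still fit the observations within a tolerance is recorded. The recession formula, the ±10% error level, and the tolerance below are all invented for illustration; they are not the paper's distributed model.

```python
import random

random.seed(42)

def model(k, n=20, q0=10.0):
    # hypothetical one-parameter recession hydrograph: q_t = q0 * (1 - k)^t
    return [q0 * (1.0 - k) ** t for t in range(n)]

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

true_k = 0.3
truth = model(true_k)                                          # artificial truth
obs = [q * (1.0 + random.uniform(-0.1, 0.1)) for q in truth]   # uncertain obs

# every parameter value whose simulation stays within tolerance of the
# noisy observation counts as an acceptable calibration
acceptable = [k / 100.0 for k in range(10, 51)
              if rmse(model(k / 100.0), obs) < 0.5]
```

    The width of `acceptable` around `true_k` is a direct, if crude, picture of the calibration uncertainty induced by observation error.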

  17. Fuzzy model-based observers for fault detection in CSTR.

    Science.gov (United States)

    Ballesteros-Moncada, Hazael; Herrera-López, Enrique J; Anzurez-Marín, Juan

    2015-11-01

    Given the vast variety of fuzzy model-based observers reported in the literature, which would be the proper one to use for fault detection in a class of chemical reactors? In this study four fuzzy model-based observers for sensor fault detection of a Continuous Stirred Tank Reactor were designed and compared. The designs include (i) a Luenberger fuzzy observer, (ii) a Luenberger fuzzy observer with sliding modes, (iii) a Walcott-Zak fuzzy observer, and (iv) an Utkin fuzzy observer. A negative fault signal, an oscillating fault signal, and a bounded random noise signal with a maximum value of ±0.4 were used to evaluate and compare the performance of the fuzzy observers. The Utkin fuzzy observer showed the best performance under the tested conditions.
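    The observers compared above are fuzzy (Takagi-Sugeno) variants tailored to a CSTR, which this record does not specify in detail; the underlying residual-based detection idea can, however, be illustrated with an ordinary discrete-time Luenberger observer on a made-up scalar plant. All numbers below are invented.

```python
# scalar plant x_{k+1} = A x_k + B u_k, measurement y_k = C x_k
A, B, C = 0.9, 0.1, 1.0
L_GAIN = 0.5        # observer gain; A - L_GAIN * C = 0.4 is stable
THRESHOLD = 0.05    # residual magnitude above which a fault is declared

def run(n_steps, fault_at, fault_size=0.5):
    x, x_hat = 1.0, 0.0
    alarms = []
    for k in range(n_steps):
        u = 1.0
        y = C * x + (fault_size if k >= fault_at else 0.0)  # sensor fault
        r = y - C * x_hat                                   # residual
        alarms.append(abs(r) > THRESHOLD)
        x_hat = A * x_hat + B * u + L_GAIN * r              # observer update
        x = A * x + B * u                                   # plant update
    return alarms

alarms = run(60, fault_at=40)
```

    Before the fault the residual decays geometrically at rate A - L_GAIN*C, so alarms vanish once the start-up transient dies out; after step 40 the additive sensor fault keeps the residual above the threshold.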

  18. WDAC Task Team on Observations for Model Evaluation: Facilitating the use of observations for CMIP

    Science.gov (United States)

    Waliser, D. E.; Gleckler, P. J.; Ferraro, R.; Eyring, V.; Bosilovich, M. G.; Schulz, J.; Thepaut, J. N.; Taylor, K. E.; Chepfer, H.; Bony, S.; Lee, T. J.; Joseph, R.; Mathieu, P. P.; Saunders, R.

    2015-12-01

    Observations are essential for the development and evaluation of climate models. Satellite and in-situ measurements as well as reanalysis products provide crucial resources for these purposes. Over the last two decades, the climate modeling community has become adept at developing model intercomparison projects (MIPs) that provide the basis for more systematic comparisons of climate models under common experimental conditions. A prominent example among these is the coupled MIP (CMIP). Due to its growing importance in providing input to the IPCC, the framework for CMIP, now planning CMIP6, has expanded to include a very comprehensive and precise set of experimental protocols, with an advanced data archive and dissemination system. While the number, types and sophistication of observations over the same time period have kept pace, their systematic application to the evaluation of climate models has yet to be fully exploited due to a lack of coordinated protocols for identifying, archiving, documenting and applying observational resources. This presentation will discuss activities and plans of the World Climate Research Program (WCRP) Data Advisory Council's (WDAC) Task Team on Observations for Model Evaluation for facilitating the use of observations for model evaluation. The presentation will include an update on the status of the obs4MIPs and ana4MIPs projects, whose purpose is to provide a limited collection of well-established and documented observation and reanalysis datasets for comparison with Earth system models, targeting CMIP in particular. The presentation will also describe the role these activities and datasets play in the development of a set of community standard observation-based climate model performance metrics by the Working Group on Numerical Experimentation (WGNE)'s Performance Metrics Panel, as well as which CMIP6 experiments these activities are targeting, and where additional community input and contributions to these activities are needed.

  19. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  20. Correcting biased observation model error in data assimilation

    CERN Document Server

    Harlim, John

    2016-01-01

    While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications, model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...

  1. Accuracy and reproducibility of patient-specific hemodynamic models of stented intracranial aneurysms: report on the Virtual Intracranial Stenting Challenge 2011.

    Science.gov (United States)

    Cito, S; Geers, A J; Arroyo, M P; Palero, V R; Pallarés, J; Vernet, A; Blasco, J; San Román, L; Fu, W; Qiao, A; Janiga, G; Miura, Y; Ohta, M; Mendina, M; Usera, G; Frangi, A F

    2015-01-01

    Validation studies are prerequisites for computational fluid dynamics (CFD) simulations to be accepted as part of clinical decision-making. This paper reports on the 2011 edition of the Virtual Intracranial Stenting Challenge. The challenge aimed to assess the reproducibility with which research groups can simulate the velocity field in an intracranial aneurysm, both untreated and treated with five different configurations of high-porosity stents. Particle image velocimetry (PIV) measurements were obtained to validate the untreated velocity field. Six participants, totaling three CFD solvers, were provided with surface meshes of the vascular geometry and the deployed stent geometries, and flow rate boundary conditions for all inlets and outlets. As output, they were invited to submit an abstract to the 8th International Interdisciplinary Cerebrovascular Symposium 2011 (ICS'11), outlining their methods and giving their interpretation of the performance of each stent configuration. After the challenge, all CFD solutions were collected and analyzed. To quantitatively analyze the data, we calculated the root-mean-square error (RMSE) over uniformly distributed nodes on a plane slicing the main flow jet along its axis and normalized it with the maximum velocity on the slice of the untreated case (NRMSE). Good agreement was found between CFD and PIV, with an NRMSE of 7.28%. Excellent agreement was found between CFD solutions, both untreated and treated. The maximum difference between any two groups (along a line perpendicular to the main flow jet) was 4.0 mm/s, i.e. 4.1% of the maximum velocity of the untreated case, and the average NRMSE was 0.47% (range 0.28-1.03%). In conclusion, given geometry and flow rates, research groups can accurately simulate the velocity field inside an intracranial aneurysm, as assessed by comparison with in vitro measurements, and find excellent agreement on the hemodynamic effect of different stent configurations.
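    The normalized RMSE used above is simple to reproduce; a minimal sketch, assuming velocity magnitudes sampled at matched node locations and normalization by the maximum velocity of the untreated case (toy numbers, not the study's data):

```python
import math

def nrmse(v_model, v_reference, v_max):
    """Root-mean-square error between matched velocity samples,
    normalized by the maximum velocity of the reference (untreated) case."""
    assert len(v_model) == len(v_reference)
    mse = sum((a - b) ** 2 for a, b in zip(v_model, v_reference)) / len(v_model)
    return math.sqrt(mse) / v_max

# hypothetical velocity magnitudes (m/s) at four matched nodes
cfd = [0.10, 0.22, 0.31, 0.08]
piv = [0.12, 0.20, 0.33, 0.07]
err = nrmse(cfd, piv, v_max=0.35)   # about 0.052, i.e. ~5%
```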

  2. Towards global empirical upscaling of FLUXNET eddy covariance observations: validation of a model tree ensemble approach using a biosphere model

    Science.gov (United States)

    Jung, M.; Reichstein, M.; Bondeau, A.

    2009-10-01

    Global, spatially and temporally explicit estimates of carbon and water fluxes derived from empirical up-scaling of eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) where the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time. We evaluate the efficiency of the model tree ensemble (MTE) approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998-2005 using only locations and months where high quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetic active radiation simulated by LPJmL. Given that we know the "true result" in the form of global LPJmL simulations, we can effectively study the performance of the MTE upscaling and associated problems of extrapolation capacity. We show that MTE is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP that constitute the largest sources of variance are very well reproduced (96% and 94% of variance explained, respectively) while the monthly interannual anomalies, which occupy much less variance, are less well matched (41% of variance explained).
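    The "% of variance explained" figures quoted above can be read as an R²-style score; a minimal sketch of that metric follows (the authors' exact definition may differ):

```python
def variance_explained(truth, pred):
    """R^2-style fraction of the variance of `truth` captured by `pred`."""
    n = len(truth)
    mean_t = sum(truth) / n
    ss_tot = sum((t - mean_t) ** 2 for t in truth)           # total variance
    ss_res = sum((t - p) ** 2 for t, p in zip(truth, pred))  # residual
    return 1.0 - ss_res / ss_tot

# toy series standing in for simulated vs. upscaled GPP
truth = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.1]
r2 = variance_explained(truth, pred)   # about 0.989
```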

  3. Efficient and reproducible identification of mismatch repair deficient colon cancer

    DEFF Research Database (Denmark)

    Joost, Patrick; Bendahl, Pär-Ola; Halvarsson, Britta;

    2013-01-01

    BACKGROUND: The identification of mismatch-repair (MMR) defective colon cancer is clinically relevant for diagnostic, prognostic and potentially also for treatment predictive purposes. Preselection of tumors for MMR analysis can be obtained with predictive models, which need to demonstrate ease ... of application and favorable reproducibility. METHODS: We validated the MMR index for the identification of prognostically favorable MMR deficient colon cancers and compared performance to 5 other prediction models. In total, 474 colon cancers diagnosed ≥ age 50 were evaluated with correlation between ... and efficiently identifies MMR defective colon cancers with high sensitivity and specificity. The model shows stable performance with low inter-observer variability and favorable performance when compared to other MMR predictive models...

  4. Assessing impacts of off-nadir observation on remote sensing of vegetation - Use of the Suits model

    Science.gov (United States)

    Bartlett, D. S.; Johnson, R. W.; Hardisky, M. A.; Klemas, V.

    1986-01-01

    The use of Suits' (1972a, b) digital radiative transfer model to simulate the effect of non-Lambertian canopy reflectance on off-nadir observations of vegetation is discussed. Canopy reflectances of cord grass are calculated using the radiative transfer model, field radiometric measurements, and airborne multispectral scanner data. The effects of varying view angles on canopy reflectance are analyzed and compared. The comparison reveals that the model is effective in simulating the sense and magnitude of reflectance change due to variable angles of observation; however, the model does not reproduce the observed dependence of nadir canopy reflectance on solar zenith angle. It is concluded that the radiative transfer model is applicable for predicting the variation in canopy reflectance due to changing view zenith angles.

  6. Reproducibility of NIF hohlraum measurements

    Science.gov (United States)

    Moody, J. D.; Ralph, J. E.; Turnbull, D. P.; Casey, D. T.; Albert, F.; Bachmann, B. L.; Doeppner, T.; Divol, L.; Grim, G. P.; Hoover, M.; Landen, O. L.; MacGowan, B. J.; Michel, P. A.; Moore, A. S.; Pino, J. E.; Schneider, M. B.; Tipton, R. E.; Smalyuk, V. A.; Strozzi, D. J.; Widmann, K.; Hohenberger, M.

    2015-11-01

    The strategy of experimentally ``tuning'' the implosion in a NIF hohlraum ignition target towards increasing hot-spot pressure, areal density of compressed fuel, and neutron yield relies on a level of experimental reproducibility. We examine the reproducibility of experimental measurements for a collection of 15 identical NIF hohlraum experiments. The measurements include incident laser power, backscattered optical power, x-ray measurements, hot-electron fraction and energy, and target characteristics. We use exact statistics to set 1-sigma confidence levels on the variations in each of the measurements. Of particular interest is the backscatter and laser-induced hot-spot locations on the hohlraum wall. Hohlraum implosion designs typically include variability specifications [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. We describe our findings and compare with the specifications. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
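    Setting 1-sigma confidence levels on shot-to-shot variation is, at its core, interval estimation for a spread statistic from a small sample. The paper uses exact statistics; as a hedged stand-in, the sketch below substitutes a bootstrap interval, and the measurement values are invented:

```python
import numpy as np

def std_confidence_interval(x, level=0.68, n_boot=10000, seed=1):
    """Bootstrap confidence interval for the shot-to-shot standard
    deviation of a repeated measurement (cf. 1-sigma levels on the
    variation across 15 nominally identical shots)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stds = np.std(rng.choice(x, size=(n_boot, n), replace=True), axis=1, ddof=1)
    lo, hi = np.quantile(stds, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Hypothetical backscatter fractions (%) for 15 identical shots
backscatter = np.array([4.1, 3.8, 4.5, 4.0, 4.2, 3.9, 4.4, 4.1,
                        4.3, 3.7, 4.0, 4.2, 4.1, 3.9, 4.3])
lo, hi = std_confidence_interval(backscatter)
print(f"sigma in [{lo:.2f}, {hi:.2f}] at 68% confidence")
```

    With only 15 shots, the interval on the spread itself is wide — which is exactly why quantifying reproducibility matters before "tuning" on shot-to-shot differences.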

  7. What sea-ice biogeochemical modellers need from observers

    Directory of Open Access Journals (Sweden)

    Nadja Steiner

    2016-02-01

    Full Text Available Abstract Numerical models can be a powerful tool helping to understand the role biogeochemical processes play in local and global systems and how this role may be altered in a changing climate. With respect to sea-ice biogeochemical models, our knowledge is severely limited by our poor confidence in numerical model parameterisations representing those processes. Improving model parameterisations requires communication between observers and modellers to guide model development and improve the acquisition and presentation of observations. In addition to more observations, modellers need conceptual and quantitative descriptions of the processes controlling, for example: primary production and diversity of algal functional types in sea ice; ice algal growth; release from sea ice; heterotrophic remineralisation; transfer and emission of gases (e.g., DMS, CH4, BrO); incorporation of seawater components in growing sea ice (including Fe, organic and inorganic carbon, and major salts) and subsequent release; CO2 dynamics (including CaCO3 precipitation); flushing and supply of nutrients to sea-ice ecosystems; and radiative transfer through sea ice. These issues can be addressed by focused observations, as well as controlled laboratory and field experiments that target specific processes. The guidelines provided here should help modellers and observers improve the integration of measurements and modelling efforts and advance toward the common goal of understanding biogeochemical processes in sea ice and their current and future impacts on environmental systems.

  8. Modelling and Observation of Mineral Dust Optical Properties over Central Europe

    Science.gov (United States)

    Chilinski, Michał T.; Markowicz, Krzysztof M.; Zawadzka, Olga; Stachlewska, Iwona S.; Kumala, Wojciech; Petelski, Tomasz; Makuch, Przemysław; Westphal, Douglas L.; Zagajewski, Bogdan

    2016-12-01

    This paper is focused on Saharan dust transport to Central Europe/Poland; we compare properties of atmospheric Saharan dust using data from NAAPS, MACC and AERONET, as well as observations obtained during the HyMountEcos campaign in June 2012. Ten years of dust climatology show that long-range transport of Saharan dust to Central Europe occurs mostly during spring and summer. HYSPLIT back-trajectories indicate airmass transport mainly in November, but this does not agree with the modeled maxima of dust optical depth. The NAAPS model shows a maximum of dust optical depth (~0.04-0.05 at 550 nm) in April-May, whereas the MACC modeled peak is broader (~0.04). During occurrences of mineral dust over Central Europe, dust optical depths are above 0.05 on 14% (NAAPS) / 12% (MACC) of days and exceed 0.1 on 4% (NAAPS) / 2.5% (MACC) of days. The HyMountEcos campaign took place in June-July 2012 in the mountainous region of Karkonosze. The analysis includes remote sensing data from lidars and sun-photometers, and numerical simulations from the NAAPS, MACC and DREAM8b models. Comparison of simulations with observations demonstrates the ability of the models to reasonably reproduce aerosol vertical distributions and their temporal variability. However, significant differences between simulated and measured AODs were found. The best agreement was achieved for the MACC model.
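    The exceedance statistics quoted above (14%/4% of dust days over given optical-depth thresholds) are simple empirical frequencies. A minimal sketch with invented daily AOD values, not model output:

```python
import numpy as np

def exceedance_fractions(aod, thresholds=(0.05, 0.1)):
    """Fraction of dust-event days on which dust optical depth
    exceeds each threshold (as in the 14%/4% NAAPS statistics)."""
    aod = np.asarray(aod)
    return {t: float(np.mean(aod > t)) for t in thresholds}

# Hypothetical daily dust AODs during events, not model output
aod_days = [0.02, 0.03, 0.06, 0.12, 0.04, 0.05, 0.07, 0.02, 0.11, 0.03]
print(exceedance_fractions(aod_days))  # {0.05: 0.4, 0.1: 0.2}
```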

  9. Modelling and Observation of Mineral Dust Optical Properties over Central Europe

    Directory of Open Access Journals (Sweden)

    Chilinski Michał T.

    2016-12-01

    Full Text Available This paper is focused on Saharan dust transport to Central Europe/Poland; we compare properties of atmospheric Saharan dust using data from NAAPS, MACC, AERONET as well as observations obtained during HyMountEcos campaign in June 2012. Ten years of dust climatology shows that long-range transport of Saharan dust to Central Europe is mostly during spring and summer. HYSPLIT back-trajectories indicate airmass transport mainly in November, but it does not agree with modeled maxima of dust optical depth. NAAPS model shows maximum of dust optical depth (~0.04-0.05, 550 nm) in April-May, but the MACC modeled peak is broader (~0.04). During occurrence of mineral dust over Central-Europe for 14% (NAAPS) / 12% (MACC) of days dust optical depths are above 0.05 and during 4% (NAAPS) / 2.5% (MACC) of days dust optical depths exceed 0.1. The HyMountEcos campaign took place in June-July 2012 in the mountainous region of Karkonosze. The analysis includes remote sensing data from lidars, sun-photometers, and numerical simulations from NAAPS, MACC, DREAM8b models. Comparison of simulations with observations demonstrates the ability of models to reasonably reproduce aerosol vertical distributions and their temporal variability. However, significant differences between simulated and measured AODs were found. The best agreement was achieved for MACC model.

  10. Observations and Numerical Modeling of Eddy Generation in the Mediterranean Undercurrent

    Science.gov (United States)

    Serra, N.; Ambar, I.; Kaese, R.

    2001-12-01

    In the frame of the European Union MAST III project CANIGO (Canary Islands Gibraltar Azores Observations), RAFOS floats were deployed in the Mediterranean undercurrent off south Portugal during the period from September 1997 to September 1998. An analysis of this Lagrangian approach complemented with results obtained with XBT probes and current meter data from the same project shows some of the major aspects of the flow associated with the undercurrent as well as the eddy activity related with it. Floats that stayed in the undercurrent featured a downstream deceleration and a steering by bottom topography. Three meddy formations at Cape St. Vincent could be isolated from the float data as well as the generation of dipolar structures in the Portimao Canyon, a feature not previously directly observed. The dynamical coupling of meddies and cyclones was observed for a considerable period of time. High-resolution modeling of the Mediterranean Outflow using a sigma-coordinate primitive equations ocean model (SCRUM) incorporating realistic topography and stratification reveals the adjustment of the salty plume while descending along the continental slope of the Gulf of Cadiz channeled by the topography. The model reproduces the generation of eddies in the two observed sites (cape and canyon) and the splitting of the outflow water into well-defined cores.

  11. Synergistic use of an oil drift model and remote sensing observations for oil spill monitoring.

    Science.gov (United States)

    De Padova, Diana; Mossa, Michele; Adamo, Maria; De Carolis, Giacomo; Pasquariello, Guido

    2017-02-01

    In case of oil spills due to disasters, one of the environmental concerns is the oil trajectories and spatial distribution. To meet these new challenges, spill response plans need to be upgraded. An important component of such a plan would be models able to simulate the behaviour of oil, in terms of trajectories and spatial distribution, if accidentally released in deep water. All these models need to be calibrated with independent observations. The aim of the present paper is to demonstrate that significant support to oil slick monitoring can be obtained by the synergistic use of oil drift models and remote sensing observations. Based on transport properties and weathering processes, oil drift models can indeed predict the fate of spilled oil under the action of water current velocity and wind in terms of oil position, concentration and thickness distribution. The oil spill event that occurred on 31 May 2003 in the Baltic Sea offshore the Swedish and Danish coasts is considered as a case study, with the aim of producing three-dimensional models of sea circulation and oil contaminant transport. The High-Resolution Limited Area Model (HIRLAM) is used for atmospheric forcing. The results of the numerical modelling of current speed and water surface elevation are validated against measurements carried out at the Kalmarsund, Simrishamn and Kungsholmsfort stations over a period of 18 days and 17 h. The oil spill model uses the current field obtained from a circulation model. Near-infrared (NIR) satellite images were compared with numerical simulations. The simulation was able to predict both the oil spill trajectories of the observed slick and the thickness distribution. Therefore, this work shows how oil drift modelling and remotely sensed data can provide the right synergy to reproduce the timing and transport of the oil, and to get reliable estimates of the thicknesses of spilled oil, in order to prepare an emergency plan and to assess the magnitude of risk involved in case of oil spills due to disasters.
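    The drift part of such a model can be reduced to advecting particles by the surface current plus a wind-drift term. The sketch below is a toy version, not the paper's model; the ~3% wind factor is a common rule of thumb, not a value from the study:

```python
import numpy as np

def advect_slick(pos, current, wind, dt, wind_factor=0.03, n_steps=24):
    """Advance an oil-particle position under surface current plus a
    wind-drift term. pos in m, current and wind in m/s; the 3% wind
    factor is a rule-of-thumb assumption for surface oil."""
    pos = np.array(pos, dtype=float)
    drift = np.asarray(current) + wind_factor * np.asarray(wind)
    for _ in range(n_steps):
        pos += drift * dt
    return pos

start = (0.0, 0.0)
current = (0.1, 0.05)       # m/s eastward/northward
wind = (5.0, 0.0)           # m/s
end = advect_slick(start, current, wind, dt=3600.0)  # 24 hourly steps
print(end / 1000.0)  # displacement in km after one day
```

    Real drift models add turbulent diffusion, spreading and weathering on top of this advection step.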

  12. Root traits explain observed tundra vegetation nitrogen uptake patterns: Implications for trait-based land models

    Science.gov (United States)

    Zhu, Qing; Iversen, Colleen M.; Riley, William J.; Slette, Ingrid J.; Vander Stel, Holly M.

    2016-12-01

    Ongoing climate warming will likely perturb vertical distributions of nitrogen availability in tundra soils through enhancing nitrogen mineralization and releasing previously inaccessible nitrogen from frozen permafrost soil. However, arctic tundra responses to such changes are uncertain, because of a lack of vertically explicit nitrogen tracer experiments and untested hypotheses of root nitrogen uptake under the stress of microbial competition implemented in land models. We conducted a vertically explicit 15N tracer experiment for three dominant tundra species to quantify plant N uptake profiles. Then we applied a nutrient competition model (N-COM), which is being integrated into the ACME Land Model, to explain the observations. Observations using the 15N tracer showed that plant N uptake profiles were not consistently related to root biomass density profiles, which challenges the prevailing hypothesis that root density always exerts first-order control on N uptake. By considering essential root traits (e.g., biomass distribution and nutrient uptake kinetics) with an appropriate plant-microbe nutrient competition framework, our model reasonably reproduced the observed patterns of plant N uptake. In addition, we show that previously applied nutrient competition hypotheses in Earth System Land Models fail to explain the diverse plant N uptake profiles we observed. Our results cast doubt on current climate-scale model predictions of arctic plant responses to elevated nitrogen supply under a changing climate and highlight the importance of considering essential root traits in large-scale land models. Finally, we provided suggestions and a short synthesis of data availability for future trait-based land model development.
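    The "nutrient uptake kinetics" root trait can be made concrete with Michaelis-Menten uptake plus a simple plant-microbe competition rule. This is an illustrative sketch with invented parameter values, not the N-COM scheme itself:

```python
def nutrient_uptake(n_avail, root_biomass, microbe_biomass,
                    vmax_plant=0.1, km_plant=0.5,
                    vmax_mic=0.3, km_mic=0.2):
    """Michaelis-Menten uptake by roots competing with microbes for
    mineral N in one soil layer; all parameter values are illustrative."""
    plant_demand = vmax_plant * root_biomass * n_avail / (km_plant + n_avail)
    mic_demand = vmax_mic * microbe_biomass * n_avail / (km_mic + n_avail)
    total = plant_demand + mic_demand
    if total <= n_avail:
        return plant_demand, mic_demand
    # Scale both consumers down when combined demand exceeds supply
    scale = n_avail / total
    return plant_demand * scale, mic_demand * scale

# Deep layer: high N (e.g., thawed permafrost) but little root biomass
plant_n, mic_n = nutrient_uptake(n_avail=2.0, root_biomass=0.2, microbe_biomass=1.0)
print(round(plant_n, 3), round(mic_n, 3))
```

    Because uptake depends on the kinetic parameters and on microbial demand, not on root biomass alone, such a scheme can produce uptake profiles that deviate from the root density profile — the behaviour the tracer data revealed.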

  13. Broad range of 2050 warming from an observationally constrained large climate model ensemble

    Science.gov (United States)

    Rowlands, Daniel J.; Frame, David J.; Ackerley, Duncan; Aina, Tolu; Booth, Ben B. B.; Christensen, Carl; Collins, Matthew; Faull, Nicholas; Forest, Chris E.; Grandey, Benjamin S.; Gryspeerdt, Edward; Highwood, Eleanor J.; Ingram, William J.; Knight, Sylvia; Lopez, Ana; Massey, Neil; McNamara, Frances; Meinshausen, Nicolai; Piani, Claudio; Rosier, Suzanne M.; Sanderson, Benjamin M.; Smith, Leonard A.; Stone, Dáithí A.; Thurston, Milo; Yamazaki, Kuniko; Hiro Yamazaki, Y.; Allen, Myles R.

    2012-04-01

    Incomplete understanding of three aspects of the climate system--equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing--and the physical processes underlying them lead to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century. Explorations of these uncertainties have so far relied on scaling approaches, large ensembles of simplified climate models, or small ensembles of complex coupled atmosphere-ocean general circulation models which under-represent uncertainties in key climate system properties derived from independent sources. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere-ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4-3K by 2050, relative to 1961-1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report, but extends towards larger warming than observed in ensembles-of-opportunity typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range `no mitigation' scenario for greenhouse-gas emissions.
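    The key step — retaining only ensemble members that reproduce observed surface temperature changes, then reading the projection range off the survivors — can be sketched generically. The numbers below are an invented stand-in ensemble, not the paper's GCM runs:

```python
import numpy as np

def constrain_ensemble(hindcasts, projections, obs, rmse_max=0.1):
    """Keep perturbed-physics members whose hindcast RMSE against the
    observed record is below rmse_max, and report the 5-95% range of
    the surviving members' projections."""
    rmse = np.sqrt(np.mean((hindcasts - obs) ** 2, axis=1))
    keep = rmse < rmse_max
    lo, hi = np.percentile(projections[keep], [5, 95])
    return int(keep.sum()), (lo, hi)

# Synthetic stand-in ensemble (invented numbers)
rng = np.random.default_rng(42)
n_members, n_years = 1000, 50
obs = np.linspace(0.0, 0.6, n_years)                 # observed warming (K)
bias = rng.normal(0.0, 0.1, size=(n_members, 1))     # per-member model error
hindcasts = obs + bias + rng.normal(0.0, 0.05, size=(n_members, n_years))
projections = 2.0 + 10.0 * np.abs(bias[:, 0]) + rng.normal(0.0, 0.2, n_members)
n_kept, (lo, hi) = constrain_ensemble(hindcasts, projections, obs)
print(n_kept, round(lo, 1), round(hi, 1))
```

    The surviving range is the observationally constrained projection spread; with a multi-thousand-member ensemble this range can be wider than that of small ensembles-of-opportunity.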

  14. Renormalization group running of fermion observables in an extended non-supersymmetric SO(10) model

    Science.gov (United States)

    Meloni, Davide; Ohlsson, Tommy; Riad, Stella

    2017-03-01

    We investigate the renormalization group evolution of fermion masses, mixings and quartic scalar Higgs self-couplings in an extended non-supersymmetric SO(10) model, where the Higgs sector contains the 10 H, 120 H, and 126 H representations. The group SO(10) is spontaneously broken at the GUT scale to the Pati-Salam group and subsequently to the Standard Model (SM) at an intermediate scale M I. We explicitly take into account the effects of the change of gauge groups in the evolution. In particular, we derive the renormalization group equations for the different Yukawa couplings. We find that the computed physical fermion observables can be successfully matched to the experimental measured values at the electroweak scale. Using the same Yukawa couplings at the GUT scale, the measured values of the fermion observables cannot be reproduced with a SM-like evolution, leading to differences in the numerical values up to around 80%. Furthermore, a similar evolution can be performed for a minimal SO(10) model, where the Higgs sector consists of the 10 H and 126 H representations only, showing an equally good potential to describe the low-energy fermion observables. Finally, for both the extended and the minimal SO(10) models, we present predictions for the three Dirac and Majorana CP-violating phases as well as three effective neutrino mass parameters.
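    The evolution described here follows the generic structure of one-loop Yukawa renormalization group equations. Schematically (the coefficients below are placeholders, not the paper's equations, whose form changes with the gauge group on each side of the intermediate scale M_I):

```latex
% Schematic one-loop RGE for a Yukawa matrix Y_f, with t = ln(mu).
% a, b_i and the Yukawa trace T are group- and model-dependent
% placeholders, changing at the intermediate scale M_I.
16\pi^{2}\,\frac{\mathrm{d}Y_f}{\mathrm{d}t}
  \;=\; Y_f\!\left(a\,T \;-\; \sum_i b_i\,g_i^{2}\right)
  \;+\; (\text{model-dependent Yukawa terms}),
\qquad
T \;=\; \operatorname{Tr}\!\left(3\,Y_u^{\dagger}Y_u + 3\,Y_d^{\dagger}Y_d
  + Y_e^{\dagger}Y_e + \cdots\right).
```

    Because the gauge couplings g_i and the coefficients differ between the Pati-Salam and SM phases, running the same GUT-scale Yukawas with a purely SM-like evolution shifts the low-energy observables, which is the up-to-80% discrepancy the abstract reports.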

  15. Observational semantics of the Prolog Resolution Box Model

    CERN Document Server

    Deransart, Pierre; Ferrand, Gérard

    2007-01-01

    This paper specifies an observational semantics and gives an original presentation of the Byrd box model. The approach accounts for the semantics of Prolog tracers independently of a particular Prolog implementation. Prolog traces are, in general, considered as rather obscure and difficult to use. The proposed formal presentation of its trace constitutes a simple and pedagogical approach for teaching Prolog or for implementing Prolog tracers. It is a form of declarative specification for the tracers. The trace model introduced here is only one example to illustrate general problems relating to tracers and observing processes. Observing processes know, from observed processes, only their traces. The issue is then to be able to reconstitute, by the sole analysis of the trace, part of the behaviour of the observed process, and if possible, without any loss of information. As a matter of fact, our approach highlights qualities of the Prolog resolution box model which made its success, but also its insufficiencies...

  16. Shortening the learning curve in endoscopic endonasal skull base surgery: a reproducible polymer tumor model for the trans-sphenoidal trans-tubercular approach to retro-infundibular tumors.

    Science.gov (United States)

    Berhouma, Moncef; Baidya, Nishanta B; Ismaïl, Abdelhay A; Zhang, Jun; Ammirati, Mario

    2013-09-01

    Endoscopic endonasal skull base surgery attracts an increasing number of young neurosurgeons. This recent technique requires specific technical skills for the approaches to non-pituitary tumors (expanded endoscopic endonasal surgery). Residents' busy schedules risk compromising their laboratory training by significantly limiting the time dedicated to dissections. To enhance and shorten the learning curve in expanded endoscopic endonasal skull base surgery, we propose a reproducible model based on the implantation of a polymer via an intracranial route to provide a pathological retro-infundibular expansive lesion accessible through a virgin expanded endoscopic endonasal route, avoiding the ethically debatable need for hundreds of pituitary cases in live patients before acquiring the desired skills. A polymer-based tumor model was implanted in 6 embalmed human heads via a microsurgical right fronto-temporal approach through the carotido-oculomotor cistern to mimic a retro-infundibular tumor. The tumor's position was verified by CT scan. An endoscopic endonasal trans-sphenoidal trans-tubercular trans-planum approach was then carried out on a virgin route under neuronavigation tracking. Dissection of the tumor model from the displaced surrounding neurovascular structures reproduced the sensations and challenges of live surgery. Post-implantation CT scanning allowed pre-removal assessment of the tumor insertion and its relationships, as well as of the naso-sphenoidal anatomy, in preparation for the endoscopic approach. Training on easily reproducible retro-infundibular approaches in a context of pathologically distorted anatomy provides a unique opportunity to avoid the need for repetitive live surgeries to acquire skills for this kind of rare tumor, and may shorten the learning curve for endoscopic endonasal surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observations of visible and microwave radiation and geophysical data are assembled and subdivided into…

  18. Time-symmetric universe model and its observational implication

    Energy Technology Data Exchange (ETDEWEB)

    Futamase, T.; Matsuda, T.

    1987-08-01

    A time-symmetric closed-universe model is discussed in terms of the radiation arrow of time. The time symmetry requires the occurrence of advanced waves in the recontracting phase of the Universe. We consider the observational consequences of such advanced waves, and it is shown that a test observer in the expanding phase can observe a time-reversed image of a source of radiation in the future recontracting phase.

  19. How well do environmental archives of atmospheric mercury deposition in the Arctic reproduce rates and trends depicted by atmospheric models and measurements?

    Science.gov (United States)

    Goodsite, M E; Outridge, P M; Christensen, J H; Dastoor, A; Muir, D; Travnikov, O; Wilson, S

    2013-05-01

    This review compares the reconstruction of atmospheric Hg deposition rates and historical trends over recent decades in the Arctic, inferred from Hg profiles in natural archives such as lake and marine sediments, peat bogs and glacial firn (permanent snowpack), against those predicted by three state-of-the-art atmospheric models based on global Hg emission inventories from 1990 onwards. Model veracity was first tested against atmospheric Hg measurements. Most of the natural archive and atmospheric data came from the Canadian-Greenland sectors of the Arctic, whereas spatial coverage was poor in other regions. In general, for the Canadian-Greenland Arctic, models provided good agreement with atmospheric gaseous elemental Hg (GEM) concentrations and trends measured instrumentally. However, there are few instrumented deposition data with which to test the model estimates of Hg deposition, and these data suggest models over-estimated deposition fluxes under Arctic conditions. Reconstructed GEM data from glacial firn on Greenland Summit showed the best agreement with the known decline in global Hg emissions after about 1980, and were corroborated by archived aerosol filter data from Resolute, Nunavut. The relatively stable or slowly declining firn and model GEM trends after 1990 were also corroborated by real-time instrument measurements at Alert, Nunavut, after 1995. However, Hg fluxes and trends in northern Canadian lake sediments and a southern Greenland peat bog did not exhibit good agreement with model predictions of atmospheric deposition since 1990, the Greenland firn GEM record, direct GEM measurements, or trends in global emissions since 1980. Various explanations are proposed to account for these discrepancies between atmosphere and archives, including problems with the accuracy of archive chronologies, climate-driven changes in Hg transfer rates from air to catchments, waters and subsequently into sediments, and post-depositional diagenesis in peat bogs.
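    Comparing archive-derived and modelled deposition histories usually comes down to putting both series on a common footing, e.g. as a percent trend per decade. A minimal sketch with hypothetical series (not data from the review):

```python
import numpy as np

def decadal_trend(years, values):
    """Least-squares linear trend expressed as percent change per
    decade relative to the series mean -- one simple way to compare
    archive fluxes with modelled deposition."""
    slope = np.polyfit(years, values, 1)[0]
    return 100.0 * 10.0 * slope / np.mean(values)

years = np.arange(1990, 2006)
# Hypothetical series: modelled deposition declining slowly,
# sediment-core flux essentially flat
model_dep = 10.0 - 0.05 * (years - 1990)
sediment = np.full(years.size, 9.0)
print(round(decadal_trend(years, model_dep), 1),
      round(decadal_trend(years, sediment), 1))
```

    A mismatch of this kind (declining model, flat archive) is exactly the sort of discrepancy the review attributes to chronology errors, catchment transfer changes or diagenesis.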

  20. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    textabstractModel selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.
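    One simple way to operationalise "observations possibly influential for model selection" is a case-deletion diagnostic: refit the candidate models with each observation left out and flag the observations whose removal flips the selection criterion. The sketch below (not the paper's method) uses AIC to choose between a linear and a quadratic regression:

```python
import numpy as np

def aic_linear(X, y):
    """AIC of an ordinary-least-squares fit with Gaussian errors
    (k regression parameters plus the noise variance)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (k + 1)

def influential_for_selection(X_a, X_b, y):
    """Indices whose deletion flips the AIC-based choice between two
    candidate design matrices -- a simple case-deletion diagnostic."""
    full_choice = aic_linear(X_a, y) <= aic_linear(X_b, y)
    flips = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        choice = aic_linear(X_a[keep], y[keep]) <= aic_linear(X_b[keep], y[keep])
        if choice != full_choice:
            flips.append(i)
    return flips

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 60)
y[0] += 4.0                                       # a single gross outlier
X_lin = np.column_stack([np.ones(60), x])         # linear model
X_quad = np.column_stack([np.ones(60), x, x**2])  # quadratic model
flips = influential_for_selection(X_lin, X_quad, y)
print(flips)  # observations whose removal changes the selected model
```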

  1. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2014-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m•min−1 cutting speed and 0… a built-up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognise systematic error distorting the performance test.

  2. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2012-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m∙min-1 cutting speed and 0… a built-up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognize systematic error distorting the performance test.

  3. Evaluating Heating and Moisture Variability Associated with MJO Events in a Low-Dimension Dynamic Model with Observations and Reanalyses

    Science.gov (United States)

    Stachnik, J. P.; Waliser, D. E.; Majda, A.; Stechmann, S.

    2013-12-01

    The Madden-Julian Oscillation (MJO) is the leading mode of intraseasonal variability in the tropics. Despite its importance toward determining large-scale variability at both low-latitudes and the extratropics, MJO prediction often suffers from low skill regarding event initiation and its overall simulation remains a challenge in many global climate models (GCMs). The MJO 'skeleton' model, originally developed by Majda and Stechmann (2009), is a new low-order dynamic model that is capable of reproducing several salient features of the MJO despite its many simplifications relative to higher-order GCMs. Among their successes, the newest version of the skeleton model is able to reproduce the intermittent generation of MJO events, including organization into MJO wave trains that experience both growth and decay. This study presents an analysis of initiation events in the skeleton model. Higher-order features, such as the organization into wave trains, are examined herein and we document the simulated heating and moisture variability in the model compared to satellite-derived observations and reanalyses. We likewise present a composite analysis of the thermodynamic fields related to the initiation of primary and successive MJO events, as well as those precursor conditions leading to quiescent periods of the MJO. Time permitting, we also evaluate the multi-scale structures and other aspects of heating and moisture variability in the model (e.g., asymmetry between active and inactive periods) compared to distributions of integrated heating and moisture in Tropical Rainfall Measuring Mission (TRMM) observations and other datasets.
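    The composite analysis of precursor conditions mentioned above is, operationally, a lag composite: averaging a field in a window around each event's onset time. A toy sketch with a synthetic "moisture" series (invented data, not TRMM or the skeleton model):

```python
import numpy as np

def composite(field, event_indices, window=5):
    """Mean of a time series in a +/-window around each event index --
    the basic operation behind lag composites of MJO initiation."""
    lags = np.arange(-window, window + 1)
    segs = np.array([field[i - window:i + window + 1]
                     for i in event_indices
                     if window <= i < len(field) - window])
    return lags, segs.mean(axis=0)

rng = np.random.default_rng(5)
field = rng.normal(0, 0.1, 400)     # background "moisture" noise
events = [50, 150, 250, 350]
for e in events:
    field[e - 3:e] += 1.0           # moistening builds up 3 steps before onset
lags, comp = composite(field, events)
print(lags[np.argmax(comp)])        # peak moistening leads the onset
```

    Averaging over events suppresses the noise and isolates the precursor signal; the same operation, applied to observed and simulated fields, allows the thermodynamic comparison the abstract describes.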

  4. Model-independent inference on compact-binary observations

    Science.gov (United States)

    Mandel, Ilya; Farr, Will M.; Colonna, Andrea; Stevenson, Simon; Tiňo, Peter; Veitch, John

    2017-03-01

    The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multidimensional parameter space of observations that are subject to significant measurement errors. We apply this procedure to a mock data set of population-synthesis predictions for the masses of merging compact binaries convolved with realistic measurement uncertainties, and demonstrate that we can accurately distinguish subpopulations of binary neutron stars, binary black holes, and mixed neutron star-black hole binaries with tens of observations.
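    Clustering noisy compact-binary parameters into subpopulations can be illustrated with a minimal expectation-maximisation fit of a two-component Gaussian mixture. This is a toy 1-D stand-in (invented masses, hand-rolled EM), far simpler than the paper's multidimensional, measurement-error-aware procedure:

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture; a toy
    stand-in for clustering compact-binary mass estimates into
    subpopulations. Initialised at the data extremes."""
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: responsibility of each component for each sample
        pdf = (w / (sigma * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M step: re-estimate weights, means and spreads
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return np.sort(mu)

rng = np.random.default_rng(7)
# Toy "catalogue": neutron stars near 1.4 Msun, black holes near 30 Msun
masses = np.concatenate([rng.normal(1.4, 0.15, 100), rng.normal(30.0, 5.0, 100)])
centers = em_two_gaussians(masses)
print(np.round(centers, 1))
```

    The paper's clustering additionally propagates realistic measurement uncertainties on each event, which a plain mixture fit like this ignores.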

  5. Theoretical Modeling and Computer Simulations for the Origins and Evolution of Reproducing Molecular Systems and Complex Systems with Many Interactive Parts

    Science.gov (United States)

    Liang, Shoudan

    2000-01-01

    Our research effort has produced nine publications in peer-reviewed journals listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) an autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) the transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.

  6. Hints on halo evolution in SFDM models with galaxy observations

    CERN Document Server

    Gonzalez-Morales, Alma X; Urena-Lopez, L Arturo; Valenzuela, Octavio

    2012-01-01

    A massive, self-interacting scalar field has been considered as a possible candidate for the dark matter in the universe. We present an observational constraint on the model arising from strong lensing observations in galaxies. The result points to a discrepancy in the properties of scalar field dark matter halos for dwarf and lens galaxies, mainly because halo parameters are directly related to physical quantities in the model. This is an important indication that it becomes necessary to have a better understanding of halo evolution in scalar field dark matter models, where the presence of baryons can play an important role.

  7. Consistent negative response of US crops to high temperatures in observations and crop models

    Science.gov (United States)

    Schauberger, Bernhard; Archontoulis, Sotirios; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Elliott, Joshua; Folberth, Christian; Khabarov, Nikolay; Müller, Christoph; Pugh, Thomas A. M.; Rolinski, Susanne; Schaphoff, Sibyll; Schmid, Erwin; Wang, Xuhui; Schlenker, Wolfram; Frieler, Katja

    2017-04-01

    High temperatures are detrimental to crop yields and could lead to global warming-driven reductions in agricultural productivity. To assess future threats, the majority of studies used process-based crop models, but their ability to represent effects of high temperature has been questioned. Here we show that an ensemble of nine crop models reproduces the observed average temperature responses of US maize, soybean and wheat yields. Each day above 30°C diminishes maize and soybean yields by up to 6% under rainfed conditions. Declines observed in irrigated areas, or simulated assuming full irrigation, are weak. This supports the hypothesis that water stress induced by high temperatures causes the decline. For wheat a negative response to high temperature is neither observed nor simulated under historical conditions, since critical temperatures are rarely exceeded during the growing season. In the future, yields are modelled to decline for all three crops at temperatures above 30°C. Elevated CO2 can only weakly reduce these yield losses, in contrast to irrigation.
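The abstract's headline figure (each day above 30 °C reducing rainfed maize and soybean yields by up to 6%) can be turned into a simple exposure metric. The sketch below is illustrative only, assuming a list of daily maximum temperatures for the growing season; the function names and the multiplicative form of the loss are our own choices, not the crop models used in the study.

```python
def days_above(tmax_daily, threshold=30.0):
    """Count growing-season days whose maximum temperature exceeds the threshold."""
    return sum(1 for t in tmax_daily if t > threshold)

def relative_yield(n_hot_days, loss_per_day=0.06):
    """Multiplicative yield factor; 0.06 is the paper's upper-bound loss per hot day."""
    return (1.0 - loss_per_day) ** n_hot_days

# e.g. a short rainfed season with five days above 30 degrees C
hot = days_above([28.5, 31.0, 33.2, 29.9, 30.1, 34.7, 32.3])
factor = relative_yield(hot)
```

Under full irrigation the abstract reports only weak declines, which in this toy metric would correspond to a much smaller `loss_per_day`.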

  9. The Many Manifestations of Downsizing: Hierarchical Galaxy Formation Models confront Observations

    CERN Document Server

    Fontanot, Fabio; Monaco, Pierluigi; Somerville, Rachel S; Santini, Paola

    2009-01-01

    [abridged] It has been widely claimed that several lines of observational evidence point towards a "downsizing" (DS) of the process of galaxy formation over cosmic time. This behavior is sometimes termed "anti-hierarchical", and contrasted with the "bottom-up" assembly of the dark matter structures in Cold Dark Matter models. In this paper we address three different kinds of observational evidence that have been described as DS: the stellar mass assembly, star formation rate and the ages of the stellar populations in local galaxies. We compare a broad compilation of available data-sets with the predictions of three different semi-analytic models of galaxy formation within the Lambda-CDM framework. In the data, we see only weak evidence at best of DS in stellar mass and in star formation rate. We find that, when observational errors on stellar mass and SFR are taken into account, the models acceptably reproduce the evolution of massive galaxies, over the entire redshift range that we consider. However, lower m...

  10. Reproducing an extreme flood with uncertain post-event information

    Science.gov (United States)

    Fuentes-Andino, Diana; Beven, Keith; Halldin, Sven; Xu, Chong-Yu; Reynolds, José Eduardo; Di Baldassarre, Giuliano

    2017-07-01

Studies for the prevention and mitigation of floods require information on discharge and extent of inundation, which is commonly unavailable or uncertain, especially during extreme events. This study was initiated by the devastating flood in Tegucigalpa, the capital of Honduras, when Hurricane Mitch struck the city. We hypothesized that it is possible to estimate, in a trustworthy way considering large data uncertainties, the discharge of this extreme 1998 flood and the extent of the inundations that followed from a combination of models and post-event measured data. Post-event data collected in 2000 and 2001 were used to estimate discharge peaks, times of peak, and high-water marks. These data were used in combination with rain data from two gauges to drive and constrain a combination of well-known modelling tools: TOPMODEL, Muskingum-Cunge-Todini routing, and the LISFLOOD-FP hydraulic model. Simulations were performed within the generalized likelihood uncertainty estimation (GLUE) uncertainty-analysis framework. The model combination predicted peak discharge, times of peaks, and more than 90 % of the observed high-water marks within the uncertainty bounds of the evaluation data. This allowed an inundation likelihood map to be produced. Observed high-water marks could not be reproduced at a few locations on the floodplain. Identification of these locations is useful to improve model set-up, model structure, or post-event data-estimation methods. Rainfall data were of central importance in simulating the times of peak, and results would be improved by a better spatial assessment of rainfall, e.g. from radar data or a denser rain-gauge network. Our study demonstrated that it was possible, considering the uncertainty in the post-event data, to reasonably reproduce the extreme Mitch flood in Tegucigalpa in spite of no hydrometric gauging during the event. 
The method proposed here can be part of a Bayesian framework in which more events can be added into the analysis as
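The GLUE framework used above retains "behavioural" parameter sets whose informal likelihood passes a threshold and weights their predictions by those likelihoods. A minimal sketch, assuming each simulation is summarized by a single positive error score; the inverse-error likelihood and the function names are illustrative assumptions, not the study's actual choices.

```python
def glue_weights(errors, threshold):
    """Informal GLUE likelihoods: zero for non-behavioural runs, inverse-error
    for behavioural ones, normalized so the weights sum to one."""
    liks = [1.0 / e if e < threshold else 0.0 for e in errors]
    total = sum(liks)
    return [l / total for l in liks] if total > 0 else liks

# three simulations; the second exceeds the error threshold and is discarded
weights = glue_weights([0.5, 2.0, 0.25], threshold=1.0)
```

Prediction bounds (e.g. for an inundation likelihood map) then come from weighted quantiles of the behavioural simulations.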

  11. Observational Constraints on a Variable Dark Energy Model

    CERN Document Server

    Movahed, M S; Movahed, Mohammad Sadegh; Rahvar, Sohrab

    2006-01-01

We present cosmological tests for a phenomenological parametrization of a quintessence model with a time-varying equation of state against low, intermediate and high redshift observations. We study the sensitivity of the comoving distance and volume element, with the Alcock-Paczynski test, to the time-varying model of dark energy. At intermediate redshifts, the Gold supernova Type Ia data are used to fit the quintessence model to the observed distance modulus. The value of the acoustic angular scale observed by the WMAP experiment is also compared with the model. The combined result of CMB and SNIa data confines $w=p/\rho$ to be more than -1.3, which can violate the dominant energy condition.

  12. Modelled radiative forcing of the direct aerosol effect with multi-observation evaluation

    Directory of Open Access Journals (Sweden)

    G. Myhre

    2009-02-01

A high-resolution global aerosol model (Oslo CTM2), driven by meteorological data and allowing comparison with a variety of aerosol observations, is used to simulate the radiative forcing (RF) of the direct aerosol effect. The model simulates all main aerosol components, including several secondary components such as nitrate and secondary organic carbon. The model reproduces the main chemical composition and size features observed during large aerosol campaigns. Although the chemical composition compares best with ground-based measurements over land for modelled sulphate, no systematic differences are found for other compounds. The modelled aerosol optical depth (AOD) is compared to remotely sensed data from AERONET ground stations and MODIS and MISR satellite retrievals. To gain confidence in the aerosol modelling, we have tested its ability to reproduce daily variability in the aerosol content; it performs well in many regions, but we also identified some locations where model improvements are needed. The annual mean regional pattern of AOD from the aerosol model is broadly similar to the AERONET and satellite retrievals (mostly within 10–20%). We notice a significant improvement from MODIS Collection 4 to Collection 5 compared to AERONET data. Satellite-derived estimates of the aerosol radiative effect over ocean for clear-sky conditions differ significantly on regional scales (by almost a factor of two), but also in the global mean. The Oslo CTM2 has an aerosol radiative effect close to the mean of the satellite-derived estimates. We derive a radiative forcing (RF) of the direct aerosol effect of −0.35 W m−2 in our base case. Implementation of a simple approach to consider internal black carbon (BC) mixture results in a total RF of −0.28 W m−2. Our results highlight the importance of carbonaceous particles, producing stronger individual RF than considered in the recent IPCC estimate; however, net RF is less different

  13. How useful are stream level observations for model calibration?

    Science.gov (United States)

    Seibert, Jan; Vis, Marc; Pool, Sandra

    2014-05-01

Streamflow estimation in ungauged basins is especially challenging in data-scarce regions, and it might be reasonable to take at least a few measurements. Recent studies demonstrated that a few streamflow measurements, representing data that could be collected with limited effort in an ungauged basin, might be sufficient to constrain runoff models for simulations in ungauged basins. While in these previous studies we assumed that a few streamflow measurements were taken at different points in time over one year, it would obviously also be reasonable to measure stream levels. Several approaches could be used in practice for such stream level observations: water level loggers have become less expensive and easier to install and can be used to obtain continuous stream level time series; stream levels will in the near future be increasingly available from satellite remote sensing, resulting in evenly spaced time series; community-based approaches (e.g., crowdhydrology.org), finally, can offer level observations at irregular time intervals. Here we present a study where a catchment runoff model (the HBV model) was calibrated for gauged basins in Switzerland assuming that only a subset of the data was available. We pretended that only stream level observations at different time intervals, representing the temporal resolution of the different observation approaches mentioned before, and a small number of streamflow observations were available. The model, which was calibrated based on these data subsets, was then evaluated on the full observed streamflow record. Our results indicate that stream level data alone can already provide surprisingly good model simulation results, which can be further improved by combination with one streamflow observation. The surprisingly good results with only stream level time series can be explained by the relatively high precipitation in the studied catchments. 
Constructing a hypothetical catchment with reduced precipitation resulted in poorer
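When only stream levels are available, one common trick (an assumption here, not necessarily the exact objective function of this study) is to calibrate on the Spearman rank correlation between simulated flow and observed level: the unknown rating curve is monotonic, so ranks are preserved even though the values are not comparable.

```python
def ranks(xs):
    """0-based rank positions; ties are not handled in this minimal sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(sim_flow, obs_level):
    """Spearman rank correlation: insensitive to the unknown level-discharge curve."""
    rs, ro = ranks(sim_flow), ranks(obs_level)
    n = len(rs)
    d2 = sum((a - b) ** 2 for a, b in zip(rs, ro))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A level series that is any monotone transform of the simulated flow scores 1.0, which is why level data alone can constrain the model's dynamics but not its volume scale.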

  14. Use of observed snow in the Snomod model

    Science.gov (United States)

    Sorteberg, H. K.

    2009-04-01

For the hydroelectric industry in Norway, it is important to know exactly what resources are available at all times. A correct estimate of snow reserves and accurate forecasting of the spring flood volume provide the best basis for maximising production value. The forward market can fluctuate considerably, and it is therefore important to have the right information at the right time. For many years, the Snomod model has been used to calculate snow reserves and to forecast the spring flood volume. Snomod is based on a regression equation between annual observations of inflow and one or more precipitation series. Manual snow measurements are used in Snomod, the HBV model and other models to estimate the correct snow reserves. In operational use, Snomod is updated manually with the snow estimate that is considered correct. Following the winter of 2007-2008, analyses were carried out to determine how accurate the forecasting was. The analyses were based on comparing the forecast spring flood volume with the observed spring flood volume using the 'observed precipitation' scenario. Such analyses can tell us something about the quality of the model results for this winter. Analyses were carried out for 18 models using Snomod. When the results are compared with the observed spring floods, the spring flood volume was forecast accurately for most of the models with observed precipitation when observed snow was used in the forecasting process. The results indicate that nine of the models are very good, five are good and two are reasonable. Only one model produced a poor forecast of the spring flood volume. If a corresponding analysis is carried out without correction for observed snow, and the observed spring flood is compared with the forecast spring flood, the results are not as good. 
This may stem from the fact that during the spring of 2008 there were higher levels of evaporation during the melting season than

  15. The role of observational uncertainties in testing model hypotheses

    Science.gov (United States)

    Westerberg, I. K.; Birkel, C.

    2012-12-01

Knowledge about hydrological processes and the spatial and temporal distribution of water resources is needed as a basis for managing water for hydropower, agriculture and flood protection. Conceptual hydrological models may be used to infer knowledge on catchment functioning but are affected by uncertainties in the model representation of reality as well as in the observational data used to drive the model and to evaluate model performance. Therefore, meaningful hypothesis testing of the hydrological functioning of a catchment requires such uncertainties to be carefully estimated and accounted for in model calibration and evaluation. The aim of this study was to investigate the role of observational uncertainties in hypothesis testing, in particular whether it was possible to detect model-structural representations that were wrong in an important way given the uncertainties in the observational data. We studied the relatively data-scarce tropical Sarapiqui catchment in Costa Rica, Central America, where water resources play a vital role in hydropower production and livelihoods. We tested several model structures of varying complexity as hypotheses about catchment functioning, but also hypotheses about the nature of the modelling errors. The tests were made within a learning framework for uncertainty estimation, which enabled insights into data uncertainties, suitable model-structural representations and appropriate likelihoods. The observational uncertainty in discharge data was estimated from a rating-curve analysis, and precipitation measurement errors through scenarios relating the error to, for example, canopy interception, wind-driven rain and the elevation gradient. The hypotheses were evaluated in a posterior analysis of the simulations, where the performance of each simulation was analysed relative to the observational uncertainties for the entire hydrograph as well as for different aspects of the hydrograph (e.g. peak flows, recession periods, and base flow

  16. Testing ocean tide models using GGP superconducting gravimeter observations

    Science.gov (United States)

    Baker, T.; Bos, M.

    2003-04-01

    Observations from the global network of superconducting gravimeters in the Global Geodynamics Project (GGP) are used to test 10 ocean tide models (SCHW; FES94.1, 95.2, 98, 99; CSR3.0, 4.0; TPXO.5; GOT99.2b; and NAO.99b). In addition, observations are used from selected sites with LaCoste and Romberg gravimeters with electrostatic feedback, where special attention has been given to achieving a calibration accuracy of 0.1%. In Europe, there are several superconducting gravimeter stations in a relatively small area and this can be used to advantage in testing the ocean (and body) tide models and in identifying sites with anomalous observations. At some of the superconducting gravimeter sites there are anomalies in the in-phase components of the main tidal harmonics, which are due to calibration errors of up to 0.3%. It is shown that the recent ocean tide models are in better agreement with the tidal gravity observations than were the earlier models of Schwiderski and FES94.1. However, no single ocean tide model gives completely satisfactory results in all areas of the world. For example, for M2 the TPXO.5 and NAO99b models give anomalous results in Europe, whereas the FES95.2, FES98 and FES99 models give anomalous results in China and Japan. It is shown that the observations from this improved set of tidal gravity stations will provide an important test of the new ocean tide models that will be developed in the next few years. For further details see Baker, T.F. and Bos, M.S. (2003). "Validating Earth and ocean tide models using tidal gravity measurements", Geophysical Journal International, 152.

  17. Energetic protons at Mars: interpretation of SLED/Phobos-2 observations by a kinetic model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2012-11-01

Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars), where ions are modelled as particles while electrons form a massless charge-neutralizing fluid. The case study shows that the model successfully reproduced several features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars–solar wind interaction significantly modulated the Martian energetic particle environment.

  18. Wind waves in tropical cyclones: satellite altimeter observations and modeling

    Science.gov (United States)

    Golubkin, Pavel; Kudryavtsev, Vladimir; Chapron, Bertrand

    2016-04-01

altimeter measurements, together with TC intensity estimates, are used to assess the proposed formulations. Compared to satellite altimeter measurements, the proposed analytical solutions for the wave energy distribution are in convincing agreement. For an almost symmetrical wind field, the model quantitatively reproduces measured profiles of the wave energy, with significant asymmetry between the wave-containment front-right quadrant and the rear-left quadrant, where wave energy is remarkably damped. Though the differences between parametric-model wind and altimeter wind profiles are noticeable, the energy ratios between the front-right and rear-left quadrants are similar for both wind sources. As analytically developed, the wave enhancement criterion can provide a rapid evaluation to document the general characteristics of each storm, especially the expected wave field asymmetry.

  19. Simulation of the hydrodynamic conditions of the eye to better reproduce the drug release from hydrogel contact lenses: experiments and modeling.

    Science.gov (United States)

    Pimenta, A F R; Valente, A; Pereira, J M C; Pereira, J C F; Filipe, H P; Mata, J L G; Colaço, R; Saramago, B; Serro, A P

    2016-12-01

Currently, most in vitro drug release studies for ophthalmic applications are carried out under static sink conditions. Although this procedure is simple and useful for comparative studies, it does not adequately describe the drug release kinetics in the eye, given the small tear volume and low flow rates found in vivo. In this work, a microfluidic cell was designed and used to mimic the continuous volumetric flow rate of tear fluid and its low volume. The suitable operation of the cell, in terms of uniformity and symmetry of flux, was verified using a numerical model based on the Navier-Stokes and continuity equations. The release profile of a model system (a hydroxyethyl methacrylate-based hydrogel (HEMA/PVP) for soft contact lenses (SCLs) loaded with diclofenac) obtained with the microfluidic cell was compared with that obtained under static conditions, showing that the kinetics of release under dynamic conditions is slower. The application of the numerical model demonstrated that the designed cell can be used to simulate drug release over the whole range of human eye tear film volumes and allowed estimation of the drug concentration in the volume of liquid in direct contact with the hydrogel. Knowledge of this concentration, which differs significantly from that measured in the experimental tests during the first hours of release, is critical to predicting the toxicity of the drug release system and its in vivo efficacy. In conclusion, the use of the microfluidic cell in conjunction with the numerical model should be a valuable tool to design and optimize new therapeutic drug-loaded SCLs.
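The concentration in the small, continuously renewed tear volume can be illustrated with a well-mixed compartment balance, dC/dt = R/V − (Q/V)·C, which relaxes to the steady state C = R/Q. This is a deliberately crude stand-in for the paper's microfluidic/CFD analysis, and all parameter names and values below are hypothetical.

```python
def tear_film_concentration(release_rate, flow_rate, volume, dt, steps):
    """Forward-Euler integration of a well-mixed compartment:
    dC/dt = (release_rate - flow_rate * C) / volume."""
    c = 0.0
    for _ in range(steps):
        c += dt * (release_rate - flow_rate * c) / volume
    return c

# e.g. 1 ug/min released into a 7 uL tear film turned over at 1.2 uL/min
c_steady = 1.0 / 1.2  # analytical steady state R/Q, in ug/uL
```

The time constant V/Q (a few minutes for tear-film-like numbers) is what makes dynamic release so much slower than release into a large static sink.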

  20. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    Science.gov (United States)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
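A toy version of such a weighting scheme for a single univariate target (our own illustrative construction, not the authors' actual procedure): each GCM receives a Gaussian likelihood of the observation, with observational uncertainty added in quadrature, and an explicit "H0" alternative absorbs the remaining probability mass.

```python
import math

def gcm_weights(obs, obs_sigma, model_means, model_sigma, h0_density=1e-3):
    """Normalized weights for each GCM, plus a final weight for the H0
    hypothesis that no GCM is compatible with the observation."""
    var = obs_sigma ** 2 + model_sigma ** 2  # observation error in quadrature
    dens = [math.exp(-0.5 * (obs - m) ** 2 / var) / math.sqrt(2 * math.pi * var)
            for m in model_means]
    dens.append(h0_density)  # flat density assigned to the H0 alternative
    total = sum(dens)
    return [d / total for d in dens]
```

If every GCM sits far from the observation, the fixed H0 density dominates and the procedure flags the whole ensemble as incompatible rather than forcing the weights onto the least-bad model.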

  1. Accessibility and Reproducibility of Stable High-qmin Steady-State Scenarios by q-profile+βN Model Predictive Control

    Science.gov (United States)

    Schuster, E.; Wehner, W.; Holcomb, C. T.; Victor, B.; Ferron, J. R.; Luce, T. C.

    2016-10-01

    The capability of combined q-profile and βN control to enable access to and repeatability of steady-state scenarios for qmin > 1.4 discharges has been assessed in DIII-D experiments. To steer the plasma to the desired state, model predictive control (MPC) of both the q-profile and βN numerically solves successive optimization problems in real time over a receding time horizon by exploiting efficient quadratic programming techniques. A key advantage of this control approach is that it allows for explicit incorporation of state/input constraints to prevent the controller from driving the plasma outside of stability/performance limits and obtain, as closely as possible, steady state conditions. The enabler of this feedback-control approach is a control-oriented model capturing the dominant physics of the q-profile and βN responses to the available actuators. Experiments suggest that control-oriented model-based scenario planning in combination with MPC can play a crucial role in exploring stability limits of scenarios of interest. Supported by the US DOE under DE-SC0010661.

  2. Large barrier, highly uniform and reproducible Ni-Si/4H-SiC forward Schottky diode characteristics: testing the limits of Tung's model

    Science.gov (United States)

    Omar, Sabih U.; Sudarshan, Tangali S.; Rana, Tawhid A.; Song, Haizheng; Chandrashekhar, M. V. S.

    2014-07-01

We report highly ideal (n < 1.1), uniform nickel silicide (Ni-Si)/SiC Schottky barrier diodes (barrier height 1.60-1.67 eV with a standard deviation <2.8%), fabricated on 4H-SiC epitaxial layers grown by chemical vapour deposition. The barrier height was constant over a wide epilayer doping range of 10^14-10^16 cm^-3, apart from a slight decrease consistent with image force lowering. This remarkable uniformity was achieved by careful optimization of the annealing of the Schottky interface to minimize non-idealities that could lead to inhomogeneity. Tung's barrier inhomogeneity model was used to quantify the level of inhomogeneity in the optimized annealed diodes. The estimated 'bulk' barrier height (1.75 eV) was consistent with the Schottky-Mott limit for the Ni-Si/4H-SiC interface, implying an unpinned Fermi level. However, the model could not explain the poor ideality of unoptimized, as-deposited Schottky contacts (n = 1.6-2.5). We show analytically and numerically that only idealities n < 1.21 can be explained by Tung's model, irrespective of material system, indicating that barrier height inhomogeneity is not the only cause of poor ideality in Schottky diodes. To explain this highly non-ideal behaviour, other factors (e.g. interface traps, morphological defects, extrinsic impurities, etc.) need to be considered.
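The ideality factor n that this abstract centres on is the slope of the semilog forward I-V curve, n = (1/V_T)·dV/d(ln I), with thermal voltage V_T = kT/q. A two-point estimator is sketched below; it is illustrative only, since real extractions fit a whole forward-bias range rather than two points.

```python
import math

K_OVER_Q = 8.617333262e-5  # Boltzmann constant over elementary charge, V/K

def ideality_factor(v1, i1, v2, i2, temperature=300.0):
    """Two-point ideality factor from forward-bias (V, I) pairs on the
    exponential part of the diode characteristic."""
    v_t = K_OVER_Q * temperature  # thermal voltage kT/q
    return (v2 - v1) / (v_t * (math.log(i2) - math.log(i1)))
```

Feeding in currents generated from the ideal diode law I ∝ exp(V/(n·V_T)) recovers the n used to generate them, which is how such an extractor can be sanity-checked.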

  4. Characterization of coarse particulate matter in the western United States: a comparison between observation and modeling

    Science.gov (United States)

    Li, R.; Wiedinmyer, C.; Baker, K. R.; Hannigan, M. P.

    2013-02-01

We provide a regional characterization of coarse particulate matter (PM10-2.5) spanning the western United States based on the analysis of measurements from 50 sites reported in the US EPA Air Quality System (AQS) and two state agencies. We found that the observed PM10-2.5 concentrations show significant spatial variability and distinct spatial patterns, associated with the distributions of land use/land cover and soil moisture. The highest concentrations were observed in the southwestern US, where sparse vegetation, shrublands or barren lands dominate with lower soil moistures, whereas the lowest concentrations were observed in areas dominated by grasslands, forest, or croplands with higher surface soil moistures. The observed PM10-2.5 concentrations also show variable seasonal, weekly, and diurnal patterns, indicating a variety of sources and their relative importance at different locations. The observed results were compared to modeled PM10-2.5 concentrations from an annual simulation using the Community Multiscale Air Quality modeling system (CMAQ), which has been designed for regulatory or policy assessments of a variety of pollutants including PM10, which consists of PM10-2.5 and fine particulate matter (PM2.5). The model under-predicts PM10-2.5 observations at 49 of 50 sites, among which 14 sites have annual observation means that are at least five times greater than model means. Model results also fail to reproduce their spatial patterns. Important sources (e.g. pollen, bacteria, fungal spores, and geogenic dust) were not included in the emission inventory used and/or the applied emissions were greatly under-estimated. Unlike the observed patterns, which are more complex, modeled PM10-2.5 concentrations show a similar seasonal, weekly, and diurnal pattern everywhere; the temporal allocations in the modeling system need improvement. 
CMAQ does not include organic materials in PM10-2.5; however, speciation measurements show that organics constitute a significant component

  5. Total cloud cover from satellite observations and climate models

    Directory of Open Access Journals (Sweden)

    P. Probst

    2010-09-01

Global and zonal monthly means of cloud cover fraction for total cloudiness (CF) from the ISCCP D2 dataset are compared to the same quantity produced by the 20th century simulations of 21 climate models from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3) multi-model dataset archived by the Program for Climate Model Diagnosis and Intercomparison (PCMDI). The comparison spans the time frame from January 1984 to December 1999, and the global and zonal averages of CF are studied. The restriction to total cloudiness is imposed by the output of some models, which does not include the 3-D cloud structure. It is shown that the global mean of CF for the PCMDI/CMIP3 models, averaged over the whole period, exhibits considerable variance and generally underestimates the ISCCP value. Very large discrepancies among models, and between models and observations, are found in the polar areas, where both models and satellite observations are less reliable, and especially near Antarctica. For this reason the zonal analysis is focused on the 60° S–60° N latitudinal belt, which includes the tropical area and mid-latitudes. The two hemispheres are analyzed separately to show the variation of the amplitude of the seasonal cycle. Most models overestimate the yearly averaged values of CF over all of the analysed areas, while differences emerge in their ability to capture the amplitude of the seasonal cycle. The models represent, in a qualitatively correct way, the magnitude and the weak sign of the seasonal cycle over the whole geographical domain, but overestimate the strength of the signal in the tropical areas and at mid-latitudes, when taken separately. The interannual variability of the two yearly averages and of the amplitude of the seasonal cycle is greatly underestimated by all models in each area analysed. This work shows that the climate models have a heterogeneous behaviour in simulating the CF over
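Averaging zonal means over a latitude belt such as 60° S-60° N requires area weighting: each band's cloud fraction must be weighted by the cosine of its latitude before averaging, since high-latitude bands cover less area. A minimal sketch (the function name and band centres are illustrative, not from the paper):

```python
import math

def belt_mean(zonal_cf, lats_deg):
    """Area-weighted mean of zonal cloud fractions; weight = cos(latitude)."""
    w = [math.cos(math.radians(lat)) for lat in lats_deg]
    return sum(c * x for c, x in zip(zonal_cf, w)) / sum(w)
```

Skipping the cosine weighting inflates the contribution of polar bands, which is exactly where this comparison finds both models and observations least reliable.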

  6. The Szekeres Swiss Cheese model and the CMB observations

    CERN Document Server

    Bolejko, Krzysztof

    2008-01-01

    This paper presents the application of the Szekeres Swiss Cheese model to observations of the cosmic microwave background (CMB) radiation. It aims to study the CMB temperature fluctuations by means of the exact inhomogeneous Szekeres model. So far the impact of inhomogeneous matter distribution on CMB observations has been almost exclusively studied within linear perturbations of the Friedmann model. However, since the density contrast of cosmic structures is larger than 1, this issue is worth studying with another approach. The Szekeres model is an inhomogeneous, non-symmetrical and exact solution of the Einstein equations. In this model, light propagation and matter evolution can be calculated exactly, without approximations such as a small amplitude of the density contrast. This allows us to examine the impact of light propagation effects on the CMB temperature fluctuations. The results of such analysis show that small-scale, non-linear inhomogeneities introduce - via light propagation effect...

  7. Obs4MIPS: Satellite Observations for Model Evaluation

    Science.gov (United States)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2015-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review recent additions to the obs4MIPs collection, and provide updated download statistics. We will also provide an update on changes to submission and documentation guidelines, the work of the WCRP Data Advisory Council (WDAC) Observations for Model Evaluation Task Team, and engagement with the CMIP6 MIP experiments.

  8. Modeling of diffuse molecular gas applied to HD 102065 observations

    CERN Document Server

    Nehme, Cyrine; Boulanger, Francois; Forets, Guillaume Pineau des; Gry, Cecile

    2008-01-01

    Aims. We model a diffuse molecular cloud present along the line of sight to the star HD 102065. We compare our modeling with observations to test our understanding of physical conditions and chemistry in diffuse molecular clouds. Methods. We analyze an extensive set of spectroscopic observations which characterize the diffuse molecular cloud observed toward HD 102065. Absorption observations provide the extinction curve, H2, C I, CO, CH, and CH+ column densities and excitation. These data are complemented by observations of CII, CO and dust emission. Physical conditions are determined using the Meudon PDR model of UV illuminated gas. Results. We find that all observational results, except column densities of CH, CH+ and H2 in its excited (J > 2) levels, are consistent with a cloud model implying a Galactic radiation field (G~0.4 in Draine's unit), a density of 80 cm-3 and a temperature (60-80 K) set by the equilibrium between heating and cooling processes. To account for excited (J >2) H2 levels column densit...

  9. Observations that polar climate modelers use and want

    Science.gov (United States)

    Kay, J. E.; de Boer, G.; Hunke, E. C.; Bailey, D. A.; Schneider, D. P.

    2012-12-01

    Observations are essential for motivating and establishing improvement in the representation of polar processes within climate models. We believe that explicitly documenting the current methods used to develop and evaluate climate models with observations will help inform and improve collaborations between the observational and climate modeling communities. As such, we will present the current strategy of the Polar Climate Working Group (PCWG) to evaluate polar processes within Community Earth System Model (CESM) using observations. Our presentation will focus primarily on PCWG evaluation of atmospheric, sea ice, and surface oceanic processes. In the future, we hope to expand to include land surface, deep ocean, and biogeochemical observations. We hope our presentation, and a related working document developed by the PCWG (https://docs.google.com/document/d/1zt0xParsFeMYhlihfxVJhS3D5nEcKb8A41JH0G1Ic-E/edit) inspires new and useful interactions that lead to improved climate model representation of polar processes relevant to polar climate.

  10. Is Grannum grading of the placenta reproducible?

    Science.gov (United States)

    Moran, Mary; Ryan, John; Brennan, Patrick C.; Higgins, Mary; McAuliffe, Fionnuala M.

    2009-02-01

    Current ultrasound assessment of placental calcification relies on Grannum grading. The aim of this study was to assess whether this method is reproducible by measuring inter- and intra-observer variation in grading placental images under strictly controlled viewing conditions. Thirty placental images were acquired and digitally saved. Five experienced sonographers independently graded the images on two separate occasions. In order to eliminate any technological factors which could affect data reliability and consistency, all observers reviewed the images at the same time. To optimise viewing conditions, ambient lighting was maintained between 25 and 40 lux, with monitors calibrated to the GSDF standard to ensure consistent brightness and contrast. Kappa (κ) analysis of the grades assigned was used to measure inter- and intra-observer reliability. Intra-observer agreement had a moderate mean κ-value of 0.55, with individual comparisons ranging from 0.30 to 0.86. Two images saved from the same patient, during the same scan, were each graded as I, II and III by the same observer. A mean κ-value of 0.30 (range 0.13 to 0.55) indicated fair inter-observer agreement over the two occasions, and only one image was graded consistently the same by all five observers. The study findings confirm the lack of reproducibility associated with Grannum grading of the placenta despite optimal viewing conditions and highlight the need for new methods of assessing placental health in order to improve neonatal outcomes. Alternative methods for quantifying placental calcification, such as a software-based technique and 3D ultrasound assessment, need to be explored.
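The inter- and intra-observer agreement above is quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch of the statistic, using made-up grades rather than the study's data:

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(grades_a)
    # Observed agreement: fraction of images graded identically.
    p_o = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    # Chance agreement from each rater's marginal grade frequencies.
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    p_e = sum(freq_a[g] * freq_b[g] for g in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Grannum grades (0-III) for 10 images from two observers
obs1 = [0, 1, 1, 2, 2, 3, 3, 1, 2, 0]
obs2 = [0, 1, 2, 2, 1, 3, 3, 1, 3, 0]
print(round(cohens_kappa(obs1, obs2), 2))  # 0.6 -> "moderate" agreement
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance; the study's mean inter-observer κ of 0.30 sits in the conventional "fair" band.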

  11. MHD models compared with Artemis observations at -60 Re

    Science.gov (United States)

    Gencturk Akay, Iklim; Sibeck, David; Angelopoulos, Vassilis; Kaymaz, Zerefsan; Kuznetsova, Maria

    2016-07-01

    The distant magnetotail is one of the least studied regions of the Earth's magnetosphere: compared with the near-Earth dayside and nightside regions, it has been sampled by only a limited number of spacecraft. Since 2011, the ARTEMIS spacecraft have provided an excellent opportunity to study the magnetotail at lunar distances in terms of data quality and parameter space, which in turn encourages modelling studies of the distant magnetotail and efforts to improve magnetotail models at -60 Re. Using ARTEMIS data in the distant magnetotail, we create magnetic field and plasma flow vector maps in different planes, separated by IMF orientation, to understand magnetotail dynamics at this distance. For this study, we use CCMC's Run-on-Request resources for the MHD models, specifically SWMF-BATS-R-US, OpenGGCM, and LFM, and perform the same analysis with the models. Our main purpose in this study is to measure the performance of the MHD models in the distant magnetotail at -60 Re by comparing the model results with ARTEMIS observations; such a comprehensive comparative study of the distant tail is lacking in the literature. Preliminary results show that, in general, all three models underestimate the magnetic field strength while overestimating the flow speed. In the cross-sectional view, LFM seems to produce the best agreement with the observations. A clear dipolar magnetic field structure with a dawn-dusk asymmetry is seen in all models, owing to a slightly positive IMF By, but the effect was found to be exaggerated. All models show tailward flows at this distance in the magnetotail, most likely owing to magnetic reconnection at near-Earth tail distances. A detailed comparison of several tail characteristics from the models will be presented and discussed with respect to the ARTEMIS observations at this distance.

  12. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in these three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
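The Nash-Sutcliffe coefficient used above to score the simulated hydrographs compares the model's squared errors against the variance of the observations around their mean. A minimal sketch with hypothetical discharge values (not data from the study):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 means a perfect fit; values <= 0
    mean the model predicts no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

# Hypothetical daily discharge (m3/s): observations vs. one model run
q_obs = [10.0, 12.0, 30.0, 22.0, 15.0, 11.0]
q_sim = [11.0, 13.0, 26.0, 23.0, 14.0, 12.0]
print(round(nash_sutcliffe(q_obs, q_sim), 3))
```

Comparing this score for runs driven by sparse versus dense raingauge interpolations is the kind of experiment the abstract describes.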

  13. Foundation observation of teaching project--a developmental model of peer observation of teaching.

    Science.gov (United States)

    Pattison, Andrew Timothy; Sherwood, Morgan; Lumsden, Colin James; Gale, Alison; Markides, Maria

    2012-01-01

    Peer observation of teaching is important in the development of educators. The foundation curriculum specifies teaching competencies that must be attained. We created a developmental model of peer observation of teaching to help our foundation doctors achieve these competencies and develop as educators. A process for peer observation was created based on key features of faculty development. The project consisted of a pre-observation meeting, the observation, a post-observation debrief, the writing of reflective reports, and group feedback sessions. The project was evaluated through questionnaires and focus groups held with both the foundation doctors and the students they taught, to achieve triangulation. Twenty-one foundation doctors took part. All completed reflective reports on their teaching. Participants described the process as useful in their development as educators, citing specific examples of changes to their teaching practice. Medical students rated the sessions as better or much better in quality than their usual teaching. The study highlights the benefits of the project to individual foundation doctors, undergraduate medical students and faculty. It acknowledges the potential anxieties involved in having teaching observed. A structured programme of observation of teaching can deliver specific teaching competencies required by foundation doctors and provides additional benefits.

  14. Observation- and model-based estimates of particulate dry nitrogen deposition to the oceans

    Science.gov (United States)

    Baker, Alex R.; Kanakidou, Maria; Altieri, Katye E.; Daskalakis, Nikos; Okin, Gregory S.; Myriokefalitakis, Stelios; Dentener, Frank; Uematsu, Mitsuo; Sarin, Manmohan M.; Duce, Robert A.; Galloway, James N.; Keene, William C.; Singh, Arvind; Zamora, Lauren; Lamarque, Jean-Francois; Hsu, Shih-Chieh; Rohekar, Shital S.; Prospero, Joseph M.

    2017-07-01

    TM4, while TM4 gives access to speciated parameters (NO3- and NH4+) that are more relevant to the observed parameters and which are not available in ACCMIP. Dry deposition fluxes (CalDep) were calculated from the observed concentrations using estimates of dry deposition velocities. Model-observation ratios (RA, n), weighted by grid-cell area and number of observations, were used to assess the performance of the models. Comparison in the three study regions suggests that TM4 overestimates NO3- concentrations (RA, n = 1.4-2.9) and underestimates NH4+ concentrations (RA, n = 0.5-0.7), with spatial distributions in the tropical Atlantic and northern Indian Ocean not being reproduced by the model. In the case of NH4+ in the Indian Ocean, this discrepancy was probably due to seasonal biases in the sampling. Similar patterns were observed in the various comparisons of CalDep to ModDep (RA, n = 0.6-2.6 for NO3-, 0.6-3.1 for NH4+). Values of RA, n for NHx CalDep-ModDep comparisons were approximately double the corresponding values for NH4+ CalDep-ModDep comparisons due to the significant fraction of gas-phase NH3 deposition incorporated in the TM4 and ACCMIP NHx model products. All of the comparisons suffered due to the scarcity of observational data and the large uncertainty in dry deposition velocities used to derive deposition fluxes from concentrations. These uncertainties have been a major limitation on estimates of the flux of material to the oceans for several decades. Recommendations are made for improvements in N deposition estimation through changes in observations, modelling and model-observation comparison procedures. Validation of modelled dry deposition requires effective comparisons to observable aerosol-phase species' concentrations, and this cannot be achieved if model products only report dry deposition flux over the ocean.
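The RA, n metric above weights each grid cell's model/observation comparison by cell area and by the number of observations in the cell. The paper's exact formulation is not reproduced here, so the following is only an illustrative sketch of such a weighted ratio, with invented values:

```python
def weighted_ratio(model, obs, area, n_obs):
    """Mean model/observation ratio across grid cells, with each cell
    weighted by its area times its number of observations (an
    illustrative RA,n-style weighting; the paper's may differ)."""
    w = [a * n for a, n in zip(area, n_obs)]
    return sum(wi * m / o for wi, m, o in zip(w, model, obs)) / sum(w)

# Three hypothetical grid cells (concentrations in arbitrary units)
model = [2.0, 1.5, 0.8]   # modelled NO3- concentration
obs   = [1.0, 1.0, 1.0]   # observed NO3- concentration
area  = [1.0, 2.0, 1.0]   # relative grid-cell area
n     = [3, 1, 2]         # observations falling in the cell
print(round(weighted_ratio(model, obs, area, n), 3))  # > 1: model overestimates
```

On this reading, the reported RA, n = 1.4-2.9 for TM4 NO3- means the model exceeds observations by roughly that factor on average.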

  15. A Generalized Ideal Observer Model for Decoding Sensory Neural Responses

    Directory of Open Access Journals (Sweden)

    Gopathy ePurushothaman

    2013-09-01

    Full Text Available We show that many ideal observer models used to decode neural activity can be generalized to a conceptually and analytically simple form. This enables us to study the statistical properties of this class of ideal observer models in a unified manner. We consider in detail the problem of estimating the performance of this class of models. We formulate the problem de novo by deriving two equivalent expressions for the performance and introducing the corresponding estimators. We obtain a lower bound on the number of observations (N) required for the estimate of the model performance to lie within a specified confidence interval at a specified confidence level. We show that these estimators are unbiased and consistent, with variance approaching zero at the rate of 1/N. We find that the maximum likelihood estimator for the model performance is not guaranteed to be the minimum variance estimator even for some simple parametric forms (e.g., exponential) of the underlying probability distributions. We discuss the application of these results for designing and interpreting neurophysiological experiments that employ specific instances of this ideal observer model.
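The idea of a lower bound on the number of observations N needed to pin down model performance within a confidence interval can be illustrated with the standard normal-approximation bound for an estimated proportion correct. This is a generic sketch, not the paper's own derivation:

```python
import math

def required_n(p_hat, eps, z=1.96):
    """Smallest N so that an estimated proportion correct lies within
    +/- eps of the true value at ~95% confidence (z = 1.96), using the
    normal approximation N >= z^2 * p(1 - p) / eps^2."""
    return math.ceil(z ** 2 * p_hat * (1 - p_hat) / eps ** 2)

# An ideal observer scoring ~75% correct, pinned to +/- 5 percentage points
print(required_n(0.75, 0.05))  # 289 trials
```

The 1/N decay of the estimator variance is visible here: halving eps quadruples the required number of observations.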

  16. A hybrid double-observer sightability model for aerial surveys

    Science.gov (United States)

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
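Two simple identities underlie the double-observer correction: the probability that at least one of two independent observer pairs detects a group, and the canonical count/p adjustment of a raw count. A toy sketch with hypothetical detection probabilities (model MH itself additionally conditions on covariates such as group size and vegetation cover):

```python
def combined_detection(p_front, p_back):
    """Chance that at least one of two independent observer pairs
    detects a group: 1 - (1 - p1)(1 - p2)."""
    return 1.0 - (1.0 - p_front) * (1.0 - p_back)

def corrected_count(raw_count, p_detect):
    """Canonical count/p correction of a raw count for missed groups."""
    return raw_count / p_detect

# Hypothetical per-pair detection probabilities (not the paper's values)
p = combined_detection(0.7, 0.6)      # ~0.88 overall
print(round(corrected_count(88, p)))  # raw count 88 -> ~100 estimated
```

This matches the abstract's observation that raw counts were roughly 80-93% of the detection-corrected abundance estimates.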

  17. Observational constraints on the generalized $\\alpha$ attractor model

    CERN Document Server

    Shahalam, M; Myrzakul, Shynaray; Wang, Anzhong

    2016-01-01

    We study the generalized $\\alpha$ attractor model in the context of late-time cosmic acceleration; the model interpolates between freezing and thawing dark energy models. In the slow-roll regime the original potential is modified, whereas the modification ceases in the asymptotic regime and the effective potential becomes quadratic. In our setting, the field rolls slowly around the present epoch and mimics dark matter in the future. We put observational constraints on the model parameters using an integrated database (SN+Hubble+BAO+CMB) for the data analysis.

  18. Global warming and tropical Pacific sea surface temperature: Why models and observations do not agree

    Science.gov (United States)

    Coats, Sloan; Karnauskas, Kristopher

    2017-04-01

    The pattern of sea surface temperature (SST) in the tropical Pacific Ocean provides an important control on global climate, necessitating an understanding of how this pattern will change in response to anthropogenic radiative forcing. State-of-the-art climate models from the Coupled Model Intercomparison Project phase 5 (CMIP5) overwhelmingly project a decrease in the tropical Pacific zonal SST gradient over the coming century. This decrease is, in part, a response of the ocean to a weakening Walker circulation in the CMIP5 models, a consequence of the mass and energy balances of the hydrologic cycle identified by Held and Soden (2006). CMIP5 models, however, are not able to reproduce the observed increase in the zonal SST gradient between 1900-2013 C.E., which we argue to be robust using advanced statistical techniques and new observational datasets. While this increase is suggestive of the ocean dynamical thermostat mechanism of Clement et al. (1996), we provide evidence that a strengthening Equatorial Undercurrent (EUC) also contributes to eastern equatorial Pacific cooling. Importantly, the strengthening EUC is a response of the ocean to a weakening Walker circulation and thus can help to reconcile the range of opposing theories and observations of anthropogenic climate change in the tropical Pacific Ocean. Because of a newly identified bias in their simulation of equatorial coupled atmosphere-ocean dynamics, however, CMIP5 models do not capture the magnitude of the response of the EUC to anthropogenic radiative forcing. Consequently, they project a continuation of the opposite to what has been observed in the real world, with potentially serious consequences for projected climate impacts that are influenced by the tropical Pacific Ocean.

  19. Global sand and dust storms in 2008: Observation and HYSPLIT model verification

    Science.gov (United States)

    Wang, Yaqiang; Stein, Ariel F.; Draxler, Roland R.; de la Rosa, Jesús D.; Zhang, Xiaoye

    2011-11-01

    The HYSPLIT model has been applied to simulate the global dust distribution for 2008 using two different dust emission schemes. The first assumes that emissions can occur from any land-use grid cell defined in the model as desert. The second emission approach uses an empirically derived algorithm based on satellite observations. To investigate the dust storm features and verify the model performance, a global dataset of Integrated Surface Hourly (ISH) observations has been analyzed to map the spatial distribution and seasonal variation of sand and dust storms. Furthermore, the PM10 concentration data at four stations in Northern China and two stations in Southern Spain, and the AOD data from a station located at the center of the Sahara Desert, have been compared with the model results. The spatial distribution of observed dust storm frequency from ISH shows the known high-frequency areas located in North Africa, the Middle East, Mongolia and Northwestern China. Some sand and dust storms have also been observed in Australia, Mexico, Argentina, and other sites in South America. Most of the dust events in East Asia occur in the spring; however, this seasonal feature is not so evident in other dust source regions. In general, the model reproduces the dust storm frequency for most of the regions for the two emission approaches. Also, a good quantitative performance is achieved at the ground stations in Southern Spain and Western China when using the desert land-use based emissions, although HYSPLIT overestimates the dust concentration in downwind areas of East Asia and underestimates the column in the center of the Sahara Desert. On the other hand, the satellite-based emission approach improves the dust forecast performance in the Sahara, but underestimates the dust concentrations in East Asia.

  20. Testing protostellar disk formation models with ALMA observations

    CERN Document Server

    Harsono, Daniel; Bruderer, Simon; Li, Zhi-Yun; Jorgensen, Jes

    2015-01-01

    Abridged: Recent simulations have explored different ways to form accretion disks around low-mass stars. We aim to present observables that differentiate a rotationally supported disk (RSD) from an infalling rotating envelope toward deeply embedded young stellar objects, and to infer their masses and sizes. Two 3D magnetohydrodynamics (MHD) formation simulations and a 2D semi-analytical model are studied. The dust temperature structure is determined through continuum radiative transfer modelling with RADMC3D. A simple temperature-dependent CO abundance structure is adopted, and synthetic spectrally resolved submm rotational molecular lines up to $J_{\\rm u} = 10$ are simulated. All models predict similar compact components in continuum if observed at the spatial resolutions of 0.5-1$"$ (70-140 AU) typical of the observations to date. A spatial resolution of $\\sim$14 AU and high dynamic range ($> 1000$) are required to differentiate between an RSD and a pseudo-disk in the continuum. The peak-position velocity diagrams indicate that the...

  1. Modeling the Compton Hump Reverberation Observed in Active Galactic Nuclei

    Science.gov (United States)

    Hoormann, Janie; Beheshtipour, Banafsheh; Krawczynski, Henric

    2016-04-01

    In recent years, observations of the Iron K alpha reverberation in supermassive black holes have provided a new way to probe the inner accretion flow. Furthermore, a time lag between the direct coronal emission and the reprocessed emission forming the Compton Hump in AGN has been observed. In order to model this Compton Hump reverberation we performed general relativistic ray tracing studies of the accretion disk surrounding supermassive black holes, taking into account both the radial and angular dependence of the ionization parameter. We are able to model emission not only from a lamp-post corona but also implementing 3D corona geometries. Using these results we are able to model the observed data to gain additional insight into the geometry of the corona and the structure of the inner accretion disk.

  2. Model-independent inference on compact-binary observations

    CERN Document Server

    Mandel, Ilya; Colonna, Andrea; Stevenson, Simon; Tiňo, Peter; Veitch, John

    2016-01-01

    The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multi-dimensional parameter space of observations that are subject to significant measurement errors. We apply this procedure to a mock data set of population-synthesis predictions for the masses of merging compact binaries convolved with realistic measurement uncertainties, and demonstrate that we can accurately distinguish subpopulations of binary neutron stars, binary black holes, and mixed black hole -- neutron star binaries.

  3. Solar Spectral Irradiance Variability in Cycle 24: Observations and Models

    CERN Document Server

    Marchenko, S V; Lean, J L

    2016-01-01

    Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265-500 nm during the on-going Cycle 24. We supplement the OMI data with concurrent observations from the GOME-2 and SORCE instruments and find fair-to-excellent, depending on wavelength, agreement among the observations and predictions of the NRLSSI2 and SATIRE-S models.

  4. A climatology of surface ozone in the extra tropics: cluster analysis of observations and model results

    Directory of Open Access Journals (Sweden)

    O. A. Tarasova

    2007-08-01

    Full Text Available Important aspects of the seasonal variations of surface ozone are discussed. The underlying analysis is based on the long-term (1990–2004 ozone records of Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (EMEP and the World Data Center of Greenhouse Gases, which have a strong Northern Hemisphere bias. Seasonal variations are pronounced at most of the 114 locations for any time of the day. Seasonal-diurnal variability classification using hierarchical agglomerative clustering reveals 5 distinct clusters: clean/rural, semi-polluted non-elevated, semi-polluted semi-elevated, elevated and polar/remote marine types. For the cluster "clean/rural" the seasonal maximum is observed in April, both for night and day. For those sites with a double maximum or a wide spring-summer maximum, the one in spring appears both for day and night, while the one in summer is more pronounced for daytime and hence can be attributed to photochemical processes. For the spring maximum, photochemistry is a less plausible explanation, as no dependence on the timing of the maximum is observed. More probably, the spring maximum is caused by dynamical/transport processes. Using data from the 3-D atmospheric chemistry general circulation model ECHAM5/MESSy1 covering the period 1998–2005, a comparison has been performed for the identified clusters. For the model data four distinct classes of variability are detected. The majority of cases are covered by the regimes with a spring seasonal maximum or with a broad spring-summer maximum (with prevailing summer. The regime with winter–early spring maximum is reproduced by the model for southern hemispheric locations. Background and semi-polluted sites appear in the model in the same cluster. The seasonality in this model cluster is characterized by a pronounced spring (May maximum. For the model cluster that covers partly semi-elevated semi-polluted sites the role of the
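Hierarchical agglomerative clustering of seasonal cycles, as used above, can be sketched in a few lines: treat each site's seasonal profile as a vector and repeatedly merge the two closest clusters until a target number remains. A toy example with complete linkage and invented four-point "seasonal" profiles (the paper's distance measure and linkage criterion may differ):

```python
import math

def agglomerate(profiles, k):
    """Complete-linkage agglomerative clustering: start with singleton
    clusters and repeatedly merge the pair whose farthest members are
    closest, until k clusters remain. Returns lists of profile indices."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(math.dist(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge cluster j into cluster i
    return clusters

# Toy seasonal ozone profiles (4 seasonal means instead of 12 months)
sites = [
    [30, 45, 40, 32],  # spring maximum ("clean/rural"-like)
    [31, 46, 41, 33],
    [25, 35, 50, 28],  # summer maximum ("semi-polluted"-like)
    [26, 36, 52, 29],
]
print(sorted(sorted(c) for c in agglomerate(sites, 2)))  # [[0, 1], [2, 3]]
```

Applied to 114 stations with 12-month (or month-by-hour) mean profiles, the same procedure yields the 5 cluster types described in the abstract.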

  5. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections and, importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real-time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real-time feedback between model outcomes and the underlying model parameters.

  6. Diagnostic Modeling of PAMS VOC Observation on Regional Scale Environment

    Science.gov (United States)

    Chen, S.; Liu, T.; Chen, T.; Ou Yang, C.; Wang, J.; Chang, J. S.

    2008-12-01

    While a number of gas-phase chemical mechanisms, such as CBM-Z, RADM2 and SAPRC-07, have been successful in studying gas-phase atmospheric chemical processes, they all use lumped organic species to varying degrees. Photochemical Assessment Monitoring Stations (PAMS) have been in use for over ten years, and yet it is not clear how the detailed organic species measured by PAMS compare to the lumped model species under regional-scale transport and chemistry interactions. By developing a detailed mechanism specifically for the PAMS organics and embedding this diagnostic model within a regional-scale transport and chemistry model, we can directly compare PAMS observations with regional-scale model simulations. We modify one regional-scale chemical transport model (Taiwan Air Quality Model, TAQM) by adding a submodel with a chemical mechanism for the interactions of the 56 species observed by PAMS. This submodel then calculates the time evolution of these 56 PAMS species within the environment established by TAQM. It is assumed that TAQM can simulate the overall regional-scale environment, including the impact of regional-scale transport and the time evolution of oxidants and radicals. We can therefore scale these influences to the PAMS organic species and study their time evolution with their species-specific source functions, meteorological transport, and chemical interactions. Model simulations of each species are compared with PAMS hourly surface measurements. A case study located in a metropolitan area in central Taiwan showed that with wind speeds lower than 3 m/s, when the meteorological simulation is comparable with observation, the diurnal pattern of each species agrees well with the PAMS data. It is found that for many observations meteorological transport is an influence and that local emissions of specific species must be represented correctly. At this time there are still species that cannot be modeled properly. We suspect this is mostly due to lack of information on local

  7. Optimal designs for the Michaelis-Menten model with correlated observations

    OpenAIRE

    Dette, Holger; Kunert, Joachim

    2012-01-01

    In this paper we investigate the problem of designing experiments for weighted least squares analysis in the Michaelis-Menten model. We study the structure of exact D-optimal designs in a model with an autoregressive error structure. Explicit results for locally D-optimal designs are derived for the case where two observations can be taken per subject. Additionally, standardized maximin D-optimal designs are obtained in this case. The results illustrate the enormous difficulties to find e...
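    For reference, the Michaelis-Menten regression model studied here relates a response to a covariate through two positive parameters. In generic notation (ours, not necessarily the authors'), with observations j = 1, 2 taken on subject i:

```latex
y_{ij} \;=\; \frac{a\,x_{ij}}{b + x_{ij}} + \varepsilon_{ij},
\qquad a, b > 0,
\qquad \operatorname{Corr}(\varepsilon_{i1}, \varepsilon_{i2}) = \lambda,
```

    where the within-subject correlation λ is induced by the autoregressive error structure, and a D-optimal design chooses the design points x to maximise the determinant of the information matrix under this error model.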

  8. The Martian Plasma Environment: Model Calculations and Observations

    Science.gov (United States)

    Lichtenegger, H. I. M.; Dubinin, E.; Schwingenschuh, K.; Riedler, W.

    Based on a modified version of the model of an induced Martian magnetosphere developed by Luhmann (1990), the dynamics and spatial distribution of different planetary ion species are examined. Three main regions are identified: a cloud of ions travelling along cycloidal trajectories, a plasma mantle, and a plasma sheet. The latter predominantly consists of oxygen ions of ionospheric origin with minor portions of light particles. Comparison of model results with Phobos-2 observations shows reasonable agreement.

  9. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State.

    Science.gov (United States)

    Liu, Ying; Yen, Hai-Yun; Austria, Theresa; Pettersson, Jonas; Peti-Peterdi, Janos; Maxson, Robert; Widschwendter, Martin; Dubeau, Louis

    2015-10-01

    Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  10. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2015-10-01

    Full Text Available Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  11. Scale-free distribution of Dead Sea sinkholes--observations and modeling

    CERN Document Server

    Yizhaq, Hezi; Raz, Eli; Ashkenazy, Yosef

    2016-01-01

    There are currently more than 5500 sinkholes along the Dead Sea in Israel. These were formed due to the dissolution of subsurface salt layers as a result of the replacement of hypersaline groundwater by fresh brackish groundwater. This process was associated with a sharp decline in the Dead Sea level, currently more than one meter per year, resulting in a lower water table that has allowed the intrusion of fresher brackish water. We studied the distribution of sinkhole sizes and found that it is scale-free, with a power-law exponent close to 2. We constructed a stochastic cellular automata model to understand the observed scale-free behavior and the growth of the sinkhole area in time. The model consists of a lower salt layer and an upper soil layer, in which cavities that develop in the lower layer lead to collapses in the upper layer. The model reproduces the observed power-law distribution without entailing the threshold behavior commonly associated with criticality.
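    A scale-free size claim of this kind is usually checked with a maximum-likelihood estimator for a continuous power law rather than a straight-line fit to a log-log histogram. The sketch below is generic and stdlib-only (function names are ours, unrelated to the authors' cellular-automata code):

```python
import random
import math

def sample_power_law(n, alpha, x_min, seed=42):
    """Draw n samples from p(x) ~ x^(-alpha) for x >= x_min (inverse transform)."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def ml_exponent(xs, x_min):
    """Hill-type maximum-likelihood estimate: alpha = 1 + n / sum(ln(x / x_min))."""
    n = len(xs)
    return 1.0 + n / sum(math.log(x / x_min) for x in xs)

# Synthetic "sinkhole areas" with true exponent 2; the estimate recovers it.
sizes = sample_power_law(5000, alpha=2.0, x_min=1.0)
alpha_hat = ml_exponent(sizes, x_min=1.0)  # typically close to 2
```

    The MLE has standard error roughly (alpha − 1)/sqrt(n), so 5000 samples pin the exponent to about ±0.03.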

  12. Scale invariant cosmology III: dynamical models and comparisons with observations

    CERN Document Server

    Maeder, Andre

    2016-01-01

    We examine the properties of the scale invariant cosmological models, also making the specific hypothesis of the scale invariance of the empty space at large scales. Numerical integrations of the cosmological equations for different values of the curvature parameter k and of the density parameter Omega_m are performed. We compare the dynamical properties of the models to the observations at different epochs. The main numerical data and graphical representations are given for models computed with different curvatures and density parameters. The models with non-zero density start explosively, with first a braking phase followed by a continuously accelerating expansion. The comparison of the models with the recent observations from supernovae SN Ia, BAO and CMB data from Planck 2015 shows that the scale invariant model with k=0 and Omega_m=0.30 fits the observations very well in the usual Omega_m vs. Omega_Lambda plane and consistently accounts for the accelerating expansion or dark energy. The expansion history ...

  13. Linking Geomechanical Models with Observations of Microseismicity during CCS Operations

    Science.gov (United States)

    Verdon, J.; Kendall, J.; White, D.

    2012-12-01

    During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance to guarantee secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in a field-scale mechanical model have decameter scales, while the rupture sizes for microseismic events are typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger-scale faults have been identified, they can be included explicitly in the geomechanical model. Where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities. This can be done in a number of ways, including the use of measures such as the closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criteria) for
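    One simple way to turn a modeled stress state into a failure-proximity measure of the kind described above is to evaluate a Coulomb failure stress on planes of a given orientation. The sketch below is illustrative only, with made-up principal stresses and friction values, not the workflow or parameters of the study:

```python
import math

def coulomb_failure_stress(sigma1, sigma3, pore_p, theta_deg, mu=0.6, cohesion=5e6):
    """Coulomb failure stress (Pa) on a plane whose normal lies at theta_deg
    from the sigma1 axis (compression positive); CFS >= 0 indicates failure."""
    two_theta = math.radians(2.0 * theta_deg)
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    sigma_n = mean + dev * math.cos(two_theta)   # normal stress on the plane
    tau = abs(dev * math.sin(two_theta))         # shear stress on the plane
    return tau - mu * (sigma_n - pore_p) - cohesion

# Hypothetical stress state: raising pore pressure (as injection does) moves
# the same plane from stable (CFS < 0) to failing (CFS > 0).
before = coulomb_failure_stress(60e6, 30e6, pore_p=20e6, theta_deg=60)
after = coulomb_failure_stress(60e6, 30e6, pore_p=25e6, theta_deg=60)
```

    Note that the tectonic stresses are unchanged between the two calls; only the effective normal stress drops, which is exactly the injection effect the abstract describes.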

  14. Towards global empirical upscaling of FLUXNET eddy covariance observations: validation of a model tree ensemble approach using a biosphere model

    Directory of Open Access Journals (Sweden)

    M. Jung

    2009-05-01

    Full Text Available Global, spatially and temporally explicit estimates of carbon and water fluxes derived from empirical up-scaling of eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) in which the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time.

    We evaluate the efficiency of the model tree ensemble approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998–2005 using only locations and months where high-quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetically active radiation simulated by LPJmL. Given that we know the "true result" in the form of global LPJmL simulations, we can effectively study the performance of the model tree ensemble upscaling and associated problems of extrapolation capacity.

    We show that the model tree ensemble is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP, which constitute the largest sources of variance, are very well reproduced (96% and 94% of variance explained, respectively), while the monthly interannual anomalies which occupy
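    The core idea of a model tree — stratify the data, then fit a separate regression in each stratum — can be sketched in a few lines. This is a toy one-split version on synthetic piecewise-linear data, not the TRIAL/ERROR implementation:

```python
import random

def fit_linear(points):
    """Ordinary least squares y = a + b*x on a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b = sum((x - mx) * (y - my) for x, y in points) / sxx if sxx else 0.0
    return my - b * mx, b

def sse(points, ab):
    a, b = ab
    return sum((y - (a + b * x)) ** 2 for x, y in points)

def fit_model_stump(points):
    """One-split model tree: pick the split minimising the summed SSE of
    separate linear fits on each side."""
    best = (float("inf"), None)
    for s in sorted(set(x for x, _ in points))[1:]:
        left = [p for p in points if p[0] < s]
        right = [p for p in points if p[0] >= s]
        if len(left) < 2 or len(right) < 2:
            continue
        fl, fr = fit_linear(left), fit_linear(right)
        err = sse(left, fl) + sse(right, fr)
        if err < best[0]:
            best = (err, (s, fl, fr))
    return best[1]

def predict(tree, x):
    s, (al, bl), (ar, br) = tree
    return al + bl * x if x < s else ar + br * x

# Piecewise-linear "truth" with a slope change at x = 5, plus small noise.
rng = random.Random(0)
data = [(x / 10.0,
         (x / 10.0 if x < 50 else 5.0 + 3.0 * (x / 10.0 - 5.0)) + rng.gauss(0, 0.05))
        for x in range(100)]
tree = fit_model_stump(data)  # recovers the breakpoint near x = 5
```

    TRIAL generalises this by recursing on the splits, and ERROR perturbs the learner to obtain a diverse ensemble of such trees whose predictions are combined.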

  15. June 13, 2013 U.S. East Coast Meteotsunami: Comparing a Numerical Model With Observations

    Science.gov (United States)

    Wang, D.; Becker, N. C.; Weinstein, S.; Whitmore, P.; Knight, W.; Kim, Y.; Bouchard, R. H.; Grissom, K.

    2013-12-01

    On June 13, 2013, a tsunami struck the U.S. East Coast and caused several reported injuries. This tsunami occurred after a derecho moved offshore from North America into the Atlantic Ocean. The presence of this storm, the lack of a seismic source, and the fact that tsunami arrival times at tide stations and deep ocean-bottom pressure sensors cannot be attributed to a 'point source' suggest this tsunami was caused by atmospheric forces, i.e., a meteotsunami. In this study we attempt to reproduce the observed phenomenon using a numerical model with idealized atmospheric pressure forcing resembling the propagation of the observed barometric anomaly. The numerical model was able to capture some observed features of the tsunami at some tide stations, including the time lag between the pressure jump and the tsunami arrival. The model also captures the response at a deep ocean-bottom pressure gauge (DART 44402), including the primary wave and the reflected wave. There are two components of the oceanic response to the propagating pressure anomaly: the inverted barometer response and the dynamic response. We find that the dynamic response over the deep ocean is much smaller than the inverted barometer response. The time lag between the pressure jump and tsunami arrival at tide stations is due to the dynamic response: waves generated and/or reflected at the shelf break propagate shoreward and amplify due to the shoaling effect. The evolution of the derecho over the deep ocean (propagation direction and intensity) is not well defined, however, because of the lack of data, so the forcing used for this study is somewhat speculative. Better definition of the pressure anomaly through increased observation or high-resolution atmospheric models would improve meteotsunami forecast capabilities.
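    The inverted barometer response mentioned above is the static, hydrostatic part of the ocean's reaction to a pressure anomaly, eta = −Δp/(ρg), roughly one centimetre of sea level per hectopascal. A minimal sketch with typical seawater constants (not values from the paper):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical surface seawater density
G = 9.81               # m/s^2

def inverted_barometer_response(dp_hpa):
    """Static sea-level change (m) for a surface pressure anomaly dp in hPa:
    eta = -dp / (rho * g); about -1 cm per +1 hPa."""
    return -(dp_hpa * 100.0) / (RHO_SEAWATER * G)

# A +3 hPa pressure jump statically depresses the sea surface by ~3 cm.
eta = inverted_barometer_response(3.0)
```

    The dynamic response is everything this formula misses: waves generated where the forcing speed approaches the local shallow-water wave speed, and their shoaling amplification at the shelf break.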

  16. A decade of progress in observing and modelling Antarctic subglacial water systems.

    Science.gov (United States)

    Fricker, Helen A; Siegfried, Matthew R; Carter, Sasha P; Scambos, Ted A

    2016-01-28

    In the decade since the discovery of active Antarctic subglacial water systems by detection of subtle surface displacements, much progress has been made in our understanding of these dynamic systems. Here, we present some of the key results of observations derived from ICESat laser altimetry, CryoSat-2 radar altimetry, Operation IceBridge airborne laser altimetry, satellite image differencing and ground-based continuous Global Positioning System (GPS) experiments deployed in hydrologically active regions. These observations provide us with an increased understanding of various lake systems in Antarctica: Whillans/Mercer Ice Streams, Crane Glacier, Recovery Ice Stream, Byrd Glacier and eastern Wilkes Land. In several cases, subglacial water systems are shown to control ice flux through the glacier system. For some lake systems, we have been able to construct more than a decade of continuous lake activity, revealing internal variability on time scales ranging from days to years. This variability indicates that continuous, accurate time series of altimetry data are critical to understanding these systems. On Whillans Ice Stream, our results from a 5-year continuous GPS record demonstrate that subglacial lake flood events significantly change the regional ice dynamics. We also show how models for subglacial water flow have evolved since the availability of observations of lake volume change, from regional-scale models of water routeing to process models of channels carved into the subglacial sediment instead of the overlying ice. We show that progress in understanding the processes governing lake drainage now allows us to create simulated lake volume time series that reproduce time series from satellite observations. This transformational decade in Antarctic subglacial water research has moved us significantly closer to understanding the processes of water transfer sufficiently for inclusion in continental-scale ice-sheet models.

  17. Calibration of a numerical ionospheric model with EISCAT observations

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    Full Text Available A set of EISCAT UHF and VHF observations is used for calibrating a coupled fluid-kinetic model of the ionosphere. The data gathered in the period 1200–2400 UT on 24 March 1995 had various intervals of interest for such a calibration. The magnetospheric activity was very low during the afternoon, allowing for a proper examination of a case of quiet ionospheric conditions. The radars entered the auroral oval just after 1900 UT: a series of dynamic events probably associated with rapidly moving auroral arcs was observed until after 2200 UT. No attempt was made to model the dynamical behaviour during the 1900–2200 UT period. In contrast, the period 2200–2400 UT was characterised by quite steady precipitation: this latter period was then chosen for calibrating the model during precipitation events. The adjustment of the model to the four primary parameters observed by the radars (namely the electron concentration and temperature, and the ion temperature and velocity) needed external inputs (solar fluxes and magnetic activity index) and adjustments of a neutral atmospheric model in order to reach a good agreement. It is shown that for the quiet ionosphere, only slight adjustments of the neutral atmosphere models are needed. In contrast, matching the observations during the precipitation event requires strong departures from the model, both for the atomic oxygen and hydrogen. However, it is argued that this could well be the result of inadequately representing the vibrational states of N2 during precipitation events, and that these factors have to be considered only as ad hoc corrections.

  18. Observation-based correction of dynamical models using thermostats

    Science.gov (United States)

    Frank, Jason; Leimkuhler, Benedict

    2017-01-01

    Models used in simulation may give accurate short-term trajectories but distort long-term (statistical) properties. In this work, we augment a given approximate model with a control law (a ‘thermostat’) that gently perturbs the dynamical system to target a thermodynamic state consistent with a set of prescribed (possibly evolving) observations. As proof of concept, we provide an example involving a point vortex fluid model on the sphere, for which we show convergence of equilibrium quantities (in the stationary case) and the ability of the thermostat to dynamically track a transient state. PMID:28265197

  19. Unsteady aerodynamic modeling based on POD-observer method

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A new hybrid approach to constructing reduced-order models (ROM) of unsteady aerodynamics applicable to aeroelastic analysis is presented, using proper orthogonal decomposition (POD) in combination with observer techniques. Fluid modes are generated through POD by sampling observations of solutions derived from the full-order model. The response in the POD training is projected onto the fluid modes to determine the time history of the modal amplitudes. The resulting data are used to extract the Markov parameters of the low-dimensional model for modal amplitudes using a related deadbeat observer. The state-space realization is synthesized from the system's Markov parameters, which are processed with the eigensystem realization algorithm. The POD-observer method is applied to a two-dimensional airfoil system in a subsonic flow field. The results predicted by the ROM are in general agreement with those from the full-order system. The ROM obtained by the hybrid approach captures the essence of a fluid system and produces a vast reduction in both degrees of freedom and computational time relative to the full-order model.
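    POD modes are the principal directions of the snapshot set, i.e. the left singular vectors of the snapshot matrix. For a two-dimensional state the decomposition has a closed form, which keeps this sketch dependency-free (illustrative only; a real ROM would take an SVD of high-dimensional snapshots):

```python
import math

def pod_modes_2d(snapshots):
    """POD modes for 2-D state snapshots: eigen-decomposition of the 2x2
    snapshot covariance matrix, returned as ((lam1, mode1), (lam2, mode2))
    with lam1 >= lam2."""
    m = len(snapshots)
    cxx = sum(a * a for a, _ in snapshots) / m
    cyy = sum(b * b for _, b in snapshots) / m
    cxy = sum(a * b for a, b in snapshots) / m
    # Closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]].
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc

    def unit_eigvec(lam):
        vx, vy = (cxy, lam - cxx) if abs(cxy) > 1e-12 else (1.0, 0.0)
        n = math.hypot(vx, vy)
        return (vx / n, vy / n)

    return (lam1, unit_eigvec(lam1)), (lam2, unit_eigvec(lam2))

# Snapshots lying almost along the direction (1, 1): one dominant mode.
snaps = [(t + 0.01 * ((-1) ** k), t)
         for k, t in enumerate(x / 10.0 for x in range(-10, 11))]
(lam1, mode1), (lam2, mode2) = pod_modes_2d(snaps)
```

    The dominant eigenvalue dwarfs the second one, which is the basis for truncating the modal expansion: the ROM keeps only the energetic modes and discards the rest.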

  20. Sixth International Workshop on the Mars Atmosphere: Modelling and Observations

    Science.gov (United States)

    Forget, F.; Millour, M.

    2017-01-01

    The scope of this workshop is to bring together experts in observations and modelling of the present and past Mars climate systems and discuss the nature of the atmospheric circulation and the photochemistry (up to the thermosphere), the dust cycle, the water cycle (vapor, clouds and frost) and the carbon dioxide cycle (polar caps).

  1. Observations and models for needle-tissue interactions

    NARCIS (Netherlands)

    Misra, Sarthak; Reed, Kyle B.; Schafer, Benjamin W.; Ramesh, K.T.; Okamura, Allison M.

    2009-01-01

    The asymmetry of a bevel-tip needle results in the needle naturally bending when it is inserted into soft tissue. In this study we present a mechanics-based model that calculates the deflection of the needle embedded in an elastic medium. Microscopic observations for several needle-gel interactions

  2. S-AMP for non-linear observation models

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2015-01-01

    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  3. Are classifications of proximal radius fractures reproducible?

    Directory of Open Access Journals (Sweden)

    dos Santos João BG

    2009-10-01

    Full Text Available Abstract Background Fractures of the proximal radius need to be classified in an appropriate and reproducible manner. The aim of this study was to assess the reliability of the three most widely used classification systems. Methods Elbow radiograph images of patients with proximal radius fractures were classified according to the Mason, Morrey, and Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation (AO/ASIF) classifications by four observers with different experience with this subject, to assess their intra- and inter-observer agreement. Each observer analyzed the images on three different occasions on a computer, with the numerical sequence randomly altered. Results We found that intra-observer agreement of the Mason and Morrey classifications was satisfactory (κ = 0.582 and 0.554, respectively), while the AO/ASIF classification had poor intra-observer agreement (κ = 0.483). Inter-observer agreement was higher in the Mason (κ = 0.429–0.560) and Morrey (κ = 0.319–0.487) classifications than in the AO/ASIF classification (κ = 0.250–0.478), which showed poor reliability. Conclusion Inter- and intra-observer agreement of the Mason and Morrey classifications showed overall satisfactory reliability when compared to the AO/ASIF system. The Mason classification is the most reliable system.
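    The κ statistics quoted above measure chance-corrected agreement; for two raters, Cohen's kappa is (p_o − p_e)/(1 − p_e), where p_o is the observed and p_e the chance agreement. A small stdlib sketch (the labels are invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels of the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)                   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Two hypothetical raters classifying eight fractures into types I-III.
a = ["I", "I", "II", "III", "II", "I", "III", "II"]
b = ["I", "II", "II", "III", "II", "I", "III", "III"]
kappa = cohens_kappa(a, b)  # agreement well above chance, but not perfect
```

    By the common Landis-Koch convention, values of 0.41–0.60 are read as "moderate" and 0.61–0.80 as "substantial" agreement, which is how κ ≈ 0.55 comes to be labelled satisfactory.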

  4. Some observational tests of a minimal galaxy formation model

    CERN Document Server

    Cohn, J D

    2016-01-01

    Dark matter simulations can serve as a basis for creating galaxy histories via the galaxy-dark matter connection. Here, one such model by Becker (2015) is implemented with several variations on three different dark matter simulations. Stellar mass and star formation rates are assigned to all simulation subhalos at all times, using subhalo mass gain to determine stellar mass gain. The observational properties of the resulting galaxy distributions are compared to each other and to observations for a range of redshifts from 0-2. Although many of the galaxy distributions seem reasonable, there are noticeable differences as simulations, subhalo mass gain definitions, or subhalo mass definitions are altered, suggesting that the model should change as these properties are varied. Agreement with observations may improve by including redshift dependence in the added-by-hand random contribution to star formation rate. There appears to be an excess of faint quiescent galaxies as well (perhaps due in part to differing defin...

  5. Testing the Empirical Shock Arrival Model using Quadrature Observations

    CERN Document Server

    Gopalswamy, N; Xie, H; Yashiro, S

    2013-01-01

    The empirical shock arrival (ESA) model was developed based on quadrature data from Helios (in-situ) and P-78 (remote-sensing) to predict the Sun-Earth travel time of coronal mass ejections (CMEs) [Gopalswamy et al. 2005a]. The ESA model requires earthward CME speed as input, which is not directly measurable from coronagraphs along the Sun-Earth line. The Solar Terrestrial Relations Observatory (STEREO) and the Solar and Heliospheric Observatory (SOHO) were in quadrature during 2010 - 2012, so the speeds of Earth-directed CMEs were observed with minimal projection effects. We identified a set of 20 full halo CMEs in the field of view of SOHO that were also observed in quadrature by STEREO. We used the earthward speed from STEREO measurements as input to the ESA model and compared the resulting travel times with the observed ones from L1 monitors. We find that the model predicts the CME travel time within about 7.3 hours, which is similar to the predictions by the ENLIL model. We also find that CME-CME and CME...

  6. Linear system identification via backward-time observer models

    Science.gov (United States)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  7. Solar irradiance variability: A six-year comparison between SORCE observations and the SATIRE model

    CERN Document Server

    Ball, Will T; Krivova, Natalie A; Solanki, Sami; Harder, Jerald W

    2011-01-01

    Aims: We investigate how well modeled solar irradiances agree with measurements from the SORCE satellite, both for total solar irradiance and broken down into spectral regions on timescales of several years. Methods: We use the SATIRE model and compare modeled total solar irradiance (TSI) with TSI measurements between 2003 and 2009. Spectral solar irradiance over 200–1630 nm is compared with the SIM instrument on SORCE between 2004 and 2009 during a period of decline from moderate activity to the recent solar minimum, in 10 nm bands and for three spectral regions of significant interest: the UV integrated over 200–300 nm, the visible over 400–691 nm and the IR between 972–1630 nm. Results: The model captures 97% of observed TSI variation. In the spectral comparison, rotational variability is well reproduced, especially between 400 and 1200 nm. The magnitude of change in the long-term trends is many times larger in SIM at almost all wavelengths while trends in SIM oppose SATIRE in the visible between 500 and 700 nm...

  8. Hybrid modeling of the lower corona using Faraday rotation observations and a MHD thermodynamic simulation

    Science.gov (United States)

    Wexler, David B.; Hollweg, Joseph V.; Jensen, Elizabeth; Lionello, Roberto; Macneice, Peter J.; Coster, Anthea J.

    2017-08-01

    Study of coronal MHD wave energetics relies upon accurate representation of plasma particle number densities (ne) and magnetic field strengths. In the lower corona, these parameters are obtained indirectly, and typically presented as empirical equations as a function of heliocentric radial distance (solar offset, SO). The development of coronal global models using synoptic solar surface magnetogram inputs has provided refined characterization of the coronal plasma organization and magnetic field. We present a cross-analysis between a MHD thermodynamic simulation and Faraday rotation (FR) observations over SO 1.63-1.89 solar radii (Rs) near solar minimum. MESSENGER spacecraft radio signals with a line of sight (LOS) passing through the lower corona were recorded in dual polarization using the Green Bank Telescope in November 2009. Polarization position angle changes were obtained from Stokes parameters. The magnetic field vector (B) and ne along the LOS were obtained from a MHD thermodynamic simulation provided by the Community Coordinated Modeling Center. The modeled FR was computed as the integrated product of ne and LOS-aligned B component. The observations over the given SO range yielded an FR change of 7 radians. The simulation reproduced this change when the modeled ne was scaled up by 2.8x, close to values obtained using the Allen-Baumbach equation. No scaling of B from the model was necessary. A refined fit to the observations was obtained when the observationally based total electron content (TEC) curves were introduced. Changes in LOS TEC were determined from radio frequency shifts as the signal passed to successively lower electron concentrations during egress. A good fit to the observations was achieved with an offset of 7 × 10^21 m^-2 added. Back-calculating ne along the LOS from the TEC curves, we found that the equivalent ne scaling compared to the model output was higher by a factor of 3. The combination of solar surface magnetogram-based MHD coronal
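    The modeled FR described above is, up to a physical constant, the rotation-measure integral chi = K * lambda^2 * ∫ ne B_par ds with K = e^3/(8 pi^2 eps0 me^2 c^3) ≈ 2.63e-13 in SI units. A numerical sketch (the density and field profiles below are invented placeholders, not the MHD model's output):

```python
import math

K_FR = 2.63e-13  # e^3 / (8 pi^2 eps0 me^2 c^3), SI; RM in rad/m^2

def rotation_measure(ne, b_par, s0, s1, steps=10000):
    """RM = K * int ne(s) * B_par(s) ds along the LOS, by the trapezoidal rule.
    Multiply by wavelength^2 (m^2) to get the polarization rotation in radians."""
    h = (s1 - s0) / steps
    total = 0.5 * (ne(s0) * b_par(s0) + ne(s1) * b_par(s1))
    total += sum(ne(s0 + i * h) * b_par(s0 + i * h) for i in range(1, steps))
    return K_FR * total * h

RS = 6.96e8  # solar radius, m

def ne(s):
    """Illustrative electron density profile along the LOS (m^-3)."""
    return 1e12 * math.exp(-s / RS)

def b_par(s):
    """Illustrative LOS-aligned magnetic field profile (T)."""
    return 1e-5 * math.exp(-s / RS)

rm = rotation_measure(ne, b_par, 0.0, RS)
rotation_rad = rm * 0.136 ** 2  # rotation angle at an illustrative 13.6 cm wavelength
```

    Because both ne and B_par enter the same integral, FR alone cannot separate them; this is why the study needed the TEC curves (which constrain ne on its own) to decide how much of the mismatch to assign to density.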

  9. Applying direct observation to model workflow and assess adoption.

    Science.gov (United States)

    Unertl, Kim M; Weinger, Matthew B; Johnson, Kevin B

    2006-01-01

    Lack of understanding about workflow can impair health IT system adoption. Observational techniques can provide valuable information about clinical workflow. A pilot study using direct observation was conducted in an outpatient chronic disease clinic. The goals of the study were to assess workflow and information flow and to develop a general model of workflow and information behavior. Over 55 hours of direct observation showed that the pilot site utilized many of the features of the informatics systems available to them, but also employed multiple non-electronic artifacts and workarounds. Gaps existed between clinic workflow and informatics tool workflow, as well as between institutional expectations of informatics tool use and actual use. Concurrent use of both paper-based and electronic systems resulted in duplication of effort and inefficiencies. A relatively short period of direct observation revealed important information about workflow and informatics tool adoption.

  10. Inferring effective field observables from a discrete model

    Science.gov (United States)

    Bény, Cédric

    2017-01-01

    A spin system on a lattice can usually be modeled at large scales by an effective quantum field theory. A key mathematical result relating the two descriptions is the quantum central limit theorem, which shows that certain spin observables satisfy an algebra of bosonic fields under certain conditions. Here, we show that these particular observables and conditions are the relevant ones for an observer with certain limited abilities to resolve spatial locations as well as spin values. This is shown by computing the asymptotic behaviour of a quantum Fisher information metric as function of the resolution parameters. The relevant observables characterise the state perturbations whose distinguishability does not decay too fast as a function of spatial or spin resolution.

  11. Tidal Movement of Nioghalvfjerdsfjorden Glacier, Northeast Greenland: Observations and Modelling

    DEFF Research Database (Denmark)

    Reeh, Niels; Mayer, C.; Olesen, O. B.

    2000-01-01

    , 1997 and 1998. As part of this work, tidal-movement observations were carried out by simultaneous differential global positioning system (GPS) measurements at several locations distributed on the glacier surface. The GPS observations were performed continuously over several tidal cycles. At the same....... The observations show that the main part of the glacier tongue responds as a freely floating plate to the phase and amplitude of the local tide in the sea. However, kilometre-wide flexure zones exist along the marginal and upstream grounding lines. Attempts to model the observed tidal deflection and tilt patterns...... in the flexure zone by elastic-beam theory are unsuccessful, in contrast to previous findings by other investigators. The strongest disagreement between our measurements and results derived from elastic-beam theory is a significant variation of the phase of the tidal records with distance from the grounding line...

  12. Tectonic stressing in California modeled from GPS observations

    Science.gov (United States)

    Parsons, T.

    2006-01-01

What happens in the crust as a result of geodetically observed secular motions? In this paper we find out by distorting a finite element model of California using GPS-derived displacements. A complex model was constructed using spatially varying crustal thickness, geothermal gradient, topography, and creeping faults. GPS velocity observations were interpolated and extrapolated across the model and boundary condition areas, and the model was loaded according to 5-year displacements. Results map highest differential stressing rates in a 200-km-wide band along the Pacific-North American plate boundary, coinciding with regions of greatest seismic energy release. Away from the plate boundary, GPS-derived crustal strain reduces modeled differential stress in some places, suggesting that some crustal motions are related to topographic collapse. Calculated stressing rates can be resolved onto fault planes: useful for addressing fault interactions and necessary for calculating earthquake advances or delays. As an example, I examine seismic quiescence on the Garlock fault despite a calculated minimum 0.1-0.4 MPa static stress increase from the 1857 M~7.8 Fort Tejon earthquake. Results from finite element modeling show very low to negative secular Coulomb stress growth on the Garlock fault, suggesting that the stress state may have been too low for large earthquake triggering. Thus the Garlock fault may only be stressed by San Andreas fault slip, a loading pattern that could explain its erratic rupture history.
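The stress resolution onto fault planes described above follows the standard static Coulomb failure stress convention; as a reminder (this is the textbook definition, not a formula quoted from the paper):

```latex
\Delta\mathrm{CFS} \;=\; \Delta\tau \;+\; \mu'\,\Delta\sigma_n,
\qquad \mu' = \mu\,(1 - B)
```

where $\Delta\tau$ is the shear stress change resolved onto the receiver fault in its slip direction, $\Delta\sigma_n$ is the normal stress change (positive for unclamping), and $\mu'$ is an effective friction coefficient folding in Skempton's pore-pressure coefficient $B$. Positive $\Delta\mathrm{CFS}$, such as the 0.1-0.4 MPa computed for the Garlock fault, moves a fault toward failure; the low-to-negative secular Coulomb growth found in the paper works against that static increase.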

  13. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML), which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. This makes it easy to join …
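As a rough illustration of the kind of dataset description QSAR-ML enables, the snippet below assembles a toy XML document with Python's ElementTree. All element and attribute names here are invented for illustration; the actual QSAR-ML schema and descriptor ontology are defined by the Bioclipse project.

```python
import xml.etree.ElementTree as ET

# Hypothetical dataset description in the spirit of QSAR-ML.
# Element/attribute names are invented for illustration only.
ds = ET.Element("qsarDataset", version="1.0")
ET.SubElement(ds, "structure", id="mol1", inchi="InChI=1S/CH4/h1H4")
ET.SubElement(
    ds, "descriptor", id="xlogp",
    ontologyRef="http://example.org/descriptor-ontology#XLogP",  # placeholder URI
    implementation="cdk", implVersion="1.3.5",
)
val = ET.SubElement(ds, "value", structureRef="mol1", descriptorRef="xlogp")
val.text = "1.09"

xml = ET.tostring(ds, encoding="unicode")
print(xml)
```

The point of the format is that the descriptor reference carries both an ontology identifier and a versioned implementation, so the same dataset setup can be regenerated exactly.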

  14. Heterogeneity of intracellular polymer storage states in enhanced biological phosphorus removal (EBPR)--observation and modeling.

    Science.gov (United States)

    Bucci, Vanni; Majed, Nehreen; Hellweger, Ferdi L; Gu, April Z

    2012-03-20

A number of agent-based models (ABMs) for biological wastewater treatment processes have been developed, but their skill in predicting heterogeneity of intracellular storage states has not been tested against observations due to the lack of analytical methods for measuring single-cell intracellular properties. Further, several mechanisms can produce and maintain heterogeneity (e.g., different histories, uneven division) and their relative importance has not been explored. This article presents an ABM for the enhanced biological phosphorus removal (EBPR) treatment process that resolves heterogeneity in three intracellular polymer storage compounds (i.e., polyphosphate, polyhydroxybutyrate, and glycogen) in three functional microbial populations (i.e., polyphosphate-accumulating, glycogen-accumulating, and ordinary heterotrophic organisms). Model-predicted distributions were compared to those based on single-cell estimates obtained using a Raman microscopy method for a laboratory-scale sequencing batch reactor (SBR) system. The model can reproduce many features of the observed heterogeneity. Two methods for introducing heterogeneity were evaluated. First, biological variability in individual cell behavior was simulated by randomizing model parameters (e.g., maximum acetate uptake rate) at division. This method produced the best fit to the data. An optimization algorithm was used to determine the best variability (i.e., coefficient of variation) for each parameter, which suggests large variability in acetate uptake. Second, biological variability in individual cell states was simulated by randomizing state variables (e.g., internal nutrient) at division, which was not able to maintain heterogeneity because the memory in the internal states is too short. These results demonstrate the ability of ABMs to predict heterogeneity and provide insights into the factors that contribute to it. Comparison of the ABM with an equivalent population-level model illustrates the effect …
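The first heterogeneity mechanism, randomizing model parameters at division, can be sketched in a few lines: each daughter cell inherits its mother's maximum acetate uptake rate perturbed by noise with a chosen coefficient of variation. This toy sketch is illustrative only and is not the EBPR model itself.

```python
import random

class Cell:
    """Toy agent carrying one kinetic parameter (max acetate uptake rate)."""
    def __init__(self, v_max):
        self.v_max = v_max

    def divide(self, cv=0.25):
        # Heterogeneity method 1: randomize the *parameter* at division.
        # Each daughter redraws the rate around the mother's value with a
        # chosen coefficient of variation (cv).
        draw = lambda: self.v_max * max(1e-6, random.gauss(1.0, cv))
        return Cell(draw()), Cell(draw())

def grow(n_generations=8, seed=1):
    random.seed(seed)
    population = [Cell(1.0)]
    for _ in range(n_generations):
        population = [d for c in population for d in c.divide()]
    return [c.v_max for c in population]

rates = grow()
mean = sum(rates) / len(rates)
var = sum((r - mean) ** 2 for r in rates) / len(rates)
print(len(rates), round(mean, 3), round(var, 3))
```

Because the parameter itself is redrawn at every division, the spread in uptake rates persists across generations, unlike the state-randomization method, whose memory decays.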

  15. An observer model for quantifying panning artifacts in digital pathology

    Science.gov (United States)

    Avanaki, Ali R. N.; Espig, Kathryn S.; Xthona, Albert; Lanciault, Christian; Kimpe, Tom R. L.

    2017-03-01

Typically, pathologists pan from one region of a slide to another, choosing areas of interest for closer inspection. Due to finite frame rate and imperfect zero-order hold reconstruction (i.e., the non-zero time to reach the target brightness after a change in pixel drive), panning in whole slide images (WSI) causes visual artifacts. It is important to study the impact of such artifacts since research suggests that 49% of navigation is conducted in low-power/overview with digital pathology (Molin et al., Histopathology 2015). In this paper, we explain what types of medical information may be harmed by panning artifacts, propose a method to simulate panning artifacts, and design an observer model to predict the impact of panning artifacts on typical human observers' performance in basic diagnostically relevant visual tasks. The proposed observer model is based on derivation of perceived object border maps from luminance and chrominance information and may be tuned to account for visual acuity of the human observer to be modeled. Our results suggest that increasing the contrast (e.g., using a wide gamut display) with a slow response panel may not mitigate the panning artifacts which mostly affect visual tasks involving spatial discrimination of objects (e.g., normal vs abnormal structure, cell type and spatial relationships between them, and low-power nuclear morphology), and that the panning artifacts worsen with increasing panning speed. The proposed methods may be used as building blocks in an automatic WSI quality assessment framework.

  16. Observing the observer (I): meta-bayesian models of learning and decision-making.

    Science.gov (United States)

    Daunizeau, Jean; den Ouden, Hanneke E M; Pessiglione, Matthias; Kiebel, Stefan J; Stephan, Klaas E; Friston, Karl J

    2010-12-14

    In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs" and a response model, which predicts the ensuing behavioural (or neurophysiological) responses to those inputs. We start with the premise that subjects continuously update a probabilistic representation of the causes of their sensory inputs to optimise their behaviour. In addition, subjects have preferences or goals that guide decisions about actions given the above uncertain representation of these hidden causes or state of affairs. From a Bayesian decision theoretic perspective, uncertain representations are so-called "posterior" beliefs, which are influenced by subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden) state of affair. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. Critically, this enables one to "observe the observer", i.e. identify (context- or subject-dependent) prior beliefs and utility-functions using psychophysical or neurophysiological measures. In this paper, we describe the main theoretical components of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions). In a companion paper ('Observing the observer (II): deciding when to decide'), we describe a concrete implementation of it and demonstrate its utility by applying it to simulated and real reaction time data from an associative learning task.
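The core meta-Bayesian step, scoring candidate prior beliefs by the likelihood of observed choices, can be sketched with a toy binary task: the subject updates a Bernoulli prior from a noisy cue, then chooses via a softmax over expected 0-1 loss. All numbers and the softmax response model are illustrative assumptions, not the paper's actual scheme.

```python
import math

def posterior(prior, cue, hit=0.8):
    # Perceptual model. Bernoulli cue: p(cue=1 | s=1) = hit, p(cue=1 | s=0) = 1 - hit.
    like1 = hit if cue == 1 else 1 - hit
    like0 = (1 - hit) if cue == 1 else hit
    return like1 * prior / (like1 * prior + like0 * (1 - prior))

def choice_prob(post, beta=5.0):
    # Response model: softmax over expected 0-1 loss of actions {0, 1};
    # returns p(action = 1).
    q1, q0 = -(1 - post), -post
    return 1.0 / (1.0 + math.exp(-beta * (q1 - q0)))

def log_likelihood(prior, trials):
    # The "observing the observer" step: score a candidate prior by the
    # likelihood of the subject's observed (cue, action) pairs.
    ll = 0.0
    for cue, action in trials:
        p1 = choice_prob(posterior(prior, cue))
        ll += math.log(p1 if action == 1 else 1.0 - p1)
    return ll

# A subject whose true prior p(s=1) = 0.7 mostly chooses 1 after cue 1:
trials = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 1)]
print(log_likelihood(0.7, trials) > log_likelihood(0.2, trials))
```

Maximizing this likelihood over the prior (and the loss or temperature parameters) is what lets the experimenter recover subject-specific beliefs from behaviour alone.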

  17. Observing the observer (I): meta-bayesian models of learning and decision-making.

    Directory of Open Access Journals (Sweden)

    Jean Daunizeau

Full Text Available In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs" and a response model, which predicts the ensuing behavioural (or neurophysiological) responses to those inputs. We start with the premise that subjects continuously update a probabilistic representation of the causes of their sensory inputs to optimise their behaviour. In addition, subjects have preferences or goals that guide decisions about actions given the above uncertain representation of these hidden causes or state of affairs. From a Bayesian decision theoretic perspective, uncertain representations are so-called "posterior" beliefs, which are influenced by subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden) state of affair. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. Critically, this enables one to "observe the observer", i.e. identify (context- or subject-dependent) prior beliefs and utility-functions using psychophysical or neurophysiological measures. In this paper, we describe the main theoretical components of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions). In a companion paper ('Observing the observer (II): deciding when to decide'), we describe a concrete implementation of it and demonstrate its utility by applying it to simulated and real reaction time data from an associative learning task.

  18. The Szekeres Swiss Cheese model and the CMB observations

    Science.gov (United States)

    Bolejko, Krzysztof

    2009-08-01

This paper presents the application of the Szekeres Swiss Cheese model to the analysis of observations of the cosmic microwave background (CMB) radiation. The impact of inhomogeneous matter distribution on the CMB observations is in most cases studied within the linear perturbations of the Friedmann model. However, since the density contrast and the Weyl curvature within the cosmic structures are large, this issue is worth studying using another approach. The Szekeres model is an inhomogeneous, non-symmetrical and exact solution of the Einstein equations. In this model, light propagation and matter evolution can be exactly calculated, without such approximations as small amplitude of the density contrast. This allows one to examine in a more realistic manner the contribution of the light propagation effect to the measured CMB temperature fluctuations. The results of such analysis show that small-scale, non-linear inhomogeneities induce, via the Rees-Sciama effect, temperature fluctuations of amplitude 10⁻⁷-10⁻⁵ on small angular scales (ℓ > 750). This is still much smaller than the measured temperature fluctuations on this angular scale. However, local and uncompensated inhomogeneities can induce temperature fluctuations of amplitude as large as 10⁻³, and thus can be responsible for the low-multipole anomalies observed in the angular CMB power spectrum.

  19. Observational constraints on new generalized Chaplygin gas model

    CERN Document Server

    Liao, Kai; Zhu, Zong-Hong

    2012-01-01

We use the latest data to investigate observational constraints on the new generalized Chaplygin gas (NGCG) model. Using the Markov Chain Monte Carlo (MCMC) method, we constrain the NGCG model with the type Ia supernovae (SNe Ia) from the Union2 set (557 data), the usual baryonic acoustic oscillation (BAO) observation from the spectroscopic Sloan Digital Sky Survey (SDSS) data release 7 (DR7) galaxy sample, the cosmic microwave background (CMB) observation from the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) results, the newly revised $H(z)$ data, as well as a value of $\theta_{BAO}(z=0.55) = (3.90 \pm 0.38)^{\circ}$ for the angular BAO scale. The constraint results for the NGCG model are $\omega_X = -1.0510_{-0.1685}^{+0.1563}(1\sigma)_{-0.2398}^{+0.2226}(2\sigma)$, $\eta = 1.0117_{-0.0502}^{+0.0469}(1\sigma)_{-0.0716}^{+0.0693}(2\sigma)$, and $\Omega_X = 0.7297_{-0.0276}^{+0.0229}(1\sigma)_{-0.0402}^{+0.0329}(2\sigma)$, which give a rather stringent constraint. From the results, we can see a phantom model ...

  20. Comparing theoretical models of our galaxy with observations

    Directory of Open Access Journals (Sweden)

    Johnston K.V.

    2012-02-01

    Full Text Available With the advent of large scale observational surveys to map out the stars in our galaxy, there is a need for an efficient tool to compare theoretical models of our galaxy with observations. To this end, we describe here the code Galaxia, which uses efficient and fast algorithms for creating a synthetic survey of the Milky Way, and discuss its uses. Given one or more observational constraints like the color-magnitude bounds, a survey size and geometry, Galaxia returns a catalog of stars in accordance with a given theoretical model of the Milky Way. Both analytic and N-body models can be sampled by Galaxia. For N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near future wide area surveys such as GAIA, LSST and HERMES. In future, we plan to release the code publicly at http://galaxia.sourceforge.net. As an application of the code, we study the prospect of identifying structures in the stellar halo with future surveys that will have velocity information about the stars.

  1. Observational constraints on the new generalized Chaplygin gas model

    Institute of Scientific and Technical Information of China (English)

    Kai Liao; Yu Pan; Zong-Hong Zhu

    2013-01-01

We use the latest data to investigate observational constraints on the new generalized Chaplygin gas (NGCG) model. Using the Markov Chain Monte Carlo method, we constrain the NGCG model with type Ia supernovae from the Union2 set (557 data), the usual baryonic acoustic oscillation (BAO) observation from the spectroscopic Sloan Digital Sky Survey data release 7 galaxy sample, the cosmic microwave background observation from the 7-year Wilkinson Microwave Anisotropy Probe results, newly revised data on H(z), as well as a value of $\theta_{BAO}(z=0.55) = (3.90 \pm 0.38)^{\circ}$ for the angular BAO scale. The constraint results for the NGCG model are $\omega_X = -1.0510_{-0.1685}^{+0.1563}(1\sigma)_{-0.2398}^{+0.2226}(2\sigma)$, $\eta = 1.0117_{-0.0502}^{+0.0469}(1\sigma)_{-0.0716}^{+0.0693}(2\sigma)$ and $\Omega_X = 0.7297_{-0.0276}^{+0.0229}(1\sigma)_{-0.0402}^{+0.0329}(2\sigma)$, which give a rather stringent constraint. From the results, we can see that a phantom model is slightly favored and the probability that energy transfers from dark matter to dark energy is a little larger than the reverse.
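The MCMC machinery behind such constraints can be illustrated with a minimal Metropolis-Hastings sampler fitting a single toy parameter to mock data; the actual NGCG analysis combines SNe Ia, BAO, CMB and H(z) likelihoods over several parameters, which this sketch does not attempt.

```python
import math
import random
import statistics

# Mock "measurements" of a single parameter (known sigma = 0.05).
random.seed(42)
data = [random.gauss(0.73, 0.05) for _ in range(200)]

def log_post(theta):
    # Flat prior + Gaussian likelihood (up to an additive constant).
    return -sum((d - theta) ** 2 for d in data) / (2 * 0.05 ** 2)

chain, theta = [], 0.5            # deliberately poor starting point
lp = log_post(theta)
for _ in range(5000):
    proposal = theta + random.gauss(0, 0.01)      # symmetric random-walk step
    lp_prop = log_post(proposal)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis acceptance rule
        theta, lp = proposal, lp_prop
    chain.append(theta)

burned = chain[1000:]             # discard burn-in before summarizing
print(round(statistics.mean(burned), 3))
```

The 1-sigma and 2-sigma intervals quoted in the abstract correspond to quantiles of exactly this kind of post-burn-in chain, just in a higher-dimensional parameter space.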

  2. Observations of CMEs and Models of the Eruptive Corona

    Science.gov (United States)

    Gopalswamy, Nat

    2012-01-01

It is now realized that coronal mass ejections (CMEs) are the most energetic phenomenon in the heliosphere. Although early observations (in the 1970s and 1980s) revealed most of the properties of CMEs, it is the extended and uniform data set from the Solar and Heliospheric Observatory (SOHO) mission that helped us consolidate our knowledge on CMEs. The Solar Terrestrial Relations Observatory (STEREO) mission has provided direct confirmation of the three-dimensional structure of CMEs. The broadside view provided by the STEREO coronagraphs helped us estimate the width of the halo CMEs and hence validate CME cone models. Current theoretical ideas on the internal structure of CMEs suggest that a flux rope is central to the CME structure, which has considerable observational support both from remote-sensing and in-situ observations. The flux-rope nature is also consistent with the post-eruption arcades with high-temperature plasma and the charge states observed within CMEs arriving at Earth. The quadrature observations also helped us understand the relation between the radial and expansion speeds of CMEs, which were only known from empirical relations in the past. This paper highlights some of these results obtained during solar cycles 23 and 24 and discusses implications for CME models.

  3. Networking Sensor Observations, Forecast Models & Data Analysis Tools

    Science.gov (United States)

    Falke, S. R.; Roberts, G.; Sullivan, D.; Dibner, P. C.; Husar, R. B.

    2009-12-01

This presentation explores the interaction between sensor webs and forecast models and data analysis processes within service oriented architectures (SOA). Earth observation data from surface monitors and satellite sensors and output from earth science models are increasingly available through open interfaces that adhere to web standards, such as the OGC Web Coverage Service (WCS), OGC Sensor Observation Service (SOS), OGC Web Processing Service (WPS), SOAP-Web Services Description Language (WSDL), or RESTful web services. We examine the implementation of these standards from the perspective of forecast models and analysis tools. Interoperable interfaces for model inputs, outputs, and settings are defined with the purpose of connecting them with data access services in service oriented frameworks. We review current best practices in modular modeling, such as OpenMI and ESMF/Mapl, and examine the applicability of those practices to service oriented sensor webs. In particular, we apply sensor-model-analysis interfaces within the context of a wildfire smoke analysis and forecasting scenario used in the recent GEOSS Architecture Implementation Pilot. Fire locations derived from satellites and surface observations and reconciled through a US Forest Service SOAP web service are used to initialize a CALPUFF smoke forecast model. The results of the smoke forecast model are served through an OGC WCS interface that is accessed from an analysis tool that extracts areas of high particulate matter concentrations and a data comparison tool that compares the forecasted smoke with Unattended Aerial System (UAS) collected imagery and satellite-derived aerosol indices. An OGC WPS that calculates population statistics based on polygon areas is used with the extracted areas of high particulate matter to derive information on the population expected to be impacted by smoke from the wildfires. We described the process for enabling the fire location, smoke forecast, smoke observation, and …
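A smoke-forecast coverage served through OGC WCS is retrieved with a GetCoverage request; the sketch below only composes the request URL (no network access), and the endpoint and coverage name are hypothetical placeholders, not the actual GEOSS pilot services.

```python
from urllib.parse import urlencode

def wcs_getcoverage_url(base, coverage, bbox, time, fmt="NetCDF"):
    # Compose a WCS 1.0.0 GetCoverage request (no network access here).
    params = {
        "SERVICE": "WCS",
        "VERSION": "1.0.0",
        "REQUEST": "GetCoverage",
        "COVERAGE": coverage,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
        "TIME": time,
        "FORMAT": fmt,
        "WIDTH": "512",
        "HEIGHT": "512",
    }
    return base + "?" + urlencode(params)

url = wcs_getcoverage_url(
    "http://example.org/wcs",        # hypothetical endpoint
    "calpuff_pm25_forecast",         # hypothetical coverage identifier
    (-125.0, 32.0, -113.0, 42.0),    # rough western-US box
    "2009-08-30T00:00:00Z",
)
print(url)
```

Because every parameter is a standard key-value pair, the same request can be issued by a forecast model, an analysis tool, or a comparison tool without bespoke adapters, which is the interoperability argument the presentation makes.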

  4. Some observational tests of a minimal galaxy formation model

    Science.gov (United States)

    Cohn, J. D.

    2017-04-01

    Dark matter simulations can serve as a basis for creating galaxy histories via the galaxy-dark matter connection. Here, one such model by Becker is implemented with several variations on three different dark matter simulations. Stellar mass and star formation rates are assigned to all simulation subhaloes at all times, using subhalo mass gain to determine stellar mass gain. The observational properties of the resulting galaxy distributions are compared to each other and observations for a range of redshifts from 0 to 2. Although many of the galaxy distributions seem reasonable, there are noticeable differences as simulations, subhalo mass gain definitions or subhalo mass definitions are altered, suggesting that the model should change as these properties are varied. Agreement with observations may improve by including redshift dependence in the added-by-hand random contribution to star formation rate. There appears to be an excess of faint quiescent galaxies as well (perhaps due in part to differing definitions of quiescence). The ensemble of galaxy formation histories for these models tend to have more scatter around their average histories (for a fixed final stellar mass) than the two more predictive and elaborate semi-analytic models of Guo et al. and Henriques et al., and require more basis fluctuations (using principal component analysis) to capture 90 per cent of the scatter around their average histories. The codes to plot model predictions (in some cases alongside observational data) are publicly available to test other mock catalogues at https://github.com/jdcphysics/validation/. Information on how to use these codes is in Appendix A.
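The principal-component criterion used above (how many basis fluctuations capture 90 per cent of the scatter about the average history) can be sketched as follows, using random toy histories rather than the actual mock catalogues.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
# 500 toy "formation histories": a shared smooth mean plus random scatter.
histories = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((500, 20))

resid = histories - histories.mean(axis=0)       # scatter about the average
cov = np.cov(resid, rowvar=False)
evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # variance per component
frac = np.cumsum(evals) / evals.sum()
n90 = int(np.searchsorted(frac, 0.90) + 1)       # components for 90% of scatter
print(n90)
```

A model whose histories are dominated by unstructured noise (as in this toy) needs many components to reach 90 per cent, which is the sense in which the minimal model is "less predictive" than the semi-analytic ones.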

  5. Quantitative comparisons of satellite observations and cloud models

    Science.gov (United States)

    Wang, Fang

Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies … minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently, accounting for 46.9% of the one-month 1D-Var retrievals examined. To attain better constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher microwave frequency channels in the 1D-Var algorithm. Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of microphysical properties of storms. High frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can, therefore, be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties for each cloud type in the simulated scene are compared to those in the observation scene to identify the discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-ideal setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster and the 'convective' cluster. The relationship between the Tb difference of 37 and 85 GHz and the Tb at 85 GHz is found to contain important information about microphysical properties such as hydrometeor species and size distributions. Cluster …
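The cluster-analysis step, partitioning a scene by its brightness-temperature vectors, can be sketched with a toy two-regime k-means on synthetic (37 GHz, 85 GHz) pairs; the values below are invented and stand in for TRMM or RAMS output.

```python
import random

random.seed(0)
# Two synthetic regimes in (37 GHz, 85 GHz) brightness temperature (K):
# "clear sky" (warm at 85 GHz) vs "convective" (strong 85 GHz ice scattering).
scene = [(280 + random.gauss(0, 3), 275 + random.gauss(0, 3)) for _ in range(50)]
scene += [(260 + random.gauss(0, 3), 180 + random.gauss(0, 8)) for _ in range(50)]

def kmeans(points, k=2, iters=25):
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        # Keep the previous center if a group empties out.
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = sorted(kmeans(scene), key=lambda c: c[1])   # order by 85 GHz Tb
print([tuple(round(v) for v in c) for c in centers])
```

With four clusters instead of two, the same procedure yields the clear-sky/cloudy/stratiform-anvil/convective partition described in the abstract.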

  6. Solar spectral irradiance variability in cycle 24: observations and models

    Directory of Open Access Journals (Sweden)

    Marchenko Sergey V.

    2016-01-01

Full Text Available Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent, depending on wavelength, agreement among the observations, and predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

  7. Atomic oxygen distributions in the Venus thermosphere: Comparisons between Venus Express observations and global model simulations

    Science.gov (United States)

    Brecht, A. S.; Bougher, S. W.; Gérard, J.-C.; Soret, L.

    2012-02-01

Nightglow emissions provide insight into the global thermospheric circulation, specifically in the transition region (˜70-120 km). The O2 IR nightglow statistical map created from Venus Express (VEx) Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) observations has been used to deduce a three-dimensional atomic oxygen density map. In this study, the National Center for Atmospheric Research (NCAR) Venus Thermospheric General Circulation Model (VTGCM) is utilized to provide a self-consistent global view of the atomic oxygen density distribution. More specifically, the VTGCM reproduces a 2D nightside atomic oxygen density map and vertical profiles across the nightside, which are compared to the VEx atomic oxygen density map. Both the simulated map and vertical profiles are in close agreement with VEx observations within a ˜30° contour of the anti-solar point. The quality of agreement decreases past ˜30°. This discrepancy implies the employment of Rayleigh friction within the VTGCM may be an over-simplification for representing wave drag effects on the local time variation of global winds. Nevertheless, the simulated atomic oxygen vertical profiles are comparable with the VEx profiles above 90 km, which is consistent with similar O2(1Δ) IR nightglow intensities. The VTGCM simulations demonstrate the importance of low altitude trace species as a loss for atomic oxygen below 95 km. The agreement between simulations and observations provides confidence in the validity of the simulated mean global thermospheric circulation pattern in the lower thermosphere.

  8. Observations and modeling of a tidal inlet dye tracer plume

    Science.gov (United States)

    Feddersen, Falk; Olabarrieta, Maitane; Guza, R. T.; Winters, D.; Raubenheimer, Britt; Elgar, Steve

    2016-10-01

A 9 km long tracer plume was created by continuously releasing Rhodamine WT dye for 2.2 h during ebb tide within the southern edge of the main tidal channel at New River Inlet, NC on 7 May 2012, with highly obliquely incident waves and alongshore winds. Over 6 h from release, COAWST (coupled ROMS and SWAN, including wave, wind, and tidal forcing) modeled dye compares well with (aerial hyperspectral and in situ) observed dye concentration. Dye first was transported rapidly seaward along the main channel and partially advected across the ebb-tidal shoal until reaching the offshore edge of the shoal. Dye did not eject offshore in an ebb-tidal jet because the obliquely incident breaking waves retarded the inlet-mouth ebb-tidal flow and forced currents along the ebb shoal. The dye plume largely was confined to <4 m depth. Dye was then transported downcoast in the narrow (few 100 m wide) surfzone of the beach bordering the inlet at 0.3 m s⁻¹ driven by wave breaking. Over 6 h, the dye plume is not significantly affected by buoyancy. Observed dye mass balances close, indicating all released dye is accounted for. Modeled and observed dye behaviors are qualitatively similar. The model simulates well the evolution of the dye center of mass, lateral spreading, surface area, and maximum concentration, as well as regional ("inlet" and "ocean") dye mass balances. This indicates that the model represents well the dynamics of the ebb-tidal dye plume. Details of the dye transport pathways across the ebb shoal are modeled poorly, perhaps owing to low-resolution and smoothed model bathymetry. Wave forcing effects have a large impact on the dye transport.
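The plume diagnostics the model is scored on (center of mass and lateral spreading) are low-order moments of the concentration field; here is a minimal 1-D sketch on a synthetic Gaussian plume (not the New River Inlet data):

```python
import math

dx = 10.0                              # grid spacing (m)
xs = [i * dx for i in range(200)]      # 2 km alongshore transect
x0, sigma = 600.0, 120.0               # synthetic plume center and spread
conc = [math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) for x in xs]

mass = sum(conc) * dx                                     # total dye
xbar = sum(c * x for c, x in zip(conc, xs)) * dx / mass   # center of mass
spread = (sum(c * (x - xbar) ** 2
              for c, x in zip(conc, xs)) * dx / mass) ** 0.5
print(round(xbar, 1), round(spread, 1))
```

The "mass balances close" statement in the abstract corresponds to checking that the `mass` integral (over both inlet and ocean regions) stays equal to the amount of dye released.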

  9. New Cosmological Model and Its Implications on Observational Data Interpretation

    Directory of Open Access Journals (Sweden)

    Vlahovic Branislav

    2013-09-01

Full Text Available The paradigm of ΛCDM cosmology works impressively well and with the concept of inflation it explains the universe after the time of decoupling. However there are still a few concerns; after much effort there is no detection of dark matter and there are significant problems in the theoretical description of dark energy. We will consider a variant of the cosmological spherical shell model, within FRW formalism and will compare it with the standard ΛCDM model. We will show that our new topological model satisfies cosmological principles and is consistent with all observable data, but that it may require new interpretation for some data. We will also consider the constraints imposed on the model, for instance the range for the size and allowed thickness of the shell, by the supernova luminosity distance and CMB data. In this model propagation of the light is confined along the shell, which has as a consequence that observed CMB originated from one point or a limited space region. This allows one to interpret the uniformity of the CMB without an inflation scenario. In addition this removes any constraints on the uniformity of the universe at the early stage and opens a possibility that the universe was not uniform and that creation of galaxies and large structures is due to the inhomogeneities that originated in the Big Bang.

  10. Can atmospheric reanalysis datasets be used to reproduce flood characteristics?

    Science.gov (United States)

    Andreadis, K.; Schumann, G.; Stampoulis, D.

    2014-12-01

    Floods are one of the costliest natural disasters and the ability to understand their characteristics and their interactions with population, land cover and climate changes is of paramount importance. In order to accurately reproduce flood characteristics such as water inundation and heights both in the river channels and floodplains, hydrodynamic models are required. Most of these models operate at very high resolutions and are computationally very expensive, making their application over large areas very difficult. However, a need exists for such models to be applied at regional to global scales so that the effects of climate change with regards to flood risk can be examined. We use the LISFLOOD-FP hydrodynamic model to simulate a 40-year history of flood characteristics at the continental scale, particularly over Australia. LISFLOOD-FP is a 2-D hydrodynamic model that solves the approximate Saint-Venant equations at large scales (on the order of 1 km) using a sub-grid representation of the river channel. This implementation is part of an effort towards a global 1-km flood modeling framework that will allow the reconstruction of a long-term flood climatology. The components of this framework include a hydrologic model (the widely-used Variable Infiltration Capacity model) and a meteorological dataset that forces it. In order to extend the simulated flood climatology to 50-100 years in a consistent manner, reanalysis datasets have to be used. The objective of this study is the evaluation of multiple atmospheric reanalysis datasets (ERA, NCEP, MERRA, JRA) as inputs to the VIC/LISFLOOD-FP model. Comparisons of the simulated flood characteristics are made with both satellite observations of inundation and a benchmark simulation of LISFLOOD-FP being forced by observed flows. Finally, the implications of the availability of a global flood modeling framework for producing flood hazard maps and disseminating disaster information are discussed.

  11. A sliding mode observer for hemodynamic characterization under modeling uncertainties

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    This paper addresses the reconstruction of physiological states in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial knowledge of the so-called balloon model describing the hemodynamic behavior of the brain. To overcome this difficulty, a High Order Sliding Mode observer is applied to the balloon system, where the unknown coupling is treated as an internal perturbation. The effectiveness of the proposed method is illustrated through a set of synthetic data that mimic fMRI experiments.
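
    The balloon model itself is too involved for a short example, but the core idea of a sliding mode observer — a sign-based injection term strong enough to dominate an unknown bounded perturbation — can be sketched on a toy second-order plant. The system, gains, and perturbation below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def run_observer(T=10.0, dt=1e-3, lam=3.0, alpha=3.0):
    """Super-twisting (second-order sliding mode) observer for the toy plant
        x1' = x2,   x2' = -x1 + d(t),
    where d(t) is an unknown bounded perturbation (here 0.5*sin(2t)).
    Only x1 is measured; the discontinuous injection dominates |d|,
    so the estimate converges without ever modeling the perturbation."""
    x = np.array([1.0, 0.0])    # true state (hidden from the observer)
    xh = np.array([0.0, 0.0])   # observer estimate
    for k in range(int(T / dt)):
        t = k * dt
        d = 0.5 * np.sin(2.0 * t)          # unknown internal perturbation
        e1 = x[0] - xh[0]                  # measured output error
        # explicit Euler step of the true plant
        x = x + dt * np.array([x[1], -x[0] + d])
        # observer: known model part plus sliding-mode correction
        xh = xh + dt * np.array([
            xh[1] + lam * np.sqrt(abs(e1)) * np.sign(e1),
            -xh[0] + alpha * np.sign(e1),  # alpha > bound on |d|
        ])
    return x, xh

x, xh = run_observer()
```

    After the finite-time reaching phase, both state estimates track the true states to discretization-level accuracy even though d(t) never enters the observer.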

  12. Constraining interacting dark energy models with latest cosmological observations

    Science.gov (United States)

    Xia, Dong-Mei; Wang, Sai

    2016-11-01

    The local measurement of H0 is in tension with the prediction of Λ cold dark matter model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on cosmic microwave background, the baryon acoustic oscillation, large-scale structure, supernovae, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not totally released.

  13. Constraining interacting dark energy models with latest cosmological observations

    CERN Document Server

    Xia, Dong-Mei

    2016-01-01

    The local measurement of $H_0$ is in tension with the prediction of $\\Lambda$CDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations on CMB, BAO, LSS, SNe, $H(z)$ and $H_0$ to constrain several interacting dark energy models. Our results show no significant indications for the interaction between dark energy and dark matter. The $H_0$ tension can be moderately alleviated, but not totally released.

  14. Altitude dependence of atmospheric temperature trends: Climate models versus observation

    CERN Document Server

    Douglass, D H; Singer, F

    2004-01-01

    As a consequence of greenhouse forcing, all state-of-the-art general circulation models predict a positive temperature trend that is greater for the troposphere than for the surface. This predicted positive trend increases with altitude until it reaches a maximum ratio with respect to the surface of as much as 1.5 to 2.0 at about 200 to 400 hPa. However, the temperature trends from several independent observational data sets decrease with altitude and are mostly negative. This disparity indicates that the three models examined here fail to account for the effects of greenhouse forcings.

  15. The s Process: Nuclear Physics, Stellar Models, Observations

    CERN Document Server

    Kaeppeler, Franz; Bisterzo, Sara; Aoki, Wako

    2010-01-01

    Nucleosynthesis in the s process takes place in the He burning layers of low mass AGB stars and during the He and C burning phases of massive stars. The s process contributes about half of the element abundances between Cu and Bi in solar system material. Depending on stellar mass and metallicity the resulting s-abundance patterns exhibit characteristic features, which provide comprehensive information for our understanding of the stellar life cycle and for the chemical evolution of galaxies. The rapidly growing body of detailed abundance observations, in particular for AGB and post-AGB stars, for objects in binary systems, and for the very faint metal-poor population represents exciting challenges and constraints for stellar model calculations. Based on updated and improved nuclear physics data for the s-process reaction network, current models are aiming at ab initio solution for the stellar physics related to convection and mixing processes. Progress in the intimately related areas of observations, nuclear...

  16. The reproducible radio outbursts of SS Cygni

    Science.gov (United States)

    Russell, T. D.; Miller-Jones, J. C. A.; Sivakoff, G. R.; Altamirano, D.; O'Brien, T. J.; Page, K. L.; Templeton, M. R.; Körding, E. G.; Knigge, C.; Rupen, M. P.; Fender, R. P.; Heinz, S.; Maitra, D.; Markoff, S.; Migliari, S.; Remillard, R. A.; Russell, D. M.; Sarazin, C. L.; Waagen, E. O.

    2016-08-01

    We present the results of our intensive radio observing campaign of the dwarf nova SS Cyg during its 2010 April outburst. We argue that the observed radio emission was produced by synchrotron emission from a transient radio jet. Comparing the radio light curves from previous and subsequent outbursts of this system (including high-resolution observations from outbursts in 2011 and 2012) shows that the typical long and short outbursts of this system exhibit reproducible radio outbursts that do not vary significantly between outbursts, which is consistent with the similarity of the observed optical, ultraviolet and X-ray light curves. Contemporaneous optical and X-ray observations show that the radio emission appears to have been triggered at the same time as the initial X-ray flare, which occurs as disc material first reaches the boundary layer. This raises the possibility that the boundary region may be involved in jet production in accreting white dwarf systems. Our high spatial resolution monitoring shows that the compact jet remained active throughout the outburst with no radio quenching.

  17. The reproducible radio outbursts of SS Cygni

    CERN Document Server

    Russell, T D; Sivakoff, G R; Altamirano, D; O'Brien, T J; Page, K L; Templeton, M R; Koerding, E G; Knigge, C; Rupen, M P; Fender, R P; Heinz, S; Maitra, D; Markoff, S; Migliari, S; Remillard, R A; Russell, D M; Sarazin, C L; Waagen, E O

    2016-01-01

    We present the results of our intensive radio observing campaign of the dwarf nova SS Cyg during its 2010 April outburst. We argue that the observed radio emission was produced by synchrotron emission from a transient radio jet. Comparing the radio light curves from previous and subsequent outbursts of this system (including high-resolution observations from outbursts in 2011 and 2012) shows that the typical long and short outbursts of this system exhibit reproducible radio outbursts that do not vary significantly between outbursts, which is consistent with the similarity of the observed optical, ultraviolet and X-ray light curves. Contemporaneous optical and X-ray observations show that the radio emission appears to have been triggered at the same time as the initial X-ray flare, which occurs as disk material first reaches the boundary layer. This raises the possibility that the boundary region may be involved in jet production in accreting white dwarf systems. Our high spatial resolution monitoring shows th...

  18. Space-based Observational Constraints for 1-D Plume Rise Models

    Science.gov (United States)

    Martin, Maria Val; Kahn, Ralph A.; Logan, Jennifer A.; Paguam, Ronan; Wooster, Martin; Ichoku, Charles

    2012-01-01

    We use a space-based plume height climatology derived from observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the NASA Terra satellite to evaluate the ability of a plume-rise model currently embedded in several atmospheric chemical transport models (CTMs) to produce accurate smoke injection heights. We initialize the plume-rise model with assimilated meteorological fields from the NASA Goddard Earth Observing System and estimated fuel moisture content at the location and time of the MISR measurements. Fire properties that drive the plume-rise model are difficult to estimate and we test the model with four estimates for active fire area and four for total heat flux, obtained using empirical data and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power (FRP) thermal anomalies available for each MISR plume. We show that the model is not able to reproduce the plume heights observed by MISR over the range of conditions studied (maximum r2 obtained in all configurations is 0.3). The model also fails to determine which plumes are in the free troposphere (according to MISR), key information needed for atmospheric models to properly simulate smoke dispersion. We conclude that embedding a plume-rise model using currently available fire constraints in large-scale atmospheric studies remains a difficult proposition. However, we demonstrate the degree to which the fire dynamical heat flux (related to active fire area and sensible heat flux), and atmospheric stability structure influence plume rise, although other factors less well constrained (e.g., entrainment) may also be significant. Using atmospheric stability conditions, MODIS FRP, and MISR plume heights, we offer some constraints on the main physical factors that drive smoke plume rise.
We find that smoke plumes reaching high altitudes are characterized by higher FRP and weaker atmospheric stability conditions than those at low altitude, which tend to remain confined

  19. Reproducing the Kinematics of Damped Lyman-alpha Systems

    CERN Document Server

    Bird, Simeon; Neeleman, Marcel; Genel, Shy; Vogelsberger, Mark; Hernquist, Lars

    2014-01-01

    We examine the kinematic structure of Damped Lyman-alpha Systems (DLAs) in a series of cosmological hydrodynamic simulations using the AREPO code. We are able to match the distribution of velocity widths of associated low ionisation metal absorbers substantially better than earlier work. Our simulations produce a population of DLAs dominated by halos with virial velocities around 70 km/s, consistent with a picture of relatively small, faint objects. In addition, we reproduce the observed correlation between velocity width and metallicity and the equivalent width distribution of SiII. Some discrepancies of moderate statistical significance remain; too many of our spectra show absorption concentrated at the edge of the profile and there are slight differences in the exact shape of the velocity width distribution. We show that the improvement over previous work is mostly due to our strong feedback from star formation and our detailed modelling of the metal ionisation state.

  20. Modeling Dust and Starlight in Galaxies Observed by Spitzer and Herschel: NGC 628 and NGC 6946

    CERN Document Server

    Aniano, G; Calzetti, D; Dale, D A; Engelbracht, C W; Gordon, K D; Hunt, L K; Kennicutt, R C; Krause, O; Leroy, A K; Rix, H-W; Roussel, H; Sandstrom, K; Sauvage, M; Walter, F; Armus, L; Bolatto, A D; Crocker, A; Meyer, J Donovan; Galametz, M; Helou, G; Hinz, J; Johnson, B D; Koda, J; Montiel, E; Murphy, E J; Skibba, R; Smith, J -D T; Wolfire, M G

    2012-01-01

    We characterize the dust in NGC628 and NGC6946, two nearby spiral galaxies in the KINGFISH sample. With data from 3.6um to 500um, dust models are strongly constrained. Using the Draine & Li (2007) dust model (amorphous silicate and carbonaceous grains), for each pixel in each galaxy we estimate (1) dust mass surface density, (2) dust mass fraction contributed by polycyclic aromatic hydrocarbons (PAHs), (3) distribution of starlight intensities heating the dust, (4) total infrared (IR) luminosity emitted by the dust, and (5) IR luminosity originating in regions with high starlight intensity. We obtain maps for the dust properties, which trace the spiral structure of the galaxies. The dust models successfully reproduce the observed global and resolved spectral energy distributions (SEDs). The overall dust/H mass ratio is estimated to be 0.0082+/-0.0017 for NGC628, and 0.0063+/-0.0009 for NGC6946, consistent with what is expected for galaxies of near-solar metallicity. Our derived dust masses are larger (by...

  1. Modeling Spitzer observations of VV Ser. I. The circumstellar disk of a UX Orionis star

    CERN Document Server

    Pontoppidan, K M; Blake, G A; Boogert, A C A; Van Dishoeck, E F; Evans, N J; Kessler-Silacci, J; Lahuis, F; Pontoppidan, Klaus M.; Dullemond, Cornelis P.; Blake, Geoffrey A.; Dishoeck, Ewine F. van; Evans, Neal J.; Kessler-Silacci, Jacqueline; Lahuis, Fred

    2006-01-01

    We present mid-infrared Spitzer-IRS spectra of the well-known UX Orionis star VV Ser. We combine the Spitzer data with interferometric and spectroscopic data from the literature covering UV to submillimeter wavelengths. The full set of data is modeled by a two-dimensional axisymmetric Monte Carlo radiative transfer code. The model is used to test the prediction of Dullemond et al. (2003) that disks around UX Orionis stars must have a self-shadowed shape, and that these disks are seen nearly edge-on, looking just over the edge of a puffed-up inner rim, formed roughly at the dust sublimation radius. We find that a single, relatively simple model is consistent with all the available observational constraints spanning 4 orders of magnitude in wavelength and spatial scales, providing strong support for this interpretation of UX Orionis stars. The grains in the upper layers of the puffed-up inner rim must be small (0.01-0.4 micron) to reproduce the colors (R_V ~ 3.6) of the extinction events, while the shape and s...

  2. Observations, Thermochemical Calculations, and Modeling of Exoplanetary Atmospheres

    CERN Document Server

    Blecic, Jasmina

    2016-01-01

    This dissertation as a whole aims to provide means to better understand hot-Jupiter planets through observing, performing thermochemical calculations, and modeling their atmospheres. We used Spitzer multi-wavelength secondary-eclipse observations and targets with high signal-to-noise ratios, as their deep eclipses allow us to detect signatures of spectral features and assess planetary atmospheric structure and composition with greater certainty. Chapter 1 gives a short introduction. Chapter 2 presents the Spitzer secondary-eclipse analysis and atmospheric characterization of WASP-14b. WASP-14b is a highly irradiated, transiting hot Jupiter. By applying a Bayesian approach in the atmospheric analysis, we found an absence of thermal inversion contrary to theoretical predictions. Chapter 3 describes the infrared observations of WASP-43b Spitzer secondary eclipses, data analysis, and atmospheric characterization. WASP-43b is one of the closest-orbiting hot Jupiters, orbiting one of the coolest stars with a hot Ju...

  3. Mock Observations of Blue Stragglers in Globular Cluster Models

    CERN Document Server

    Sills, Alison; Chatterjee, Sourav; Rasio, Frederic A

    2013-01-01

    We created artificial color-magnitude diagrams of Monte Carlo dynamical models of globular clusters, and then used observational methods to determine the number of blue stragglers in those clusters. We compared these blue stragglers to various cluster properties, mimicking work that has been done for blue stragglers in Milky Way globular clusters to determine the dominant formation mechanism(s) of this unusual stellar population. We find that a mass-based prescription for selecting blue stragglers will choose approximately twice as many blue stragglers as a selection criterion that was developed for observations of real clusters. However, the two numbers of blue stragglers are well-correlated, so either selection criterion can be used to characterize the blue straggler population of a cluster. We confirm previous results that the simplified prescription for the evolution of a collision or merger product in the BSE code overestimates the lifetime of collision products. Because our observationally-motivated s...

  4. A Comparison of TWP-ICE Observational Data with Cloud-Resolving Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Fridlind, A. M.; Ackerman, Andrew; Chaboureau, Jean-Pierre; Fan, Jiwen; Grabowski, Wojciech W.; Hill, A.; Jones, T. R.; Khaiyer, M. M.; Liu, G.; Minnis, Patrick; Morrison, H.; Nguyen, L.; Park, S.; Petch, Jon C.; Pinty, Jean-Pierre; Schumacher, Courtney; Shipway, Ben; Varble, A. C.; Wu, Xiaoqing; Xie, Shaocheng; Zhang, Minghua

    2012-03-13

    Observations made during the TWP-ICE campaign are used to drive and evaluate thirteen cloud-resolving model simulations with periodic lateral boundary conditions. The simulations employ 2D and 3D dynamics, one- and two-moment microphysics, several variations on large-scale forcing, and the use of observationally derived aerosol properties to prognose droplet numbers. When domain means are averaged over a 6-day active monsoon period, all simulations reproduce observed surface precipitation rate but not its structural distribution. Simulated fractional areas covered by convective and stratiform rain are uncorrelated with one another, and are both variably overpredicted by up to a factor of ~2. Stratiform area fractions are strongly anticorrelated with outgoing longwave radiation (OLR) but are negligibly correlated with ice water path (IWP), indicating that ice spatial distribution controls OLR more than mean IWP. Overpredictions of OLR tend to be accompanied by underpredictions of reflected shortwave radiation (RSR). When there are two simulations differing only in microphysics scheme or large-scale forcing, the one with smaller stratiform area tends to exhibit greater OLR and lesser RSR by similar amounts. After ~10 days, simulations reach a suppressed monsoon period with a wide range of mean precipitable water vapor, attributable in part to varying overprediction of cloud-modulated radiative flux divergence compared with observationally derived values. Differences across the simulation ensemble arise from multiple sources, including dynamics, microphysics, and radiation treatments. Close agreement of spatial and temporal averages with observations may not be expected, but the wide spreads of predicted stratiform fraction and anticorrelated OLR indicate a need for more rigorous observation-based evaluation of the underlying micro- and macrophysical properties of convective and stratiform structures.

  5. Coronal Loops: Observations and Modeling of Confined Plasma

    Directory of Open Access Journals (Sweden)

    Fabio Reale

    2014-07-01

    Full Text Available Coronal loops are the building blocks of the X-ray bright solar corona. They owe their brightness to the dense confined plasma, and this review focuses on loops mostly as structures confining plasma. After a brief historical overview, the review is divided into two separate but not independent parts: the first illustrates the observational framework, the second reviews the theoretical knowledge. Quiescent loops and their confined plasma are considered and, therefore, topics such as loop oscillations and flaring loops (except for non-solar ones, which provide information on stellar loops) are not specifically addressed here. The observational section discusses the classification, populations, and the morphology of coronal loops, their relationship with the magnetic field, and the stranded structure of loops. The section continues with the thermal properties and diagnostics of the loop plasma, according to the classification into hot, warm, and cool loops. Then, temporal analyses of loops and the observations of plasma dynamics, hot and cool flows, and waves are illustrated. In the modeling section, some basics of loop physics are provided, supplying fundamental scaling laws and timescales, a useful tool for consultation. The concept of loop modeling is introduced and models are divided into those treating loops as monolithic and static, and those resolving loops into thin and dynamic strands. More specific discussions address modeling the loop fine structure and the plasma flowing along the loops. Special attention is devoted to the question of loop heating, with separate discussion of wave (AC) and impulsive (DC) heating. Large-scale models including atmosphere boxes and the magnetic field are also discussed. Finally, a brief discussion about stellar coronal loops is followed by highlights and open questions.

  6. Tropospheric Distribution of Trace Species during the Oxidation Mechanism Observations (OMO-2015) campaign: Model Evaluation and sensitivity simulations

    Science.gov (United States)

    Ojha, Narendra; Pozzer, Andrea; Jöckel, Patrick; Fischer, Horst; Zahn, Andreas; Tomsche, Laura; Lelieveld, Jos

    2017-04-01

    The Asian monsoon convection redistributes trace species, affecting the tropospheric chemistry and radiation budget over Asia and downwind as far as the Mediterranean. It remains challenging to model these impacts due to uncertainties, e.g. associated with the convection parameterization and input emissions. Here, we perform a series of numerical experiments using the global ECHAM5/MESSy atmospheric chemistry model (EMAC) to investigate the tropospheric distribution of O3 and related tracers measured during the Oxidation Mechanism Observations (OMO) campaign conducted during July-August 2015. The reference simulation reproduces the spatio-temporal variations to some extent (e.g. r2 = 0.7 for O3, 0.6 for CO). However, this simulation underestimates mean CO in the lower troposphere by about 30 ppbv and overestimates mean O3 by up to 35 ppbv, especially in the middle-upper troposphere. Interestingly, sensitivity simulations with 50% higher biofuel emissions of CO over South Asia had an insignificant effect on the CO underestimation, pointing to sources upwind of South Asia. The use of an alternative convection parameterization is found to significantly improve simulated O3. The study reveals the abilities as well as the limitations of the model to reproduce observations and study atmospheric chemistry and climate implications of the monsoon.

  7. On The Reproducibility of Seasonal Land-surface Climate

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J

    2004-10-22

    The sensitivity of the continental seasonal climate to initial conditions is estimated from an ensemble of decadal simulations of an atmospheric general circulation model with the same specifications of radiative forcings and monthly ocean boundary conditions, but with different initial states of atmosphere and land. As measures of the "reproducibility" of continental climate for different initial conditions, spatio-temporal correlations are computed across paired realizations of eleven model land-surface variables in which the seasonal cycle is either included or excluded--the former case being pertinent to climate simulation, and the latter to seasonal anomaly prediction. It is found that the land-surface variables which include the seasonal cycle are impacted only marginally by changes in initial conditions; moreover, their seasonal climatologies exhibit high spatial reproducibility. In contrast, the reproducibility of a seasonal land-surface anomaly is generally low, although it is substantially higher in the Tropics; its spatial reproducibility also markedly fluctuates in tandem with warm and cold phases of the El Niño/Southern Oscillation. However, the overall degree of reproducibility depends strongly on the particular land-surface anomaly considered. It is also shown that the predictability of a land-surface anomaly implied by its reproducibility statistics is consistent with what is inferred from more conventional predictability metrics. Implications of these results for climate model intercomparison projects and for operational forecasts of seasonal continental climate also are elaborated.
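
    The contrast described above — high reproducibility when the seasonal cycle is included, low reproducibility for anomalies — can be illustrated with synthetic data: two "runs" sharing a seasonal cycle but with independent noise. All numbers below are invented for illustration; only the structure of the correlation calculation follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(120)                              # ten years, monthly
cycle = 10.0 * np.sin(2.0 * np.pi * months / 12.0)   # shared seasonal cycle

# Two realizations differing only in initial conditions: the seasonal
# cycle is common to both, the anomalies are effectively independent.
run_a = cycle + rng.normal(0.0, 1.0, months.size)
run_b = cycle + rng.normal(0.0, 1.0, months.size)

def corr(x, y):
    """Correlation across a pair of realizations of one variable."""
    return float(np.corrcoef(x, y)[0, 1])

r_full = corr(run_a, run_b)                   # seasonal cycle included
r_anom = corr(run_a - cycle, run_b - cycle)   # seasonal cycle removed
```

    The full series correlate strongly because the shared cycle dominates, while the anomaly correlation hovers near zero — the same qualitative split the study reports between climate simulation and anomaly prediction.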

  8. Seasonal patterns of Saharan dust over Cape Verde – a combined approach using observations and modelling

    Directory of Open Access Journals (Sweden)

    Carla Gama

    2015-02-01

    Full Text Available A characterisation of the dust transported from North Africa deserts to the Cape Verde Islands, including particle size distribution, concentrations and optical properties, for a complete annual cycle (the year 2011), is presented and discussed. The present analysis includes annual simulations of the BSC-DREAM8b and the NMMB/BSC-Dust models, 1-yr of surface aerosol measurements performed within the scope of the CV-DUST Project, AERONET direct-sun observations, and back-trajectories. A seasonal intrusion of dust from North West Africa affects Cape Verde at surface levels from October till March when atmospheric concentrations in Praia are very high (PM10 observed concentrations reach hourly values up to 710 µg/m3). The air masses responsible for the highest aerosol concentrations in Cape Verde describe a path over the central Saharan desert area in Algeria, Mali and Mauritania before reaching the Atlantic Ocean. During summer, dust from North Africa is transported towards the region at higher altitudes, yielding high aerosol optical depths. The BSC-DREAM8b and the NMMB/BSC-Dust models, which are for the first time evaluated for surface concentration and size distribution in Africa for an annual cycle, are able to reproduce the majority of the dust episodes. Results from NMMB/BSC-Dust are in better agreement with observed particulate matter concentrations and aerosol optical depth throughout the year. For this model, the comparison between observed and modelled PM10 daily averaged concentrations yielded a correlation coefficient of 0.77 and a 29.0 µg/m3 'bias', while for BSC-DREAM8b the correlation coefficient was 0.63 and 'bias' 32.9 µg/m3. From this value, 12–14 µg/m3 is due to the sea salt contribution, which is not considered by the model. In addition, the model does not take into account biomass-burning particles, secondary pollutants and local sources (i.e., resuspension). These results roughly allow for the establishment of a
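
    The two skill scores quoted above amount to Pearson's r and the mean model-minus-observation difference over daily PM10 averages. A minimal sketch of that evaluation, using made-up daily values rather than the CV-DUST data:

```python
import numpy as np

def evaluate(model, obs):
    """Pearson correlation and mean bias (model minus observation)
    between daily-averaged modelled and observed PM10."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = float(np.corrcoef(model, obs)[0, 1])
    bias = float(np.mean(model - obs))
    return r, bias

# Invented daily PM10 values (ug/m3), purely for illustration.
obs = np.array([100.0, 200.0, 300.0, 150.0])
model = 1.1 * obs + 20.0        # a model that overestimates linearly
r, bias = evaluate(model, obs)
```

    A perfectly linear overestimate yields r = 1 with a positive bias, which is why both numbers are needed: correlation measures covariation of the episodes, bias the systematic offset.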

  9. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2009-01-01

    Full Text Available The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM and the standard Brute Force Method (BFM in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  10. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Directory of Open Access Journals (Sweden)

    M. Zavala

    2008-08-01

    Full Text Available The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio.

    This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM and the standard Brute Force Method (BFM in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with

  11. Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City

    Science.gov (United States)

    Zavala, M.; Lei, W.; Molina, M. J.; Molina, L. T.

    2009-01-01

    The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet-average emission factors for CO and VOCs have decreased significantly during this period, whereas NOx emission factors show no strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses of the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base-case simulation of a high-pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model adequately reproduces the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the
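    The standard Brute Force Method referred to above is simply a finite-difference sensitivity: rerun the model with perturbed emissions and difference the outputs. A minimal sketch, in which a hypothetical quadratic ozone-response function stands in for the CAMx simulation (all numbers are illustrative):

```python
def brute_force_sensitivity(response, emissions, delta=0.1):
    """First-order Brute Force Method (BFM) sensitivity: perturb the
    emission input by a fraction +/- delta and take a central finite
    difference of the model response."""
    up = response(emissions * (1.0 + delta))
    down = response(emissions * (1.0 - delta))
    return (up - down) / (2.0 * delta * emissions)

# Hypothetical, mildly nonlinear ozone response to NOx emissions
# (illustration only; not the CAMx chemistry):
ozone = lambda e_nox: 120.0 - 0.5 * e_nox + 0.002 * e_nox ** 2

# Sensitivity in ozone units per unit NOx emission:
s = brute_force_sensitivity(ozone, emissions=100.0, delta=0.3)
```

    The DDM instead propagates the derivatives analytically through the model equations, avoiding repeated model runs and the cancellation errors of large perturbations; the comparison above tests how far the two agree.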

  12. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke-phase descriptions for biomechanical analysis of technique vary across the kayaking literature, and these inconsistencies are not conducive to the advancement of applied biomechanics service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using the key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how the positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of the positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views and were therefore suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.
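    The recommended two-phase / four-sub-phase scheme is a small controlled vocabulary and can be encoded directly. Note that mapping the entry, pull, and exit sub-phases onto the water phase is our reading of the abstract, not stated verbatim:

```python
# Two-phase / four-sub-phase observational model; the sub-phase-to-phase
# mapping below is an assumption inferred from the abstract.
PHASES = {
    "water": ["entry", "pull", "exit"],
    "aerial": ["aerial"],
}

def phase_of(sub_phase: str) -> str:
    """Return the parent phase for a given sub-phase."""
    for phase, subs in PHASES.items():
        if sub_phase in subs:
            return phase
    raise ValueError(f"unknown sub-phase: {sub_phase!r}")
```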

  13. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low-cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighters' hazard-detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance, and the top-performing group was used for the remainder of the modeling. Models were developed that related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using both logistic and probit regression. Probit regression was found to yield superior fits and showed that models including third-order as well as second-order parameter interactions performed best.
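    Quantifying each observer's sensitivity and bias, as described above, is classical equal-variance signal detection theory; a sketch assuming per-observer hit and false-alarm counts (the counts below are made up for illustration):

```python
from statistics import NormalDist

def observer_sensitivity(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity d' and response bias c from a detection
    experiment, under the equal-variance Gaussian signal detection
    model: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf          # standard normal quantile function
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    bias = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, bias

# Hypothetical observer: 80% hits, 10% false alarms
d, c = observer_sensitivity(hits=80, misses=20,
                            false_alarms=10, correct_rejections=90)
```

    Grouping observers then reduces to thresholding d' (and optionally c) across subjects.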

  14. Magneto-frictional Modeling of Coronal Nonlinear Force-free Fields. II. Application to Observations

    Science.gov (United States)

    Guo, Y.; Xia, C.; Keppens, R.

    2016-09-01

    A magneto-frictional module has been implemented and tested in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC) in the first paper of this series. Here, we apply the magneto-frictional method to observations to demonstrate its applicability in both Cartesian and spherical coordinates, and in uniform and block-adaptive octree grids. We first reconstruct a nonlinear force-free field (NLFFF) on a uniform grid of 180³ cells in Cartesian coordinates, with boundary conditions provided by the vector magnetic field observed by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) at 06:00 UT on 2010 November 11 in active region NOAA 11123. The reconstructed NLFFF successfully reproduces the sheared and twisted field lines and magnetic null points. Next, we adopt a three-level block-adaptive grid to model the same active region with a higher spatial resolution on the bottom boundary and a coarser treatment of regions higher up. The force-free and divergence-free metrics obtained are comparable to the run with a uniform grid, and the reconstructed field topology is also very similar. Finally, a group of active regions, including NOAA 11401, 11402, 11405, and 11407, observed at 03:00 UT on 2012 January 23 by SDO/HMI is modeled with a five-level block-adaptive grid in spherical coordinates, where we reach a local resolution of 0.06° pixel⁻¹ in an area of 790 Mm × 604 Mm. Local high spatial resolution and a large field of view in NLFFF modeling can be achieved simultaneously in parallel and block-adaptive magneto-frictional relaxations.
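    At its core, the magneto-frictional method sets the plasma velocity proportional to the Lorentz force, v = J×B/(ν|B|²), so the field evolves until J×B vanishes, i.e. until it is force-free. A pointwise sketch (the friction coefficient and test vectors are illustrative, not MPI-AMRVAC internals):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frictional_velocity(J, B, nu=1.0, eps=1e-12):
    """Magneto-frictional velocity v = (J x B) / (nu * |B|^2): the
    field is dragged in the direction of the Lorentz force; eps guards
    against division by zero at magnetic null points."""
    fx, fy, fz = cross(J, B)
    b2 = sum(c * c for c in B) + eps
    return (fx / (nu * b2), fy / (nu * b2), fz / (nu * b2))

# For a force-free field J is parallel to B, so the velocity vanishes:
B = (1.0, 0.0, 0.0)
v = frictional_velocity((2.0, 0.0, 0.0), B)  # J = 2B, parallel to B
```

    In an actual relaxation this velocity is fed back into the induction equation, ∂B/∂t = ∇×(v×B), on the (possibly block-adaptive) grid until the residual Lorentz force is negligible.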

  15. Observational constraints on non-minimally coupled Galileon model

    CERN Document Server

    Jamil, Mubasher; Myrzakulov, Ratbay; 10.1140/epjc/s10052-013-2300-6

    2013-01-01

    As an extension of the Dvali-Gabadadze-Porrati (DGP) model, the Galileon theory has been proposed to address the "self-accelerating problem" and the "ghost instability problem". In this paper, we extend the Galileon theory by considering a non-minimally coupled Galileon scalar with gravity. We find that crossing of the phantom divide line is possible for such a model. Moreover, we perform the statefinder analysis and the $Om(z)$ diagnostic, and constrain the model parameters with the latest Union2 type Ia supernova (SNe Ia) set and the baryon acoustic oscillation (BAO) data. Using these data sets, we obtain the constraints $\Omega_\text{m0}=0.263_{-0.031}^{+0.031}$, $n=1.53_{-0.37}^{+0.21}$ (at the 95% confidence level) with $\chi^2_{\text{min}}=473.376$. Further, we study the evolution of the equation-of-state parameter for the effective dark energy and observe that SNe Ia + BAO prefer a phantom-like dark energy.
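    The $Om(z)$ diagnostic used above has the closed form $Om(z) = (E^2(z) - 1)/((1+z)^3 - 1)$ with $E = H(z)/H_0$; for flat ΛCDM it is constant and equal to $\Omega_{m0}$, which makes it a convenient null test for evolving dark energy. A sketch using the paper's best-fit $\Omega_{m0} = 0.263$ (the ΛCDM expansion history here is our illustrative baseline, not the Galileon model itself):

```python
def om_diagnostic(z, E):
    """Om(z) = (E(z)^2 - 1) / ((1+z)^3 - 1), with E = H(z)/H0.
    Constant for flat LCDM; z-dependence signals non-LCDM dark energy."""
    return (E(z) ** 2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

def E_lcdm(z, om0=0.263):
    """Dimensionless Hubble rate for flat LCDM, using the paper's
    best-fit Omega_m0 = 0.263."""
    return (om0 * (1.0 + z) ** 3 + 1.0 - om0) ** 0.5

val = om_diagnostic(1.0, E_lcdm)  # recovers Omega_m0 for LCDM
```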

  16. Observations and model calculations of trace gas scavenging in a dense Saharan dust plume during MINATROC

    Directory of Open Access Journals (Sweden)

    M. de Reus

    2005-01-01

    Full Text Available An intensive field measurement campaign was performed in July/August 2002 at the Global Atmosphere Watch station Izaña on Tenerife to study the interaction of mineral dust aerosol and tropospheric chemistry (MINATROC). A dense Saharan dust plume, with aerosol masses exceeding 500 µg m⁻³, persisted for three days. During this dust event, strongly reduced mixing ratios of ROx (HO2, CH3O2, and higher organic peroxy radicals), H2O2, NOx (NO and NO2) and O3 were observed. A chemistry box model, constrained by the measurements, has been used to study gas-phase and heterogeneous chemistry. It proved difficult to reproduce the observed HCHO mixing ratios with the model, possibly owing to the representation of precursor gas concentrations or the absence of dry deposition. The model calculations indicate that the reduced H2O2 mixing ratios in the dust plume can be explained by including the heterogeneous removal reaction of HO2 with an uptake coefficient of 0.2, or by assuming heterogeneous removal of H2O2 with an accommodation coefficient of 5×10⁻⁴. However, these heterogeneous reactions cannot explain the low ROx mixing ratios observed during the dust event. Whereas a mean daytime net ozone production rate (NOP) of 1.06 ppbv/hr occurred throughout the campaign, the reduced ROx and NOx mixing ratios in the Saharan dust plume contributed to a reduced NOP of 0.14-0.33 ppbv/hr, which likely explains the relatively low ozone mixing ratios observed during this event.
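    Heterogeneous removal of the kind invoked above is commonly parameterized as a first-order loss with rate k = γ·c̄·A/4 in the free-molecular limit (neglecting gas-phase diffusion limitation), where c̄ is the mean molecular speed and A the aerosol surface-area density. A sketch: the HO2 uptake coefficient γ = 0.2 is from the abstract, but the surface-area density is an illustrative value, not taken from the paper:

```python
import math

def mean_speed(T, M):
    """Mean molecular speed (m/s): c = sqrt(8RT / (pi * M)), M in kg/mol."""
    R = 8.314  # J mol^-1 K^-1
    return math.sqrt(8.0 * R * T / (math.pi * M))

def k_het(gamma, T, M, A):
    """First-order heterogeneous loss rate (s^-1) in the free-molecular
    limit: k = gamma * c * A / 4, with A the aerosol surface-area
    density (m^2 of aerosol surface per m^3 of air)."""
    return gamma * mean_speed(T, M) * A / 4.0

# HO2 (M = 0.033 kg/mol) on dust with gamma = 0.2; A is illustrative
# of a dense dust plume, not a measured value from the campaign.
k = k_het(gamma=0.2, T=298.0, M=0.033, A=1.0e-4)
```

    The resulting lifetime 1/k (minutes here) is what the box model balances against gas-phase production to explain the depressed H2O2 and HO2.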

  17. Runoff-generated debris flows: observations and modeling of surge initiation, magnitude, and frequency

    Science.gov (United States)

    Kean, Jason W.; McCoy, Scott W.; Tucker, Gregory E.; Staley, Dennis M.; Coe, Jeffrey A.

    2013-01-01

    Runoff during intense rainstorms plays a major role in generating debris flows in many alpine areas and burned steeplands. Yet compared to debris flow initiation from shallow landslides, the mechanics by which runoff generates a debris flow are less understood. To better understand debris flow initiation by surface water runoff, we monitored flow stage and rainfall associated with debris flows in the headwaters of two small catchments: a bedrock-dominated alpine basin in central Colorado (0.06 km²) and a recently burned area in southern California (0.01 km²). We also obtained video footage of debris flow initiation and flow dynamics from three cameras at the Colorado site. Stage observations at both sites display distinct patterns in debris flow surge characteristics relative to rainfall intensity (I). We observe small, quasiperiodic surges at low I; large, quasiperiodic surges at intermediate I; and a single large surge followed by small-amplitude fluctuations about a more steady high flow at high I. Video observations of surge formation lead us to the hypothesis that these flow patterns are controlled by upstream variations in channel slope, in which low-gradient sections act as “sediment capacitors,” temporarily storing incoming bed load transported by water flow and periodically releasing the accumulated sediment as a debris flow surge. To explore this hypothesis, we develop a simple one-dimensional morphodynamic model of a sediment capacitor that consists of a system of coupled equations for water flow, bed load transport, slope stability, and mass flow. This model reproduces the essential patterns in surge magnitude and frequency with rainfall intensity observed at the two field sites and provides a new framework for predicting the runoff threshold for debris flow initiation in a burned or alpine setting.
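    The “sediment capacitor” hypothesis can be caricatured in a few lines: a low-gradient reach integrates incoming bed load and releases it as a surge once a failure threshold is reached, so surge frequency rises with inflow rate (a proxy for rainfall intensity). This is a deliberately simplified toy, not the paper's coupled morphodynamic model:

```python
def sediment_capacitor(inflow_rate, threshold, dt=1.0, t_end=100.0):
    """Toy 'sediment capacitor': a low-gradient reach stores incoming
    bed load and releases the accumulated sediment as a surge whenever
    storage reaches a failure threshold. Returns the surge times."""
    storage, t, surges = 0.0, 0.0, []
    while t < t_end:
        storage += inflow_rate * dt   # bed load delivered by runoff
        if storage >= threshold:      # stored wedge fails -> surge
            surges.append(t)
            storage = 0.0
        t += dt
    return surges

# Higher rainfall intensity -> higher bed-load inflow -> more frequent
# surges (all parameter values are illustrative):
low = sediment_capacitor(inflow_rate=1.0, threshold=20.0)
high = sediment_capacitor(inflow_rate=4.0, threshold=20.0)
```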

  18. Testing the Caustic Ring Dark Matter Halo Model Against Observations in the Milky Way

    Science.gov (United States)

    Dumas, Julie; Newberg, Heidi Jo; Niedzielski, Bethany; Susser, Adam; Thompson, Jeffery M.; Weiss, Jake; Lewis, Kim M.

    2016-06-01

    One prediction of axion dark matter models is that they can form Bose-Einstein condensates and rigid caustic rings as a halo collapses in the non-linear regime. In this thesis, we undertake the first study of a caustic ring model for the Milky Way halo (Duffy & Sikivie 2008), paying particular attention to observational consequences. We first present the formalism for calculating the gravitational acceleration of a caustic ring halo. The caustic ring dark matter theory reproduces a roughly logarithmic halo, with large perturbations near the rings. We show that this halo can reasonably match the known Galactic rotation curve. We are not able to confirm or rule out an association between the positions of the caustic rings and oscillations in the observed rotation curve, owing to insufficient rotation curve data. We explore the effects of dark matter caustic rings on dwarf galaxy tidal disruption with N-body simulations. Simulations of the Sagittarius (Sgr) dwarf galaxy in a caustic ring halo potential, with disk and bulge parameters tuned to match the Galactic rotation curve, match observations of the Sgr trailing tidal tails as far as 90 kpc from the Galactic center. Like the Navarro-Frenk-White (NFW) halo, however, they are unable to match the leading tidal tail. None of the caustic, NFW, or triaxial logarithmic halos are able to simultaneously match observations of the leading and trailing arms of the Sagittarius stream. We further show that simulations of dwarf galaxies that move through caustic rings are qualitatively similar to those moving in a logarithmic halo. This research was funded by NSF grant AST 10-09670, the NASA-NY Space Grant, and the American Fellowship from AAUW.
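    The "roughly logarithmic halo" mentioned above, with potential Φ(r) = (v0²/2)·ln(r_c² + r²), has circular speed v_c(r) = v0·r/√(r_c² + r²), which flattens to v0 well outside the core. A sketch with illustrative Milky-Way-like parameters (not the thesis's fitted values, and without the ring perturbations):

```python
import math

def v_circ_log_halo(r, v0=220.0, rc=12.0):
    """Circular speed for a spherical logarithmic halo,
    Phi(r) = (v0^2 / 2) * ln(rc^2 + r^2), so
    v_c(r) = v0 * r / sqrt(rc^2 + r^2)  ->  v0 for r >> rc.
    v0 in km/s and rc in kpc are illustrative values."""
    return v0 * r / math.sqrt(rc * rc + r * r)

inner = v_circ_log_halo(2.0)    # rising part, inside the core radius
outer = v_circ_log_halo(100.0)  # flat part, approaches v0
```

    The caustic ring model superposes localized accelerations near each ring radius on top of this smooth curve, which is what one would try to correlate with rotation-curve oscillations.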

  19. Meteor layers in the Martian ionosphere: Observations and Modelling

    Science.gov (United States)

    Peter, Kerstin; Molina Cuberos, Gregorio J.; Witasse, Olivier; Paetzold, Martin

    Observations by the radio science experiments MaRS on Mars Express and VeRa on Venus Express revealed the appearance of additional electron density layers in the Martian and Venusian ionospheres below the common secondary layers in some of the ionospheric profiles. This may be a signature of meteoric particles in the Martian atmosphere. There are two main sources of meteoric flux into planetary atmospheres: the meteoroid stream component, whose origin is related to comets, and the sporadic meteoroid component, which has its source in body collisions, e.g. in the Kuiper belt or the asteroid belt. This paper will present the detection status of the Martian meteor layers in MaRS electron density profiles and the first steps towards modelling this feature. The presented meteor layer model will show the influence of the sporadic meteoric component on the Martian ionosphere. Input parameters to this model are the ablation profiles of atomic magnesium and iron in the Martian atmosphere caused by sporadic meteoric influx, the neutral atmosphere taken from the Mars Climate Database, and electron density profiles for an undisturbed ionosphere from a simple photochemical model. The meteor layer model includes the effects of molecular and eddy diffusion of metallic species and contains chemical reaction schemes for atomic magnesium and iron. It calculates the altitude-density profiles of several metallic species on the basis of Mg and Fe in chemical equilibrium by analytical solution of the reaction equations. A first comparison of the model with observed meteoric structures in the Martian ionosphere will be presented.
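    The chemical-equilibrium treatment amounts to setting dn/dt = 0, so that a production (ablation) profile balanced against first-order chemical loss gives n(z) = P(z)/k analytically. A sketch with a Gaussian ablation profile and purely illustrative parameters (not the paper's values):

```python
import math

def equilibrium_profile(altitudes_km, peak_km=85.0, width_km=5.0,
                        p0=10.0, k=1.0e-3):
    """Steady-state metal density profile: a Gaussian ablation
    (production) profile P(z) balanced by a first-order chemical loss
    rate k, so dn/dt = P - k*n = 0 gives n(z) = P(z) / k.
    All parameter values here are illustrative placeholders."""
    return [p0 * math.exp(-((z - peak_km) / width_km) ** 2) / k
            for z in altitudes_km]

# Density profile between 70 and 100 km, peaking at the ablation peak:
profile = equilibrium_profile(range(70, 101))
```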

  20. Modelling of sea salt concentrations over Europe: key uncertainties and comparison with observations

    Directory of Open Access Journals (Sweden)

    S. Tsyro

    2011-10-01

    Full Text Available Sea salt aerosol can significantly affect air quality. Sea salt can enhance particulate matter concentrations and change particle chemical composition, in particular in coastal areas, and should therefore be accounted for in air quality modelling. We have used the EMEP Unified model to calculate sea salt concentrations and depositions over Europe, focusing on the effects of uncertainties in sea salt production and lifetime on the calculation results. Model calculations of sea salt have been compared with EMEP observations of sodium concentrations in air and precipitation for a four-year period, from 2004 to 2007, including size-resolved (fine/coarse) EMEP intensive measurements in 2006 and 2007. In the presented calculations, sodium air concentrations are overestimated by between 8% and 46%, whereas concentrations in precipitation are systematically underestimated by 65-70% for the years 2004-2007. A series of model tests have been performed to investigate the reasons for this underestimation, but further studies are needed.
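    Over- and underestimates like those quoted are conventionally summarized with the normalized mean bias, NMB = Σ(M − O)/ΣO; a minimal sketch with synthetic numbers (not the study's data):

```python
def normalized_mean_bias(model, obs):
    """Normalized mean bias NMB = sum(M - O) / sum(O): positive values
    mean the model overestimates the observations, negative values
    mean it underestimates them."""
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

# Synthetic example: a model that runs 20% high on average
nmb = normalized_mean_bias([1.2, 2.4, 3.6], [1.0, 2.0, 3.0])
```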